Updated boto to 2.9.5 in preparation for the gsutil roll.

Gsutil is being updated, so we'll need a newer version of boto.
This updates boto to cb44274755436cda44dabedaa7ca204efa7d78f6 from
https://github.com/boto/boto.git.

R=maruel
AUTHOR=hinoka


git-svn-id: svn://svn.chromium.org/boto@8 4f2e627c-b00b-48dd-b1fb-2c643665b734
diff --git a/CONTRIBUTING b/CONTRIBUTING
new file mode 100644
index 0000000..f29942f
--- /dev/null
+++ b/CONTRIBUTING
@@ -0,0 +1,47 @@
+============
+Contributing
+============
+
+For more information, please see the official contribution docs at
+http://docs.pythonboto.org/en/latest/contributing.html.
+
+
+Contributing Code
+=================
+
+* A good patch:
+
+  * is clear.
+  * works across all supported versions of Python.
+  * follows the existing style of the code base (PEP-8).
+  * has comments included as needed.
+
+* A test case that demonstrates the previous flaw and now passes
+  with the included patch.
+* If it adds/changes a public API, it must also include documentation
+  for those changes.
+* Must be appropriately licensed (New BSD).
+
+
+Reporting An Issue/Feature
+==========================
+
+* Check to see if there's an existing issue/pull request for the
+  bug/feature. All issues are at https://github.com/boto/boto/issues
+  and pull requests are at https://github.com/boto/boto/pulls.
+* If there isn't an existing issue there, please file an issue. The ideal
+  report includes:
+
+  * A description of the problem/suggestion.
+  * How to recreate the bug.
+  * If relevant, include the versions of your:
+
+    * Python interpreter
+    * boto
+    * Optionally, any other dependencies involved
+
+  * If possible, create a pull request with a (failing) test case demonstrating
+    what's wrong. This makes the process for fixing bugs quicker & gets issues
+    resolved sooner.
+
+
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..07d9e8c
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,18 @@
+Permission is hereby granted, free of charge, to any person obtaining a
+copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish, dis-
+tribute, sublicense, and/or sell copies of the Software, and to permit
+persons to whom the Software is furnished to do so, subject to the fol-
+lowing conditions:
+
+The above copyright notice and this permission notice shall be included
+in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+IN THE SOFTWARE.
diff --git a/README.chromium b/README.chromium
index 054e7e4..4a5d087 100644
--- a/README.chromium
+++ b/README.chromium
@@ -1,5 +1,5 @@
 URL: http://github.com/boto/boto
-Version: 2.6.0
+Version: 2.9.5
 License: MIT License
 
-This is a forked copy of boto at revision 3968
+This is a forked copy of boto at revision cb44274755436cda44dabedaa7ca204efa7d78f6
diff --git a/README.rst b/README.rst
index 3499ad4..77c64aa 100644
--- a/README.rst
+++ b/README.rst
@@ -1,11 +1,15 @@
 ####
 boto
 ####
-boto 2.6.0
-19-Sep-2012
+boto 2.9.5
 
-.. image:: https://secure.travis-ci.org/boto/boto.png?branch=develop
-        :target: https://secure.travis-ci.org/boto/boto
+Released: 28-May-2013
+
+.. image:: https://travis-ci.org/boto/boto.png?branch=develop
+        :target: https://travis-ci.org/boto/boto
+        
+.. image:: https://pypip.in/d/boto/badge.png
+        :target: https://crate.io/packages/boto/
 
 ************
 Introduction
@@ -15,41 +19,70 @@
 At the moment, boto supports:
 
 * Compute
+
   * Amazon Elastic Compute Cloud (EC2)
   * Amazon Elastic Map Reduce (EMR)
   * AutoScaling
-  * Elastic Load Balancing (ELB)
+
 * Content Delivery
+
   * Amazon CloudFront
+
 * Database
+
   * Amazon Relational Data Service (RDS)
   * Amazon DynamoDB
   * Amazon SimpleDB
+  * Amazon ElastiCache
+  * Amazon Redshift
+
 * Deployment and Management
-  * AWS Identity and Access Management (IAM)
-  * Amazon CloudWatch
+
   * AWS Elastic Beanstalk
   * AWS CloudFormation
+  * AWS Data Pipeline
+
+* Identity & Access
+
+  * AWS Identity and Access Management (IAM)
+
 * Application Services
+
   * Amazon CloudSearch
   * Amazon Simple Workflow Service (SWF)
   * Amazon Simple Queue Service (SQS)
   * Amazon Simple Notification Server (SNS)
   * Amazon Simple Email Service (SES)
+
+* Monitoring
+
+  * Amazon CloudWatch
+
 * Networking
+
   * Amazon Route53
   * Amazon Virtual Private Cloud (VPC)
+  * Elastic Load Balancing (ELB)
+
 * Payments and Billing
+
   * Amazon Flexible Payment Service (FPS)
+
 * Storage
+
   * Amazon Simple Storage Service (S3)
   * Amazon Glacier
   * Amazon Elastic Block Store (EBS)
   * Google Cloud Storage
+
 * Workforce
+
   * Amazon Mechanical Turk
+
 * Other
+
   * Marketplace Web Services
+  * AWS Support
 
 The goal of boto is to support the full breadth and depth of Amazon
 Web Services.  In addition, boto provides support for other public
@@ -87,16 +120,6 @@
 To see what has changed over time in boto, you can check out the
 `release notes`_ in the wiki.
 
-*********************************
-Special Note for Python 3.x Users
-*********************************
-
-If you are interested in trying out boto with Python 3.x, check out the
-`neo`_ branch.  This is under active development and the goal is a version
-of boto that works in Python 2.6, 2.7, and 3.x.  Not everything is working
-just yet but many things are and it's worth a look if you are an active
-Python 3.x user.
-
 ***************************
 Finding Out More About Boto
 ***************************
@@ -113,12 +136,14 @@
 Join our IRC channel `#boto` on FreeNode.
 Webchat IRC channel: http://webchat.freenode.net/?channels=boto
 
+Join the `boto-users Google Group`_.
+
 *************************
 Getting Started with Boto
 *************************
 
 Your credentials can be passed into the methods that create
-connections.  Alternatively, boto will check for the existance of the
+connections.  Alternatively, boto will check for the existence of the
 following environment variables to ascertain your credentials:
 
 **AWS_ACCESS_KEY_ID** - Your AWS Access Key ID
@@ -141,3 +166,4 @@
 .. _this: http://code.google.com/p/boto/wiki/BotoConfig
 .. _gitflow: http://nvie.com/posts/a-successful-git-branching-model/
 .. _neo: https://github.com/boto/boto/tree/neo
+.. _boto-users Google Group: https://groups.google.com/forum/?fromgroups#!forum/boto-users
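
As a quick illustration of the credential handling described in the README
hunk above, here is a minimal sketch of both styles (the key values are
placeholders, not part of this change):

    import os
    import boto

    # Style 1: pass credentials explicitly to the connection factory.
    s3 = boto.connect_s3(aws_access_key_id='AKIAEXAMPLE',
                         aws_secret_access_key='EXAMPLESECRET')

    # Style 2: let boto read the environment variables named above.
    os.environ['AWS_ACCESS_KEY_ID'] = 'AKIAEXAMPLE'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'EXAMPLESECRET'
    s3 = boto.connect_s3()

    print s3.get_all_buckets()
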
diff --git a/bin/cfadmin b/bin/cfadmin
index 7073452..6fcdd86 100755
--- a/bin/cfadmin
+++ b/bin/cfadmin
@@ -65,6 +65,26 @@
         sys.exit(1)
     cf.create_invalidation_request(dist.id, paths)
 
+def listinvalidations(cf, origin_or_id):
+    """List invalidation requests for a given origin"""
+    dist = None
+    for d in cf.get_all_distributions():
+        if d.id == origin_or_id or d.origin.dns_name == origin_or_id:
+            dist = d
+            break
+    if not dist:
+        print "Distribution not found: %s" % origin_or_id
+        sys.exit(1)
+    results = cf.get_invalidation_requests(dist.id)
+    if results:
+        for result in results:
+            if result.status == "InProgress":
+                result = result.get_invalidation_request()
+                print result.id, result.status, result.paths
+            else:
+                print result.id, result.status
+
+
 if __name__ == "__main__":
     import boto
     import sys
diff --git a/bin/dynamodb_dump b/bin/dynamodb_dump
new file mode 100755
index 0000000..8b6aada
--- /dev/null
+++ b/bin/dynamodb_dump
@@ -0,0 +1,75 @@
+#!/usr/bin/env python
+
+import argparse
+import errno
+import os
+
+import boto
+from boto.compat import json
+
+
+DESCRIPTION = """Dump the contents of one or more DynamoDB tables to the local filesystem.
+
+Each table is dumped into two files:
+  - {table_name}.metadata stores the table's name, schema and provisioned
+    throughput.
+  - {table_name}.data stores the table's actual contents.
+
+Both files are created in the current directory. To write them somewhere else,
+use the --out-dir parameter (the target directory will be created if needed).
+"""
+
+
+def dump_table(table, out_dir):
+    metadata_file = os.path.join(out_dir, "%s.metadata" % table.name)
+    data_file = os.path.join(out_dir, "%s.data" % table.name)
+
+    with open(metadata_file, "w") as metadata_fd:
+        json.dump(
+            {
+                "name": table.name,
+                "schema": table.schema.dict,
+                "read_units": table.read_units,
+                "write_units": table.write_units,
+            },
+            metadata_fd
+        )
+
+    with open(data_file, "w") as data_fd:
+        for item in table.scan():
+            # JSON can't serialize sets -- convert those to lists.
+            data = {}
+            for k, v in item.iteritems():
+                if isinstance(v, (set, frozenset)):
+                    data[k] = list(v)
+                else:
+                    data[k] = v
+
+            data_fd.write(json.dumps(data))
+            data_fd.write("\n")
+
+
+def dynamodb_dump(tables, out_dir):
+    try:
+        os.makedirs(out_dir)
+    except OSError as e:
+        # We don't care if the dir already exists.
+        if e.errno != errno.EEXIST:
+            raise
+
+    conn = boto.connect_dynamodb()
+    for t in tables:
+        dump_table(conn.get_table(t), out_dir)
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(
+        prog="dynamodb_dump",
+        description=DESCRIPTION
+    )
+    parser.add_argument("--out-dir", default=".")
+    parser.add_argument("tables", metavar="TABLES", nargs="+")
+
+    namespace = parser.parse_args()
+
+    dynamodb_dump(namespace.tables, namespace.out_dir)
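
The metadata/data layout described in DESCRIPTION above can be consumed with
nothing more than the json module; a rough sketch, assuming a hypothetical
table named "users" was dumped into the current directory:

    import json

    # {table_name}.metadata is a single JSON object with the keys written
    # by dump_table(): name, schema, read_units, write_units.
    with open('users.metadata') as fd:
        meta = json.load(fd)
    print meta['name'], meta['read_units'], meta['write_units']

    # {table_name}.data holds one JSON object per line; sets were converted
    # to lists when the dump was written.
    with open('users.data') as fd:
        items = [json.loads(line) for line in fd]
    print '%d items' % len(items)
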
diff --git a/bin/dynamodb_load b/bin/dynamodb_load
new file mode 100755
index 0000000..21dfa17
--- /dev/null
+++ b/bin/dynamodb_load
@@ -0,0 +1,109 @@
+#!/usr/bin/env python
+
+import argparse
+import os
+
+import boto
+from boto.compat import json
+from boto.dynamodb.schema import Schema
+
+
+DESCRIPTION = """Load data into one or more DynamoDB tables.
+
+For each table, data is read from two files:
+  - {table_name}.metadata for the table's name, schema and provisioned
+    throughput (only required if creating the table).
+  - {table_name}.data for the table's actual contents.
+
+Both files are searched for in the current directory. To read them from
+somewhere else, use the --in-dir parameter.
+
+This program does not wipe the tables prior to loading data. However, any
+items present in the data files will overwrite the table's contents.
+"""
+
+
+def _json_iterload(fd):
+    """Lazily load newline-separated JSON objects from a file-like object."""
+    buffer = ""
+    eof = False
+    while not eof:
+        try:
+            # Add a line to the buffer
+            buffer += fd.next()
+        except StopIteration:
+            # We can't let that exception bubble up, otherwise the last
+            # object in the file will never be decoded.
+            eof = True
+        try:
+            # Try to decode a JSON object.
+            json_object = json.loads(buffer.strip())
+
+            # Success: clear the buffer (everything was decoded).
+            buffer = ""
+        except ValueError:
+            if eof and buffer.strip():
+                # No more lines to load and the buffer contains something other
+                # than whitespace: the file is, in fact, malformed.
+                raise
+            # We couldn't decode a complete JSON object: load more lines.
+            continue
+
+        yield json_object
+
+
+def create_table(metadata_fd):
+    """Create a table from a metadata file-like object."""
+
+
+def load_table(table, in_fd):
+    """Load items into a table from a file-like object."""
+    for i in _json_iterload(in_fd):
+        # Convert lists back to sets.
+        data = {}
+        for k, v in i.iteritems():
+            if isinstance(v, list):
+                data[k] = set(v)
+            else:
+                data[k] = v
+        table.new_item(attrs=i).put()
+
+
+def dynamodb_load(tables, in_dir, create_tables):
+    conn = boto.connect_dynamodb()
+    for t in tables:
+        metadata_file = os.path.join(in_dir, "%s.metadata" % t)
+        data_file = os.path.join(in_dir, "%s.data" % t)
+        if create_tables:
+            with open(metadata_file) as meta_fd:
+                metadata = json.load(meta_fd)
+            table = conn.create_table(
+                name=t,
+                schema=Schema(metadata["schema"]),
+                read_units=metadata["read_units"],
+                write_units=metadata["write_units"],
+            )
+            table.refresh(wait_for_active=True)
+        else:
+            table = conn.get_table(t)
+
+        with open(data_file) as in_fd:
+            load_table(table, in_fd)
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(
+        prog="dynamodb_load",
+        description=DESCRIPTION
+    )
+    parser.add_argument(
+        "--create-tables",
+        action="store_true",
+        help="Create the tables if they don't exist already (without this flag, attempts to load data into non-existing tables fail)."
+    )
+    parser.add_argument("--in-dir", default=".")
+    parser.add_argument("tables", metavar="TABLES", nargs="+")
+
+    namespace = parser.parse_args()
+
+    dynamodb_load(namespace.tables, namespace.in_dir, namespace.create_tables)
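
The _json_iterload() helper above is what lets the .data files be read back
lazily; a small sketch of its behaviour, assuming the function has been
pasted into an interactive session (the records are invented):

    from StringIO import StringIO  # Python 2, as in the scripts above

    sample = StringIO(
        '{"id": "1", "tags": ["a", "b"]}\n'  # one object per line works...
        '{"id": "2",\n'
        ' "tags": ["c"]}\n')                 # ...as do objects split across lines

    for obj in _json_iterload(sample):
        print obj
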
diff --git a/bin/elbadmin b/bin/elbadmin
index 6c8a8c7..e0aaf9d 100755
--- a/bin/elbadmin
+++ b/bin/elbadmin
@@ -107,12 +107,30 @@
 
         print
 
+        # Make map of all instance Id's to Name tags
+        ec2 = boto.connect_ec2()
+
+        instance_health = b.get_instance_health()
+        instances = [state.instance_id for state in instance_health]
+
+        names = {}
+        for r in ec2.get_all_instances(instances):
+            for i in r.instances:
+                names[i.id] = i.tags.get('Name', '')
+
+        name_column_width = max([4] + [len(v) for k,v in names.iteritems()]) + 2
+
         print "Instances"
         print "---------"
-        print "%-12s %-15s %s" % ("ID", "STATE", "DESCRIPTION")
-        for state in b.get_instance_health():
-            print "%-12s %-15s %s" % (state.instance_id, state.state,
-                                      state.description)
+        print "%-12s %-15s %-*s %s" % ("ID",
+                                       "STATE",
+                                       name_column_width, "NAME",
+                                       "DESCRIPTION")
+        for state in instance_health:
+            print "%-12s %-15s %-*s %s" % (state.instance_id,
+                                           state.state,
+                                           name_column_width, names[state.instance_id],
+                                           state.description)
 
         print
 
diff --git a/bin/fetch_file b/bin/fetch_file
index 6b8c4da..9315aec 100755
--- a/bin/fetch_file
+++ b/bin/fetch_file
@@ -15,23 +15,29 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
 if __name__ == "__main__":
-	from optparse import OptionParser
-	parser = OptionParser(version="0.1", usage="Usage: %prog [options] url")
-	parser.add_option("-o", "--out-file", help="Output file", dest="outfile")
+    from optparse import OptionParser
+    usage = """%prog [options] URI
+Fetch a URI using the boto library and (by default) pipe contents to STDOUT
+The URI can be either an HTTP URL, or "s3://bucket_name/key_name"
+"""
+    parser = OptionParser(version="0.1", usage=usage)
+    parser.add_option("-o", "--out-file",
+                      help="File to receive output instead of STDOUT",
+                      dest="outfile")
 
-	(options, args) = parser.parse_args()
-	if len(args) < 1:
-		parser.print_help()
-		exit(1)
-	from boto.utils import fetch_file
-	f = fetch_file(args[0])
-	if options.outfile:
-		open(options.outfile, "w").write(f.read())
-	else:
-		print f.read()
+    (options, args) = parser.parse_args()
+    if len(args) < 1:
+        parser.print_help()
+        exit(1)
+    from boto.utils import fetch_file
+    f = fetch_file(args[0])
+    if options.outfile:
+        open(options.outfile, "w").write(f.read())
+    else:
+        print f.read()
diff --git a/bin/glacier b/bin/glacier
index aad1e8b..bd28adf 100755
--- a/bin/glacier
+++ b/bin/glacier
@@ -51,15 +51,15 @@
                     created
 
     Common args:
-        access_key - Your AWS Access Key ID.  If not supplied, boto will
-                     use the value of the environment variable
-                     AWS_ACCESS_KEY_ID
-        secret_key - Your AWS Secret Access Key.  If not supplied, boto
-                     will use the value of the environment variable
-                     AWS_SECRET_ACCESS_KEY
-        region     - AWS region to use. Possible vaules: us-east-1, us-west-1,
-                     us-west-2, ap-northeast-1, eu-west-1.
-                     Default: us-east-1
+        --access_key - Your AWS Access Key ID.  If not supplied, boto will
+                       use the value of the environment variable
+                       AWS_ACCESS_KEY_ID
+        --secret_key - Your AWS Secret Access Key.  If not supplied, boto
+                       will use the value of the environment variable
+                       AWS_SECRET_ACCESS_KEY
+        --region     - AWS region to use. Possible values: us-east-1, us-west-1,
+                       us-west-2, ap-northeast-1, eu-west-1.
+                       Default: us-east-1
 
     Vaults operations:
 
@@ -91,18 +91,18 @@
 
 
 def list_vaults(region, access_key=None, secret_key=None):
-    layer2 = connect(region, access_key, secret_key)
+    layer2 = connect(region, access_key = access_key, secret_key = secret_key)
     for vault in layer2.list_vaults():
         print vault.arn
 
 
 def list_jobs(vault_name, region, access_key=None, secret_key=None):
-    layer2 = connect(region, access_key, secret_key)
+    layer2 = connect(region, access_key = access_key, secret_key = secret_key)
     print layer2.layer1.list_jobs(vault_name)
 
 
 def upload_files(vault_name, filenames, region, access_key=None, secret_key=None):
-    layer2 = connect(region, access_key, secret_key)
+    layer2 = connect(region, access_key = access_key, secret_key = secret_key)
     layer2.create_vault(vault_name)
     glacier_vault = layer2.get_vault(vault_name)
     for filename in filenames:
@@ -131,11 +131,11 @@
     access_key = secret_key = None
     region = 'us-east-1'
     for option, value in opts:
-        if option in ('a', '--access_key'):
+        if option in ('-a', '--access_key'):
             access_key = value
-        elif option in ('s', '--secret_key'):
+        elif option in ('-s', '--secret_key'):
             secret_key = value
-        elif option in ('r', '--region'):
+        elif option in ('-r', '--region'):
             region = value
     # handle each command
     if command == 'vaults':
diff --git a/bin/list_instances b/bin/list_instances
index 4da5596..a8de4ad 100755
--- a/bin/list_instances
+++ b/bin/list_instances
@@ -35,8 +35,10 @@
     parser.add_option("-r", "--region", help="Region (default us-east-1)", dest="region", default="us-east-1")
     parser.add_option("-H", "--headers", help="Set headers (use 'T:tagname' for including tags)", default=None, action="store", dest="headers", metavar="ID,Zone,Groups,Hostname,State,T:Name")
     parser.add_option("-t", "--tab", help="Tab delimited, skip header - useful in shell scripts", action="store_true", default=False)
+    parser.add_option("-f", "--filter", help="Filter option sent to DescribeInstances API call, format is key1=value1,key2=value2,...", default=None)
     (options, args) = parser.parse_args()
 
+
     # Connect the region
     for r in regions():
         if r.name == options.region:
@@ -62,13 +64,19 @@
             format_string += "%%-%ds" % HEADERS[h]['length']
 
 
+    # Parse filters (if any)
+    if options.filter:
+        filters = dict([entry.split('=') for entry in options.filter.split(',')])
+    else:
+        filters = {}
+
     # List and print
 
     if not options.tab:
         print format_string % headers
         print "-" * len(format_string % headers)
 
-    for r in ec2.get_all_instances():
+    for r in ec2.get_all_instances(filters=filters):
         groups = [g.name for g in r.groups]
         for i in r.instances:
             i.groups = ','.join(groups)
diff --git a/bin/mturk b/bin/mturk
new file mode 100755
index 0000000..e0b4bab
--- /dev/null
+++ b/bin/mturk
@@ -0,0 +1,465 @@
+#!/usr/bin/env python
+# Copyright 2012 Kodi Arfer
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+
+import argparse # Hence, Python 2.7 is required.
+import sys
+import os.path
+import string
+import inspect
+import datetime, calendar
+import boto.mturk.connection, boto.mturk.price, boto.mturk.question, boto.mturk.qualification
+from boto.compat import json
+
+# --------------------------------------------------
+# Globals
+# -------------------------------------------------
+
+interactive = False
+con = None
+mturk_website = None
+
+default_nicknames_path = os.path.expanduser('~/.boto_mturkcli_hit_nicknames')
+nicknames = {}
+nickname_pool = set(string.ascii_lowercase)
+
+time_units = dict(
+    s = 1,
+    min = 60,
+    h = 60 * 60,
+    d = 24 * 60 * 60)
+
+qual_requirements = dict(
+    Adult = '00000000000000000060',
+    Locale = '00000000000000000071',
+    NumberHITsApproved = '00000000000000000040',
+    PercentAssignmentsSubmitted = '00000000000000000000',
+    PercentAssignmentsAbandoned = '00000000000000000070',
+    PercentAssignmentsReturned = '000000000000000000E0',
+    PercentAssignmentsApproved = '000000000000000000L0',
+    PercentAssignmentsRejected = '000000000000000000S0')
+
+qual_comparators = {v : k for k, v in dict(
+    LessThan = '<', LessThanOrEqualTo = '<=',
+    GreaterThan = '>', GreaterThanOrEqualTo = '>=',
+    EqualTo = '==', NotEqualTo = '!=',
+    Exists = 'exists').items()}
+
+example_config_file = '''Example configuration file:
+
+  {
+    "title": "Pick your favorite color",
+    "description": "In this task, you are asked to pick your favorite color.",
+    "reward": 0.50,
+    "assignments": 10,
+    "duration": "20 min",
+    "keywords": ["color", "favorites", "survey"],
+    "lifetime": "7 d",
+    "approval_delay": "14 d",
+    "qualifications": [
+        "PercentAssignmentsApproved > 90",
+        "Locale == US",
+        "2ARFPLSP75KLA8M8DH1HTEQVJT3SY6 exists"
+    ],
+    "question_url": "http://example.com/myhit",
+    "question_frame_height": 450
+  }'''
+
+# --------------------------------------------------
+# Subroutines
+# --------------------------------------------------
+
+def unjson(path):
+    with open(path) as o:
+        return json.load(o)
+
+def add_argparse_arguments(parser):
+    parser.add_argument('-P', '--production',
+        dest = 'sandbox', action = 'store_false', default = True,
+        help = 'use the production site (default: use the sandbox)')
+    parser.add_argument('--nicknames',
+        dest = 'nicknames_path', metavar = 'PATH',
+        default = default_nicknames_path,
+        help = 'where to store HIT nicknames (default: {})'.format(
+            default_nicknames_path))
+
+def init_by_args(args):
+    init(args.sandbox, args.nicknames_path)
+
+def init(sandbox = False, nicknames_path = default_nicknames_path):
+    global con, mturk_website, nicknames, original_nicknames
+
+    mturk_website = 'workersandbox.mturk.com' if sandbox else 'www.mturk.com'
+    con = boto.mturk.connection.MTurkConnection(
+        host = 'mechanicalturk.sandbox.amazonaws.com' if sandbox else 'mechanicalturk.amazonaws.com')
+
+    try:
+        nicknames = unjson(nicknames_path)
+    except IOError:
+        nicknames = {}
+    original_nicknames = nicknames.copy()
+
+def save_nicknames(nicknames_path = default_nicknames_path):
+    if nicknames != original_nicknames:
+        with open(nicknames_path, 'w') as o:
+            json.dump(nicknames, o, sort_keys = True, indent = 4)
+            print >>o
+
+def parse_duration(s):
+    '''Parses durations like "2 d", "48 h", "2880 min",
+"172800 s", or "172800".'''
+    x = s.split()
+    return int(x[0]) * time_units['s' if len(x) == 1 else x[1]]
+def display_duration(n):
+    for unit, m in sorted(time_units.items(), key = lambda x: -x[1]):
+        if n % m == 0:
+            return '{} {}'.format(n / m, unit)
+
+def parse_qualification(inp):
+    '''Parses qualifications like "PercentAssignmentsApproved > 90",
+"Locale == US", and "2ARFPLSP75KLA8M8DH1HTEQVJT3SY6 exists".'''
+    inp = inp.split()
+    name, comparator, value = inp.pop(0), inp.pop(0), (inp[0] if len(inp) else None)
+    qtid = qual_requirements.get(name)
+    if qtid is None:
+      # Treat "name" as a Qualification Type ID.
+        qtid = name
+    if qtid == qual_requirements['Locale']:
+        return boto.mturk.qualification.LocaleRequirement(
+            qual_comparators[comparator],
+            value,
+            required_to_preview = False)
+    return boto.mturk.qualification.Requirement(
+        qtid,
+        qual_comparators[comparator],
+        value,
+        required_to_preview = qtid == qual_requirements['Adult'])
+          # Thus required_to_preview is true only for the
+          # Worker_Adult requirement.
+
+def preview_url(hit):
+    return 'https://{}/mturk/preview?groupId={}'.format(
+        mturk_website, hit.HITTypeId)
+
+def parse_timestamp(s):
+    '''Takes a timestamp like "2012-11-24T16:34:41Z".
+
+Returns a datetime object in the local time zone.'''
+    return datetime.datetime.fromtimestamp(
+        calendar.timegm(
+        datetime.datetime.strptime(s, '%Y-%m-%dT%H:%M:%SZ').timetuple()))
+
+def get_hitid(nickname_or_hitid):
+    return nicknames.get(nickname_or_hitid) or nickname_or_hitid
+
+def get_nickname(hitid):
+    for k, v in nicknames.items():
+        if v == hitid:
+            return k
+    return None
+
+def display_datetime(dt):
+    return dt.strftime('%e %b %Y, %l:%M %P')
+
+def display_hit(hit, verbose = False):
+    et = parse_timestamp(hit.Expiration)
+    return '\n'.join([
+        '{} - {} ({}, {}, {})'.format(
+            get_nickname(hit.HITId),
+            hit.Title,
+            hit.FormattedPrice,
+            display_duration(int(hit.AssignmentDurationInSeconds)),
+            hit.HITStatus),
+        'HIT ID: ' + hit.HITId,
+        'Type ID: ' + hit.HITTypeId,
+        'Group ID: ' + hit.HITGroupId,
+        'Preview: ' + preview_url(hit),
+        'Created {}   {}'.format(
+            display_datetime(parse_timestamp(hit.CreationTime)),
+            'Expired' if et <= datetime.datetime.now() else
+                'Expires ' + display_datetime(et)),
+        'Assignments: {} -- {} avail, {} pending, {} reviewable, {} reviewed'.format(
+            hit.MaxAssignments,
+            hit.NumberOfAssignmentsAvailable,
+            hit.NumberOfAssignmentsPending,
+            int(hit.MaxAssignments) - (int(hit.NumberOfAssignmentsAvailable) + int(hit.NumberOfAssignmentsPending) + int(hit.NumberOfAssignmentsCompleted)),
+            hit.NumberOfAssignmentsCompleted)
+            if hasattr(hit, 'NumberOfAssignmentsAvailable')
+            else 'Assignments: {} total'.format(hit.MaxAssignments),
+            # For some reason, SearchHITs includes the
+            # NumberOfAssignmentsFoobar fields but GetHIT doesn't.
+        ] + ([] if not verbose else [
+            '\nDescription: ' + hit.Description,
+            '\nKeywords: ' + hit.Keywords
+        ])) + '\n'
+
+def digest_assignment(a):
+    return dict(
+        answers = {str(x.qid): str(x.fields[0]) for x in a.answers[0]},
+        **{k: str(getattr(a, k)) for k in (
+            'AcceptTime', 'SubmitTime',
+            'HITId', 'AssignmentId', 'WorkerId',
+            'AssignmentStatus')})
+
+# --------------------------------------------------
+# Commands
+# --------------------------------------------------
+
+def get_balance():
+    return con.get_account_balance()
+
+def show_hit(hit):
+    return display_hit(con.get_hit(hit)[0], verbose = True)
+
+def list_hits():
+    'Lists your 10 most recently created HITs, with the most recent last.'
+    return '\n'.join(reversed(map(display_hit, con.search_hits(
+        sort_by = 'CreationTime',
+        sort_direction = 'Descending',
+        page_size = 10))))
+
+def make_hit(title, description, keywords, reward, question_url, question_frame_height, duration, assignments, approval_delay, lifetime, qualifications = []):
+    r = con.create_hit(
+        title = title,
+        description = description,
+        keywords = con.get_keywords_as_string(keywords),
+        reward = con.get_price_as_price(reward),
+        question = boto.mturk.question.ExternalQuestion(
+            question_url,
+            question_frame_height),
+        duration = parse_duration(duration),
+        qualifications = boto.mturk.qualification.Qualifications(
+            map(parse_qualification, qualifications)),
+        max_assignments = assignments,
+        approval_delay = parse_duration(approval_delay),
+        lifetime = parse_duration(lifetime))
+    nick = None
+    available_nicks = nickname_pool - set(nicknames.keys())
+    if available_nicks:
+        nick = min(available_nicks)
+        nicknames[nick] = r[0].HITId
+    if interactive:
+        print 'Nickname:', nick
+        print 'HIT ID:', r[0].HITId
+        print 'Preview:', preview_url(r[0])
+    else:
+        return r[0]
+
+def extend_hit(hit, assignments_increment = None, expiration_increment = None):
+    con.extend_hit(hit, assignments_increment, expiration_increment)
+
+def expire_hit(hit):
+    con.expire_hit(hit)
+
+def delete_hit(hit):
+    '''Deletes a HIT using DisableHIT.
+
+Unreviewed assignments get automatically approved. Unsubmitted
+assignments get automatically approved upon submission.
+
+The API docs say DisableHIT doesn't work with Reviewable HITs,
+but apparently, it does.'''
+    con.disable_hit(hit)
+    global nicknames
+    nicknames = {k: v for k, v in nicknames.items() if v != hit}
+
+def list_assignments(hit, only_reviewable = False):
+    assignments = map(digest_assignment, con.get_assignments(
+        hit_id = hit,
+        page_size = 100,
+        status = 'Submitted' if only_reviewable else None))
+    if interactive:
+        print json.dumps(assignments, sort_keys = True, indent = 4)
+        print ' '.join([a['AssignmentId'] for a in assignments])
+        print ' '.join([a['WorkerId'] + ',' + a['AssignmentId'] for a in assignments])
+    else:
+        return assignments
+
+def grant_bonus(message, amount, pairs):
+    for worker, assignment in pairs:
+        con.grant_bonus(worker, assignment, con.get_price_as_price(amount), message)
+        if interactive: print 'Bonused', worker
+
+def approve_assignments(message, assignments):
+    for a in assignments:
+        con.approve_assignment(a, message)
+        if interactive: print 'Approved', a
+
+def reject_assignments(message, assignments):
+    for a in assignments:
+        con.reject_assignment(a, message)
+        if interactive: print 'Rejected', a
+
+def unreject_assignments(message, assignments):
+    for a in assignments:
+        con.approve_rejected_assignment(a, message)
+        if interactive: print 'Unrejected', a
+
+def notify_workers(subject, text, workers):
+    con.notify_workers(workers, subject, text)
+
+# --------------------------------------------------
+# Mainline code
+# --------------------------------------------------
+
+if __name__ == '__main__':
+    interactive = True
+
+    parser = argparse.ArgumentParser()
+    add_argparse_arguments(parser)
+    subs = parser.add_subparsers()
+
+    sub = subs.add_parser('bal',
+        help = 'display your prepaid balance')
+    sub.set_defaults(f = get_balance, a = lambda: [])
+
+    sub = subs.add_parser('hit',
+        help = 'get information about a HIT')
+    sub.add_argument('hit',
+        help = 'nickname or ID of the HIT to show')
+    sub.set_defaults(f = show_hit, a = lambda:
+        [get_hitid(args.hit)])
+
+    sub = subs.add_parser('hits',
+        help = 'list all your HITs')
+    sub.set_defaults(f = list_hits, a = lambda: [])
+
+    sub = subs.add_parser('new',
+        help = 'create a new HIT (external questions only)',
+        epilog = example_config_file,
+        formatter_class = argparse.RawDescriptionHelpFormatter)
+    sub.add_argument('json_path',
+        help = 'path to JSON configuration file for the HIT')
+    sub.add_argument('-u', '--question-url', dest = 'question_url',
+        metavar = 'URL',
+        help = 'URL for the external question')
+    sub.add_argument('-a', '--assignments', dest = 'assignments',
+        type = int, metavar = 'N',
+        help = 'number of assignments')
+    sub.add_argument('-r', '--reward', dest = 'reward',
+        type = float, metavar = 'PRICE',
+        help = 'reward amount, in USD')
+    sub.set_defaults(f = make_hit, a = lambda: dict(
+        unjson(args.json_path).items() + [(k, getattr(args, k))
+            for k in ('question_url', 'assignments', 'reward')
+            if getattr(args, k) is not None]))
+
+    sub = subs.add_parser('extend',
+        help = 'add assignments or time to a HIT')
+    sub.add_argument('hit',
+        help = 'nickname or ID of the HIT to extend')
+    sub.add_argument('-a', '--assignments', dest = 'assignments',
+        metavar = 'N', type = int,
+        help = 'number of assignments to add')
+    sub.add_argument('-t', '--time', dest = 'time',
+        metavar = 'T',
+        help = 'amount of time to add to the expiration date')
+    sub.set_defaults(f = extend_hit, a = lambda:
+        [get_hitid(args.hit), args.assignments,
+            args.time and parse_duration(args.time)])
+
+    sub = subs.add_parser('expire',
+        help = 'force a HIT to expire without deleting it')
+    sub.add_argument('hit',
+        help = 'nickname or ID of the HIT to expire')
+    sub.set_defaults(f = expire_hit, a = lambda:
+        [get_hitid(args.hit)])
+
+    sub = subs.add_parser('rm',
+        help = 'delete a HIT')
+    sub.add_argument('hit',
+        help = 'nickname or ID of the HIT to delete')
+    sub.set_defaults(f = delete_hit, a = lambda:
+        [get_hitid(args.hit)])
+
+    sub = subs.add_parser('as',
+        help = "list a HIT's submitted assignments")
+    sub.add_argument('hit',
+        help = 'nickname or ID of the HIT to get assignments for')
+    sub.add_argument('-r', '--reviewable', dest = 'only_reviewable',
+        action = 'store_true',
+        help = 'show only unreviewed assignments')
+    sub.set_defaults(f = list_assignments, a = lambda:
+        [get_hitid(args.hit), args.only_reviewable])
+
+    for command, fun, helpmsg in [
+            ('approve', approve_assignments, 'approve assignments'),
+            ('reject', reject_assignments, 'reject assignments'),
+            ('unreject', unreject_assignments, 'approve previously rejected assignments')]:
+        sub = subs.add_parser(command, help = helpmsg)
+        sub.add_argument('assignment', nargs = '+',
+            help = 'ID of an assignment')
+        sub.add_argument('-m', '--message', dest = 'message',
+            metavar = 'TEXT',
+            help = 'feedback message shown to workers')
+        sub.set_defaults(f = fun, a = lambda:
+            [args.message, args.assignment])
+
+    sub = subs.add_parser('bonus',
+        help = 'give some workers a bonus')
+    sub.add_argument('amount', type = float,
+        help = 'bonus amount, in USD')
+    sub.add_argument('message',
+        help = 'the reason for the bonus (shown to workers in an email sent by MTurk)')
+    sub.add_argument('widaid', nargs = '+',
+        help = 'a WORKER_ID,ASSIGNMENT_ID pair')
+    sub.set_defaults(f = grant_bonus, a = lambda:
+        [args.message, args.amount,
+            [p.split(',') for p in args.widaid]])
+
+    sub = subs.add_parser('notify',
+        help = 'send a message to some workers')
+    sub.add_argument('subject',
+        help = 'subject of the message')
+    sub.add_argument('message',
+        help = 'text of the message')
+    sub.add_argument('worker', nargs = '+',
+        help = 'ID of a worker')
+    sub.set_defaults(f = notify_workers, a = lambda:
+        [args.subject, args.message, args.worker])
+
+    args = parser.parse_args()
+
+    init_by_args(args)
+
+    f = args.f
+    a = args.a()
+    if isinstance(a, dict):
+        # We do some introspective gymnastics so we can produce a
+        # less incomprehensible error message if some arguments
+        # are missing.
+        spec = inspect.getargspec(f)
+        missing = set(spec.args[: len(spec.args) - len(spec.defaults)]) - set(a.keys())
+        if missing:
+            raise ValueError('Missing arguments: ' + ', '.join(missing))
+        doit = lambda: f(**a)
+    else:
+        doit = lambda: f(*a)
+
+    try:
+        x = doit()
+    except boto.mturk.connection.MTurkRequestError as e:
+        print 'MTurk error:', e.error_message
+        sys.exit(1)
+
+    if x is not None:
+        print x
+
+    save_nicknames()
diff --git a/bin/s3put b/bin/s3put
index 9e5c5f2..01d9fcb 100755
--- a/bin/s3put
+++ b/bin/s3put
@@ -15,22 +15,45 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-import getopt, sys, os
+import getopt
+import sys
+import os
 import boto
-from boto.exception import S3ResponseError
+
+try:
+    # multipart portions copyright Fabian Topfstedt
+    # https://gist.github.com/924094
+
+    import math
+    import mimetypes
+    from multiprocessing import Pool
+    from boto.s3.connection import S3Connection
+    from filechunkio import FileChunkIO
+    multipart_capable = True
+    usage_flag_multipart_capable = """ [--multipart]"""
+    usage_string_multipart_capable = """
+        multipart - Upload files as multiple parts. This needs filechunkio."""
+except ImportError as err:
+    multipart_capable = False
+    usage_flag_multipart_capable = ""
+    usage_string_multipart_capable = '\n\n     "' + \
+        err.message[len('No module named '):] + \
+        '" is missing for multipart support '
+
 
 usage_string = """
 SYNOPSIS
     s3put [-a/--access_key <access_key>] [-s/--secret_key <secret_key>]
           -b/--bucket <bucket_name> [-c/--callback <num_cb>]
           [-d/--debug <debug_level>] [-i/--ignore <ignore_dirs>]
-          [-n/--no_op] [-p/--prefix <prefix>] [-q/--quiet]
-          [-g/--grant grant] [-w/--no_overwrite] [-r/--reduced] path
+          [-n/--no_op] [-p/--prefix <prefix>] [-k/--key_prefix <key_prefix>]
+          [-q/--quiet] [-g/--grant grant] [-w/--no_overwrite] [-r/--reduced]
+          [--header] [--host <s3_host>]""" + usage_flag_multipart_capable + """ path [path...]
 
     Where
         access_key - Your AWS Access Key ID.  If not supplied, boto will
@@ -44,7 +67,7 @@
         path - A path to a directory or file that represents the items
                to be uploaded.  If the path points to an individual file,
                that file will be uploaded to the specified bucket.  If the
-               path points to a directory, s3_it will recursively traverse
+               path points to a directory, it will recursively traverse
                the directory and upload all files to the specified bucket.
         debug_level - 0 means no debug output (default), 1 means normal
                       debug output from boto, and 2 means boto debug output
@@ -64,63 +87,176 @@
                      /bar/fie.baz
                  The prefix must end in a trailing separator and if it
                  does not then one will be added.
+        key_prefix - A prefix to be added to the S3 key name, after any
+                     stripping of the file path is done based on the
+                     "-p/--prefix" option.
+        reduced - Use Reduced Redundancy storage
         grant - A canned ACL policy that will be granted on each file
                 transferred to S3.  The value of provided must be one
                 of the "canned" ACL policies supported by S3:
                 private|public-read|public-read-write|authenticated-read
-        no_overwrite - No files will be overwritten on S3, if the file/key 
-                       exists on s3 it will be kept. This is useful for 
-                       resuming interrupted transfers. Note this is not a 
-                       sync, even if the file has been updated locally if 
-                       the key exists on s3 the file on s3 will not be 
+        no_overwrite - No files will be overwritten on S3; if the file/key
+                       exists on s3 it will be kept. This is useful for
+                       resuming interrupted transfers. Note this is not a
+                       sync: even if the file has been updated locally, the
+                       file on s3 will not be updated if the key already
+                       exists on s3.
-        reduced - Use Reduced Redundancy storage
+        header - key=value pairs of extra header(s) to pass along in the
+                 request
+        host - Hostname override, for using an endpoint other than AWS S3
+""" + usage_string_multipart_capable + """
 
 
      If the -n option is provided, no files will be transferred to S3 but
      informational messages will be printed about what would happen.
 """
+
+
 def usage():
     print usage_string
     sys.exit()
-  
+
+
 def submit_cb(bytes_so_far, total_bytes):
     print '%d bytes transferred / %d bytes total' % (bytes_so_far, total_bytes)
 
-def get_key_name(fullpath, prefix):
-    key_name = fullpath[len(prefix):]
+
+def get_key_name(fullpath, prefix, key_prefix):
+    if fullpath.startswith(prefix):
+        key_name = fullpath[len(prefix):]
+    else:
+        key_name = fullpath
     l = key_name.split(os.sep)
-    return '/'.join(l)
+    return key_prefix + '/'.join(l)
+
+
+def _upload_part(bucketname, aws_key, aws_secret, multipart_id, part_num,
+                 source_path, offset, bytes, debug, cb, num_cb,
+                 amount_of_retries=10):
+    """
+    Uploads a part with retries.
+    """
+    if debug == 1:
+        print "_upload_part(%s, %s, %s)" % (source_path, offset, bytes)
+
+    def _upload(retries_left=amount_of_retries):
+        try:
+            if debug == 1:
+                print 'Start uploading part #%d ...' % part_num
+            conn = S3Connection(aws_key, aws_secret)
+            conn.debug = debug
+            bucket = conn.get_bucket(bucketname)
+            for mp in bucket.get_all_multipart_uploads():
+                if mp.id == multipart_id:
+                    with FileChunkIO(source_path, 'r', offset=offset,
+                                     bytes=bytes) as fp:
+                        mp.upload_part_from_file(fp=fp, part_num=part_num,
+                                                 cb=cb, num_cb=num_cb)
+                    break
+        except Exception, exc:
+            if retries_left:
+                _upload(retries_left=retries_left - 1)
+            else:
+                print 'Failed uploading part #%d' % part_num
+                raise exc
+        else:
+            if debug == 1:
+                print '... Uploaded part #%d' % part_num
+
+    _upload()
+
+
+def multipart_upload(bucketname, aws_key, aws_secret, source_path, keyname,
+                     reduced, debug, cb, num_cb, acl='private', headers={},
+                     guess_mimetype=True, parallel_processes=4):
+    """
+    Parallel multipart upload.
+    """
+    conn = S3Connection(aws_key, aws_secret)
+    conn.debug = debug
+    bucket = conn.get_bucket(bucketname)
+
+    if guess_mimetype:
+        mtype = mimetypes.guess_type(keyname)[0] or 'application/octet-stream'
+        headers.update({'Content-Type': mtype})
+
+    mp = bucket.initiate_multipart_upload(keyname, headers=headers,
+                                          reduced_redundancy=reduced)
+
+    source_size = os.stat(source_path).st_size
+    bytes_per_chunk = max(int(math.sqrt(5242880) * math.sqrt(source_size)),
+                          5242880)
+    chunk_amount = int(math.ceil(source_size / float(bytes_per_chunk)))
+
+    pool = Pool(processes=parallel_processes)
+    for i in range(chunk_amount):
+        offset = i * bytes_per_chunk
+        remaining_bytes = source_size - offset
+        bytes = min([bytes_per_chunk, remaining_bytes])
+        part_num = i + 1
+        pool.apply_async(_upload_part, [bucketname, aws_key, aws_secret, mp.id,
+                                        part_num, source_path, offset, bytes,
+                                        debug, cb, num_cb])
+    pool.close()
+    pool.join()
+
+    if len(mp.get_all_parts()) == chunk_amount:
+        mp.complete_upload()
+        key = bucket.get_key(keyname)
+        key.set_acl(acl)
+    else:
+        mp.cancel_upload()
+
+
+def singlepart_upload(bucket, key_name, fullpath, *kargs, **kwargs):
+    """
+    Single upload.
+    """
+    k = bucket.new_key(key_name)
+    k.set_contents_from_filename(fullpath, *kargs, **kwargs)
+
+
+def expand_path(path):
+    path = os.path.expanduser(path)
+    path = os.path.expandvars(path)
+    return os.path.abspath(path)
+
 
 def main():
-    try:
-        opts, args = getopt.getopt(
-                sys.argv[1:], 'a:b:c::d:g:hi:np:qs:vwr',
-                ['access_key=', 'bucket=', 'callback=', 'debug=', 'help',
-                 'grant=', 'ignore=', 'no_op', 'prefix=', 'quiet',
-                 'secret_key=', 'no_overwrite', 'reduced', "header="]
-                )
-    except:
-        usage()
-    ignore_dirs = []
+
+    # default values
     aws_access_key_id = None
     aws_secret_access_key = None
     bucket_name = ''
-    total = 0
+    ignore_dirs = []
     debug = 0
     cb = None
     num_cb = 0
     quiet = False
     no_op = False
     prefix = '/'
+    key_prefix = ''
     grant = None
     no_overwrite = False
     reduced = False
     headers = {}
+    host = None
+    multipart_requested = False
+
+    try:
+        opts, args = getopt.getopt(
+            sys.argv[1:], 'a:b:c::d:g:hi:k:np:qs:wr',
+            ['access_key=', 'bucket=', 'callback=', 'debug=', 'help', 'grant=',
+             'ignore=', 'key_prefix=', 'no_op', 'prefix=', 'quiet',
+             'secret_key=', 'no_overwrite', 'reduced', 'header=', 'multipart',
+             'host='])
+    except:
+        usage()
+
+    # parse opts
     for o, a in opts:
         if o in ('-h', '--help'):
             usage()
-            sys.exit()
         if o in ('-a', '--access_key'):
             aws_access_key_id = a
         if o in ('-b', '--bucket'):
@@ -138,78 +274,101 @@
             no_op = True
         if o in ('-w', '--no_overwrite'):
             no_overwrite = True
-        if o in ('-r', '--reduced'):
-            reduced = True
         if o in ('-p', '--prefix'):
             prefix = a
             if prefix[-1] != os.sep:
                 prefix = prefix + os.sep
+            prefix = expand_path(prefix)
+        if o in ('-k', '--key_prefix'):
+            key_prefix = a
         if o in ('-q', '--quiet'):
             quiet = True
         if o in ('-s', '--secret_key'):
             aws_secret_access_key = a
+        if o in ('-r', '--reduced'):
+            reduced = True
         if o in ('--header'):
-            (k,v) = a.split("=")
+            (k, v) = a.split("=")
             headers[k] = v
-    if len(args) != 1:
-        print usage()
-    path = os.path.expanduser(args[0])
-    path = os.path.expandvars(path)
-    path = os.path.abspath(path)
-    if bucket_name:
+        if o in ('--host'):
+            host = a
+        if o in ('--multipart'):
+            if multipart_capable:
+                multipart_requested = True
+            else:
+                print "multipart upload requested but not capable"
+                sys.exit()
+
+    if len(args) < 1:
+        usage()
+
+    if not bucket_name:
+        print "bucket name is required!"
+        usage()
+
+    if host:
+        c = boto.connect_s3(host=host, aws_access_key_id=aws_access_key_id,
+                        aws_secret_access_key=aws_secret_access_key)
+    else:
         c = boto.connect_s3(aws_access_key_id=aws_access_key_id,
-                            aws_secret_access_key=aws_secret_access_key)
-        c.debug = debug
-        b = c.get_bucket(bucket_name)
+                        aws_secret_access_key=aws_secret_access_key)
+    c.debug = debug
+    b = c.get_bucket(bucket_name)
+    existing_keys_to_check_against = []
+    files_to_check_for_upload = []
+
+    for path in args:
+        path = expand_path(path)
+        # upload a directory of files recursively
         if os.path.isdir(path):
             if no_overwrite:
                 if not quiet:
                     print 'Getting list of existing keys to check against'
-                keys = []
-                for key in b.list(get_key_name(path, prefix)):
-                    keys.append(key.name)
+                for key in b.list(get_key_name(path, prefix, key_prefix)):
+                    existing_keys_to_check_against.append(key.name)
             for root, dirs, files in os.walk(path):
                 for ignore in ignore_dirs:
                     if ignore in dirs:
                         dirs.remove(ignore)
-                for file in files:
-                    if file.startswith("."):
+                for path in files:
+                    if path.startswith("."):
                         continue
-                    fullpath = os.path.join(root, file)
-                    key_name = get_key_name(fullpath, prefix)
-                    copy_file = True
-                    if no_overwrite:
-                        if key_name in keys:
-                            copy_file = False
-                            if not quiet:
-                                print 'Skipping %s as it exists in s3' % file
-                    if copy_file:
-                        if not quiet:
-                            print 'Copying %s to %s/%s' % (file, bucket_name, key_name)
-                        if not no_op:
-                            k = b.new_key(key_name)
-                            k.set_contents_from_filename(
-                                    fullpath, cb=cb, num_cb=num_cb,
-                                    policy=grant, reduced_redundancy=reduced,
-                                    headers=headers
-                                    )
-                    total += 1
+                    files_to_check_for_upload.append(os.path.join(root, path))
+
+        # upload a single file
         elif os.path.isfile(path):
-            key_name = get_key_name(path, prefix)
-            copy_file = True
-            if no_overwrite:
-                if b.get_key(key_name):
-                    copy_file = False
-                    if not quiet:
-                        print 'Skipping %s as it exists in s3' % path
-            if copy_file:
-                k = b.new_key(key_name)
-                k.set_contents_from_filename(path, cb=cb, num_cb=num_cb,
-                                             policy=grant,
-                                             reduced_redundancy=reduced, headers=headers)
-    else:
-        print usage()
+            fullpath = os.path.abspath(path)
+            key_name = get_key_name(fullpath, prefix, key_prefix)
+            files_to_check_for_upload.append(fullpath)
+            existing_keys_to_check_against.append(key_name)
+
+        # we are trying to upload something unknown
+        else:
+            print "I don't know what %s is, so i can't upload it" % path
+
+    for fullpath in files_to_check_for_upload:
+        key_name = get_key_name(fullpath, prefix, key_prefix)
+
+        if no_overwrite and key_name in existing_keys_to_check_against:
+            if not quiet:
+                print 'Skipping %s as it exists in s3' % fullpath
+            continue
+
+        if not quiet:
+            print 'Copying %s to %s/%s' % (fullpath, bucket_name, key_name)
+
+        if not no_op:
+            # 0-byte files don't work and also don't need multipart upload
+            if os.stat(fullpath).st_size != 0 and multipart_capable and \
+                    multipart_requested:
+                multipart_upload(bucket_name, aws_access_key_id,
+                                 aws_secret_access_key, fullpath, key_name,
+                                 reduced, debug, cb, num_cb,
+                                 grant or 'private', headers)
+            else:
+                singlepart_upload(b, key_name, fullpath, cb=cb, num_cb=num_cb,
+                                  policy=grant, reduced_redundancy=reduced,
+                                  headers=headers)
 
 if __name__ == "__main__":
     main()
-
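
To make the prefix/key_prefix interaction spelled out in the usage text above
concrete, here is a small sketch of what get_key_name() computes (the paths
and the "backups/" key prefix are invented for illustration):

    import os

    def get_key_name(fullpath, prefix, key_prefix):
        # Same logic as in bin/s3put above.
        if fullpath.startswith(prefix):
            key_name = fullpath[len(prefix):]
        else:
            key_name = fullpath
        return key_prefix + '/'.join(key_name.split(os.sep))

    # With prefix '/home/foo/' stripped and key_prefix 'backups/' prepended:
    print get_key_name('/home/foo/bar/fie.baz', '/home/foo/', 'backups/')
    # -> backups/bar/fie.baz
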
diff --git a/bin/sdbadmin b/bin/sdbadmin
index 7e87c7b..3fbd3f4 100755
--- a/bin/sdbadmin
+++ b/bin/sdbadmin
@@ -26,15 +26,7 @@
 import boto
 import time
 from boto import sdb
-
-# Allow support for JSON
-try:
-    import simplejson as json
-except:
-    try:
-        import json
-    except:
-        json = False
+from boto.compat import json
 
 def choice_input(options, default=None, title=None):
     """
diff --git a/boto/__init__.py b/boto/__init__.py
index b0eb6bd..2166670 100644
--- a/boto/__init__.py
+++ b/boto/__init__.py
@@ -2,6 +2,7 @@
 # Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
 # Copyright (c) 2011, Nexenta Systems Inc.
 # Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# Copyright (c) 2010, Google, Inc.
 # All rights reserved.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
@@ -35,12 +36,20 @@
 import urlparse
 from boto.exception import InvalidUriError
 
-__version__ = '2.6.0-dev'
+__version__ = '2.9.5'
 Version = __version__  # for backware compatibility
 
 UserAgent = 'Boto/%s (%s)' % (__version__, sys.platform)
 config = Config()
 
+# Regex to disallow buckets violating charset or not [3..255] chars total.
+BUCKET_NAME_RE = re.compile(r'^[a-zA-Z0-9][a-zA-Z0-9\._-]{1,253}[a-zA-Z0-9]$')
+# Regex to disallow buckets with individual DNS labels longer than 63.
+TOO_LONG_DNS_NAME_COMP = re.compile(r'[-_a-z0-9]{64}')
+GENERATION_RE = re.compile(r'(?P<versionless_uri_str>.+)'
+                           r'#(?P<generation>[0-9]+)$')
+VERSION_RE = re.compile('(?P<versionless_uri_str>.+)#(?P<version_id>.+)$')
+
 
 def init_logging():
     for file in BotoConfigLocations:
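The two new module-level patterns replace the inline regexes that storage_uri previously compiled on every call. A bucket name is accepted only when the first pattern matches and the second finds no 64-character DNS label; a small standalone check mirroring that logic (is_valid_bucket_name is illustrative, not a boto API), for example::

    import re

    BUCKET_NAME_RE = re.compile(r'^[a-zA-Z0-9][a-zA-Z0-9\._-]{1,253}[a-zA-Z0-9]$')
    TOO_LONG_DNS_NAME_COMP = re.compile(r'[-_a-z0-9]{64}')

    def is_valid_bucket_name(name):
        # Valid iff the overall charset/length pattern matches and no
        # single DNS label runs to 64 characters.
        return bool(BUCKET_NAME_RE.match(name)) and \
            not TOO_LONG_DNS_NAME_COMP.search(name)

    assert is_valid_bucket_name('my-bucket.example')
    assert not is_valid_bucket_name('ab')              # shorter than 3 chars
    assert not is_valid_bucket_name('a' * 64 + '.x')   # 64-char label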
@@ -635,9 +644,81 @@
     return Layer1(aws_access_key_id, aws_secret_access_key, **kwargs)
 
 
+def connect_elastictranscoder(aws_access_key_id=None,
+                              aws_secret_access_key=None,
+                              **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.ets.layer1.ElasticTranscoderConnection`
+    :return: A connection to Amazon's Elastic Transcoder service
+    """
+    from boto.elastictranscoder.layer1 import ElasticTranscoderConnection
+    return ElasticTranscoderConnection(
+        aws_access_key_id=aws_access_key_id,
+        aws_secret_access_key=aws_secret_access_key,
+        **kwargs)
+
+
+def connect_opsworks(aws_access_key_id=None,
+                     aws_secret_access_key=None,
+                     **kwargs):
+    from boto.opsworks.layer1 import OpsWorksConnection
+    return OpsWorksConnection(
+        aws_access_key_id=aws_access_key_id,
+        aws_secret_access_key=aws_secret_access_key,
+        **kwargs)
+
+
+def connect_redshift(aws_access_key_id=None,
+                     aws_secret_access_key=None,
+                     **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.redshift.layer1.RedshiftConnection`
+    :return: A connection to Amazon's Redshift service
+    """
+    from boto.redshift.layer1 import RedshiftConnection
+    return RedshiftConnection(
+        aws_access_key_id=aws_access_key_id,
+        aws_secret_access_key=aws_secret_access_key,
+        **kwargs
+    )
+
+
+def connect_support(aws_access_key_id=None,
+                    aws_secret_access_key=None,
+                    **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.support.layer1.SupportConnection`
+    :return: A connection to Amazon's Support service
+    """
+    from boto.support.layer1 import SupportConnection
+    return SupportConnection(
+        aws_access_key_id=aws_access_key_id,
+        aws_secret_access_key=aws_secret_access_key,
+        **kwargs
+    )
+
+
 def storage_uri(uri_str, default_scheme='file', debug=0, validate=True,
                 bucket_storage_uri_class=BucketStorageUri,
-                suppress_consec_slashes=True):
+                suppress_consec_slashes=True, is_latest=False):
     """
     Instantiate a StorageUri from a URI string.
 
@@ -653,6 +734,9 @@
     :param bucket_storage_uri_class: Allows mocking for unit tests.
     :param suppress_consec_slashes: If provided, controls whether
         consecutive slashes will be suppressed in key paths.
+    :type is_latest: bool
+    :param is_latest: whether this versioned object represents the
+        current version.
 
     We allow validate to be disabled to allow caller
     to implement bucket-level wildcarding (outside the boto library;
@@ -664,14 +748,17 @@
     ``uri_str`` must be one of the following formats:
 
     * gs://bucket/name
+    * gs://bucket/name#ver
     * s3://bucket/name
     * gs://bucket
     * s3://bucket
     * filename (which could be a Unix path like /a/b/c or a Windows path like
       C:\a\b\c)
 
-    The last example uses the default scheme ('file', unless overridden)
+    The last example uses the default scheme ('file', unless overridden).
     """
+    version_id = None
+    generation = None
 
     # Manually parse URI components instead of using urlparse.urlparse because
     # what we're calling URIs don't really fit the standard syntax for URIs
@@ -688,7 +775,8 @@
             if not (platform.system().lower().startswith('windows')
                     and colon_pos == 1
                     and drive_char >= 'a' and drive_char <= 'z'):
-              raise InvalidUriError('"%s" contains ":" instead of "://"' % uri_str)
+              raise InvalidUriError('"%s" contains ":" instead of "://"' %
+                                    uri_str)
         scheme = default_scheme.lower()
         path = uri_str
     else:
@@ -707,23 +795,38 @@
     else:
         path_parts = path.split('/', 1)
         bucket_name = path_parts[0]
-        if (validate and bucket_name and
-            # Disallow buckets violating charset or not [3..255] chars total.
-            (not re.match('^[a-z0-9][a-z0-9\._-]{1,253}[a-z0-9]$', bucket_name)
-            # Disallow buckets with individual DNS labels longer than 63.
-             or re.search('[-_a-z0-9]{64}', bucket_name))):
-            raise InvalidUriError('Invalid bucket name in URI "%s"' % uri_str)
-        # If enabled, ensure the bucket name is valid, to avoid possibly
-        # confusing other parts of the code. (For example if we didn't
+        object_name = ''
+        # If validate enabled, ensure the bucket name is valid, to avoid
+        # possibly confusing other parts of the code. (For example if we didn't
         # catch bucket names containing ':', when a user tried to connect to
         # the server with that name they might get a confusing error about
         # non-integer port numbers.)
-        object_name = ''
+        if (validate and bucket_name and
+            (not BUCKET_NAME_RE.match(bucket_name)
+             or TOO_LONG_DNS_NAME_COMP.search(bucket_name))):
+            raise InvalidUriError('Invalid bucket name in URI "%s"' % uri_str)
+        if scheme == 'gs':
+            match = GENERATION_RE.search(path)
+            if match:
+                md = match.groupdict()
+                versionless_uri_str = md['versionless_uri_str']
+                path_parts = versionless_uri_str.split('/', 1)
+                generation = int(md['generation'])
+        elif scheme == 's3':
+            match = VERSION_RE.search(path)
+            if match:
+                md = match.groupdict()
+                versionless_uri_str = md['versionless_uri_str']
+                path_parts = versionless_uri_str.split('/', 1)
+                version_id = md['version_id']
+        else:
+            raise InvalidUriError('Unrecognized scheme "%s"' % scheme)
         if len(path_parts) > 1:
             object_name = path_parts[1]
         return bucket_storage_uri_class(
             scheme, bucket_name, object_name, debug,
-            suppress_consec_slashes=suppress_consec_slashes)
+            suppress_consec_slashes=suppress_consec_slashes,
+            version_id=version_id, generation=generation, is_latest=is_latest)
 
 
 def storage_uri_for_key(key):
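storage_uri now understands versioned object URIs: a trailing '#<generation>' on gs:// URIs and '#<version_id>' on s3:// URIs is split off and passed through to the BucketStorageUri constructor. A hedged usage sketch (bucket, object, generation, and version values are all hypothetical), for example::

    import boto

    # Google Cloud Storage object generation after '#':
    gs_uri = boto.storage_uri('gs://mybucket/myobject#1360000000000000')
    # S3 version id after '#':
    s3_uri = boto.storage_uri('s3://mybucket/myobject#3HL4kqtJlcpXrof3')
    # Both forms are parsed by the code above and handed to
    # BucketStorageUri via the new generation= / version_id= arguments.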
diff --git a/boto/auth.py b/boto/auth.py
index 29f9ac5..cd7ac68 100644
--- a/boto/auth.py
+++ b/boto/auth.py
@@ -32,13 +32,14 @@
 import boto.exception
 import boto.plugin
 import boto.utils
+import copy
+import datetime
+from email.utils import formatdate
 import hmac
 import sys
-import urllib
 import time
-import datetime
-import copy
-from email.utils import formatdate
+import urllib
+import posixpath
 
 from boto.auth_handler import AuthHandler
 from boto.exception import BotoClientError
@@ -164,9 +165,9 @@
         boto.log.debug('StringToSign:\n%s' % string_to_sign)
         b64_hmac = self.sign_string(string_to_sign)
         auth_hdr = self._provider.auth_header
-        headers['Authorization'] = ("%s %s:%s" %
-                                    (auth_hdr,
-                                     self._provider.access_key, b64_hmac))
+        auth = ("%s %s:%s" % (auth_hdr, self._provider.access_key, b64_hmac))
+        boto.log.debug('Signature:\n%s' % auth)
+        headers['Authorization'] = auth
 
 
 class HmacAuthV2Handler(AuthHandler, HmacKeys):
@@ -188,6 +189,9 @@
         headers = http_request.headers
         if 'Date' not in headers:
             headers['Date'] = formatdate(usegmt=True)
+        if self._provider.security_token:
+            key = self._provider.security_token_header
+            headers[key] = self._provider.security_token
 
         b64_hmac = self.sign_string(headers['Date'])
         auth_hdr = self._provider.auth_header
@@ -264,7 +268,7 @@
         headers_to_sign = self.headers_to_sign(http_request)
         canonical_headers = self.canonical_headers(headers_to_sign)
         string_to_sign = '\n'.join([http_request.method,
-                                    http_request.path,
+                                    http_request.auth_path,
                                     '',
                                     canonical_headers,
                                     '',
@@ -303,9 +307,15 @@
 
     capability = ['hmac-v4']
 
-    def __init__(self, host, config, provider):
+    def __init__(self, host, config, provider,
+                 service_name=None, region_name=None):
         AuthHandler.__init__(self, host, config, provider)
         HmacKeys.__init__(self, host, config, provider)
+        # You can set the service_name and region_name to override the
+        # values which would otherwise come from the endpoint, e.g.
+        # <service>.<region>.amazonaws.com.
+        self.service_name = service_name
+        self.region_name = region_name
 
     def _sign(self, key, msg, hex=False):
         if hex:
@@ -319,14 +329,22 @@
         Select the headers from the request that need to be included
         in the StringToSign.
         """
+        host_header_value = self.host_header(self.host, http_request)
         headers_to_sign = {}
-        headers_to_sign = {'Host': self.host}
+        headers_to_sign = {'Host': host_header_value}
         for name, value in http_request.headers.items():
             lname = name.lower()
             if lname.startswith('x-amz'):
                 headers_to_sign[name] = value
         return headers_to_sign
 
+    def host_header(self, host, http_request):
+        port = http_request.port
+        secure = http_request.protocol == 'https'
+        if ((port == 80 and not secure) or (port == 443 and secure)):
+            return host
+        return '%s:%s' % (host, port)
+
     def query_string(self, http_request):
         parameter_names = sorted(http_request.params.keys())
         pairs = []
@@ -337,12 +355,15 @@
         return '&'.join(pairs)
 
     def canonical_query_string(self, http_request):
+        # POST requests pass parameters in through the
+        # http_request.body field.
+        if http_request.method == 'POST':
+            return ""
         l = []
-        for param in http_request.params:
+        for param in sorted(http_request.params):
             value = str(http_request.params[param])
             l.append('%s=%s' % (urllib.quote(param, safe='-_.~'),
                                 urllib.quote(value, safe='-_.~')))
-        l = sorted(l)
         return '&'.join(l)
 
     def canonical_headers(self, headers_to_sign):
@@ -352,9 +373,9 @@
         case, sorting them in alphabetical order and then joining
         them into a string, separated by newlines.
         """
-        l = ['%s:%s' % (n.lower().strip(),
-                      headers_to_sign[n].strip()) for n in headers_to_sign]
-        l = sorted(l)
+        l = sorted(['%s:%s' % (n.lower().strip(),
+                    ' '.join(headers_to_sign[n].strip().split()))
+                    for n in headers_to_sign])
         return '\n'.join(l)
 
     def signed_headers(self, headers_to_sign):
@@ -363,7 +384,11 @@
         return ';'.join(l)
 
     def canonical_uri(self, http_request):
-        return http_request.path
+        # Normalize the path.
+        normalized = posixpath.normpath(http_request.auth_path)
+        # Then urlencode whatever's left.
+        encoded = urllib.quote(normalized)
+        return encoded
 
     def payload(self, http_request):
         body = http_request.body
@@ -396,13 +421,26 @@
         scope = []
         http_request.timestamp = http_request.headers['X-Amz-Date'][0:8]
         scope.append(http_request.timestamp)
+        # The service_name and region_name either come from:
+        # * The service_name/region_name attrs or (if these values are None)
+        # * parsed from the endpoint <service>.<region>.amazonaws.com.
         parts = http_request.host.split('.')
-        if len(parts) == 3:
-            http_request.region_name = 'us-east-1'
+        if self.region_name is not None:
+            region_name = self.region_name
         else:
-            http_request.region_name = parts[1]
+            if len(parts) == 3:
+                region_name = 'us-east-1'
+            else:
+                region_name = parts[1]
+        if self.service_name is not None:
+            service_name = self.service_name
+        else:
+            service_name = parts[0]
+
+        http_request.service_name = service_name
+        http_request.region_name = region_name
+
         scope.append(http_request.region_name)
-        http_request.service_name = parts[0]
         scope.append(http_request.service_name)
         scope.append('aws4_request')
         return '/'.join(scope)
@@ -443,6 +481,18 @@
         req.headers['X-Amz-Date'] = now.strftime('%Y%m%dT%H%M%SZ')
         if self._provider.security_token:
             req.headers['X-Amz-Security-Token'] = self._provider.security_token
+        qs = self.query_string(req)
+        if qs and req.method == 'POST':
+            # Stash request parameters into post body
+            # before we generate the signature.
+            req.body = qs
+            req.headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'
+            req.headers['Content-Length'] = str(len(req.body))
+        else:
+            # Safe to modify req.path here since
+            # the signature will use req.auth_path.
+            req.path = req.path.split('?')[0]
+            req.path = req.path + '?' + qs
         canonical_request = self.canonical_request(req)
         boto.log.debug('CanonicalRequest:\n%s' % canonical_request)
         string_to_sign = self.string_to_sign(req, canonical_request)
@@ -454,10 +504,6 @@
         l.append('SignedHeaders=%s' % self.signed_headers(headers_to_sign))
         l.append('Signature=%s' % signature)
         req.headers['Authorization'] = ','.join(l)
-        qs = self.query_string(req)
-        if qs:
-            req.path = req.path.split('?')[0]
-            req.path = req.path + '?' + qs
 
 
 class QuerySignatureHelper(HmacKeys):
@@ -519,6 +565,11 @@
     SignatureVersion = 1
     capability = ['sign-v1', 'mturk']
 
+    def __init__(self, *args, **kw):
+        QuerySignatureHelper.__init__(self, *args, **kw)
+        AuthHandler.__init__(self, *args, **kw)
+        self._hmac_256 = None
+
     def _calc_signature(self, params, *args):
         boto.log.debug('using _calc_signature_1')
         hmac = self._get_hmac()
@@ -612,8 +663,7 @@
         An implementation of AuthHandler.
 
     Raises:
-        boto.exception.NoAuthHandlerFound:
-        boto.exception.TooManyAuthHandlerReadyToAuthenticate:
+        boto.exception.NoAuthHandlerFound
     """
     ready_handlers = []
     auth_handlers = boto.plugin.get_plugin(AuthHandler, requested_capability)
@@ -632,18 +682,14 @@
               ' %s '
               'Check your credentials' % (len(names), str(names)))
 
-    if len(ready_handlers) > 1:
-        # NOTE: Even though it would be nice to accept more than one handler
-        # by using one of the many ready handlers, we are never sure that each
-        # of them are referring to the same storage account. Since we cannot
-        # easily guarantee that, it is always safe to fail, rather than operate
-        # on the wrong account.
-        names = [handler.__class__.__name__ for handler in ready_handlers]
-        raise boto.exception.TooManyAuthHandlerReadyToAuthenticate(
-               '%d AuthHandlers %s ready to authenticate for requested_capability '
-               '%s, only 1 expected. This happens if you import multiple '
-               'pluging.Plugin implementations that declare support for the '
-               'requested_capability.' % (len(names), str(names),
-               requested_capability))
-
-    return ready_handlers[0]
+    # We select the last ready auth handler that was loaded, to allow users to
+    # customize how auth works in environments where there are shared boto
+    # config files (e.g., /etc/boto.cfg and ~/.boto): The more general,
+    # system-wide shared configs should be loaded first, and the user's
+    # customizations loaded last. That way, for example, the system-wide
+    # config might include a plugin_directory that includes a service account
+    # auth plugin shared by all users of a Google Compute Engine instance
+    # (allowing sharing of non-user data between various services), and the
+    # user could override this with a .boto config that includes user-specific
+    # credentials (for access to user data).
+    return ready_handlers[-1]
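The SigV4 handler changes above do three things: the Host header now includes a non-default port, canonical headers collapse internal whitespace before signing, and the credential scope can be forced through the new service_name/region_name constructor arguments instead of being parsed from the endpoint. A standalone mirror of that scope resolution (resolve_scope is illustrative only, not part of boto), for example::

    def resolve_scope(host, region_name=None, service_name=None):
        # Explicit overrides win; otherwise parse
        # <service>.<region>.amazonaws.com, treating a three-part host
        # such as ec2.amazonaws.com as us-east-1.
        parts = host.split('.')
        if region_name is None:
            region_name = 'us-east-1' if len(parts) == 3 else parts[1]
        if service_name is None:
            service_name = parts[0]
        return service_name, region_name

    assert resolve_scope('ec2.amazonaws.com') == ('ec2', 'us-east-1')
    assert resolve_scope('dynamodb.eu-west-1.amazonaws.com') == \
        ('dynamodb', 'eu-west-1')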
diff --git a/boto/beanstalk/__init__.py b/boto/beanstalk/__init__.py
index e69de29..904d855 100644
--- a/boto/beanstalk/__init__.py
+++ b/boto/beanstalk/__init__.py
@@ -0,0 +1,65 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.regioninfo import RegionInfo
+
+
+def regions():
+    """
+    Get all available regions for the AWS Elastic Beanstalk service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo`
+    """
+    import boto.beanstalk.layer1
+    return [RegionInfo(name='us-east-1',
+                       endpoint='elasticbeanstalk.us-east-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='us-west-1',
+                       endpoint='elasticbeanstalk.us-west-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='us-west-2',
+                       endpoint='elasticbeanstalk.us-west-2.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='ap-northeast-1',
+                       endpoint='elasticbeanstalk.ap-northeast-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='ap-southeast-1',
+                       endpoint='elasticbeanstalk.ap-southeast-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='ap-southeast-2',
+                       endpoint='elasticbeanstalk.ap-southeast-2.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='eu-west-1',
+                       endpoint='elasticbeanstalk.eu-west-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='sa-east-1',
+                       endpoint='elasticbeanstalk.sa-east-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            ]
+
+
+def connect_to_region(region_name, **kw_params):
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
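The new boto.beanstalk package follows the usual boto region pattern: regions() enumerates the endpoints and connect_to_region looks one up by name and opens a Layer1 connection. A usage sketch with placeholder credentials, for example::

    import boto.beanstalk

    # Credentials are placeholders; any region name listed in regions()
    # above is accepted, and unknown names return None.
    conn = boto.beanstalk.connect_to_region(
        'us-west-2',
        aws_access_key_id='<access key>',
        aws_secret_access_key='<secret key>')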
diff --git a/boto/beanstalk/exception.py b/boto/beanstalk/exception.py
index c209cef..f6f9ffa 100644
--- a/boto/beanstalk/exception.py
+++ b/boto/beanstalk/exception.py
@@ -1,5 +1,5 @@
 import sys
-import json
+from boto.compat import json
 from boto.exception import BotoServerError
 
 
diff --git a/boto/beanstalk/layer1.py b/boto/beanstalk/layer1.py
index 5e994e1..e63f70e 100644
--- a/boto/beanstalk/layer1.py
+++ b/boto/beanstalk/layer1.py
@@ -21,10 +21,10 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-import json
 
 import boto
 import boto.jsonresponse
+from boto.compat import json
 from boto.regioninfo import RegionInfo
 from boto.connection import AWSQueryConnection
 
@@ -54,7 +54,7 @@
                                     security_token)
 
     def _required_auth_capability(self):
-        return ['sign-v2']
+        return ['hmac-v4']
 
     def _encode_bool(self, v):
         v = bool(v)
@@ -75,7 +75,7 @@
 
         :type cname_prefix: string
         :param cname_prefix: The prefix used when this CNAME is
-        reserved.
+            reserved.
         """
         params = {'CNAMEPrefix': cname_prefix}
         return self._get_response('CheckDNSAvailability', params)
@@ -87,9 +87,9 @@
 
         :type application_name: string
         :param application_name: The name of the application.
-        Constraint: This name must be unique within your account. If the
-        specified name already exists, the action returns an
-        InvalidParameterValue error.
+            Constraint: This name must be unique within your account. If the
+            specified name already exists, the action returns an
+            InvalidParameterValue error.
 
         :type description: string
         :param description: Describes the application.
@@ -108,37 +108,34 @@
 
         :type application_name: string
         :param application_name: The name of the application. If no
-        application is found with this name, and AutoCreateApplication
-        is false, returns an InvalidParameterValue error.
+            application is found with this name, and AutoCreateApplication is
+            false, returns an InvalidParameterValue error.
 
         :type version_label: string
-        :param version_label: A label identifying this
-        version.Constraint: Must be unique per application. If an
-        application version already exists with this label for the
-        specified application, AWS Elastic Beanstalk returns an
-        InvalidParameterValue error.
+        :param version_label: A label identifying this version. Constraint:
+            Must be unique per application. If an application version already
+            exists with this label for the specified application, AWS Elastic
+            Beanstalk returns an InvalidParameterValue error.
 
         :type description: string
         :param description: Describes this version.
 
         :type s3_bucket: string
-        :param s3_bucket: The Amazon S3 bucket where the data is
-        located.
+        :param s3_bucket: The Amazon S3 bucket where the data is located.
 
         :type s3_key: string
-        :param s3_key: The Amazon S3 key where the data is located.
-        Both s3_bucket and s3_key must be specified in order to use
-        a specific source bundle.  If both of these values are not specified
-        the sample application will be used.
+        :param s3_key: The Amazon S3 key where the data is located.  Both
+            s3_bucket and s3_key must be specified in order to use a specific
+            source bundle.  If both of these values are not specified the
+            sample application will be used.
 
         :type auto_create_application: boolean
-        :param auto_create_application: Determines how the system
-        behaves if the specified application for this version does not
-        already exist:  true: Automatically creates the specified
-        application for this version if it does not already exist.
-        false: Returns an InvalidParameterValue if the specified
-        application for this version does not already exist.  Default:
-        false  Valid Values: true | false
+        :param auto_create_application: Determines how the system behaves if
+            the specified application for this version does not already exist:
+            true: Automatically creates the specified application for this
+            version if it does not already exist.  false: Returns an
+            InvalidParameterValue if the specified application for this version
+            does not already exist.  Default: false  Valid Values: true | false
 
         :raises: TooManyApplicationsException,
                  TooManyApplicationVersionsException,
@@ -171,52 +168,49 @@
         configuration settings.
 
         :type application_name: string
-        :param application_name: The name of the application to
-        associate with this configuration template. If no application is
-        found with this name, AWS Elastic Beanstalk returns an
-        InvalidParameterValue error.
+        :param application_name: The name of the application to associate with
+            this configuration template. If no application is found with this
+            name, AWS Elastic Beanstalk returns an InvalidParameterValue error.
 
         :type template_name: string
-        :param template_name: The name of the configuration
-        template.Constraint: This name must be unique per application.
-        Default: If a configuration template already exists with this
-        name, AWS Elastic Beanstalk returns an InvalidParameterValue
-        error.
+        :param template_name: The name of the configuration template.
+            Constraint: This name must be unique per application.  Default: If
+            a configuration template already exists with this name, AWS Elastic
+            Beanstalk returns an InvalidParameterValue error.
 
         :type solution_stack_name: string
-        :param solution_stack_name: The name of the solution stack used
-        by this configuration. The solution stack specifies the
-        operating system, architecture, and application server for a
-        configuration template. It determines the set of configuration
-        options as well as the possible and default values.  Use
-        ListAvailableSolutionStacks to obtain a list of available
-        solution stacks.  Default: If the SolutionStackName is not
-        specified and the source configuration parameter is blank, AWS
-        Elastic Beanstalk uses the default solution stack. If not
-        specified and the source configuration parameter is specified,
-        AWS Elastic Beanstalk uses the same solution stack as the source
-        configuration template.
+        :param solution_stack_name: The name of the solution stack used by this
+            configuration. The solution stack specifies the operating system,
+            architecture, and application server for a configuration template.
+            It determines the set of configuration options as well as the
+            possible and default values.  Use ListAvailableSolutionStacks to
+            obtain a list of available solution stacks.  Default: If the
+            SolutionStackName is not specified and the source configuration
+            parameter is blank, AWS Elastic Beanstalk uses the default solution
+            stack. If not specified and the source configuration parameter is
+            specified, AWS Elastic Beanstalk uses the same solution stack as
+            the source configuration template.
 
         :type source_configuration_application_name: string
         :param source_configuration_application_name: The name of the
-        application associated with the configuration.
+            application associated with the configuration.
 
         :type source_configuration_template_name: string
         :param source_configuration_template_name: The name of the
-        configuration template.
+            configuration template.
 
         :type environment_id: string
         :param environment_id: The ID of the environment used with this
-        configuration template.
+            configuration template.
 
         :type description: string
         :param description: Describes this configuration.
 
         :type option_settings: list
-        :param option_settings: If specified, AWS Elastic Beanstalk sets
-        the specified configuration option to the requested value. The
-        new value overrides the value obtained from the solution stack
-        or the source configuration template.
+        :param option_settings: If specified, AWS Elastic Beanstalk sets the
+            specified configuration option to the requested value. The new
+            value overrides the value obtained from the solution stack or the
+            source configuration template.
 
         :raises: InsufficientPrivilegesException,
                  TooManyConfigurationTemplatesException
@@ -226,9 +220,9 @@
         if solution_stack_name:
             params['SolutionStackName'] = solution_stack_name
         if source_configuration_application_name:
-            params['ApplicationName'] = source_configuration_application_name
+            params['SourceConfiguration.ApplicationName'] = source_configuration_application_name
         if source_configuration_template_name:
-            params['TemplateName'] = source_configuration_template_name
+            params['SourceConfiguration.TemplateName'] = source_configuration_template_name
         if environment_id:
             params['EnvironmentId'] = environment_id
         if description:
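This hunk fixes a real bug: the source-configuration parameters were previously sent as ApplicationName/TemplateName, clobbering the template being created, and are now namespaced under SourceConfiguration. A hedged call sketch reusing the connection from the previous example (all application and template names are hypothetical), for example::

    # Clone an existing template's settings into a new one; the fix above
    # lets the SourceConfiguration.* parameters reach the service intact.
    conn.create_configuration_template(
        'MyApp', 'my-new-template',
        source_configuration_application_name='MyApp',
        source_configuration_template_name='my-existing-template',
        description='Cloned from my-existing-template')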
@@ -247,73 +241,72 @@
         """Launches an environment for the application using a configuration.
 
         :type application_name: string
-        :param application_name: The name of the application that
-        contains the version to be deployed.  If no application is found
-        with this name, CreateEnvironment returns an
-        InvalidParameterValue error.
+        :param application_name: The name of the application that contains the
+            version to be deployed.  If no application is found with this name,
+            CreateEnvironment returns an InvalidParameterValue error.
 
         :type version_label: string
-        :param version_label: The name of the application version to
-        deploy. If the specified application has no associated
-        application versions, AWS Elastic Beanstalk UpdateEnvironment
-        returns an InvalidParameterValue error.  Default: If not
-        specified, AWS Elastic Beanstalk attempts to launch the most
-        recently created application version.
+        :param version_label: The name of the application version to deploy. If
+            the specified application has no associated application versions,
+            AWS Elastic Beanstalk UpdateEnvironment returns an
+            InvalidParameterValue error.  Default: If not specified, AWS
+            Elastic Beanstalk attempts to launch the most recently created
+            application version.
 
         :type environment_name: string
-        :param environment_name: A unique name for the deployment
-        environment. Used in the application URL. Constraint: Must be
-        from 4 to 23 characters in length. The name can contain only
-        letters, numbers, and hyphens. It cannot start or end with a
-        hyphen. This name must be unique in your account. If the
-        specified name already exists, AWS Elastic Beanstalk returns an
-        InvalidParameterValue error. Default: If the CNAME parameter is
-        not specified, the environment name becomes part of the CNAME,
-        and therefore part of the visible URL for your application.
+        :param environment_name: A unique name for the deployment environment.
+            Used in the application URL. Constraint: Must be from 4 to 23
+            characters in length. The name can contain only letters, numbers,
+            and hyphens. It cannot start or end with a hyphen. This name must
+            be unique in your account. If the specified name already exists,
+            AWS Elastic Beanstalk returns an InvalidParameterValue error.
+            Default: If the CNAME parameter is not specified, the environment
+            name becomes part of the CNAME, and therefore part of the visible
+            URL for your application.
 
         :type template_name: string
         :param template_name: The name of the configuration template to
-        use in deployment. If no configuration template is found with
-        this name, AWS Elastic Beanstalk returns an
-        InvalidParameterValue error.  Condition: You must specify either
-        this parameter or a SolutionStackName, but not both. If you
-        specify both, AWS Elastic Beanstalk returns an
-        InvalidParameterCombination error. If you do not specify either,
-        AWS Elastic Beanstalk returns a MissingRequiredParameter error.
+            use in deployment. If no configuration template is found with this
+            name, AWS Elastic Beanstalk returns an InvalidParameterValue error.
+            Condition: You must specify either this parameter or a
+            SolutionStackName, but not both. If you specify both, AWS Elastic
+            Beanstalk returns an InvalidParameterCombination error. If you do
+            not specify either, AWS Elastic Beanstalk returns a
+            MissingRequiredParameter error.
 
         :type solution_stack_name: string
-        :param solution_stack_name: This is an alternative to specifying
-        a configuration name. If specified, AWS Elastic Beanstalk sets
-        the configuration values to the default values associated with
-        the specified solution stack.  Condition: You must specify
-        either this or a TemplateName, but not both. If you specify
-        both, AWS Elastic Beanstalk returns an
-        InvalidParameterCombination error. If you do not specify either,
-        AWS Elastic Beanstalk returns a MissingRequiredParameter error.
+        :param solution_stack_name: This is an alternative to specifying a
+            configuration name. If specified, AWS Elastic Beanstalk sets the
+            configuration values to the default values associated with the
+            specified solution stack.  Condition: You must specify either this
+            or a TemplateName, but not both. If you specify both, AWS Elastic
+            Beanstalk returns an InvalidParameterCombination error. If you do
+            not specify either, AWS Elastic Beanstalk returns a
+            MissingRequiredParameter error.
 
         :type cname_prefix: string
-        :param cname_prefix: If specified, the environment attempts to
-        use this value as the prefix for the CNAME. If not specified,
-        the environment uses the environment name.
+        :param cname_prefix: If specified, the environment attempts to use this
+            value as the prefix for the CNAME. If not specified, the
+            environment uses the environment name.
 
         :type description: string
         :param description: Describes this environment.
 
         :type option_settings: list
-        :param option_settings: If specified, AWS Elastic Beanstalk sets
-        the specified configuration options to the requested value in
-        the configuration set for the new environment. These override
-        the values obtained from the solution stack or the configuration
-        template.  Each element in the list is a tuple of (Namespace,
-        OptionName, Value), for example::
+        :param option_settings: If specified, AWS Elastic Beanstalk sets the
+            specified configuration options to the requested value in the
+            configuration set for the new environment. These override the
+            values obtained from the solution stack or the configuration
+            template.  Each element in the list is a tuple of (Namespace,
+            OptionName, Value), for example::
 
-            [('aws:autoscaling:launchconfiguration',
-              'Ec2KeyName', 'mykeypair')]
+                [('aws:autoscaling:launchconfiguration',
+                    'Ec2KeyName', 'mykeypair')]
 
         :type options_to_remove: list
-        :param options_to_remove: A list of custom user-defined
-        configuration options to remove from the configuration set for
-        this new environment.
+        :param options_to_remove: A list of custom user-defined configuration
+            options to remove from the configuration set for this new
+            environment.
 
         :raises: TooManyEnvironmentsException, InsufficientPrivilegesException
 
@@ -363,7 +356,7 @@
 
         :type terminate_env_by_force: boolean
         :param terminate_env_by_force: When set to true, running
-        environments will be terminated before deleting the application.
+            environments will be terminated before deleting the application.
 
         :raises: OperationInProgressException
 
@@ -380,14 +373,15 @@
 
         :type application_name: string
         :param application_name: The name of the application to delete
-        releases from.
+            releases from.
 
         :type version_label: string
         :param version_label: The label of the version to delete.
 
         :type delete_source_bundle: boolean
         :param delete_source_bundle: Indicates whether to delete the
-        associated source bundle from Amazon S3.  Valid Values: true | false
+            associated source bundle from Amazon S3.  Valid Values: true |
+            false
 
         :raises: SourceBundleDeletionException,
                  InsufficientPrivilegesException,
@@ -406,11 +400,11 @@
 
         :type application_name: string
         :param application_name: The name of the application to delete
-        the configuration template from.
+            the configuration template from.
 
         :type template_name: string
         :param template_name: The name of the configuration template to
-        delete.
+            delete.
 
         :raises: OperationInProgressException
 
@@ -434,11 +428,11 @@
 
         :type application_name: string
         :param application_name: The name of the application the
-        environment is associated with.
+            environment is associated with.
 
         :type environment_name: string
         :param environment_name: The name of the environment to delete
-        the draft configuration from.
+            the draft configuration from.
 
         """
         params = {'ApplicationName': application_name,
@@ -450,14 +444,14 @@
         """Returns descriptions for existing application versions.
 
         :type application_name: string
-        :param application_name: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to only include ones that
-        are associated with the specified application.
+        :param application_name: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to only include ones that are associated
+            with the specified application.
 
         :type version_labels: list
         :param version_labels: If specified, restricts the returned
-        descriptions to only include ones that have the specified
-        version labels.
+            descriptions to only include ones that have the specified version
+            labels.
 
         """
         params = {}
@@ -472,9 +466,9 @@
         """Returns the descriptions of existing applications.
 
         :type application_names: list
-        :param application_names: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to only include those with
-        the specified names.
+        :param application_names: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to only include those with the specified
+            names.
 
         """
         params = {}
@@ -497,26 +491,26 @@
         is changed.
 
         :type application_name: string
-        :param application_name: The name of the application associated
-        with the configuration template or environment. Only needed if
-        you want to describe the configuration options associated with
-        either the configuration template or environment.
+        :param application_name: The name of the application associated with
+            the configuration template or environment. Only needed if you want
+            to describe the configuration options associated with either the
+            configuration template or environment.
 
         :type template_name: string
-        :param template_name: The name of the configuration template
-        whose configuration options you want to describe.
+        :param template_name: The name of the configuration template whose
+            configuration options you want to describe.
 
         :type environment_name: string
         :param environment_name: The name of the environment whose
-        configuration options you want to describe.
+            configuration options you want to describe.
 
         :type solution_stack_name: string
         :param solution_stack_name: The name of the solution stack whose
-        configuration options you want to describe.
+            configuration options you want to describe.
 
         :type options: list
         :param options: If specified, restricts the descriptions to only
-        the specified options.
+            the specified options.
         """
         params = {}
         if application_name:
@@ -547,23 +541,22 @@
 
         :type application_name: string
         :param application_name: The application for the environment or
-        configuration template.
+            configuration template.
 
         :type template_name: string
         :param template_name: The name of the configuration template to
-        describe.  Conditional: You must specify either this parameter
-        or an EnvironmentName, but not both. If you specify both, AWS
-        Elastic Beanstalk returns an InvalidParameterCombination error.
-        If you do not specify either, AWS Elastic Beanstalk returns a
-        MissingRequiredParameter error.
+            describe.  Conditional: You must specify either this parameter or
+            an EnvironmentName, but not both. If you specify both, AWS Elastic
+            Beanstalk returns an InvalidParameterCombination error.  If you do
+            not specify either, AWS Elastic Beanstalk returns a
+            MissingRequiredParameter error.
 
         :type environment_name: string
-        :param environment_name: The name of the environment to
-        describe.  Condition: You must specify either this or a
-        TemplateName, but not both. If you specify both, AWS Elastic
-        Beanstalk returns an InvalidParameterCombination error. If you
-        do not specify either, AWS Elastic Beanstalk returns
-        MissingRequiredParameter error.
+        :param environment_name: The name of the environment to describe.
+            Condition: You must specify either this or a TemplateName, but not
+            both. If you specify both, AWS Elastic Beanstalk returns an
+            InvalidParameterCombination error. If you do not specify either,
+            AWS Elastic Beanstalk returns MissingRequiredParameter error.
         """
         params = {'ApplicationName': application_name}
         if template_name:
@@ -578,15 +571,15 @@
 
         :type environment_id: string
         :param environment_id: The ID of the environment to retrieve AWS
-        resource usage data.  Condition: You must specify either this or
-        an EnvironmentName, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+            resource usage data.  Condition: You must specify either this or an
+            EnvironmentName, or both. If you do not specify either, AWS Elastic
+            Beanstalk returns MissingRequiredParameter error.
 
         :type environment_name: string
         :param environment_name: The name of the environment to retrieve
-        AWS resource usage data.  Condition: You must specify either
-        this or an EnvironmentId, or both. If you do not specify either,
-        AWS Elastic Beanstalk returns MissingRequiredParameter error.
+            AWS resource usage data.  Condition: You must specify either this
+            or an EnvironmentId, or both. If you do not specify either, AWS
+            Elastic Beanstalk returns MissingRequiredParameter error.
 
         :raises: InsufficientPrivilegesException
         """
@@ -604,35 +597,35 @@
         """Returns descriptions for existing environments.
 
         :type application_name: string
-        :param application_name: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to include only those that
-        are associated with this application.
+        :param application_name: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to include only those that are associated
+            with this application.
 
         :type version_label: string
-        :param version_label: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to include only those that
-        are associated with this application version.
+        :param version_label: If specified, AWS Elastic Beanstalk restricts the
+            returned descriptions to include only those that are associated
+            with this application version.
 
         :type environment_ids: list
-        :param environment_ids: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to include only those that
-        have the specified IDs.
+        :param environment_ids: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to include only those that have the
+            specified IDs.
 
         :type environment_names: list
-        :param environment_names: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to include only those that
-        have the specified names.
+        :param environment_names: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to include only those that have the
+            specified names.
 
         :type include_deleted: boolean
         :param include_deleted: Indicates whether to include deleted
-        environments:  true: Environments that have been deleted after
-        IncludedDeletedBackTo are displayed.  false: Do not include
-        deleted environments.
+            environments:  true: Environments that have been deleted after
+            IncludedDeletedBackTo are displayed.  false: Do not include deleted
+            environments.
 
         :type included_deleted_back_to: timestamp
-        :param included_deleted_back_to: If specified when
-        IncludeDeleted is set to true, then environments deleted after
-        this date are displayed.
+        :param included_deleted_back_to: If specified when IncludeDeleted is
+            set to true, then environments deleted after this date are
+            displayed.
         """
         params = {}
         if application_name:
@@ -659,57 +652,55 @@
         """Returns event descriptions matching criteria up to the last 6 weeks.
 
         :type application_name: string
-        :param application_name: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to include only those
-        associated with this application.
+        :param application_name: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to include only those associated with
+            this application.
 
         :type version_label: string
-        :param version_label: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to those associated with
-        this application version.
+        :param version_label: If specified, AWS Elastic Beanstalk restricts the
+            returned descriptions to those associated with this application
+            version.
 
         :type template_name: string
-        :param template_name: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to those that are associated
-        with this environment configuration.
+        :param template_name: If specified, AWS Elastic Beanstalk restricts the
+            returned descriptions to those that are associated with this
+            environment configuration.
 
         :type environment_id: string
-        :param environment_id: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to those associated with
-        this environment.
+        :param environment_id: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to those associated with this
+            environment.
 
         :type environment_name: string
-        :param environment_name: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to those associated with
-        this environment.
+        :param environment_name: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to those associated with this
+            environment.
 
         :type request_id: string
-        :param request_id: If specified, AWS Elastic Beanstalk restricts
-        the described events to include only those associated with this
-        request ID.
+        :param request_id: If specified, AWS Elastic Beanstalk restricts the
+            described events to include only those associated with this request
+            ID.
 
         :type severity: string
-        :param severity: If specified, limits the events returned from
-        this call to include only those with the specified severity or
-        higher.
+        :param severity: If specified, limits the events returned from this
+            call to include only those with the specified severity or higher.
 
         :type start_time: timestamp
-        :param start_time: If specified, AWS Elastic Beanstalk restricts
-        the returned descriptions to those that occur on or after this
-        time.
+        :param start_time: If specified, AWS Elastic Beanstalk restricts the
+            returned descriptions to those that occur on or after this time.
 
         :type end_time: timestamp
-        :param end_time: If specified, AWS Elastic Beanstalk restricts
-        the returned descriptions to those that occur up to, but not
-        including, the EndTime.
+        :param end_time: If specified, AWS Elastic Beanstalk restricts the
+            returned descriptions to those that occur up to, but not including,
+            the EndTime.
 
         :type max_records: integer
-        :param max_records: Specifies the maximum number of events that
-        can be returned, beginning with the most recent event.
+        :param max_records: Specifies the maximum number of events that can be
+            returned, beginning with the most recent event.
 
         :type next_token: string
-        :param next_token: Pagination token. If specified, the events
-        return the next batch of results.
+        :param next_token: Pagination token. If specified, the events return
+            the next batch of results.
         """
         params = {}
         if application_name:
@@ -748,15 +739,15 @@
 
         :type environment_id: string
         :param environment_id: The ID of the environment to rebuild.
-        Condition: You must specify either this or an EnvironmentName,
-        or both. If you do not specify either, AWS Elastic Beanstalk
-        returns MissingRequiredParameter error.
+            Condition: You must specify either this or an EnvironmentName, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :type environment_name: string
         :param environment_name: The name of the environment to rebuild.
-        Condition: You must specify either this or an EnvironmentId, or
-        both. If you do not specify either, AWS Elastic Beanstalk
-        returns MissingRequiredParameter error.
+            Condition: You must specify either this or an EnvironmentId, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :raises: InsufficientPrivilegesException
         """
@@ -781,19 +772,19 @@
 
         :type environment_id: string
         :param environment_id: The ID of the environment of the
-        requested data. If no such environment is found,
-        RequestEnvironmentInfo returns an InvalidParameterValue error.
-        Condition: You must specify either this or an EnvironmentName,
-        or both. If you do not specify either, AWS Elastic Beanstalk
-        returns MissingRequiredParameter error.
+            requested data. If no such environment is found,
+            RequestEnvironmentInfo returns an InvalidParameterValue error.
+            Condition: You must specify either this or an EnvironmentName, or
+            both. If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :type environment_name: string
         :param environment_name: The name of the environment of the
-        requested data. If no such environment is found,
-        RequestEnvironmentInfo returns an InvalidParameterValue error.
-        Condition: You must specify either this or an EnvironmentId, or
-        both. If you do not specify either, AWS Elastic Beanstalk
-        returns MissingRequiredParameter error.
+            requested data. If no such environment is found,
+            RequestEnvironmentInfo returns an InvalidParameterValue error.
+            Condition: You must specify either this or an EnvironmentId, or
+            both. If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
         """
         params = {'InfoType': info_type}
         if environment_id:
@@ -808,16 +799,16 @@
         server running on each Amazon EC2 instance.
 
         :type environment_id: string
-        :param environment_id: The ID of the environment to restart the
-        server for.  Condition: You must specify either this or an
-        EnvironmentName, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+        :param environment_id: The ID of the environment to restart the server
+            for.  Condition: You must specify either this or an
+            EnvironmentName, or both. If you do not specify either, AWS Elastic
+            Beanstalk returns MissingRequiredParameter error.
 
         :type environment_name: string
-        :param environment_name: The name of the environment to restart
-        the server for.  Condition: You must specify either this or an
-        EnvironmentId, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+        :param environment_name: The name of the environment to restart the
+            server for.  Condition: You must specify either this or an
+            EnvironmentId, or both. If you do not specify either, AWS Elastic
+            Beanstalk returns MissingRequiredParameter error.
         """
         params = {}
         if environment_id:
@@ -836,18 +827,18 @@
         :param info_type: The type of information to retrieve.
 
         :type environment_id: string
-        :param environment_id: The ID of the data's environment. If no
-        such environment is found, returns an InvalidParameterValue
-        error.  Condition: You must specify either this or an
-        EnvironmentName, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+        :param environment_id: The ID of the data's environment. If no such
+            environment is found, returns an InvalidParameterValue error.
+            Condition: You must specify either this or an EnvironmentName, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :type environment_name: string
-        :param environment_name: The name of the data's environment. If
-        no such environment is found, returns an InvalidParameterValue
-        error.  Condition: You must specify either this or an
-        EnvironmentId, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+        :param environment_name: The name of the data's environment. If no such
+            environment is found, returns an InvalidParameterValue error.
+            Condition: You must specify either this or an EnvironmentId, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
         """
         params = {'InfoType': info_type}
         if environment_id:
@@ -864,31 +855,31 @@
 
         :type source_environment_id: string
         :param source_environment_id: The ID of the source environment.
-        Condition: You must specify at least the SourceEnvironmentID or
-        the SourceEnvironmentName. You may also specify both. If you
-        specify the SourceEnvironmentId, you must specify the
-        DestinationEnvironmentId.
+            Condition: You must specify at least the SourceEnvironmentID or the
+            SourceEnvironmentName. You may also specify both. If you specify
+            the SourceEnvironmentId, you must specify the
+            DestinationEnvironmentId.
 
         :type source_environment_name: string
-        :param source_environment_name: The name of the source
-        environment.  Condition: You must specify at least the
-        SourceEnvironmentID or the SourceEnvironmentName. You may also
-        specify both. If you specify the SourceEnvironmentName, you must
-        specify the DestinationEnvironmentName.
+        :param source_environment_name: The name of the source environment.
+            Condition: You must specify at least the SourceEnvironmentID or the
+            SourceEnvironmentName. You may also specify both. If you specify
+            the SourceEnvironmentName, you must specify the
+            DestinationEnvironmentName.
 
         :type destination_environment_id: string
         :param destination_environment_id: The ID of the destination
-        environment.  Condition: You must specify at least the
-        DestinationEnvironmentID or the DestinationEnvironmentName. You
-        may also specify both. You must specify the SourceEnvironmentId
-        with the DestinationEnvironmentId.
+            environment.  Condition: You must specify at least the
+            DestinationEnvironmentID or the DestinationEnvironmentName. You may
+            also specify both. You must specify the SourceEnvironmentId with
+            the DestinationEnvironmentId.
 
         :type destination_environment_name: string
         :param destination_environment_name: The name of the destination
-        environment.  Condition: You must specify at least the
-        DestinationEnvironmentID or the DestinationEnvironmentName. You
-        may also specify both. You must specify the
-        SourceEnvironmentName with the DestinationEnvironmentName.
+            environment.  Condition: You must specify at least the
+            DestinationEnvironmentID or the DestinationEnvironmentName. You may
+            also specify both. You must specify the SourceEnvironmentName with
+            the DestinationEnvironmentName.
         """
         params = {}
         if source_environment_id:
@@ -907,25 +898,25 @@
 
         :type environment_id: string
         :param environment_id: The ID of the environment to terminate.
-        Condition: You must specify either this or an EnvironmentName,
-        or both. If you do not specify either, AWS Elastic Beanstalk
-        returns MissingRequiredParameter error.
+            Condition: You must specify either this or an EnvironmentName, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :type environment_name: string
-        :param environment_name: The name of the environment to
-        terminate. Condition: You must specify either this or an
-        EnvironmentId, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+        :param environment_name: The name of the environment to terminate.
+            Condition: You must specify either this or an EnvironmentId, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :type terminate_resources: boolean
         :param terminate_resources: Indicates whether the associated AWS
-        resources should shut down when the environment is terminated:
-        true: (default) The user AWS resources (for example, the Auto
-        Scaling group, LoadBalancer, etc.) are terminated along with the
-        environment.  false: The environment is removed from the AWS
-        Elastic Beanstalk but the AWS resources continue to operate.
-        For more information, see the  AWS Elastic Beanstalk User Guide.
-        Default: true  Valid Values: true | false
+            resources should shut down when the environment is terminated:
+            true: (default) The user AWS resources (for example, the Auto
+            Scaling group, LoadBalancer, etc.) are terminated along with the
+            environment.  false: The environment is removed from the AWS
+            Elastic Beanstalk but the AWS resources continue to operate.  For
+            more information, see the  AWS Elastic Beanstalk User Guide.
+            Default: true  Valid Values: true | false
 
         :raises: InsufficientPrivilegesException
         """
@@ -946,13 +937,13 @@
 
         :type application_name: string
         :param application_name: The name of the application to update.
-        If no such application is found, UpdateApplication returns an
-        InvalidParameterValue error.
+            If no such application is found, UpdateApplication returns an
+            InvalidParameterValue error.
 
         :type description: string
-        :param description: A new description for the application.
-        Default: If not specified, AWS Elastic Beanstalk does not update
-        the description.
+        :param description: A new description for the application.  Default: If
+            not specified, AWS Elastic Beanstalk does not update the
+            description.
         """
         params = {'ApplicationName': application_name}
         if description:
@@ -964,14 +955,14 @@
         """Updates the application version to have the properties.
 
         :type application_name: string
-        :param application_name: The name of the application associated
-        with this version.  If no application is found with this name,
-        UpdateApplication returns an InvalidParameterValue error.
+        :param application_name: The name of the application associated with
+            this version.  If no application is found with this name,
+            UpdateApplication returns an InvalidParameterValue error.
 
         :type version_label: string
         :param version_label: The name of the version to update. If no
-        application version is found with this label, UpdateApplication
-        returns an InvalidParameterValue error.
+            application version is found with this label, UpdateApplication
+            returns an InvalidParameterValue error.
 
         :type description: string
         :param description: A new description for this release.
@@ -990,28 +981,27 @@
         specified properties or configuration option values.
 
         :type application_name: string
-        :param application_name: The name of the application associated
-        with the configuration template to update. If no application is
-        found with this name, UpdateConfigurationTemplate returns an
-        InvalidParameterValue error.
+        :param application_name: The name of the application associated with
+            the configuration template to update. If no application is found
+            with this name, UpdateConfigurationTemplate returns an
+            InvalidParameterValue error.
 
         :type template_name: string
-        :param template_name: The name of the configuration template to
-        update. If no configuration template is found with this name,
-        UpdateConfigurationTemplate returns an InvalidParameterValue
-        error.
+        :param template_name: The name of the configuration template to update.
+            If no configuration template is found with this name,
+            UpdateConfigurationTemplate returns an InvalidParameterValue error.
 
         :type description: string
         :param description: A new description for the configuration.
 
         :type option_settings: list
-        :param option_settings: A list of configuration option settings
-        to update with the new specified option value.
+        :param option_settings: A list of configuration option settings to
+            update with the new specified option value.
 
         :type options_to_remove: list
-        :param options_to_remove: A list of configuration options to
-        remove from the configuration set.  Constraint: You can remove
-        only UserDefined configuration options.
+        :param options_to_remove: A list of configuration options to remove
+            from the configuration set.  Constraint: You can remove only
+            UserDefined configuration options.
 
         :raises: InsufficientPrivilegesException
         """
@@ -1045,47 +1035,43 @@
         setting descriptions with different DeploymentStatus values.
 
         :type environment_id: string
-        :param environment_id: The ID of the environment to update. If
-        no environment with this ID exists, AWS Elastic Beanstalk
-        returns an InvalidParameterValue error.  Condition: You must
-        specify either this or an EnvironmentName, or both. If you do
-        not specify either, AWS Elastic Beanstalk returns
-        MissingRequiredParameter error.
+        :param environment_id: The ID of the environment to update. If no
+            environment with this ID exists, AWS Elastic Beanstalk returns an
+            InvalidParameterValue error.  Condition: You must specify either
+            this or an EnvironmentName, or both. If you do not specify either,
+            AWS Elastic Beanstalk returns MissingRequiredParameter error.
 
         :type environment_name: string
-        :param environment_name: The name of the environment to update.
-        If no environment with this name exists, AWS Elastic Beanstalk
-        returns an InvalidParameterValue error.  Condition: You must
-        specify either this or an EnvironmentId, or both. If you do not
-        specify either, AWS Elastic Beanstalk returns
-        MissingRequiredParameter error.
+        :param environment_name: The name of the environment to update.  If no
+            environment with this name exists, AWS Elastic Beanstalk returns an
+            InvalidParameterValue error.  Condition: You must specify either
+            this or an EnvironmentId, or both. If you do not specify either,
+            AWS Elastic Beanstalk returns MissingRequiredParameter error.
 
         :type version_label: string
-        :param version_label: If this parameter is specified, AWS
-        Elastic Beanstalk deploys the named application version to the
-        environment. If no such application version is found, returns an
-        InvalidParameterValue error.
+        :param version_label: If this parameter is specified, AWS Elastic
+            Beanstalk deploys the named application version to the environment.
+            If no such application version is found, returns an
+            InvalidParameterValue error.
 
         :type template_name: string
-        :param template_name: If this parameter is specified, AWS
-        Elastic Beanstalk deploys this configuration template to the
-        environment. If no such configuration template is found, AWS
-        Elastic Beanstalk returns an InvalidParameterValue error.
+        :param template_name: If this parameter is specified, AWS Elastic
+            Beanstalk deploys this configuration template to the environment.
+            If no such configuration template is found, AWS Elastic Beanstalk
+            returns an InvalidParameterValue error.
 
         :type description: string
         :param description: If this parameter is specified, AWS Elastic
-        Beanstalk updates the description of this environment.
+            Beanstalk updates the description of this environment.
 
         :type option_settings: list
-        :param option_settings: If specified, AWS Elastic Beanstalk
-        updates the configuration set associated with the running
-        environment and sets the specified configuration options to the
-        requested value.
+        :param option_settings: If specified, AWS Elastic Beanstalk updates the
+            configuration set associated with the running environment and sets
+            the specified configuration options to the requested value.
 
         :type options_to_remove: list
-        :param options_to_remove: A list of custom user-defined
-        configuration options to remove from the configuration set for
-        this environment.
+        :param options_to_remove: A list of custom user-defined configuration
+            options to remove from the configuration set for this environment.
 
         :raises: InsufficientPrivilegesException
         """
@@ -1121,21 +1107,21 @@
 
         :type application_name: string
         :param application_name: The name of the application that the
-        configuration template or environment belongs to.
+            configuration template or environment belongs to.
 
         :type template_name: string
         :param template_name: The name of the configuration template to
-        validate the settings against.  Condition: You cannot specify
-        both this and an environment name.
+            validate the settings against.  Condition: You cannot specify both
+            this and an environment name.
 
         :type environment_name: string
-        :param environment_name: The name of the environment to validate
-        the settings against.  Condition: You cannot specify both this
-        and a configuration template name.
+        :param environment_name: The name of the environment to validate the
+            settings against.  Condition: You cannot specify both this and a
+            configuration template name.
 
         :type option_settings: list
-        :param option_settings: A list of the options and desired values
-        to evaluate.
+        :param option_settings: A list of the options and desired values to
+            evaluate.
 
         :raises: InsufficientPrivilegesException
         """
diff --git a/boto/beanstalk/response.py b/boto/beanstalk/response.py
index 22bc102..2d071bc 100644
--- a/boto/beanstalk/response.py
+++ b/boto/beanstalk/response.py
@@ -175,7 +175,7 @@
 
 class EnvironmentInfoDescription(BaseObject):
     def __init__(self, response):
-        EnvironmentInfoDescription(Response, self).__init__()
+        super(EnvironmentInfoDescription, self).__init__()
 
         self.ec2_instance_id = str(response['Ec2InstanceId'])
         self.info_type = str(response['InfoType'])
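
The one-line change above fixes the constructor: the old code re-instantiated EnvironmentInfoDescription (and referenced an undefined name, Response) instead of initializing its BaseObject parent. A minimal sketch of the explicit super() idiom the fix relies on, using hypothetical class names:

    class Base(object):
        def __init__(self):
            self.initialized = True

    class Child(Base):
        def __init__(self, data):
            # Python 2 style: name the class and instance explicitly so the
            # parent initializer runs exactly once.
            super(Child, self).__init__()
            self.data = data

    child = Child({'InfoType': 'tail'})
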
diff --git a/boto/cacerts/cacerts.txt b/boto/cacerts/cacerts.txt
index e65f21d..f6e0ee6 100644
--- a/boto/cacerts/cacerts.txt
+++ b/boto/cacerts/cacerts.txt
@@ -631,3 +631,1567 @@
 95K+8cPV1ZVqBLssziY2ZcgxxufuP+NXdYR6Ee9GTxj005i7qIcyunL2POI9n9cd
 2cNgQ4xYDiKWL2KjLB+6rQXvqzJ4h6BUcxm1XAX5Uj5tLUUL9wqT6u0G+bI=
 -----END CERTIFICATE-----
+
+GTE CyberTrust Global Root
+==========================
+
+-----BEGIN CERTIFICATE-----
+MIICWjCCAcMCAgGlMA0GCSqGSIb3DQEBBAUAMHUxCzAJBgNVBAYTAlVTMRgwFgYD
+VQQKEw9HVEUgQ29ycG9yYXRpb24xJzAlBgNVBAsTHkdURSBDeWJlclRydXN0IFNv
+bHV0aW9ucywgSW5jLjEjMCEGA1UEAxMaR1RFIEN5YmVyVHJ1c3QgR2xvYmFsIFJv
+b3QwHhcNOTgwODEzMDAyOTAwWhcNMTgwODEzMjM1OTAwWjB1MQswCQYDVQQGEwJV
+UzEYMBYGA1UEChMPR1RFIENvcnBvcmF0aW9uMScwJQYDVQQLEx5HVEUgQ3liZXJU
+cnVzdCBTb2x1dGlvbnMsIEluYy4xIzAhBgNVBAMTGkdURSBDeWJlclRydXN0IEds
+b2JhbCBSb290MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCVD6C28FCc6HrH
+iM3dFw4usJTQGz0O9pTAipTHBsiQl8i4ZBp6fmw8U+E3KHNgf7KXUwefU/ltWJTS
+r41tiGeA5u2ylc9yMcqlHHK6XALnZELn+aks1joNrI1CqiQBOeacPwGFVw1Yh0X4
+04Wqk2kmhXBIgD8SFcd5tB8FLztimQIDAQABMA0GCSqGSIb3DQEBBAUAA4GBAG3r
+GwnpXtlR22ciYaQqPEh346B8pt5zohQDhT37qw4wxYMWM4ETCJ57NE7fQMh017l9
+3PR2VX2bY1QY6fDq81yx2YtCHrnAlU66+tXifPVoYb+O7AWXX1uw16OFNMQkpw0P
+lZPvy5TYnh+dXIVtx6quTx8itc2VrbqnzPmrC3p/
+-----END CERTIFICATE-----
+
+GlobalSign Root CA
+==================
+
+-----BEGIN CERTIFICATE-----
+MIIDdTCCAl2gAwIBAgILBAAAAAABFUtaw5QwDQYJKoZIhvcNAQEFBQAwVzELMAkG
+A1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYtc2ExEDAOBgNVBAsTB1Jv
+b3QgQ0ExGzAZBgNVBAMTEkdsb2JhbFNpZ24gUm9vdCBDQTAeFw05ODA5MDExMjAw
+MDBaFw0yODAxMjgxMjAwMDBaMFcxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9i
+YWxTaWduIG52LXNhMRAwDgYDVQQLEwdSb290IENBMRswGQYDVQQDExJHbG9iYWxT
+aWduIFJvb3QgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDaDuaZ
+jc6j40+Kfvvxi4Mla+pIH/EqsLmVEQS98GPR4mdmzxzdzxtIK+6NiY6arymAZavp
+xy0Sy6scTHAHoT0KMM0VjU/43dSMUBUc71DuxC73/OlS8pF94G3VNTCOXkNz8kHp
+1Wrjsok6Vjk4bwY8iGlbKk3Fp1S4bInMm/k8yuX9ifUSPJJ4ltbcdG6TRGHRjcdG
+snUOhugZitVtbNV4FpWi6cgKOOvyJBNPc1STE4U6G7weNLWLBYy5d4ux2x8gkasJ
+U26Qzns3dLlwR5EiUWMWea6xrkEmCMgZK9FGqkjWZCrXgzT/LCrBbBlDSgeF59N8
+9iFo7+ryUp9/k5DPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8E
+BTADAQH/MB0GA1UdDgQWBBRge2YaRQ2XyolQL30EzTSo//z9SzANBgkqhkiG9w0B
+AQUFAAOCAQEA1nPnfE920I2/7LqivjTFKDK1fPxsnCwrvQmeU79rXqoRSLblCKOz
+yj1hTdNGCbM+w6DjY1Ub8rrvrTnhQ7k4o+YviiY776BQVvnGCv04zcQLcFGUl5gE
+38NflNUVyRRBnMRddWQVDf9VMOyGj/8N7yy5Y0b2qvzfvGn9LhJIZJrglfCm7ymP
+AbEVtQwdpf5pLGkkeB6zpxxxYu7KyJesF12KwvhHhm4qxFYxldBniYUr+WymXUad
+DKqC5JlR3XC321Y9YeRq4VzW9v493kHMB65jUr9TU/Qr6cf9tveCX4XSQRjbgbME
+HMUfpIBvFSDJ3gyICh3WZlXi/EjJKSZp4A==
+-----END CERTIFICATE-----
+
+GlobalSign Root CA - R2
+=======================
+
+-----BEGIN CERTIFICATE-----
+MIIDujCCAqKgAwIBAgILBAAAAAABD4Ym5g0wDQYJKoZIhvcNAQEFBQAwTDEgMB4G
+A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjIxEzARBgNVBAoTCkdsb2JhbFNp
+Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDYxMjE1MDgwMDAwWhcNMjExMjE1
+MDgwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMjETMBEG
+A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI
+hvcNAQEBBQADggEPADCCAQoCggEBAKbPJA6+Lm8omUVCxKs+IVSbC9N/hHD6ErPL
+v4dfxn+G07IwXNb9rfF73OX4YJYJkhD10FPe+3t+c4isUoh7SqbKSaZeqKeMWhG8
+eoLrvozps6yWJQeXSpkqBy+0Hne/ig+1AnwblrjFuTosvNYSuetZfeLQBoZfXklq
+tTleiDTsvHgMCJiEbKjNS7SgfQx5TfC4LcshytVsW33hoCmEofnTlEnLJGKRILzd
+C9XZzPnqJworc5HGnRusyMvo4KD0L5CLTfuwNhv2GXqF4G3yYROIXJ/gkwpRl4pa
+zq+r1feqCapgvdzZX99yqWATXgAByUr6P6TqBwMhAo6CygPCm48CAwEAAaOBnDCB
+mTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUm+IH
+V2ccHsBqBt5ZtJot39wZhi4wNgYDVR0fBC8wLTAroCmgJ4YlaHR0cDovL2NybC5n
+bG9iYWxzaWduLm5ldC9yb290LXIyLmNybDAfBgNVHSMEGDAWgBSb4gdXZxwewGoG
+3lm0mi3f3BmGLjANBgkqhkiG9w0BAQUFAAOCAQEAmYFThxxol4aR7OBKuEQLq4Gs
+J0/WwbgcQ3izDJr86iw8bmEbTUsp9Z8FHSbBuOmDAGJFtqkIk7mpM0sYmsL4h4hO
+291xNBrBVNpGP+DTKqttVCL1OmLNIG+6KYnX3ZHu01yiPqFbQfXf5WRDLenVOavS
+ot+3i9DAgBkcRcAtjOj4LaR0VknFBbVPFd5uRHg5h6h+u/N5GJG79G+dwfCMNYxd
+AfvDbbnvRG15RjF+Cv6pgsH/76tuIMRQyV+dTZsXjAzlAcmgQWpzU/qlULRuJQ/7
+TBj0/VLZjmmx6BEP3ojY+x1J96relc8geMJgEtslQIxq/H5COEBkEveegeGTLg==
+-----END CERTIFICATE-----
+
+ValiCert Class 1 VA
+===================
+
+-----BEGIN CERTIFICATE-----
+MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0
+IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAz
+BgNVBAsTLFZhbGlDZXJ0IENsYXNzIDEgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9y
+aXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG
+9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTk5MDYyNTIyMjM0OFoXDTE5MDYy
+NTIyMjM0OFowgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29y
+azEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENs
+YXNzIDEgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRw
+Oi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNl
+cnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDYWYJ6ibiWuqYvaG9Y
+LqdUHAZu9OqNSLwxlBfw8068srg1knaw0KWlAdcAAxIiGQj4/xEjm84H9b9pGib+
+TunRf50sQB1ZaG6m+FiwnRqP0z/x3BkGgagO4DrdyFNFCQbmD3DD+kCmDuJWBQ8Y
+TfwggtFzVXSNdnKgHZ0dwN0/cQIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAFBoPUn0
+LBwGlN+VYH+Wexf+T3GtZMjdd9LvWVXoP+iOBSoh8gfStadS/pyxtuJbdxdA6nLW
+I8sogTLDAHkY7FkXicnGah5xyf23dKUlRWnFSKsZ4UWKJWsZ7uW7EvV/96aNUcPw
+nXS3qT6gpf+2SQMT2iLM7XGCK5nPOrf1LXLI
+-----END CERTIFICATE-----
+
+ValiCert Class 2 VA
+===================
+
+-----BEGIN CERTIFICATE-----
+MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0
+IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAz
+BgNVBAsTLFZhbGlDZXJ0IENsYXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9y
+aXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG
+9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTk5MDYyNjAwMTk1NFoXDTE5MDYy
+NjAwMTk1NFowgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29y
+azEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENs
+YXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRw
+Oi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNl
+cnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDOOnHK5avIWZJV16vY
+dA757tn2VUdZZUcOBVXc65g2PFxTXdMwzzjsvUGJ7SVCCSRrCl6zfN1SLUzm1NZ9
+WlmpZdRJEy0kTRxQb7XBhVQ7/nHk01xC+YDgkRoKWzk2Z/M/VXwbP7RfZHM047QS
+v4dk+NoS/zcnwbNDu+97bi5p9wIDAQABMA0GCSqGSIb3DQEBBQUAA4GBADt/UG9v
+UJSZSWI4OB9L+KXIPqeCgfYrx+jFzug6EILLGACOTb2oWH+heQC1u+mNr0HZDzTu
+IYEZoDJJKPTEjlbVUjP9UNV+mWwD5MlM/Mtsq2azSiGM5bUMMj4QssxsodyamEwC
+W/POuZ6lcg5Ktz885hZo+L7tdEy8W9ViH0Pd
+-----END CERTIFICATE-----
+
+RSA Root Certificate 1
+======================
+
+-----BEGIN CERTIFICATE-----
+MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0
+IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAz
+BgNVBAsTLFZhbGlDZXJ0IENsYXNzIDMgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9y
+aXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG
+9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTk5MDYyNjAwMjIzM1oXDTE5MDYy
+NjAwMjIzM1owgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29y
+azEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENs
+YXNzIDMgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRw
+Oi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNl
+cnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDjmFGWHOjVsQaBalfD
+cnWTq8+epvzzFlLWLU2fNUSoLgRNB0mKOCn1dzfnt6td3zZxFJmP3MKS8edgkpfs
+2Ejcv8ECIMYkpChMMFp2bbFc893enhBxoYjHW5tBbcqwuI4V7q0zK89HBFx1cQqY
+JJgpp0lZpd34t0NiYfPT4tBVPwIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAFa7AliE
+Zwgs3x/be0kz9dNnnfS0ChCzycUs4pJqcXgn8nCDQtM+z6lU9PHYkhaM0QTLS6vJ
+n0WuPIqpsHEzXcjFV9+vqDWzf4mH6eglkrh/hXqu1rweN1gqZ8mRzyqBPu3GOd/A
+PhmcGcwTTYJBtYze4D1gCCAPRX5ron+jjBXu
+-----END CERTIFICATE-----
+
+Entrust.net Premium 2048 Secure Server CA
+=========================================
+
+-----BEGIN CERTIFICATE-----
+MIIEXDCCA0SgAwIBAgIEOGO5ZjANBgkqhkiG9w0BAQUFADCBtDEUMBIGA1UEChML
+RW50cnVzdC5uZXQxQDA+BgNVBAsUN3d3dy5lbnRydXN0Lm5ldC9DUFNfMjA0OCBp
+bmNvcnAuIGJ5IHJlZi4gKGxpbWl0cyBsaWFiLikxJTAjBgNVBAsTHChjKSAxOTk5
+IEVudHJ1c3QubmV0IExpbWl0ZWQxMzAxBgNVBAMTKkVudHJ1c3QubmV0IENlcnRp
+ZmljYXRpb24gQXV0aG9yaXR5ICgyMDQ4KTAeFw05OTEyMjQxNzUwNTFaFw0xOTEy
+MjQxODIwNTFaMIG0MRQwEgYDVQQKEwtFbnRydXN0Lm5ldDFAMD4GA1UECxQ3d3d3
+LmVudHJ1c3QubmV0L0NQU18yMDQ4IGluY29ycC4gYnkgcmVmLiAobGltaXRzIGxp
+YWIuKTElMCMGA1UECxMcKGMpIDE5OTkgRW50cnVzdC5uZXQgTGltaXRlZDEzMDEG
+A1UEAxMqRW50cnVzdC5uZXQgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgKDIwNDgp
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArU1LqRKGsuqjIAcVFmQq
+K0vRvwtKTY7tgHalZ7d4QMBzQshowNtTK91euHaYNZOLGp18EzoOH1u3Hs/lJBQe
+sYGpjX24zGtLA/ECDNyrpUAkAH90lKGdCCmziAv1h3edVc3kw37XamSrhRSGlVuX
+MlBvPci6Zgzj/L24ScF2iUkZ/cCovYmjZy/Gn7xxGWC4LeksyZB2ZnuU4q941mVT
+XTzWnLLPKQP5L6RQstRIzgUyVYr9smRMDuSYB3Xbf9+5CFVghTAp+XtIpGmG4zU/
+HoZdenoVve8AjhUiVBcAkCaTvA5JaJG/+EfTnZVCwQ5N328mz8MYIWJmQ3DW1cAH
+4QIDAQABo3QwcjARBglghkgBhvhCAQEEBAMCAAcwHwYDVR0jBBgwFoAUVeSB0RGA
+vtiJuQijMfmhJAkWuXAwHQYDVR0OBBYEFFXkgdERgL7YibkIozH5oSQJFrlwMB0G
+CSqGSIb2fQdBAAQQMA4bCFY1LjA6NC4wAwIEkDANBgkqhkiG9w0BAQUFAAOCAQEA
+WUesIYSKF8mciVMeuoCFGsY8Tj6xnLZ8xpJdGGQC49MGCBFhfGPjK50xA3B20qMo
+oPS7mmNz7W3lKtvtFKkrxjYR0CvrB4ul2p5cGZ1WEvVUKcgF7bISKo30Axv/55IQ
+h7A6tcOdBTcSo8f0FbnVpDkWm1M6I5HxqIKiaohowXkCIryqptau37AUX7iH0N18
+f3v/rxzP5tsHrV7bhZ3QKw0z2wTR5klAEyt2+z7pnIkPFc4YsIV4IU9rTw76NmfN
+B/L/CNDi3tm/Kq+4h4YhPATKt5Rof8886ZjXOP/swNlQ8C5LWK5Gb9Auw2DaclVy
+vUxFnmG6v4SBkgPR0ml8xQ==
+-----END CERTIFICATE-----
+
+Baltimore CyberTrust Root
+=========================
+
+-----BEGIN CERTIFICATE-----
+MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJ
+RTESMBAGA1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJlclRydXN0MSIwIAYD
+VQQDExlCYWx0aW1vcmUgQ3liZXJUcnVzdCBSb290MB4XDTAwMDUxMjE4NDYwMFoX
+DTI1MDUxMjIzNTkwMFowWjELMAkGA1UEBhMCSUUxEjAQBgNVBAoTCUJhbHRpbW9y
+ZTETMBEGA1UECxMKQ3liZXJUcnVzdDEiMCAGA1UEAxMZQmFsdGltb3JlIEN5YmVy
+VHJ1c3QgUm9vdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKMEuyKr
+mD1X6CZymrV51Cni4eiVgLGw41uOKymaZN+hXe2wCQVt2yguzmKiYv60iNoS6zjr
+IZ3AQSsBUnuId9Mcj8e6uYi1agnnc+gRQKfRzMpijS3ljwumUNKoUMMo6vWrJYeK
+mpYcqWe4PwzV9/lSEy/CG9VwcPCPwBLKBsua4dnKM3p31vjsufFoREJIE9LAwqSu
+XmD+tqYF/LTdB1kC1FkYmGP1pWPgkAx9XbIGevOF6uvUA65ehD5f/xXtabz5OTZy
+dc93Uk3zyZAsuT3lySNTPx8kmCFcB5kpvcY67Oduhjprl3RjM71oGDHweI12v/ye
+jl0qhqdNkNwnGjkCAwEAAaNFMEMwHQYDVR0OBBYEFOWdWTCCR1jMrPoIVDaGezq1
+BE3wMBIGA1UdEwEB/wQIMAYBAf8CAQMwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3
+DQEBBQUAA4IBAQCFDF2O5G9RaEIFoN27TyclhAO992T9Ldcw46QQF+vaKSm2eT92
+9hkTI7gQCvlYpNRhcL0EYWoSihfVCr3FvDB81ukMJY2GQE/szKN+OMY3EU/t3Wgx
+jkzSswF07r51XgdIGn9w/xZchMB5hbgF/X++ZRGjD8ACtPhSNzkE1akxehi/oCr0
+Epn3o0WC4zxe9Z2etciefC7IpJ5OCBRLbf1wbWsaY71k5h+3zvDyny67G7fyUIhz
+ksLi4xaNmjICq44Y3ekQEe5+NauQrz4wlHrQMz2nZQ/1/I6eYs9HRCwBXbsdtTLS
+R9I4LtD+gdwyah617jzV/OeBHRnDJELqYzmp
+-----END CERTIFICATE-----
+
+AddTrust Low-Value Services Root
+================================
+
+-----BEGIN CERTIFICATE-----
+MIIEGDCCAwCgAwIBAgIBATANBgkqhkiG9w0BAQUFADBlMQswCQYDVQQGEwJTRTEU
+MBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3
+b3JrMSEwHwYDVQQDExhBZGRUcnVzdCBDbGFzcyAxIENBIFJvb3QwHhcNMDAwNTMw
+MTAzODMxWhcNMjAwNTMwMTAzODMxWjBlMQswCQYDVQQGEwJTRTEUMBIGA1UEChML
+QWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3b3JrMSEwHwYD
+VQQDExhBZGRUcnVzdCBDbGFzcyAxIENBIFJvb3QwggEiMA0GCSqGSIb3DQEBAQUA
+A4IBDwAwggEKAoIBAQCWltQhSWDia+hBBwzexODcEyPNwTXH+9ZOEQpnXvUGW2ul
+CDtbKRY654eyNAbFvAWlA3yCyykQruGIgb3WntP+LVbBFc7jJp0VLhD7Bo8wBN6n
+tGO0/7Gcrjyvd7ZWxbWroulpOj0OM3kyP3CCkplhbY0wCI9xP6ZIVxn4JdxLZlyl
+dI+Yrsj5wAYi56xz36Uu+1LcsRVlIPo1Zmne3yzxbrww2ywkEtvrNTVokMsAsJch
+PXQhI2U0K7t4WaPW4XY5mqRJjox0r26kmqPZm9I4XJuiGMx1I4S+6+JNM3GOGvDC
++Mcdoq0Dlyz4zyXG9rgkMbFjXZJ/Y/AlyVMuH79NAgMBAAGjgdIwgc8wHQYDVR0O
+BBYEFJWxtPCUtr3H2tERCSG+wa9J/RB7MAsGA1UdDwQEAwIBBjAPBgNVHRMBAf8E
+BTADAQH/MIGPBgNVHSMEgYcwgYSAFJWxtPCUtr3H2tERCSG+wa9J/RB7oWmkZzBl
+MQswCQYDVQQGEwJTRTEUMBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFk
+ZFRydXN0IFRUUCBOZXR3b3JrMSEwHwYDVQQDExhBZGRUcnVzdCBDbGFzcyAxIENB
+IFJvb3SCAQEwDQYJKoZIhvcNAQEFBQADggEBACxtZBsfzQ3duQH6lmM0MkhHma6X
+7f1yFqZzR1r0693p9db7RcwpiURdv0Y5PejuvE1Uhh4dbOMXJ0PhiVYrqW9yTkkz
+43J8KiOavD7/KCrto/8cI7pDVwlnTUtiBi34/2ydYB7YHEt9tTEv2dB8Xfjea4MY
+eDdXL+gzB2ffHsdrKpV2ro9Xo/D0UrSpUwjP4E/TelOL/bscVjby/rK25Xa71SJl
+pz/+0WatC7xrmYbvP33zGDLKe8bjq2RGlfgmadlVg3sslgf/WSxEo8bl6ancoWOA
+WiFeIc9TVPC6b4nbqKqVz4vjccweGyBECMB6tkD9xOQ14R0WHNC8K47Wcdk=
+-----END CERTIFICATE-----
+
+AddTrust External Root
+======================
+
+-----BEGIN CERTIFICATE-----
+MIIENjCCAx6gAwIBAgIBATANBgkqhkiG9w0BAQUFADBvMQswCQYDVQQGEwJTRTEU
+MBIGA1UEChMLQWRkVHJ1c3QgQUIxJjAkBgNVBAsTHUFkZFRydXN0IEV4dGVybmFs
+IFRUUCBOZXR3b3JrMSIwIAYDVQQDExlBZGRUcnVzdCBFeHRlcm5hbCBDQSBSb290
+MB4XDTAwMDUzMDEwNDgzOFoXDTIwMDUzMDEwNDgzOFowbzELMAkGA1UEBhMCU0Ux
+FDASBgNVBAoTC0FkZFRydXN0IEFCMSYwJAYDVQQLEx1BZGRUcnVzdCBFeHRlcm5h
+bCBUVFAgTmV0d29yazEiMCAGA1UEAxMZQWRkVHJ1c3QgRXh0ZXJuYWwgQ0EgUm9v
+dDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALf3GjPm8gAELTngTlvt
+H7xsD821+iO2zt6bETOXpClMfZOfvUq8k+0DGuOPz+VtUFrWlymUWoCwSXrbLpX9
+uMq/NzgtHj6RQa1wVsfwTz/oMp50ysiQVOnGXw94nZpAPA6sYapeFI+eh6FqUNzX
+mk6vBbOmcZSccbNQYArHE504B4YCqOmoaSYYkKtMsE8jqzpPhNjfzp/haW+710LX
+a0Tkx63ubUFfclpxCDezeWWkWaCUN/cALw3CknLa0Dhy2xSoRcRdKn23tNbE7qzN
+E0S3ySvdQwAl+mG5aWpYIxG3pzOPVnVZ9c0p10a3CitlttNCbxWyuHv77+ldU9U0
+WicCAwEAAaOB3DCB2TAdBgNVHQ4EFgQUrb2YejS0Jvf6xCZU7wO94CTLVBowCwYD
+VR0PBAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wgZkGA1UdIwSBkTCBjoAUrb2YejS0
+Jvf6xCZU7wO94CTLVBqhc6RxMG8xCzAJBgNVBAYTAlNFMRQwEgYDVQQKEwtBZGRU
+cnVzdCBBQjEmMCQGA1UECxMdQWRkVHJ1c3QgRXh0ZXJuYWwgVFRQIE5ldHdvcmsx
+IjAgBgNVBAMTGUFkZFRydXN0IEV4dGVybmFsIENBIFJvb3SCAQEwDQYJKoZIhvcN
+AQEFBQADggEBALCb4IUlwtYj4g+WBpKdQZic2YR5gdkeWxQHIzZlj7DYd7usQWxH
+YINRsPkyPef89iYTx4AWpb9a/IfPeHmJIZriTAcKhjW88t5RxNKWt9x+Tu5w/Rw5
+6wwCURQtjr0W4MHfRnXnJK3s9EK0hZNwEGe6nQY1ShjTK3rMUUKhemPR5ruhxSvC
+Nr4TDea9Y355e6cJDUCrat2PisP29owaQgVR1EX1n6diIWgVIEM8med8vSTYqZEX
+c4g/VhsxOBi0cQ+azcgOno4uG+GMmIPLHzHxREzGBHNJdmAPx/i9F4BrLunMTA5a
+mnkPIAou1Z5jJh5VkpTYghdae9C8x49OhgQ=
+-----END CERTIFICATE-----
+
+AddTrust Public Services Root
+=============================
+
+-----BEGIN CERTIFICATE-----
+MIIEFTCCAv2gAwIBAgIBATANBgkqhkiG9w0BAQUFADBkMQswCQYDVQQGEwJTRTEU
+MBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3
+b3JrMSAwHgYDVQQDExdBZGRUcnVzdCBQdWJsaWMgQ0EgUm9vdDAeFw0wMDA1MzAx
+MDQxNTBaFw0yMDA1MzAxMDQxNTBaMGQxCzAJBgNVBAYTAlNFMRQwEgYDVQQKEwtB
+ZGRUcnVzdCBBQjEdMBsGA1UECxMUQWRkVHJ1c3QgVFRQIE5ldHdvcmsxIDAeBgNV
+BAMTF0FkZFRydXN0IFB1YmxpYyBDQSBSb290MIIBIjANBgkqhkiG9w0BAQEFAAOC
+AQ8AMIIBCgKCAQEA6Rowj4OIFMEg2Dybjxt+A3S72mnTRqX4jsIMEZBRpS9mVEBV
+6tsfSlbunyNu9DnLoblv8n75XYcmYZ4c+OLspoH4IcUkzBEMP9smcnrHAZcHF/nX
+GCwwfQ56HmIexkvA/X1id9NEHif2P0tEs7c42TkfYNVRknMDtABp4/MUTu7R3AnP
+dzRGULD4EfL+OHn3Bzn+UZKXC1sIXzSGAa2Il+tmzV7R/9x98oTaunet3IAIx6eH
+1lWfl2royBFkuucZKT8Rs3iQhCBSWxHveNCD9tVIkNAwHM+A+WD+eeSI8t0A65RF
+62WUaUC6wNW0uLp9BBGo6zEFlpROWCGOn9Bg/QIDAQABo4HRMIHOMB0GA1UdDgQW
+BBSBPjfYkrAfd59ctKtzquf2NGAv+jALBgNVHQ8EBAMCAQYwDwYDVR0TAQH/BAUw
+AwEB/zCBjgYDVR0jBIGGMIGDgBSBPjfYkrAfd59ctKtzquf2NGAv+qFopGYwZDEL
+MAkGA1UEBhMCU0UxFDASBgNVBAoTC0FkZFRydXN0IEFCMR0wGwYDVQQLExRBZGRU
+cnVzdCBUVFAgTmV0d29yazEgMB4GA1UEAxMXQWRkVHJ1c3QgUHVibGljIENBIFJv
+b3SCAQEwDQYJKoZIhvcNAQEFBQADggEBAAP3FUr4JNojVhaTdt02KLmuG7jD8WS6
+IBh4lSknVwW8fCr0uVFV2ocC3g8WFzH4qnkuCRO7r7IgGRLlk/lL+YPoRNWyQSW/
+iHVv/xD8SlTQX/D67zZzfRs2RcYhbbQVuE7PnFylPVoAjgbjPGsye/Kf8Lb93/Ao
+GEjwxrzQvzSAlsJKsW2Ox5BF3i9nrEUEo3rcVZLJR2bYGozH7ZxOmuASu7VqTITh
+4SINhwBk/ox9Yjllpu9CtoAlEmEBqCQTcAARJl/6NVDFSMwGR+gn2HCNX2TmoUQm
+XiLsks3/QppEIW1cxeMiHV9HEufOX1362KqxMy3ZdvJOOjMMK7MtkAY=
+-----END CERTIFICATE-----
+
+AddTrust Qualified Certificates Root
+====================================
+
+-----BEGIN CERTIFICATE-----
+MIIEHjCCAwagAwIBAgIBATANBgkqhkiG9w0BAQUFADBnMQswCQYDVQQGEwJTRTEU
+MBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3
+b3JrMSMwIQYDVQQDExpBZGRUcnVzdCBRdWFsaWZpZWQgQ0EgUm9vdDAeFw0wMDA1
+MzAxMDQ0NTBaFw0yMDA1MzAxMDQ0NTBaMGcxCzAJBgNVBAYTAlNFMRQwEgYDVQQK
+EwtBZGRUcnVzdCBBQjEdMBsGA1UECxMUQWRkVHJ1c3QgVFRQIE5ldHdvcmsxIzAh
+BgNVBAMTGkFkZFRydXN0IFF1YWxpZmllZCBDQSBSb290MIIBIjANBgkqhkiG9w0B
+AQEFAAOCAQ8AMIIBCgKCAQEA5B6a/twJWoekn0e+EV+vhDTbYjx5eLfpMLXsDBwq
+xBb/4Oxx64r1EW7tTw2R0hIYLUkVAcKkIhPHEWT/IhKauY5cLwjPcWqzZwFZ8V1G
+87B4pfYOQnrjfxvM0PC3KP0q6p6zsLkEqv32x7SxuCqg+1jxGaBvcCV+PmlKfw8i
+2O+tCBGaKZnhqkRFmhJePp1tUvznoD1oL/BLcHwTOK28FSXx1s6rosAx1i+f4P8U
+WfyEk9mHfExUE+uf0S0R+Bg6Ot4l2ffTQO2kBhLEO+GRwVY18BTcZTYJbqukB8c1
+0cIDMzZbdSZtQvESa0NvS3GU+jQd7RNuyoB/mC9suWXY6QIDAQABo4HUMIHRMB0G
+A1UdDgQWBBQ5lYtii1zJ1IC6WA+XPxUIQ8yYpzALBgNVHQ8EBAMCAQYwDwYDVR0T
+AQH/BAUwAwEB/zCBkQYDVR0jBIGJMIGGgBQ5lYtii1zJ1IC6WA+XPxUIQ8yYp6Fr
+pGkwZzELMAkGA1UEBhMCU0UxFDASBgNVBAoTC0FkZFRydXN0IEFCMR0wGwYDVQQL
+ExRBZGRUcnVzdCBUVFAgTmV0d29yazEjMCEGA1UEAxMaQWRkVHJ1c3QgUXVhbGlm
+aWVkIENBIFJvb3SCAQEwDQYJKoZIhvcNAQEFBQADggEBABmrder4i2VhlRO6aQTv
+hsoToMeqT2QbPxj2qC0sVY8FtzDqQmodwCVRLae/DLPt7wh/bDxGGuoYQ992zPlm
+hpwsaPXpF/gxsxjE1kh9I0xowX67ARRvxdlu3rsEQmr49lx95dr6h+sNNVJn0J6X
+dgWTP5XHAeZpVTh/EGGZyeNfpso+gmNIquIISD6q8rKFYqa0p9m9N5xotS1WfbC3
+P6CxB9bpT9zeRXEwMn8bLgn5v1Kh7sKAPgZcLlVAwRv1cEWw3F369nJad9Jjzc9Y
+iQBCYz95OdBEsIJuQRno3eDBiFrRHnGTHyQwdOUeqN48Jzd/g66ed8/wMLH/S5no
+xqE=
+-----END CERTIFICATE-----
+
+Entrust Root Certification Authority
+====================================
+
+-----BEGIN CERTIFICATE-----
+MIIEkTCCA3mgAwIBAgIERWtQVDANBgkqhkiG9w0BAQUFADCBsDELMAkGA1UEBhMC
+VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xOTA3BgNVBAsTMHd3dy5lbnRydXN0
+Lm5ldC9DUFMgaXMgaW5jb3Jwb3JhdGVkIGJ5IHJlZmVyZW5jZTEfMB0GA1UECxMW
+KGMpIDIwMDYgRW50cnVzdCwgSW5jLjEtMCsGA1UEAxMkRW50cnVzdCBSb290IENl
+cnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA2MTEyNzIwMjM0MloXDTI2MTEyNzIw
+NTM0MlowgbAxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMTkw
+NwYDVQQLEzB3d3cuZW50cnVzdC5uZXQvQ1BTIGlzIGluY29ycG9yYXRlZCBieSBy
+ZWZlcmVuY2UxHzAdBgNVBAsTFihjKSAyMDA2IEVudHJ1c3QsIEluYy4xLTArBgNV
+BAMTJEVudHJ1c3QgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASIwDQYJ
+KoZIhvcNAQEBBQADggEPADCCAQoCggEBALaVtkNC+sZtKm9I35RMOVcF7sN5EUFo
+Nu3s/poBj6E4KPz3EEZmLk0eGrEaTsbRwJWIsMn/MYszA9u3g3s+IIRe7bJWKKf4
+4LlAcTfFy0cOlypowCKVYhXbR9n10Cv/gkvJrT7eTNuQgFA/CYqEAOwwCj0Yzfv9
+KlmaI5UXLEWeH25DeW0MXJj+SKfFI0dcXv1u5x609mhF0YaDW6KKjbHjKYD+JXGI
+rb68j6xSlkuqUY3kEzEZ6E5Nn9uss2rVvDlUccp6en+Q3X0dgNmBu1kmwhH+5pPi
+94DkZfs0Nw4pgHBNrziGLp5/V6+eF67rHMsoIV+2HNjnogQi+dPa2MsCAwEAAaOB
+sDCBrTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zArBgNVHRAEJDAi
+gA8yMDA2MTEyNzIwMjM0MlqBDzIwMjYxMTI3MjA1MzQyWjAfBgNVHSMEGDAWgBRo
+kORnpKZTgMeGZqTx90tD+4S9bTAdBgNVHQ4EFgQUaJDkZ6SmU4DHhmak8fdLQ/uE
+vW0wHQYJKoZIhvZ9B0EABBAwDhsIVjcuMTo0LjADAgSQMA0GCSqGSIb3DQEBBQUA
+A4IBAQCT1DCw1wMgKtD5Y+iRDAUgqV8ZyntyTtSx29CW+1RaGSwMCPeyvIWonX9t
+O1KzKtvn1ISMY/YPyyYBkVBs9F8U4pN0wBOeMDpQ47RgxRzwIkSNcUesyBrJ6Zua
+AGAT/3B+XxFNSRuzFVJ7yVTav52Vr2ua2J7p8eRDjeIRRDq/r72DQnNSi6q7pynP
+9WQcCk3RvKqsnyrQ/39/2n3qse0wJcGE2jTSW3iDVuycNsMm4hH2Z0kdkquM++v/
+eu6FSqdQgPCnXEqULl8FmTxSQeDNtGPPAUO6nIPcj2A781q0tHuu2guQOHXvgR1m
+0vdXcDazv/wor3ElhVsT/h5/WrQ8
+-----END CERTIFICATE-----
+
+GeoTrust Global CA
+==================
+
+-----BEGIN CERTIFICATE-----
+MIIDVDCCAjygAwIBAgIDAjRWMA0GCSqGSIb3DQEBBQUAMEIxCzAJBgNVBAYTAlVT
+MRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMRswGQYDVQQDExJHZW9UcnVzdCBHbG9i
+YWwgQ0EwHhcNMDIwNTIxMDQwMDAwWhcNMjIwNTIxMDQwMDAwWjBCMQswCQYDVQQG
+EwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEbMBkGA1UEAxMSR2VvVHJ1c3Qg
+R2xvYmFsIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2swYYzD9
+9BcjGlZ+W988bDjkcbd4kdS8odhM+KhDtgPpTSEHCIjaWC9mOSm9BXiLnTjoBbdq
+fnGk5sRgprDvgOSJKA+eJdbtg/OtppHHmMlCGDUUna2YRpIuT8rxh0PBFpVXLVDv
+iS2Aelet8u5fa9IAjbkU+BQVNdnARqN7csiRv8lVK83Qlz6cJmTM386DGXHKTubU
+1XupGc1V3sjs0l44U+VcT4wt/lAjNvxm5suOpDkZALeVAjmRCw7+OC7RHQWa9k0+
+bw8HHa8sHo9gOeL6NlMTOdReJivbPagUvTLrGAMoUgRx5aszPeE4uwc2hGKceeoW
+MPRfwCvocWvk+QIDAQABo1MwUTAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTA
+ephojYn7qwVkDBF9qn1luMrMTjAfBgNVHSMEGDAWgBTAephojYn7qwVkDBF9qn1l
+uMrMTjANBgkqhkiG9w0BAQUFAAOCAQEANeMpauUvXVSOKVCUn5kaFOSPeCpilKIn
+Z57QzxpeR+nBsqTP3UEaBU6bS+5Kb1VSsyShNwrrZHYqLizz/Tt1kL/6cdjHPTfS
+tQWVYrmm3ok9Nns4d0iXrKYgjy6myQzCsplFAMfOEVEiIuCl6rYVSAlk6l5PdPcF
+PseKUgzbFbS9bZvlxrFUaKnjaZC2mqUPuLk/IH2uSrW4nOQdtqvmlKXBx4Ot2/Un
+hw4EbNX/3aBd7YdStysVAq45pmp06drE57xNNB6pXE0zX5IJL4hmXXeXxx12E6nV
+5fEWCRE11azbJHFwLJhWC9kXtNHjUStedejV0NxPNO3CBWaAocvmMw==
+-----END CERTIFICATE-----
+
+GeoTrust Global CA 2
+====================
+
+-----BEGIN CERTIFICATE-----
+MIIDZjCCAk6gAwIBAgIBATANBgkqhkiG9w0BAQUFADBEMQswCQYDVQQGEwJVUzEW
+MBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEdMBsGA1UEAxMUR2VvVHJ1c3QgR2xvYmFs
+IENBIDIwHhcNMDQwMzA0MDUwMDAwWhcNMTkwMzA0MDUwMDAwWjBEMQswCQYDVQQG
+EwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEdMBsGA1UEAxMUR2VvVHJ1c3Qg
+R2xvYmFsIENBIDIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDvPE1A
+PRDfO1MA4Wf+lGAVPoWI8YkNkMgoI5kF6CsgncbzYEbYwbLVjDHZ3CB5JIG/NTL8
+Y2nbsSpr7iFY8gjpeMtvy/wWUsiRxP89c96xPqfCfWbB9X5SJBri1WeR0IIQ13hL
+TytCOb1kLUCgsBDTOEhGiKEMuzozKmKY+wCdE1l/bztyqu6mD4b5BWHqZ38MN5aL
+5mkWRxHCJ1kDs6ZgwiFAVvqgx306E+PsV8ez1q6diYD3Aecs9pYrEw15LNnA5IZ7
+S4wMcoKK+xfNAGw6EzywhIdLFnopsk/bHdQL82Y3vdj2V7teJHq4PIu5+pIaGoSe
+2HSPqht/XvT+RSIhAgMBAAGjYzBhMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYE
+FHE4NvICMVNHK266ZUapEBVYIAUJMB8GA1UdIwQYMBaAFHE4NvICMVNHK266ZUap
+EBVYIAUJMA4GA1UdDwEB/wQEAwIBhjANBgkqhkiG9w0BAQUFAAOCAQEAA/e1K6td
+EPx7srJerJsOflN4WT5CBP51o62sgU7XAotexC3IUnbHLB/8gTKY0UvGkpMzNTEv
+/NgdRN3ggX+d6YvhZJFiCzkIjKx0nVnZellSlxG5FntvRdOW2TF9AjYPnDtuzywN
+A0ZF66D0f0hExghAzN4bcLUprbqLOzRldRtxIR0sFAqwlpW41uryZfspuk/qkZN0
+abby/+Ea0AzRdoXLiiW9l14sbxWZJue2Kf8i7MkCx1YAzUm5s2x7UwQa4qjJqhIF
+I8LO57sEAszAR6LkxCkvW0VXiVHuPOtSCP8HNR6fNWpHSlaY0VqFH4z1Ir+rzoPz
+4iIprn2DQKi6bA==
+-----END CERTIFICATE-----
+
+GeoTrust Universal CA
+=====================
+
+-----BEGIN CERTIFICATE-----
+MIIFaDCCA1CgAwIBAgIBATANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJVUzEW
+MBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEeMBwGA1UEAxMVR2VvVHJ1c3QgVW5pdmVy
+c2FsIENBMB4XDTA0MDMwNDA1MDAwMFoXDTI5MDMwNDA1MDAwMFowRTELMAkGA1UE
+BhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xHjAcBgNVBAMTFUdlb1RydXN0
+IFVuaXZlcnNhbCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAKYV
+VaCjxuAfjJ0hUNfBvitbtaSeodlyWL0AG0y/YckUHUWCq8YdgNY96xCcOq9tJPi8
+cQGeBvV8Xx7BDlXKg5pZMK4ZyzBIle0iN430SppyZj6tlcDgFgDgEB8rMQ7XlFTT
+QjOgNB0eRXbdT8oYN+yFFXoZCPzVx5zw8qkuEKmS5j1YPakWaDwvdSEYfyh3peFh
+F7em6fgemdtzbvQKoiFs7tqqhZJmr/Z6a4LauiIINQ/PQvE1+mrufislzDoR5G2v
+c7J2Ha3QsnhnGqQ5HFELZ1aD/ThdDc7d8Lsrlh/eezJS/R27tQahsiFepdaVaH/w
+mZ7cRQg+59IJDTWU3YBOU5fXtQlEIGQWFwMCTFMNaN7VqnJNk22CDtucvc+081xd
+VHppCZbW2xHBjXWotM85yM48vCR85mLK4b19p71XZQvk/iXttmkQ3CgaRr0BHdCX
+teGYO8A3ZNY9lO4L4fUorgtWv3GLIylBjobFS1J72HGrH4oVpjuDWtdYAVHGTEHZ
+f9hBZ3KiKN9gg6meyHv8U3NyWfWTehd2Ds735VzZC1U0oqpbtWpU5xPKV+yXbfRe
+Bi9Fi1jUIxaS5BZuKGNZMN9QAZxjiRqf2xeUgnA3wySemkfWWspOqGmJch+RbNt+
+nhutxx9z3SxPGWX9f5NAEC7S8O08ni4oPmkmM8V7AgMBAAGjYzBhMA8GA1UdEwEB
+/wQFMAMBAf8wHQYDVR0OBBYEFNq7LqqwDLiIJlF0XG0D08DYj3rWMB8GA1UdIwQY
+MBaAFNq7LqqwDLiIJlF0XG0D08DYj3rWMA4GA1UdDwEB/wQEAwIBhjANBgkqhkiG
+9w0BAQUFAAOCAgEAMXjmx7XfuJRAyXHEqDXsRh3ChfMoWIawC/yOsjmPRFWrZIRc
+aanQmjg8+uUfNeVE44B5lGiku8SfPeE0zTBGi1QrlaXv9z+ZhP015s8xxtxqv6fX
+IwjhmF7DWgh2qaavdy+3YL1ERmrvl/9zlcGO6JP7/TG37FcREUWbMPEaiDnBTzyn
+ANXH/KttgCJwpQzgXQQpAvvLoJHRfNbDflDVnVi+QTjruXU8FdmbyUqDWcDaU/0z
+uzYYm4UPFd3uLax2k7nZAY1IEKj79TiG8dsKxr2EoyNB3tZ3b4XUhRxQ4K5RirqN
+Pnbiucon8l+f725ZDQbYKxek0nxru18UGkiPGkzns0ccjkxFKyDuSN/n3QmOGKja
+QI2SJhFTYXNd673nxE0pN2HrrDktZy4W1vUAg4WhzH92xH3kt0tm7wNFYGm2DFKW
+koRepqO1pD4r2czYG0eq8kTaT/kD6PAUyz/zg97QwVTjt+gKN02LIFkDMBmhLMi9
+ER/frslKxfMnZmaGrGiR/9nmUxwPi1xpZQomyB40w11Re9epnAahNt3ViZS82eQt
+DF4JbAiXfKM9fJP/P6EUp8+1Xevb2xzEdt+Iub1FBZUbrvxGakyvSOPOrg/Sfuvm
+bJxPgWp6ZKy7PtXny3YuxadIwVyQD8vIP/rmMuGNG2+k5o7Y+SlIis5z/iw=
+-----END CERTIFICATE-----
+
+GeoTrust Universal CA 2
+=======================
+
+-----BEGIN CERTIFICATE-----
+MIIFbDCCA1SgAwIBAgIBATANBgkqhkiG9w0BAQUFADBHMQswCQYDVQQGEwJVUzEW
+MBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEgMB4GA1UEAxMXR2VvVHJ1c3QgVW5pdmVy
+c2FsIENBIDIwHhcNMDQwMzA0MDUwMDAwWhcNMjkwMzA0MDUwMDAwWjBHMQswCQYD
+VQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEgMB4GA1UEAxMXR2VvVHJ1
+c3QgVW5pdmVyc2FsIENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC
+AQCzVFLByT7y2dyxUxpZKeexw0Uo5dfR7cXFS6GqdHtXr0om/Nj1XqduGdt0DE81
+WzILAePb63p3NeqqWuDW6KFXlPCQo3RWlEQwAx5cTiuFJnSCegx2oG9NzkEtoBUG
+FF+3Qs17j1hhNNwqCPkuwwGmIkQcTAeC5lvO0Ep8BNMZcyfwqph/Lq9O64ceJHdq
+XbboW0W63MOhBW9Wjo8QJqVJwy7XQYci4E+GymC16qFjwAGXEHm9ADwSbSsVsaxL
+se4YuU6W3Nx2/zu+z18DwPw76L5GG//aQMJS9/7jOvdqdzXQ2o3rXhhqMcceujwb
+KNZrVMaqW9eiLBsZzKIC9ptZvTdrhrVtgrrY6slWvKk2WP0+GfPtDCapkzj4T8Fd
+IgbQl+rhrcZV4IErKIM6+vR7IVEAvlI4zs1meaj0gVbi0IMJR1FbUGrP20gaXT73
+y/Zl92zxlfgCOzJWgjl6W70viRu/obTo/3+NjN8D8WBOWBFM66M/ECuDmgFz2ZRt
+hAAnZqzwcEAJQpKtT5MNYQlRJNiS1QuUYbKHsu3/mjX/hVTK7URDrBs8FmtISgoc
+QIgfksILAAX/8sgCSqSqqcyZlpwvWOB94b67B9xfBHJcMTTD7F8t4D1kkCLm0ey4
+Lt1ZrtmhN79UNdxzMk+MBB4zsslG8dhcyFVQyWi9qLo2CQIDAQABo2MwYTAPBgNV
+HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR281Xh+qQ2+/CfXGJx7Tz0RzgQKzAfBgNV
+HSMEGDAWgBR281Xh+qQ2+/CfXGJx7Tz0RzgQKzAOBgNVHQ8BAf8EBAMCAYYwDQYJ
+KoZIhvcNAQEFBQADggIBAGbBxiPz2eAubl/oz66wsCVNK/g7WJtAJDday6sWSf+z
+dXkzoS9tcBc0kf5nfo/sm+VegqlVHy/c1FEHEv6sFj4sNcZj/NwQ6w2jqtB8zNHQ
+L1EuxBRa3ugZ4T7GzKQp5y6EqgYweHZUcyiYWTjgAA1i00J9IZ+uPTqM1fp3DRgr
+Fg5fNuH8KrUwJM/gYwx7WBr+mbpCErGR9Hxo4sjoryzqyX6uuyo9DRXcNJW2GHSo
+ag/HtPQTxORb7QrSpJdMKu0vbBKJPfEncKpqA1Ihn0CoZ1Dy81of398j9tx4TuaY
+T1U6U+Pv8vSfx3zYWK8pIpe44L2RLrB27FcRz+8pRPPphXpgY+RdM4kX2TGq2tbz
+GDVyz4crL2MjhF2EjD9XoIj8mZEoJmmZ1I+XRL6O1UixpCgp8RW04eWe3fiPpm8m
+1wk8OhwRDqZsN/etRIcsKMfYdIKz0G9KV7s1KSegi+ghp4dkNl3M2Basx7InQJJV
+OCiNUW7dFGdTbHFcJoRNdVq2fmBWqU2t+5sel/MN2dKXVHfaPRK34B7vCAas+YWH
+6aLcr34YEoP9VhdBLtUpgn2Z9DH2canPLAEnpQW5qrJITirvn5NSUZU8UnOOVkwX
+QMAJKOSLakhT2+zNVVXxxvjpoixMptEmX36vWkzaH6byHCx+rgIW0lbQL1dTR+iS
+-----END CERTIFICATE-----
+
+America Online Root Certification Authority 1
+=============================================
+
+-----BEGIN CERTIFICATE-----
+MIIDpDCCAoygAwIBAgIBATANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEc
+MBoGA1UEChMTQW1lcmljYSBPbmxpbmUgSW5jLjE2MDQGA1UEAxMtQW1lcmljYSBP
+bmxpbmUgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAxMB4XDTAyMDUyODA2
+MDAwMFoXDTM3MTExOTIwNDMwMFowYzELMAkGA1UEBhMCVVMxHDAaBgNVBAoTE0Ft
+ZXJpY2EgT25saW5lIEluYy4xNjA0BgNVBAMTLUFtZXJpY2EgT25saW5lIFJvb3Qg
+Q2VydGlmaWNhdGlvbiBBdXRob3JpdHkgMTCCASIwDQYJKoZIhvcNAQEBBQADggEP
+ADCCAQoCggEBAKgv6KRpBgNHw+kqmP8ZonCaxlCyfqXfaE0bfA+2l2h9LaaLl+lk
+hsmj76CGv2BlnEtUiMJIxUo5vxTjWVXlGbR0yLQFOVwWpeKVBeASrlmLojNoWBym
+1BW32J/X3HGrfpq/m44zDyL9Hy7nBzbvYjnF3cu6JRQj3gzGPTzOggjmZj7aUTsW
+OqMFf6Dch9Wc/HKpoH145LcxVR5lu9RhsCFg7RAycsWSJR74kEoYeEfffjA3PlAb
+2xzTa5qGUwew76wGePiEmf4hjUyAtgyC9mZweRrTT6PP8c9GsEsPPt2IYriMqQko
+O3rHl+Ee5fSfwMCuJKDIodkP1nsmgmkyPacCAwEAAaNjMGEwDwYDVR0TAQH/BAUw
+AwEB/zAdBgNVHQ4EFgQUAK3Zo/Z59m50qX8zPYEX10zPM94wHwYDVR0jBBgwFoAU
+AK3Zo/Z59m50qX8zPYEX10zPM94wDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEB
+BQUAA4IBAQB8itEfGDeC4Liwo+1WlchiYZwFos3CYiZhzRAW18y0ZTTQEYqtqKkF
+Zu90821fnZmv9ov761KyBZiibyrFVL0lvV+uyIbqRizBs73B6UlwGBaXCBOMIOAb
+LjpHyx7kADCVW/RFo8AasAFOq73AI25jP4BKxQft3OJvx8Fi8eNy1gTIdGcL+oir
+oQHIb/AUr9KZzVGTfu0uOMe9zkZQPXLjeSWdm4grECDdpbgyn43gKd8hdIaC2y+C
+MMbHNYaz+ZZfRtsMRf3zUMNvxsNIrUam4SdHCh0Om7bCd39j8uB9Gr784N/Xx6ds
+sPmuujz9dLQR6FgNgLzTqIA6me11zEZ7
+-----END CERTIFICATE-----
+
+America Online Root Certification Authority 2
+=============================================
+
+-----BEGIN CERTIFICATE-----
+MIIFpDCCA4ygAwIBAgIBATANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEc
+MBoGA1UEChMTQW1lcmljYSBPbmxpbmUgSW5jLjE2MDQGA1UEAxMtQW1lcmljYSBP
+bmxpbmUgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAyMB4XDTAyMDUyODA2
+MDAwMFoXDTM3MDkyOTE0MDgwMFowYzELMAkGA1UEBhMCVVMxHDAaBgNVBAoTE0Ft
+ZXJpY2EgT25saW5lIEluYy4xNjA0BgNVBAMTLUFtZXJpY2EgT25saW5lIFJvb3Qg
+Q2VydGlmaWNhdGlvbiBBdXRob3JpdHkgMjCCAiIwDQYJKoZIhvcNAQEBBQADggIP
+ADCCAgoCggIBAMxBRR3pPU0Q9oyxQcngXssNt79Hc9PwVU3dxgz6sWYFas14tNwC
+206B89enfHG8dWOgXeMHDEjsJcQDIPT/DjsS/5uN4cbVG7RtIuOx238hZK+GvFci
+KtZHgVdEglZTvYYUAQv8f3SkWq7xuhG1m1hagLQ3eAkzfDJHA1zEpYNI9FdWboE2
+JxhP7JsowtS013wMPgwr38oE18aO6lhOqKSlGBxsRZijQdEt0sdtjRnxrXm3gT+9
+BoInLRBYBbV4Bbkv2wxrkJB+FFk4u5QkE+XRnRTf04JNRvCAOVIyD+OEsnpD8l7e
+Xz8d3eOyG6ChKiMDbi4BFYdcpnV1x5dhvt6G3NRI270qv0pV2uh9UPu0gBe4lL8B
+PeraunzgWGcXuVjgiIZGZ2ydEEdYMtA1fHkqkKJaEBEjNa0vzORKW6fIJ/KD3l67
+Xnfn6KVuY8INXWHQjNJsWiEOyiijzirplcdIz5ZvHZIlyMbGwcEMBawmxNJ10uEq
+Z8A9W6Wa6897GqidFEXlD6CaZd4vKL3Ob5Rmg0gp2OpljK+T2WSfVVcmv2/LNzGZ
+o2C7HK2JNDJiuEMhBnIMoVxtRsX6Kc8w3onccVvdtjc+31D1uAclJuW8tf48ArO3
++L5DwYcRlJ4jbBeKuIonDFRH8KmzwICMoCfrHRnjB453cMor9H124HhnAgMBAAGj
+YzBhMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFE1FwWg4u3OpaaEg5+31IqEj
+FNeeMB8GA1UdIwQYMBaAFE1FwWg4u3OpaaEg5+31IqEjFNeeMA4GA1UdDwEB/wQE
+AwIBhjANBgkqhkiG9w0BAQUFAAOCAgEAZ2sGuV9FOypLM7PmG2tZTiLMubekJcmn
+xPBUlgtk87FYT15R/LKXeydlwuXK5w0MJXti4/qftIe3RUavg6WXSIylvfEWK5t2
+LHo1YGwRgJfMqZJS5ivmae2p+DYtLHe/YUjRYwu5W1LtGLBDQiKmsXeu3mnFzccc
+obGlHBD7GL4acN3Bkku+KVqdPzW+5X1R+FXgJXUjhx5c3LqdsKyzadsXg8n33gy8
+CNyRnqjQ1xU3c6U1uPx+xURABsPr+CKAXEfOAuMRn0T//ZoyzH1kUQ7rVyZ2OuMe
+IjzCpjbdGe+n/BLzJsBZMYVMnNjP36TMzCmT/5RtdlwTCJfy7aULTd3oyWgOZtMA
+DjMSW7yV5TKQqLPGbIOtd+6Lfn6xqavT4fG2wLHqiMDn05DpKJKUe2h7lyoKZy2F
+AjgQ5ANh1NolNscIWC2hp1GvMApJ9aZphwctREZ2jirlmjvXGKL8nDgQzMY70rUX
+Om/9riW99XJZZLF0KjhfGEzfz3EEWjbUvy+ZnOjZurGV5gJLIaFb1cFPj65pbVPb
+AZO1XB4Y3WRayhgoPmMEEf0cjQAPuDffZ4qdZqkCapH/E8ovXYO8h5Ns3CRRFgQl
+Zvqz2cK6Kb6aSDiCmfS/O0oxGfm/jiEzFMpPVF/7zvuPcX/9XhmgD0uRuMRUvAaw
+RY8mkaKO/qk=
+-----END CERTIFICATE-----
+
+Comodo AAA Services root
+========================
+
+-----BEGIN CERTIFICATE-----
+MIIEMjCCAxqgAwIBAgIBATANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJHQjEb
+MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow
+GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEhMB8GA1UEAwwYQUFBIENlcnRpZmlj
+YXRlIFNlcnZpY2VzMB4XDTA0MDEwMTAwMDAwMFoXDTI4MTIzMTIzNTk1OVowezEL
+MAkGA1UEBhMCR0IxGzAZBgNVBAgMEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE
+BwwHU2FsZm9yZDEaMBgGA1UECgwRQ29tb2RvIENBIExpbWl0ZWQxITAfBgNVBAMM
+GEFBQSBDZXJ0aWZpY2F0ZSBTZXJ2aWNlczCCASIwDQYJKoZIhvcNAQEBBQADggEP
+ADCCAQoCggEBAL5AnfRu4ep2hxxNRUSOvkbIgwadwSr+GB+O5AL686tdUIoWMQua
+BtDFcCLNSS1UY8y2bmhGC1Pqy0wkwLxyTurxFa70VJoSCsN6sjNg4tqJVfMiWPPe
+3M/vg4aijJRPn2jymJBGhCfHdr/jzDUsi14HZGWCwEiwqJH5YZ92IFCokcdmtet4
+YgNW8IoaE+oxox6gmf049vYnMlhvB/VruPsUK6+3qszWY19zjNoFmag4qMsXeDZR
+rOme9Hg6jc8P2ULimAyrL58OAd7vn5lJ8S3frHRNG5i1R8XlKdH5kBjHYpy+g8cm
+ez6KJcfA3Z3mNWgQIJ2P2N7Sw4ScDV7oL8kCAwEAAaOBwDCBvTAdBgNVHQ4EFgQU
+oBEKIz6W8Qfs4q8p74Klf9AwpLQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQF
+MAMBAf8wewYDVR0fBHQwcjA4oDagNIYyaHR0cDovL2NybC5jb21vZG9jYS5jb20v
+QUFBQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmwwNqA0oDKGMGh0dHA6Ly9jcmwuY29t
+b2RvLm5ldC9BQUFDZXJ0aWZpY2F0ZVNlcnZpY2VzLmNybDANBgkqhkiG9w0BAQUF
+AAOCAQEACFb8AvCb6P+k+tZ7xkSAzk/ExfYAWMymtrwUSWgEdujm7l3sAg9g1o1Q
+GE8mTgHj5rCl7r+8dFRBv/38ErjHT1r0iWAFf2C3BUrz9vHCv8S5dIa2LX1rzNLz
+Rt0vxuBqw8M0Ayx9lt1awg6nCpnBBYurDC/zXDrPbDdVCYfeU0BsWO/8tqtlbgT2
+G9w84FoVxp7Z8VlIMCFlA2zs6SFz7JsDoeA3raAVGI/6ugLOpyypEBMs1OUIJqsi
+l2D4kF501KKaU73yqWjgom7C12yxow+ev+to51byrvLjKzg6CYG1a4XXvi3tPxq3
+smPi9WIsgtRqAEFQ8TmDn5XpNpaYbg==
+-----END CERTIFICATE-----
+
+Comodo Secure Services root
+===========================
+
+-----BEGIN CERTIFICATE-----
+MIIEPzCCAyegAwIBAgIBATANBgkqhkiG9w0BAQUFADB+MQswCQYDVQQGEwJHQjEb
+MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow
+GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEkMCIGA1UEAwwbU2VjdXJlIENlcnRp
+ZmljYXRlIFNlcnZpY2VzMB4XDTA0MDEwMTAwMDAwMFoXDTI4MTIzMTIzNTk1OVow
+fjELMAkGA1UEBhMCR0IxGzAZBgNVBAgMEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G
+A1UEBwwHU2FsZm9yZDEaMBgGA1UECgwRQ29tb2RvIENBIExpbWl0ZWQxJDAiBgNV
+BAMMG1NlY3VyZSBDZXJ0aWZpY2F0ZSBTZXJ2aWNlczCCASIwDQYJKoZIhvcNAQEB
+BQADggEPADCCAQoCggEBAMBxM4KK0HDrc4eCQNUd5MvJDkKQ+d40uaG6EfQlhfPM
+cm3ye5drswfxdySRXyWP9nQ95IDC+DwN879A6vfIUtFyb+/Iq0G4bi4XKpVpDM3S
+HpR7LZQdqnXXs5jLrLxkU0C8j6ysNstcrbvd4JQX7NFc0L/vpZXJkMWwrPsbQ996
+CF23uPJAGysnnlDOXmWCiIxe004MeuoIkbY2qitC++rCoznl2yY4rYsK7hljxxwk
+3wN42ubqwUcaCwtGCd0C/N7Lh1/XMGNooa7cMqG6vv5Eq2i2pRcV/b3Vp6ea5EQz
+6YiO/O1R65NxTq0B50SOqy3LqP4BSUjwwN3HaNiS/j0CAwEAAaOBxzCBxDAdBgNV
+HQ4EFgQUPNiTiMLAggnMAZkGkyDpnnAJY08wDgYDVR0PAQH/BAQDAgEGMA8GA1Ud
+EwEB/wQFMAMBAf8wgYEGA1UdHwR6MHgwO6A5oDeGNWh0dHA6Ly9jcmwuY29tb2Rv
+Y2EuY29tL1NlY3VyZUNlcnRpZmljYXRlU2VydmljZXMuY3JsMDmgN6A1hjNodHRw
+Oi8vY3JsLmNvbW9kby5uZXQvU2VjdXJlQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmww
+DQYJKoZIhvcNAQEFBQADggEBAIcBbSMdflsXfcFhMs+P5/OKlFlm4J4oqF7Tt/Q0
+5qo5spcWxYJvMqTpjOev/e/C6LlLqqP05tqNZSH7uoDrJiiFGv45jN5bBAS0VPmj
+Z55B+glSzAVIqMk/IQQezkhr/IXownuvf7fM+F86/TXGDe+X3EyrEeFryzHRbPtI
+gKvcnDe4IRRLDXE97IMzbtFuMhbsmMcWi1mmNKsFVy2T96oTy9IT4rcuO81rUBcJ
+aD61JlfutuC23bkpgHl9j6PwpCikFcSF9CfUa7/lXORlAnZUtOM3ZiTTGWHIUhDl
+izeauan5Hb/qmZJhlv8BzaFfDbxxvA6sCx1HRR3B7Hzs/Sk=
+-----END CERTIFICATE-----
+
+Comodo Trusted Services root
+============================
+
+-----BEGIN CERTIFICATE-----
+MIIEQzCCAyugAwIBAgIBATANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJHQjEb
+MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow
+GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDElMCMGA1UEAwwcVHJ1c3RlZCBDZXJ0
+aWZpY2F0ZSBTZXJ2aWNlczAeFw0wNDAxMDEwMDAwMDBaFw0yODEyMzEyMzU5NTla
+MH8xCzAJBgNVBAYTAkdCMRswGQYDVQQIDBJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAO
+BgNVBAcMB1NhbGZvcmQxGjAYBgNVBAoMEUNvbW9kbyBDQSBMaW1pdGVkMSUwIwYD
+VQQDDBxUcnVzdGVkIENlcnRpZmljYXRlIFNlcnZpY2VzMIIBIjANBgkqhkiG9w0B
+AQEFAAOCAQ8AMIIBCgKCAQEA33FvNlhTWvI2VFeAxHQIIO0Yfyod5jWaHiWsnOWW
+fnJSoBVC21ndZHoa0Lh73TkVvFVIxO06AOoxEbrycXQaZ7jPM8yoMa+j49d/vzMt
+TGo87IvDktJTdyR0nAducPy9C1t2ul/y/9c3S0pgePfw+spwtOpZqqPOSC+pw7IL
+fhdyFgymBwwbOM/JYrc/oJOlh0Hyt3BAd9i+FHzjqMB6juljatEPmsbS9Is6FARW
+1O24zG71++IsWL1/T2sr92AkWCTOJu80kTrV44HQsvAEAtdbtz6SrGsSivnkBbA7
+kUlcsutT6vifR4buv5XAwAaf0lteERv0xwQ1KdJVXOTt6wIDAQABo4HJMIHGMB0G
+A1UdDgQWBBTFe1i97doladL3WRaoszLAeydb9DAOBgNVHQ8BAf8EBAMCAQYwDwYD
+VR0TAQH/BAUwAwEB/zCBgwYDVR0fBHwwejA8oDqgOIY2aHR0cDovL2NybC5jb21v
+ZG9jYS5jb20vVHJ1c3RlZENlcnRpZmljYXRlU2VydmljZXMuY3JsMDqgOKA2hjRo
+dHRwOi8vY3JsLmNvbW9kby5uZXQvVHJ1c3RlZENlcnRpZmljYXRlU2VydmljZXMu
+Y3JsMA0GCSqGSIb3DQEBBQUAA4IBAQDIk4E7ibSvuIQSTI3S8NtwuleGFTQQuS9/
+HrCoiWChisJ3DFBKmwCL2Iv0QeLQg4pKHBQGsKNoBXAxMKdTmw7pSqBYaWcOrp32
+pSxBvzwGa+RZzG0Q8ZZvH9/0BAKkn0U+yNj6NkZEUD+Cl5EfKNsYEYwq5GWDVxIS
+jBc/lDb+XbDABHcTuPQV1T84zJQ6VdCsmPW6AF/ghhmBeC8owH7TzEIK9a5QoNE+
+xqFx7D+gIIxmOom0jtTYsU0lR+4viMi14QVFwL4Ucd56/Y57fU0IlqUSc/Atyjcn
+dBInTMu2l+nZrghtWjlA3QVHdWpaIbOjGM9O9y5Xt5hwXsjEeLBi
+-----END CERTIFICATE-----
+
+UTN DATACorp SGC Root CA
+========================
+
+-----BEGIN CERTIFICATE-----
+MIIEXjCCA0agAwIBAgIQRL4Mi1AAIbQR0ypoBqmtaTANBgkqhkiG9w0BAQUFADCB
+kzELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2Ug
+Q2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExho
+dHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xGzAZBgNVBAMTElVUTiAtIERBVEFDb3Jw
+IFNHQzAeFw05OTA2MjQxODU3MjFaFw0xOTA2MjQxOTA2MzBaMIGTMQswCQYDVQQG
+EwJVUzELMAkGA1UECBMCVVQxFzAVBgNVBAcTDlNhbHQgTGFrZSBDaXR5MR4wHAYD
+VQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxITAfBgNVBAsTGGh0dHA6Ly93d3cu
+dXNlcnRydXN0LmNvbTEbMBkGA1UEAxMSVVROIC0gREFUQUNvcnAgU0dDMIIBIjAN
+BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3+5YEKIrblXEjr8uRgnn4AgPLit6
+E5Qbvfa2gI5lBZMAHryv4g+OGQ0SR+ysraP6LnD43m77VkIVni5c7yPeIbkFdicZ
+D0/Ww5y0vpQZY/KmEQrrU0icvvIpOxboGqBMpsn0GFlowHDyUwDAXlCCpVZvNvlK
+4ESGoE1O1kduSUrLZ9emxAW5jh70/P/N5zbgnAVssjMiFdC04MwXwLLA9P4yPykq
+lXvY8qdOD1R8oQ2AswkDwf9c3V6aPryuvEeKaq5xyh+xKrhfQgUL7EYw0XILyulW
+bfXv33i+Ybqypa4ETLyorGkVl73v67SMvzX41MPRKA5cOp9wGDMgd8SirwIDAQAB
+o4GrMIGoMAsGA1UdDwQEAwIBxjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRT
+MtGzz3/64PGgXYVOktKeRR20TzA9BgNVHR8ENjA0MDKgMKAuhixodHRwOi8vY3Js
+LnVzZXJ0cnVzdC5jb20vVVROLURBVEFDb3JwU0dDLmNybDAqBgNVHSUEIzAhBggr
+BgEFBQcDAQYKKwYBBAGCNwoDAwYJYIZIAYb4QgQBMA0GCSqGSIb3DQEBBQUAA4IB
+AQAnNZcAiosovcYzMB4p/OL31ZjUQLtgyr+rFywJNn9Q+kHcrpY6CiM+iVnJowft
+Gzet/Hy+UUla3joKVAgWRcKZsYfNjGjgaQPpxE6YsjuMFrMOoAyYUJuTqXAJyCyj
+j98C5OBxOvG0I3KgqgHf35g+FFCgMSa9KOlaMCZ1+XtgHI3zzVAmbQQnmt/VDUVH
+KWss5nbZqSl9Mt3JNjy9rjXxEZ4du5A/EkdOjtd+D2JzHVImOBwYSf0wdJrE5SIv
+2MCN7ZF6TACPcn9d2t0bi0Vr591pl6jFVkwPDPafepE39peC4N1xaf92P2BNPM/3
+mfnGV/TJVTl4uix5yaaIK/QI
+-----END CERTIFICATE-----
+
+UTN USERFirst Hardware Root CA
+==============================
+
+-----BEGIN CERTIFICATE-----
+MIIEdDCCA1ygAwIBAgIQRL4Mi1AAJLQR0zYq/mUK/TANBgkqhkiG9w0BAQUFADCB
+lzELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2Ug
+Q2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExho
+dHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xHzAdBgNVBAMTFlVUTi1VU0VSRmlyc3Qt
+SGFyZHdhcmUwHhcNOTkwNzA5MTgxMDQyWhcNMTkwNzA5MTgxOTIyWjCBlzELMAkG
+A1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2UgQ2l0eTEe
+MBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExhodHRwOi8v
+d3d3LnVzZXJ0cnVzdC5jb20xHzAdBgNVBAMTFlVUTi1VU0VSRmlyc3QtSGFyZHdh
+cmUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCx98M4P7Sof885glFn
+0G2f0v9Y8+efK+wNiVSZuTiZFvfgIXlIwrthdBKWHTxqctU8EGc6Oe0rE81m65UJ
+M6Rsl7HoxuzBdXmcRl6Nq9Bq/bkqVRcQVLMZ8Jr28bFdtqdt++BxF2uiiPsA3/4a
+MXcMmgF6sTLjKwEHOG7DpV4jvEWbe1DByTCP2+UretNb+zNAHqDVmBe8i4fDidNd
+oI6yqqr2jmmIBsX6iSHzCJ1pLgkzmykNRg+MzEk0sGlRvfkGzWitZky8PqxhvQqI
+DsjfPe58BEydCl5rkdbux+0ojatNh4lz0G6k0B4WixThdkQDf2Os5M1JnMWS9Ksy
+oUhbAgMBAAGjgbkwgbYwCwYDVR0PBAQDAgHGMA8GA1UdEwEB/wQFMAMBAf8wHQYD
+VR0OBBYEFKFyXyYbKJhDlV0HN9WFlp1L0sNFMEQGA1UdHwQ9MDswOaA3oDWGM2h0
+dHA6Ly9jcmwudXNlcnRydXN0LmNvbS9VVE4tVVNFUkZpcnN0LUhhcmR3YXJlLmNy
+bDAxBgNVHSUEKjAoBggrBgEFBQcDAQYIKwYBBQUHAwUGCCsGAQUFBwMGBggrBgEF
+BQcDBzANBgkqhkiG9w0BAQUFAAOCAQEARxkP3nTGmZev/K0oXnWO6y1n7k57K9cM
+//bey1WiCuFMVGWTYGufEpytXoMs61quwOQt9ABjHbjAbPLPSbtNk28Gpgoiskli
+CE7/yMgUsogWXecB5BKV5UU0s4tpvc+0hY91UZ59Ojg6FEgSxvunOxqNDYJAB+gE
+CJChicsZUN/KHAG8HQQZexB2lzvukJDKxA4fFm517zP4029bHpbj4HR3dHuKom4t
+3XbWOTCC8KucUvIqx69JXn7HaOWCgchqJ/kniCrVWFCVH/A7HFe7fRQ5YiuayZSS
+KqMiDP+JJn1fIytH1xUdqWqeUQ0qUZ6B+dQ7XnASfxAynB67nfhmqA==
+-----END CERTIFICATE-----
+
+XRamp Global CA Root
+====================
+
+-----BEGIN CERTIFICATE-----
+MIIEMDCCAxigAwIBAgIQUJRs7Bjq1ZxN1ZfvdY+grTANBgkqhkiG9w0BAQUFADCB
+gjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3dy54cmFtcHNlY3VyaXR5LmNvbTEk
+MCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2VydmljZXMgSW5jMS0wKwYDVQQDEyRY
+UmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQxMTAxMTcx
+NDA0WhcNMzUwMTAxMDUzNzE5WjCBgjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3
+dy54cmFtcHNlY3VyaXR5LmNvbTEkMCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2Vy
+dmljZXMgSW5jMS0wKwYDVQQDEyRYUmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBB
+dXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCYJB69FbS6
+38eMpSe2OAtp87ZOqCwuIR1cRN8hXX4jdP5efrRKt6atH67gBhbim1vZZ3RrXYCP
+KZ2GG9mcDZhtdhAoWORlsH9KmHmf4MMxfoArtYzAQDsRhtDLooY2YKTVMIJt2W7Q
+DxIEM5dfT2Fa8OT5kavnHTu86M/0ay00fOJIYRyO82FEzG+gSqmUsE3a56k0enI4
+qEHMPJQRfevIpoy3hsvKMzvZPTeL+3o+hiznc9cKV6xkmxnr9A8ECIqsAxcZZPRa
+JSKNNCyy9mgdEm3Tih4U2sSPpuIjhdV6Db1q4Ons7Be7QhtnqiXtRYMh/MHJfNVi
+PvryxS3T/dRlAgMBAAGjgZ8wgZwwEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0P
+BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMZPoj0GY4QJnM5i5ASs
+jVy16bYbMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwueHJhbXBzZWN1cml0
+eS5jb20vWEdDQS5jcmwwEAYJKwYBBAGCNxUBBAMCAQEwDQYJKoZIhvcNAQEFBQAD
+ggEBAJEVOQMBG2f7Shz5CmBbodpNl2L5JFMn14JkTpAuw0kbK5rc/Kh4ZzXxHfAR
+vbdI4xD2Dd8/0sm2qlWkSLoC295ZLhVbO50WfUfXN+pfTXYSNrsf16GBBEYgoyxt
+qZ4Bfj8pzgCT3/3JknOJiWSe5yvkHJEs0rnOfc5vMZnT5r7SHpDwCRR5XCOrTdLa
+IR9NmXmd4c8nnxCbHIgNsIpkQTG4DmyQJKSbXHGPurt+HBvbaoAPIbzp26a3QPSy
+i6mx5O+aGtA9aZnuqCij4Tyz8LIRnM98QObd50N9otg6tamN8jSZxNQQ4Qb9CYQQ
+O+7ETPTsJ3xCwnR8gooJybQDJbw=
+-----END CERTIFICATE-----
+
+Go Daddy Class 2 CA
+===================
+
+-----BEGIN CERTIFICATE-----
+MIIEADCCAuigAwIBAgIBADANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEh
+MB8GA1UEChMYVGhlIEdvIERhZGR5IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBE
+YWRkeSBDbGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA0MDYyOTE3
+MDYyMFoXDTM0MDYyOTE3MDYyMFowYzELMAkGA1UEBhMCVVMxITAfBgNVBAoTGFRo
+ZSBHbyBEYWRkeSBHcm91cCwgSW5jLjExMC8GA1UECxMoR28gRGFkZHkgQ2xhc3Mg
+MiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASAwDQYJKoZIhvcNAQEBBQADggEN
+ADCCAQgCggEBAN6d1+pXGEmhW+vXX0iG6r7d/+TvZxz0ZWizV3GgXne77ZtJ6XCA
+PVYYYwhv2vLM0D9/AlQiVBDYsoHUwHU9S3/Hd8M+eKsaA7Ugay9qK7HFiH7Eux6w
+wdhFJ2+qN1j3hybX2C32qRe3H3I2TqYXP2WYktsqbl2i/ojgC95/5Y0V4evLOtXi
+EqITLdiOr18SPaAIBQi2XKVlOARFmR6jYGB0xUGlcmIbYsUfb18aQr4CUWWoriMY
+avx4A6lNf4DD+qta/KFApMoZFv6yyO9ecw3ud72a9nmYvLEHZ6IVDd2gWMZEewo+
+YihfukEHU1jPEX44dMX4/7VpkI+EdOqXG68CAQOjgcAwgb0wHQYDVR0OBBYEFNLE
+sNKR1EwRcbNhyz2h/t2oatTjMIGNBgNVHSMEgYUwgYKAFNLEsNKR1EwRcbNhyz2h
+/t2oatTjoWekZTBjMQswCQYDVQQGEwJVUzEhMB8GA1UEChMYVGhlIEdvIERhZGR5
+IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBEYWRkeSBDbGFzcyAyIENlcnRpZmlj
+YXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQAD
+ggEBADJL87LKPpH8EsahB4yOd6AzBhRckB4Y9wimPQoZ+YeAEW5p5JYXMP80kWNy
+OO7MHAGjHZQopDH2esRU1/blMVgDoszOYtuURXO1v0XJJLXVggKtI3lpjbi2Tc7P
+TMozI+gciKqdi0FuFskg5YmezTvacPd+mSYgFFQlq25zheabIZ0KbIIOqPjCDPoQ
+HmyW74cNxA9hi63ugyuV+I6ShHI56yDqg+2DzZduCLzrTia2cyvk0/ZM/iZx4mER
+dEr/VxqHD3VILs9RaRegAhJhldXRQLIQTO7ErBBDpqWeCtWVYpoNz4iCxTIM5Cuf
+ReYNnyicsbkqWletNw+vHX/bvZ8=
+-----END CERTIFICATE-----
+
+Starfield Class 2 CA
+====================
+
+-----BEGIN CERTIFICATE-----
+MIIEDzCCAvegAwIBAgIBADANBgkqhkiG9w0BAQUFADBoMQswCQYDVQQGEwJVUzEl
+MCMGA1UEChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMp
+U3RhcmZpZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQw
+NjI5MTczOTE2WhcNMzQwNjI5MTczOTE2WjBoMQswCQYDVQQGEwJVUzElMCMGA1UE
+ChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMpU3RhcmZp
+ZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggEgMA0GCSqGSIb3
+DQEBAQUAA4IBDQAwggEIAoIBAQC3Msj+6XGmBIWtDBFk385N78gDGIc/oav7PKaf
+8MOh2tTYbitTkPskpD6E8J7oX+zlJ0T1KKY/e97gKvDIr1MvnsoFAZMej2YcOadN
++lq2cwQlZut3f+dZxkqZJRRU6ybH838Z1TBwj6+wRir/resp7defqgSHo9T5iaU0
+X9tDkYI22WY8sbi5gv2cOj4QyDvvBmVmepsZGD3/cVE8MC5fvj13c7JdBmzDI1aa
+K4UmkhynArPkPw2vCHmCuDY96pzTNbO8acr1zJ3o/WSNF4Azbl5KXZnJHoe0nRrA
+1W4TNSNe35tfPe/W93bC6j67eA0cQmdrBNj41tpvi/JEoAGrAgEDo4HFMIHCMB0G
+A1UdDgQWBBS/X7fRzt0fhvRbVazc1xDCDqmI5zCBkgYDVR0jBIGKMIGHgBS/X7fR
+zt0fhvRbVazc1xDCDqmI56FspGowaDELMAkGA1UEBhMCVVMxJTAjBgNVBAoTHFN0
+YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAsTKVN0YXJmaWVsZCBD
+bGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8w
+DQYJKoZIhvcNAQEFBQADggEBAAWdP4id0ckaVaGsafPzWdqbAYcaT1epoXkJKtv3
+L7IezMdeatiDh6GX70k1PncGQVhiv45YuApnP+yz3SFmH8lU+nLMPUxA2IGvd56D
+eruix/U0F47ZEUD0/CwqTRV/p2JdLiXTAAsgGh1o+Re49L2L7ShZ3U0WixeDyLJl
+xy16paq8U4Zt3VekyvggQQto8PT7dL5WXXp59fkdheMtlb71cZBDzI0fmgAKhynp
+VSJYACPq4xJDKVtHCN2MQWplBqjlIapBtJUhlbl90TSrE9atvNziPTnNvT51cKEY
+WQPJIrSPnNVeKtelttQKbfi3QBFGmh95DmK/D5fs4C8fF5Q=
+-----END CERTIFICATE-----
+
+StartCom Certification Authority
+================================
+
+-----BEGIN CERTIFICATE-----
+MIIHyTCCBbGgAwIBAgIBATANBgkqhkiG9w0BAQUFADB9MQswCQYDVQQGEwJJTDEW
+MBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMiU2VjdXJlIERpZ2l0YWwg
+Q2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3RhcnRDb20gQ2VydGlmaWNh
+dGlvbiBBdXRob3JpdHkwHhcNMDYwOTE3MTk0NjM2WhcNMzYwOTE3MTk0NjM2WjB9
+MQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMi
+U2VjdXJlIERpZ2l0YWwgQ2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3Rh
+cnRDb20gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUA
+A4ICDwAwggIKAoICAQDBiNsJvGxGfHiflXu1M5DycmLWwTYgIiRezul38kMKogZk
+pMyONvg45iPwbm2xPN1yo4UcodM9tDMr0y+v/uqwQVlntsQGfQqedIXWeUyAN3rf
+OQVSWff0G0ZDpNKFhdLDcfN1YjS6LIp/Ho/u7TTQEceWzVI9ujPW3U3eCztKS5/C
+Ji/6tRYccjV3yjxd5srhJosaNnZcAdt0FCX+7bWgiA/deMotHweXMAEtcnn6RtYT
+Kqi5pquDSR3l8u/d5AGOGAqPY1MWhWKpDhk6zLVmpsJrdAfkK+F2PrRt2PZE4XNi
+HzvEvqBTViVsUQn3qqvKv3b9bZvzndu/PWa8DFaqr5hIlTpL36dYUNk4dalb6kMM
+Av+Z6+hsTXBbKWWc3apdzK8BMewM69KN6Oqce+Zu9ydmDBpI125C4z/eIT574Q1w
++2OqqGwaVLRcJXrJosmLFqa7LH4XXgVNWG4SHQHuEhANxjJ/GP/89PrNbpHoNkm+
+Gkhpi8KWTRoSsmkXwQqQ1vp5Iki/untp+HDH+no32NgN0nZPV/+Qt+OR0t3vwmC3
+Zzrd/qqc8NSLf3Iizsafl7b4r4qgEKjZ+xjGtrVcUjyJthkqcwEKDwOzEmDyei+B
+26Nu/yYwl/WL3YlXtq09s68rxbd2AvCl1iuahhQqcvbjM4xdCUsT37uMdBNSSwID
+AQABo4ICUjCCAk4wDAYDVR0TBAUwAwEB/zALBgNVHQ8EBAMCAa4wHQYDVR0OBBYE
+FE4L7xqkQFulF2mHMMo0aEPQQa7yMGQGA1UdHwRdMFswLKAqoCiGJmh0dHA6Ly9j
+ZXJ0LnN0YXJ0Y29tLm9yZy9zZnNjYS1jcmwuY3JsMCugKaAnhiVodHRwOi8vY3Js
+LnN0YXJ0Y29tLm9yZy9zZnNjYS1jcmwuY3JsMIIBXQYDVR0gBIIBVDCCAVAwggFM
+BgsrBgEEAYG1NwEBATCCATswLwYIKwYBBQUHAgEWI2h0dHA6Ly9jZXJ0LnN0YXJ0
+Y29tLm9yZy9wb2xpY3kucGRmMDUGCCsGAQUFBwIBFilodHRwOi8vY2VydC5zdGFy
+dGNvbS5vcmcvaW50ZXJtZWRpYXRlLnBkZjCB0AYIKwYBBQUHAgIwgcMwJxYgU3Rh
+cnQgQ29tbWVyY2lhbCAoU3RhcnRDb20pIEx0ZC4wAwIBARqBl0xpbWl0ZWQgTGlh
+YmlsaXR5LCByZWFkIHRoZSBzZWN0aW9uICpMZWdhbCBMaW1pdGF0aW9ucyogb2Yg
+dGhlIFN0YXJ0Q29tIENlcnRpZmljYXRpb24gQXV0aG9yaXR5IFBvbGljeSBhdmFp
+bGFibGUgYXQgaHR0cDovL2NlcnQuc3RhcnRjb20ub3JnL3BvbGljeS5wZGYwEQYJ
+YIZIAYb4QgEBBAQDAgAHMDgGCWCGSAGG+EIBDQQrFilTdGFydENvbSBGcmVlIFNT
+TCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTANBgkqhkiG9w0BAQUFAAOCAgEAFmyZ
+9GYMNPXQhV59CuzaEE44HF7fpiUFS5Eyweg78T3dRAlbB0mKKctmArexmvclmAk8
+jhvh3TaHK0u7aNM5Zj2gJsfyOZEdUauCe37Vzlrk4gNXcGmXCPleWKYK34wGmkUW
+FjgKXlf2Ysd6AgXmvB618p70qSmD+LIU424oh0TDkBreOKk8rENNZEXO3SipXPJz
+ewT4F+irsfMuXGRuczE6Eri8sxHkfY+BUZo7jYn0TZNmezwD7dOaHZrzZVD1oNB1
+ny+v8OqCQ5j4aZyJecRDjkZy42Q2Eq/3JR44iZB3fsNrarnDy0RLrHiQi+fHLB5L
+EUTINFInzQpdn4XBidUaePKVEFMy3YCEZnXZtWgo+2EuvoSoOMCZEoalHmdkrQYu
+L6lwhceWD3yJZfWOQ1QOq92lgDmUYMA0yZZwLKMS9R9Ie70cfmu3nZD0Ijuu+Pwq
+yvqCUqDvr0tVk+vBtfAii6w0TiYiBKGHLHVKt+V9E9e4DGTANtLJL4YSjCMJwRuC
+O3NJo2pXh5Tl1njFmUNj403gdy3hZZlyaQQaRwnmDwFWJPsfvw55qVguucQJAX6V
+um0ABj6y6koQOdjQK/W/7HW/lwLFCRsI3FU34oH7N4RDYiDK51ZLZer+bMEkkySh
+NOsF/5oirpt9P/FlUQqmMGqz9IgcgA38corog14=
+-----END CERTIFICATE-----
+
+DigiCert Assured ID Root CA
+===========================
+
+-----BEGIN CERTIFICATE-----
+MIIDtzCCAp+gAwIBAgIQDOfg5RfYRv6P5WD8G/AwOTANBgkqhkiG9w0BAQUFADBl
+MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
+d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJv
+b3QgQ0EwHhcNMDYxMTEwMDAwMDAwWhcNMzExMTEwMDAwMDAwWjBlMQswCQYDVQQG
+EwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNl
+cnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgQ0EwggEi
+MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCtDhXO5EOAXLGH87dg+XESpa7c
+JpSIqvTO9SA5KFhgDPiA2qkVlTJhPLWxKISKityfCgyDF3qPkKyK53lTXDGEKvYP
+mDI2dsze3Tyoou9q+yHyUmHfnyDXH+Kx2f4YZNISW1/5WBg1vEfNoTb5a3/UsDg+
+wRvDjDPZ2C8Y/igPs6eD1sNuRMBhNZYW/lmci3Zt1/GiSw0r/wty2p5g0I6QNcZ4
+VYcgoc/lbQrISXwxmDNsIumH0DJaoroTghHtORedmTpyoeb6pNnVFzF1roV9Iq4/
+AUaG9ih5yLHa5FcXxH4cDrC0kqZWs72yl+2qp/C3xag/lRbQ/6GW6whfGHdPAgMB
+AAGjYzBhMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
+BBRF66Kv9JLLgjEtUYunpyGd823IDzAfBgNVHSMEGDAWgBRF66Kv9JLLgjEtUYun
+pyGd823IDzANBgkqhkiG9w0BAQUFAAOCAQEAog683+Lt8ONyc3pklL/3cmbYMuRC
+dWKuh+vy1dneVrOfzM4UKLkNl2BcEkxY5NM9g0lFWJc1aRqoR+pWxnmrEthngYTf
+fwk8lOa4JiwgvT2zKIn3X/8i4peEH+ll74fg38FnSbNd67IJKusm7Xi+fT8r87cm
+NW1fiQG2SVufAQWbqz0lwcy2f8Lxb4bG+mRo64EtlOtCt/qMHt1i8b5QZ7dsvfPx
+H2sMNgcWfzd8qVttevESRmCD1ycEvkvOl77DZypoEd+A5wwzZr8TDRRu838fYxAe
++o0bJW1sj6W3YQGx0qMmoRBxna3iw/nDmVG3KwcIzi7mULKn+gpFL6Lw8g==
+-----END CERTIFICATE-----
+
+DigiCert Global Root CA
+=======================
+
+-----BEGIN CERTIFICATE-----
+MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh
+MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
+d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD
+QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT
+MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j
+b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG
+9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB
+CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97
+nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt
+43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P
+T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4
+gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO
+BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR
+TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw
+DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr
+hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg
+06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF
+PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls
+YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk
+CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4=
+-----END CERTIFICATE-----
+
+DigiCert High Assurance EV Root CA
+==================================
+
+-----BEGIN CERTIFICATE-----
+MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs
+MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
+d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j
+ZSBFViBSb290IENBMB4XDTA2MTExMDAwMDAwMFoXDTMxMTExMDAwMDAwMFowbDEL
+MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3
+LmRpZ2ljZXJ0LmNvbTErMCkGA1UEAxMiRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug
+RVYgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMbM5XPm
++9S75S0tMqbf5YE/yc0lSbZxKsPVlDRnogocsF9ppkCxxLeyj9CYpKlBWTrT3JTW
+PNt0OKRKzE0lgvdKpVMSOO7zSW1xkX5jtqumX8OkhPhPYlG++MXs2ziS4wblCJEM
+xChBVfvLWokVfnHoNb9Ncgk9vjo4UFt3MRuNs8ckRZqnrG0AFFoEt7oT61EKmEFB
+Ik5lYYeBQVCmeVyJ3hlKV9Uu5l0cUyx+mM0aBhakaHPQNAQTXKFx01p8VdteZOE3
+hzBWBOURtCmAEvF5OYiiAhF8J2a3iLd48soKqDirCmTCv2ZdlYTBoSUeh10aUAsg
+EsxBu24LUTi4S8sCAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQF
+MAMBAf8wHQYDVR0OBBYEFLE+w2kD+L9HAdSYJhoIAu9jZCvDMB8GA1UdIwQYMBaA
+FLE+w2kD+L9HAdSYJhoIAu9jZCvDMA0GCSqGSIb3DQEBBQUAA4IBAQAcGgaX3Nec
+nzyIZgYIVyHbIUf4KmeqvxgydkAQV8GK83rZEWWONfqe/EW1ntlMMUu4kehDLI6z
+eM7b41N5cdblIZQB2lWHmiRk9opmzN6cN82oNLFpmyPInngiK3BD41VHMWEZ71jF
+hS9OMPagMRYjyOfiZRYzy78aG6A9+MpeizGLYAiJLQwGXFK3xPkKmNEVX58Svnw2
+Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe
+vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep
++OkuE6N36B9K
+-----END CERTIFICATE-----
+
+GeoTrust Primary Certification Authority
+========================================
+
+-----BEGIN CERTIFICATE-----
+MIIDfDCCAmSgAwIBAgIQGKy1av1pthU6Y2yv2vrEoTANBgkqhkiG9w0BAQUFADBY
+MQswCQYDVQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjExMC8GA1UEAxMo
+R2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjEx
+MjcwMDAwMDBaFw0zNjA3MTYyMzU5NTlaMFgxCzAJBgNVBAYTAlVTMRYwFAYDVQQK
+Ew1HZW9UcnVzdCBJbmMuMTEwLwYDVQQDEyhHZW9UcnVzdCBQcmltYXJ5IENlcnRp
+ZmljYXRpb24gQXV0aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
+AQEAvrgVe//UfH1nrYNke8hCUy3f9oQIIGHWAVlqnEQRr+92/ZV+zmEwu3qDXwK9
+AWbK7hWNb6EwnL2hhZ6UOvNWiAAxz9juapYC2e0DjPt1befquFUWBRaa9OBesYjA
+ZIVcFU2Ix7e64HXprQU9nceJSOC7KMgD4TCTZF5SwFlwIjVXiIrxlQqD17wxcwE0
+7e9GceBrAqg1cmuXm2bgyxx5X9gaBGgeRwLmnWDiNpcB3841kt++Z8dtd1k7j53W
+kBWUvEI0EME5+bEnPn7WinXFsq+W06Lem+SYvn3h6YGttm/81w7a4DSwDRp35+MI
+mO9Y+pyEtzavwt+s0vQQBnBxNQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4G
+A1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQULNVQQZcVi/CPNmFbSvtr2ZnJM5IwDQYJ
+KoZIhvcNAQEFBQADggEBAFpwfyzdtzRP9YZRqSa+S7iq8XEN3GHHoOo0Hnp3DwQ1
+6CePbJC/kRYkRj5KTs4rFtULUh38H2eiAkUxT87z+gOneZ1TatnaYzr4gNfTmeGl
+4b7UVXGYNTq+k+qurUKykG/g/CFNNWMziUnWm07Kx+dOCQD32sfvmWKZd7aVIl6K
+oKv0uHiYyjgZmclynnjNS6yvGaBzEi38wkG6gZHaFloxt/m0cYASSJlyc1pZU8Fj
+UjPtp8nSOQJw+uCxQmYpqptR7TBUIhRf2asdweSU8Pj1K/fqynhG1riR/aYNKxoU
+AT6A8EKglQdebc3MS6RFjasS6LPeWuWgfOgPIh1a6Vk=
+-----END CERTIFICATE-----
+
+COMODO Certification Authority
+==============================
+
+-----BEGIN CERTIFICATE-----
+MIIEHTCCAwWgAwIBAgIQToEtioJl4AsC7j41AkblPTANBgkqhkiG9w0BAQUFADCB
+gTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G
+A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxJzAlBgNV
+BAMTHkNPTU9ETyBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjEyMDEwMDAw
+MDBaFw0yOTEyMzEyMzU5NTlaMIGBMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3Jl
+YXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRowGAYDVQQKExFDT01P
+RE8gQ0EgTGltaXRlZDEnMCUGA1UEAxMeQ09NT0RPIENlcnRpZmljYXRpb24gQXV0
+aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0ECLi3LjkRv3
+UcEbVASY06m/weaKXTuH+7uIzg3jLz8GlvCiKVCZrts7oVewdFFxze1CkU1B/qnI
+2GqGd0S7WWaXUF601CxwRM/aN5VCaTwwxHGzUvAhTaHYujl8HJ6jJJ3ygxaYqhZ8
+Q5sVW7euNJH+1GImGEaaP+vB+fGQV+useg2L23IwambV4EajcNxo2f8ESIl33rXp
++2dtQem8Ob0y2WIC8bGoPW43nOIv4tOiJovGuFVDiOEjPqXSJDlqR6sA1KGzqSX+
+DT+nHbrTUcELpNqsOO9VUCQFZUaTNE8tja3G1CEZ0o7KBWFxB3NH5YoZEr0ETc5O
+nKVIrLsm9wIDAQABo4GOMIGLMB0GA1UdDgQWBBQLWOWLxkwVN6RAqTCpIb5HNlpW
+/zAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zBJBgNVHR8EQjBAMD6g
+PKA6hjhodHRwOi8vY3JsLmNvbW9kb2NhLmNvbS9DT01PRE9DZXJ0aWZpY2F0aW9u
+QXV0aG9yaXR5LmNybDANBgkqhkiG9w0BAQUFAAOCAQEAPpiem/Yb6dc5t3iuHXIY
+SdOH5EOC6z/JqvWote9VfCFSZfnVDeFs9D6Mk3ORLgLETgdxb8CPOGEIqB6BCsAv
+IC9Bi5HcSEW88cbeunZrM8gALTFGTO3nnc+IlP8zwFboJIYmuNg4ON8qa90SzMc/
+RxdMosIGlgnW2/4/PEZB31jiVg88O8EckzXZOFKs7sjsLjBOlDW0JB9LeGna8gI4
+zJVSk/BwJVmcIGfE7vmLV2H0knZ9P4SNVbfo5azV8fUZVqZa+5Acr5Pr5RzUZ5dd
+BA6+C4OmF4O5MBKgxTMVBbkN+8cFduPYSo38NBejxiEovjBFMR7HeL5YYTisO+IB
+ZQ==
+-----END CERTIFICATE-----
+
+Network Solutions Certificate Authority
+=======================================
+
+-----BEGIN CERTIFICATE-----
+MIID5jCCAs6gAwIBAgIQV8szb8JcFuZHFhfjkDFo4DANBgkqhkiG9w0BAQUFADBi
+MQswCQYDVQQGEwJVUzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMu
+MTAwLgYDVQQDEydOZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3Jp
+dHkwHhcNMDYxMjAxMDAwMDAwWhcNMjkxMjMxMjM1OTU5WjBiMQswCQYDVQQGEwJV
+UzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMuMTAwLgYDVQQDEydO
+ZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwggEiMA0GCSqG
+SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDkvH6SMG3G2I4rC7xGzuAnlt7e+foS0zwz
+c7MEL7xxjOWftiJgPl9dzgn/ggwbmlFQGiaJ3dVhXRncEg8tCqJDXRfQNJIg6nPP
+OCwGJgl6cvf6UDL4wpPTaaIjzkGxzOTVHzbRijr4jGPiFFlp7Q3Tf2vouAPlT2rl
+mGNpSAW+Lv8ztumXWWn4Zxmuk2GWRBXTcrA/vGp97Eh/jcOrqnErU2lBUzS1sLnF
+BgrEsEX1QV1uiUV7PTsmjHTC5dLRfbIR1PtYMiKagMnc/Qzpf14Dl847ABSHJ3A4
+qY5usyd2mFHgBeMhqxrVhSI8KbWaFsWAqPS7azCPL0YCorEMIuDTAgMBAAGjgZcw
+gZQwHQYDVR0OBBYEFCEwyfsA106Y2oeqKtCnLrFAMadMMA4GA1UdDwEB/wQEAwIB
+BjAPBgNVHRMBAf8EBTADAQH/MFIGA1UdHwRLMEkwR6BFoEOGQWh0dHA6Ly9jcmwu
+bmV0c29sc3NsLmNvbS9OZXR3b3JrU29sdXRpb25zQ2VydGlmaWNhdGVBdXRob3Jp
+dHkuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQC7rkvnt1frf6ott3NHhWrB5KUd5Oc8
+6fRZZXe1eltajSU24HqXLjjAV2CDmAaDn7l2em5Q4LqILPxFzBiwmZVRDuwduIj/
+h1AcgsLj4DKAv6ALR8jDMe+ZZzKATxcheQxpXN5eNK4CtSbqUN9/GGUsyfJj4akH
+/nxxH2szJGoeBfcFaMBqEssuXmHLrijTfsK0ZpEmXzwuJF/LWA/rKOyvEZbz3Htv
+wKeI8lN3s2Berq4o2jUsbzRF0ybh3uxbTydrFny9RAQYgrOJeRcQcT16ohZO9QHN
+pGxlaKFJdlxDydi8NmdspZS11My5vWo1ViHe2MPr+8ukYEywVaCge1ey
+-----END CERTIFICATE-----
+
+COMODO ECC Certification Authority
+==================================
+
+-----BEGIN CERTIFICATE-----
+MIICiTCCAg+gAwIBAgIQH0evqmIAcFBUTAGem2OZKjAKBggqhkjOPQQDAzCBhTEL
+MAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE
+BxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNVBAMT
+IkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwMzA2MDAw
+MDAwWhcNMzgwMTE4MjM1OTU5WjCBhTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdy
+ZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09N
+T0RPIENBIExpbWl0ZWQxKzApBgNVBAMTIkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlv
+biBBdXRob3JpdHkwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQDR3svdcmCFYX7deSR
+FtSrYpn1PlILBs5BAH+X4QokPB0BBO490o0JlwzgdeT6+3eKKvUDYEs2ixYjFq0J
+cfRK9ChQtP6IHG4/bC8vCVlbpVsLM5niwz2J+Wos77LTBumjQjBAMB0GA1UdDgQW
+BBR1cacZSBm8nZ3qQUfflMRId5nTeTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/
+BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjEA7wNbeqy3eApyt4jf/7VGFAkK+qDm
+fQjGGoe9GKhzvSbKYAydzpmfz1wPMOG+FDHqAjAU9JM8SaczepBGR7NjfRObTrdv
+GDeAU/7dIOA1mjbRxwG55tzd8/8dLDoWV9mSOdY=
+-----END CERTIFICATE-----
+
+TC TrustCenter Class 2 CA II
+============================
+
+-----BEGIN CERTIFICATE-----
+MIIEqjCCA5KgAwIBAgIOLmoAAQACH9dSISwRXDswDQYJKoZIhvcNAQEFBQAwdjEL
+MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxIjAgBgNV
+BAsTGVRDIFRydXN0Q2VudGVyIENsYXNzIDIgQ0ExJTAjBgNVBAMTHFRDIFRydXN0
+Q2VudGVyIENsYXNzIDIgQ0EgSUkwHhcNMDYwMTEyMTQzODQzWhcNMjUxMjMxMjI1
+OTU5WjB2MQswCQYDVQQGEwJERTEcMBoGA1UEChMTVEMgVHJ1c3RDZW50ZXIgR21i
+SDEiMCAGA1UECxMZVEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMiBDQTElMCMGA1UEAxMc
+VEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMiBDQSBJSTCCASIwDQYJKoZIhvcNAQEBBQAD
+ggEPADCCAQoCggEBAKuAh5uO8MN8h9foJIIRszzdQ2Lu+MNF2ujhoF/RKrLqk2jf
+tMjWQ+nEdVl//OEd+DFwIxuInie5e/060smp6RQvkL4DUsFJzfb95AhmC1eKokKg
+uNV/aVyQMrKXDcpK3EY+AlWJU+MaWss2xgdW94zPEfRMuzBwBJWl9jmM/XOBCH2J
+XjIeIqkiRUuwZi4wzJ9l/fzLganx4Duvo4bRierERXlQXa7pIXSSTYtZgo+U4+lK
+8edJsBTj9WLL1XK9H7nSn6DNqPoByNkN39r8R52zyFTfSUrxIan+GE7uSNQZu+99
+5OKdy1u2bv/jzVrndIIFuoAlOMvkaZ6vQaoahPUCAwEAAaOCATQwggEwMA8GA1Ud
+EwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBTjq1RMgKHbVkO3
+kUrL84J6E1wIqzCB7QYDVR0fBIHlMIHiMIHfoIHcoIHZhjVodHRwOi8vd3d3LnRy
+dXN0Y2VudGVyLmRlL2NybC92Mi90Y19jbGFzc18yX2NhX0lJLmNybIaBn2xkYXA6
+Ly93d3cudHJ1c3RjZW50ZXIuZGUvQ049VEMlMjBUcnVzdENlbnRlciUyMENsYXNz
+JTIwMiUyMENBJTIwSUksTz1UQyUyMFRydXN0Q2VudGVyJTIwR21iSCxPVT1yb290
+Y2VydHMsREM9dHJ1c3RjZW50ZXIsREM9ZGU/Y2VydGlmaWNhdGVSZXZvY2F0aW9u
+TGlzdD9iYXNlPzANBgkqhkiG9w0BAQUFAAOCAQEAjNfffu4bgBCzg/XbEeprS6iS
+GNn3Bzn1LL4GdXpoUxUc6krtXvwjshOg0wn/9vYua0Fxec3ibf2uWWuFHbhOIprt
+ZjluS5TmVfwLG4t3wVMTZonZKNaL80VKY7f9ewthXbhtvsPcW3nS7Yblok2+XnR8
+au0WOB9/WIFaGusyiC2y8zl3gK9etmF1KdsjTYjKUCjLhdLTEKJZbtOTVAB6okaV
+hgWcqRmY5TFyDADiZ9lA4CQze28suVyrZZ0srHbqNZn1l7kPJOzHdiEoZa5X6AeI
+dUpWoNIFOqTmjZKILPPy4cHGYdtBxceb9w4aUUXCYWvcZCcXjFq32nQozZfkvQ==
+-----END CERTIFICATE-----
+
+TC TrustCenter Class 3 CA II
+============================
+
+-----BEGIN CERTIFICATE-----
+MIIEqjCCA5KgAwIBAgIOSkcAAQAC5aBd1j8AUb8wDQYJKoZIhvcNAQEFBQAwdjEL
+MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxIjAgBgNV
+BAsTGVRDIFRydXN0Q2VudGVyIENsYXNzIDMgQ0ExJTAjBgNVBAMTHFRDIFRydXN0
+Q2VudGVyIENsYXNzIDMgQ0EgSUkwHhcNMDYwMTEyMTQ0MTU3WhcNMjUxMjMxMjI1
+OTU5WjB2MQswCQYDVQQGEwJERTEcMBoGA1UEChMTVEMgVHJ1c3RDZW50ZXIgR21i
+SDEiMCAGA1UECxMZVEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMyBDQTElMCMGA1UEAxMc
+VEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMyBDQSBJSTCCASIwDQYJKoZIhvcNAQEBBQAD
+ggEPADCCAQoCggEBALTgu1G7OVyLBMVMeRwjhjEQY0NVJz/GRcekPewJDRoeIMJW
+Ht4bNwcwIi9v8Qbxq63WyKthoy9DxLCyLfzDlml7forkzMA5EpBCYMnMNWju2l+Q
+Vl/NHE1bWEnrDgFPZPosPIlY2C8u4rBo6SI7dYnWRBpl8huXJh0obazovVkdKyT2
+1oQDZogkAHhg8fir/gKya/si+zXmFtGt9i4S5Po1auUZuV3bOx4a+9P/FRQI2Alq
+ukWdFHlgfa9Aigdzs5OW03Q0jTo3Kd5c7PXuLjHCINy+8U9/I1LZW+Jk2ZyqBwi1
+Rb3R0DHBq1SfqdLDYmAD8bs5SpJKPQq5ncWg/jcCAwEAAaOCATQwggEwMA8GA1Ud
+EwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBTUovyfs8PYA9NX
+XAek0CSnwPIA1DCB7QYDVR0fBIHlMIHiMIHfoIHcoIHZhjVodHRwOi8vd3d3LnRy
+dXN0Y2VudGVyLmRlL2NybC92Mi90Y19jbGFzc18zX2NhX0lJLmNybIaBn2xkYXA6
+Ly93d3cudHJ1c3RjZW50ZXIuZGUvQ049VEMlMjBUcnVzdENlbnRlciUyMENsYXNz
+JTIwMyUyMENBJTIwSUksTz1UQyUyMFRydXN0Q2VudGVyJTIwR21iSCxPVT1yb290
+Y2VydHMsREM9dHJ1c3RjZW50ZXIsREM9ZGU/Y2VydGlmaWNhdGVSZXZvY2F0aW9u
+TGlzdD9iYXNlPzANBgkqhkiG9w0BAQUFAAOCAQEANmDkcPcGIEPZIxpC8vijsrlN
+irTzwppVMXzEO2eatN9NDoqTSheLG43KieHPOh6sHfGcMrSOWXaiQYUlN6AT0PV8
+TtXqluJucsG7Kv5sbviRmEb8yRtXW+rIGjs/sFGYPAfaLFkB2otE6OF0/ado3VS6
+g0bsyEa1+K+XwDsJHI/OcpY9M1ZwvJbL2NV9IJqDnxrcOfHFcqMRA/07QlIp2+gB
+95tejNaNhk4Z+rwcvsUhpYeeeC422wlxo3I0+GzjBgnyXlal092Y+tTmBvTwtiBj
+S+opvaqCZh77gaqnN60TGOaSw4HBM7uIHqHn4rS9MWwOUT1v+5ZWgOI2F9Hc5A==
+-----END CERTIFICATE-----
+
+TC TrustCenter Universal CA I
+=============================
+
+-----BEGIN CERTIFICATE-----
+MIID3TCCAsWgAwIBAgIOHaIAAQAC7LdggHiNtgYwDQYJKoZIhvcNAQEFBQAweTEL
+MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxJDAiBgNV
+BAsTG1RDIFRydXN0Q2VudGVyIFVuaXZlcnNhbCBDQTEmMCQGA1UEAxMdVEMgVHJ1
+c3RDZW50ZXIgVW5pdmVyc2FsIENBIEkwHhcNMDYwMzIyMTU1NDI4WhcNMjUxMjMx
+MjI1OTU5WjB5MQswCQYDVQQGEwJERTEcMBoGA1UEChMTVEMgVHJ1c3RDZW50ZXIg
+R21iSDEkMCIGA1UECxMbVEMgVHJ1c3RDZW50ZXIgVW5pdmVyc2FsIENBMSYwJAYD
+VQQDEx1UQyBUcnVzdENlbnRlciBVbml2ZXJzYWwgQ0EgSTCCASIwDQYJKoZIhvcN
+AQEBBQADggEPADCCAQoCggEBAKR3I5ZEr5D0MacQ9CaHnPM42Q9e3s9B6DGtxnSR
+JJZ4Hgmgm5qVSkr1YnwCqMqs+1oEdjneX/H5s7/zA1hV0qq34wQi0fiU2iIIAI3T
+fCZdzHd55yx4Oagmcw6iXSVphU9VDprvxrlE4Vc93x9UIuVvZaozhDrzznq+VZeu
+jRIPFDPiUHDDSYcTvFHe15gSWu86gzOSBnWLknwSaHtwag+1m7Z3W0hZneTvWq3z
+wZ7U10VOylY0Ibw+F1tvdwxIAUMpsN0/lm7mlaoMwCC2/T42J5zjXM9OgdwZu5GQ
+fezmlwQek8wiSdeXhrYTCjxDI3d+8NzmzSQfO4ObNDqDNOMCAwEAAaNjMGEwHwYD
+VR0jBBgwFoAUkqR1LKSevoFE63n8isWVpesQdXMwDwYDVR0TAQH/BAUwAwEB/zAO
+BgNVHQ8BAf8EBAMCAYYwHQYDVR0OBBYEFJKkdSyknr6BROt5/IrFlaXrEHVzMA0G
+CSqGSIb3DQEBBQUAA4IBAQAo0uCG1eb4e/CX3CJrO5UUVg8RMKWaTzqwOuAGy2X1
+7caXJ/4l8lfmXpWMPmRgFVp/Lw0BxbFg/UU1z/CyvwbZ71q+s2IhtNerNXxTPqYn
+8aEt2hojnczd7Dwtnic0XQ/CNnm8yUpiLe1r2X1BQ3y2qsrtYbE3ghUJGooWMNjs
+ydZHcnhLEEYUjl8Or+zHL6sQ17bxbuyGssLoDZJz3KL0Dzq/YSMQiZxIQG5wALPT
+ujdEWBF6AmqI8Dc08BnprNRlc/ZpjGSUOnmFKbAWKwyCPwacx/0QK54PLLae4xW/
+2TYcuiUaUj0a7CIMHOCkoj3w6DnPgcB77V0fb8XQC9eY
+-----END CERTIFICATE-----
+
+Cybertrust Global Root
+======================
+
+-----BEGIN CERTIFICATE-----
+MIIDoTCCAomgAwIBAgILBAAAAAABD4WqLUgwDQYJKoZIhvcNAQEFBQAwOzEYMBYG
+A1UEChMPQ3liZXJ0cnVzdCwgSW5jMR8wHQYDVQQDExZDeWJlcnRydXN0IEdsb2Jh
+bCBSb290MB4XDTA2MTIxNTA4MDAwMFoXDTIxMTIxNTA4MDAwMFowOzEYMBYGA1UE
+ChMPQ3liZXJ0cnVzdCwgSW5jMR8wHQYDVQQDExZDeWJlcnRydXN0IEdsb2JhbCBS
+b290MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA+Mi8vRRQZhP/8NN5
+7CPytxrHjoXxEnOmGaoQ25yiZXRadz5RfVb23CO21O1fWLE3TdVJDm71aofW0ozS
+J8bi/zafmGWgE07GKmSb1ZASzxQG9Dvj1Ci+6A74q05IlG2OlTEQXO2iLb3VOm2y
+HLtgwEZLAfVJrn5GitB0jaEMAs7u/OePuGtm839EAL9mJRQr3RAwHQeWP032a7iP
+t3sMpTjr3kfb1V05/Iin89cqdPHoWqI7n1C6poxFNcJQZZXcY4Lv3b93TZxiyWNz
+FtApD0mpSPCzqrdsxacwOUBdrsTiXSZT8M4cIwhhqJQZugRiQOwfOHB3EgZxpzAY
+XSUnpQIDAQABo4GlMIGiMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/
+MB0GA1UdDgQWBBS2CHsNesysIEyGVjJez6tuhS1wVzA/BgNVHR8EODA2MDSgMqAw
+hi5odHRwOi8vd3d3Mi5wdWJsaWMtdHJ1c3QuY29tL2NybC9jdC9jdHJvb3QuY3Js
+MB8GA1UdIwQYMBaAFLYIew16zKwgTIZWMl7Pq26FLXBXMA0GCSqGSIb3DQEBBQUA
+A4IBAQBW7wojoFROlZfJ+InaRcHUowAl9B8Tq7ejhVhpwjCt2BWKLePJzYFa+HMj
+Wqd8BfP9IjsO0QbE2zZMcwSO5bAi5MXzLqXZI+O4Tkogp24CJJ8iYGd7ix1yCcUx
+XOl5n4BHPa2hCwcUPUf/A2kaDAtE52Mlp3+yybh2hO0j9n0Hq0V+09+zv+mKts2o
+omcrUtW3ZfA5TGOgkXmTUg9U3YO7n9GPp1Nzw8v/MOx8BLjYRB+TX3EJIrduPuoc
+A06dGiBh+4E37F78CkWr1+cXVdCg6mCbpvbjjFspwgZgFJ0tl0ypkxWdYcQBX0jW
+WL1WMRJOEcgh4LMRkWXbtKaIOM5V
+-----END CERTIFICATE-----
+
+GeoTrust Primary Certification Authority - G3
+=============================================
+
+-----BEGIN CERTIFICATE-----
+MIID/jCCAuagAwIBAgIQFaxulBmyeUtB9iepwxgPHzANBgkqhkiG9w0BAQsFADCB
+mDELMAkGA1UEBhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xOTA3BgNVBAsT
+MChjKSAyMDA4IEdlb1RydXN0IEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25s
+eTE2MDQGA1UEAxMtR2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhv
+cml0eSAtIEczMB4XDTA4MDQwMjAwMDAwMFoXDTM3MTIwMTIzNTk1OVowgZgxCzAJ
+BgNVBAYTAlVTMRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMTkwNwYDVQQLEzAoYykg
+MjAwOCBHZW9UcnVzdCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxNjA0
+BgNVBAMTLUdlb1RydXN0IFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkg
+LSBHMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANziXmJYHTNXOTIz
++uvLh4yn1ErdBojqZI4xmKU4kB6Yzy5jK/BGvESyiaHAKAxJcCGVn2TAppMSAmUm
+hsalifD614SgcK9PGpc/BkTVyetyEH3kMSj7HGHmKAdEc5IiaacDiGydY8hS2pgn
+5whMcD60yRLBxWeDXTPzAxHsatBT4tG6NmCUgLthY2xbF37fQJQeqw3CIShwiP/W
+JmxsYAQlTlV+fe+/lEjetx3dcI0FX4ilm/LC7urRQEFtYjgdVgbFA0dRIBn8exAL
+DmKudlW/X3e+PkkBUz2YJQN2JFodtNuJ6nnltrM7P7pMKEF/BqxqjsHQ9gUdfeZC
+huOl1UcCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYw
+HQYDVR0OBBYEFMR5yo6hTgMdHNxr2zFblD4/MH8tMA0GCSqGSIb3DQEBCwUAA4IB
+AQAtxRPPVoB7eni9n64smefv2t+UXglpp+duaIy9cr5HqQ6XErhK8WTTOd8lNNTB
+zU6B8A8ExCSzNJbGpqow32hhc9f5joWJ7w5elShKKiePEI4ufIbEAp7aDHdlDkQN
+kv39sxY2+hENHYwOB4lqKVb3cvTdFZx3NWZXqxNT2I7BQMXXExZacse3aQHEerGD
+AWh9jUGhlBjBJVz88P6DAod8DQ3PLghcSkANPuyBYeYk28rgDi0Hsj5W3I31QYUH
+SJsMC8tJP33st/3LjWeJGqvtux6jAAgIFyqCXDFdRootD4abdNlF+9RAsXqqaC2G
+spki4cErx5z481+oghLrGREt
+-----END CERTIFICATE-----
+
+thawte Primary Root CA - G2
+===========================
+
+-----BEGIN CERTIFICATE-----
+MIICiDCCAg2gAwIBAgIQNfwmXNmET8k9Jj1Xm67XVjAKBggqhkjOPQQDAzCBhDEL
+MAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjE4MDYGA1UECxMvKGMp
+IDIwMDcgdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxJDAi
+BgNVBAMTG3RoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EgLSBHMjAeFw0wNzExMDUwMDAw
+MDBaFw0zODAxMTgyMzU5NTlaMIGEMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMdGhh
+d3RlLCBJbmMuMTgwNgYDVQQLEy8oYykgMjAwNyB0aGF3dGUsIEluYy4gLSBGb3Ig
+YXV0aG9yaXplZCB1c2Ugb25seTEkMCIGA1UEAxMbdGhhd3RlIFByaW1hcnkgUm9v
+dCBDQSAtIEcyMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEotWcgnuVnfFSeIf+iha/
+BebfowJPDQfGAFG6DAJSLSKkQjnE/o/qycG+1E3/n3qe4rF8mq2nhglzh9HnmuN6
+papu+7qzcMBniKI11KOasf2twu8x+qi58/sIxpHR+ymVo0IwQDAPBgNVHRMBAf8E
+BTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUmtgAMADna3+FGO6Lts6K
+DPgR4bswCgYIKoZIzj0EAwMDaQAwZgIxAN344FdHW6fmCsO99YCKlzUNG4k8VIZ3
+KMqh9HneteY4sPBlcIx/AlTCv//YoT7ZzwIxAMSNlPzcU9LcnXgWHxUzI1NS41ox
+XZ3Krr0TKUQNJ1uo52icEvdYPy5yAlejj6EULg==
+-----END CERTIFICATE-----
+
+thawte Primary Root CA - G3
+===========================
+
+-----BEGIN CERTIFICATE-----
+MIIEKjCCAxKgAwIBAgIQYAGXt0an6rS0mtZLL/eQ+zANBgkqhkiG9w0BAQsFADCB
+rjELMAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjEoMCYGA1UECxMf
+Q2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjE4MDYGA1UECxMvKGMpIDIw
+MDggdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxJDAiBgNV
+BAMTG3RoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EgLSBHMzAeFw0wODA0MDIwMDAwMDBa
+Fw0zNzEyMDEyMzU5NTlaMIGuMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMdGhhd3Rl
+LCBJbmMuMSgwJgYDVQQLEx9DZXJ0aWZpY2F0aW9uIFNlcnZpY2VzIERpdmlzaW9u
+MTgwNgYDVQQLEy8oYykgMjAwOCB0aGF3dGUsIEluYy4gLSBGb3IgYXV0aG9yaXpl
+ZCB1c2Ugb25seTEkMCIGA1UEAxMbdGhhd3RlIFByaW1hcnkgUm9vdCBDQSAtIEcz
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsr8nLPvb2FvdeHsbnndm
+gcs+vHyu86YnmjSjaDFxODNi5PNxZnmxqWWjpYvVj2AtP0LMqmsywCPLLEHd5N/8
+YZzic7IilRFDGF/Eth9XbAoFWCLINkw6fKXRz4aviKdEAhN0cXMKQlkC+BsUa0Lf
+b1+6a4KinVvnSr0eAXLbS3ToO39/fR8EtCab4LRarEc9VbjXsCZSKAExQGbY2SS9
+9irY7CFJXJv2eul/VTV+lmuNk5Mny5K76qxAwJ/C+IDPXfRa3M50hqY+bAtTyr2S
+zhkGcuYMXDhpxwTWvGzOW/b3aJzcJRVIiKHpqfiYnODz1TEoYRFsZ5aNOZnLwkUk
+OQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNV
+HQ4EFgQUrWyqlGCc7eT/+j4KdCtjA/e2Wb8wDQYJKoZIhvcNAQELBQADggEBABpA
+2JVlrAmSicY59BDlqQ5mU1143vokkbvnRFHfxhY0Cu9qRFHqKweKA3rD6z8KLFIW
+oCtDuSWQP3CpMyVtRRooOyfPqsMpQhvfO0zAMzRbQYi/aytlryjvsvXDqmbOe1bu
+t8jLZ8HJnBoYuMTDSQPxYA5QzUbF83d597YV4Djbxy8ooAw/dyZ02SUS2jHaGh7c
+KUGRIjxpp7sC8rZcJwOJ9Abqm+RyguOhCcHpABnTPtRwa7pxpqpYrvS76Wy274fM
+m7v/OeZWYdMKp8RcTGB7BXcmer/YB1IsYvdwY9k5vG8cwnncdimvzsUsZAReiDZu
+MdRAGmI0Nj81Aa6sY6A=
+-----END CERTIFICATE-----
+
+GeoTrust Primary Certification Authority - G2
+=============================================
+
+-----BEGIN CERTIFICATE-----
+MIICrjCCAjWgAwIBAgIQPLL0SAoA4v7rJDteYD7DazAKBggqhkjOPQQDAzCBmDEL
+MAkGA1UEBhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xOTA3BgNVBAsTMChj
+KSAyMDA3IEdlb1RydXN0IEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25seTE2
+MDQGA1UEAxMtR2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0
+eSAtIEcyMB4XDTA3MTEwNTAwMDAwMFoXDTM4MDExODIzNTk1OVowgZgxCzAJBgNV
+BAYTAlVTMRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMTkwNwYDVQQLEzAoYykgMjAw
+NyBHZW9UcnVzdCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxNjA0BgNV
+BAMTLUdlb1RydXN0IFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgLSBH
+MjB2MBAGByqGSM49AgEGBSuBBAAiA2IABBWx6P0DFUPlrOuHNxFi79KDNlJ9RVcL
+So17VDs6bl8VAsBQps8lL33KSLjHUGMcKiEIfJo22Av+0SbFWDEwKCXzXV2juLal
+tJLtbCyf691DiaI8S0iRHVDsJt/WYC69IaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAO
+BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFBVfNVdRVfslsq0DafwBo/q+EVXVMAoG
+CCqGSM49BAMDA2cAMGQCMGSWWaboCd6LuvpaiIjwH5HTRqjySkwCY/tsXzjbLkGT
+qQ7mndwxHLKgpxgceeHHNgIwOlavmnRs9vuD4DPTCF+hnMJbn0bWtsuRBmOiBucz
+rD6ogRLQy7rQkgu2npaqBA+K
+-----END CERTIFICATE-----
+
+VeriSign Universal Root Certification Authority
+===============================================
+
+-----BEGIN CERTIFICATE-----
+MIIEuTCCA6GgAwIBAgIQQBrEZCGzEyEDDrvkEhrFHTANBgkqhkiG9w0BAQsFADCB
+vTELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL
+ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwOCBWZXJp
+U2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MTgwNgYDVQQDEy9W
+ZXJpU2lnbiBVbml2ZXJzYWwgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAe
+Fw0wODA0MDIwMDAwMDBaFw0zNzEyMDEyMzU5NTlaMIG9MQswCQYDVQQGEwJVUzEX
+MBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlTaWduIFRydXN0
+IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAyMDA4IFZlcmlTaWduLCBJbmMuIC0gRm9y
+IGF1dGhvcml6ZWQgdXNlIG9ubHkxODA2BgNVBAMTL1ZlcmlTaWduIFVuaXZlcnNh
+bCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEF
+AAOCAQ8AMIIBCgKCAQEAx2E3XrEBNNti1xWb/1hajCMj1mCOkdeQmIN65lgZOIzF
+9uVkhbSicfvtvbnazU0AtMgtc6XHaXGVHzk8skQHnOgO+k1KxCHfKWGPMiJhgsWH
+H26MfF8WIFFE0XBPV+rjHOPMee5Y2A7Cs0WTwCznmhcrewA3ekEzeOEz4vMQGn+H
+LL729fdC4uW/h2KJXwBL38Xd5HVEMkE6HnFuacsLdUYI0crSK5XQz/u5QGtkjFdN
+/BMReYTtXlT2NJ8IAfMQJQYXStrxHXpma5hgZqTZ79IugvHw7wnqRMkVauIDbjPT
+rJ9VAMf2CGqUuV/c4DPxhGD5WycRtPwW8rtWaoAljQIDAQABo4GyMIGvMA8GA1Ud
+EwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMG0GCCsGAQUFBwEMBGEwX6FdoFsw
+WTBXMFUWCWltYWdlL2dpZjAhMB8wBwYFKw4DAhoEFI/l0xqGrI2Oa8PPgGrUSBgs
+exkuMCUWI2h0dHA6Ly9sb2dvLnZlcmlzaWduLmNvbS92c2xvZ28uZ2lmMB0GA1Ud
+DgQWBBS2d/ppSEefUxLVwuoHMnYH0ZcHGTANBgkqhkiG9w0BAQsFAAOCAQEASvj4
+sAPmLGd75JR3Y8xuTPl9Dg3cyLk1uXBPY/ok+myDjEedO2Pzmvl2MpWRsXe8rJq+
+seQxIcaBlVZaDrHC1LGmWazxY8u4TB1ZkErvkBYoH1quEPuBUDgMbMzxPcP1Y+Oz
+4yHJJDnp/RVmRvQbEdBNc6N9Rvk97ahfYtTxP/jgdFcrGJ2BtMQo2pSXpXDrrB2+
+BxHw1dvd5Yzw1TKwg+ZX4o+/vqGqvz0dtdQ46tewXDpPaj+PwGZsY6rp2aQW9IHR
+lRQOfc2VNNnSj3BzgXucfr2YYdhFh5iQxeuGMMY1v/D/w1WIg0vvBZIGcfK4mJO3
+7M2CYfE45k+XmCpajQ==
+-----END CERTIFICATE-----
+
+VeriSign Class 3 Public Primary Certification Authority - G4
+============================================================
+
+-----BEGIN CERTIFICATE-----
+MIIDhDCCAwqgAwIBAgIQL4D+I4wOIg9IZxIokYesszAKBggqhkjOPQQDAzCByjEL
+MAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZW
+ZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNyBWZXJpU2ln
+biwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJp
+U2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9y
+aXR5IC0gRzQwHhcNMDcxMTA1MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCByjELMAkG
+A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZWZXJp
+U2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNyBWZXJpU2lnbiwg
+SW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJpU2ln
+biBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5
+IC0gRzQwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAASnVnp8Utpkmw4tXNherJI9/gHm
+GUo9FANL+mAnINmDiWn6VMaaGF5VKmTeBvaNSjutEDxlPZCIBIngMGGzrl0Bp3ve
+fLK+ymVhAIau2o970ImtTR1ZmkGxvEeA3J5iw/mjgbIwga8wDwYDVR0TAQH/BAUw
+AwEB/zAOBgNVHQ8BAf8EBAMCAQYwbQYIKwYBBQUHAQwEYTBfoV2gWzBZMFcwVRYJ
+aW1hZ2UvZ2lmMCEwHzAHBgUrDgMCGgQUj+XTGoasjY5rw8+AatRIGCx7GS4wJRYj
+aHR0cDovL2xvZ28udmVyaXNpZ24uY29tL3ZzbG9nby5naWYwHQYDVR0OBBYEFLMW
+kf3upm7ktS5Jj4d4gYDs5bG1MAoGCCqGSM49BAMDA2gAMGUCMGYhDBgmYFo4e1ZC
+4Kf8NoRRkSAsdk1DPcQdhCPQrNZ8NQbOzWm9kA3bbEhCHQ6qQgIxAJw9SDkjOVga
+FRJZap7v1VmyHVIsmXHNxynfGyphe3HR3vPA5Q06Sqotp9iGKt0uEA==
+-----END CERTIFICATE-----
+
+GlobalSign Root CA - R3
+=======================
+
+-----BEGIN CERTIFICATE-----
+MIIDXzCCAkegAwIBAgILBAAAAAABIVhTCKIwDQYJKoZIhvcNAQELBQAwTDEgMB4G
+A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjMxEzARBgNVBAoTCkdsb2JhbFNp
+Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDkwMzE4MTAwMDAwWhcNMjkwMzE4
+MTAwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMzETMBEG
+A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI
+hvcNAQEBBQADggEPADCCAQoCggEBAMwldpB5BngiFvXAg7aEyiie/QV2EcWtiHL8
+RgJDx7KKnQRfJMsuS+FggkbhUqsMgUdwbN1k0ev1LKMPgj0MK66X17YUhhB5uzsT
+gHeMCOFJ0mpiLx9e+pZo34knlTifBtc+ycsmWQ1z3rDI6SYOgxXG71uL0gRgykmm
+KPZpO/bLyCiR5Z2KYVc3rHQU3HTgOu5yLy6c+9C7v/U9AOEGM+iCK65TpjoWc4zd
+QQ4gOsC0p6Hpsk+QLjJg6VfLuQSSaGjlOCZgdbKfd/+RFO+uIEn8rUAVSNECMWEZ
+XriX7613t2Saer9fwRPvm2L7DWzgVGkWqQPabumDk3F2xmmFghcCAwEAAaNCMEAw
+DgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFI/wS3+o
+LkUkrk1Q+mOai97i3Ru8MA0GCSqGSIb3DQEBCwUAA4IBAQBLQNvAUKr+yAzv95ZU
+RUm7lgAJQayzE4aGKAczymvmdLm6AC2upArT9fHxD4q/c2dKg8dEe3jgr25sbwMp
+jjM5RcOO5LlXbKr8EpbsU8Yt5CRsuZRj+9xTaGdWPoO4zzUhw8lo/s7awlOqzJCK
+6fBdRoyV3XpYKBovHd7NADdBj+1EbddTKJd+82cEHhXXipa0095MJ6RMG3NzdvQX
+mcIfeg7jLQitChws/zyrVQ4PkX4268NXSb7hLi18YIvDQVETI53O9zJrlAGomecs
+Mx86OyXShkDOOyyGeMlhLxS67ttVb9+E7gUJTb0o2HLO02JQZR7rkpeDMdmztcpH
+WD9f
+-----END CERTIFICATE-----
+
+TC TrustCenter Universal CA III
+===============================
+
+-----BEGIN CERTIFICATE-----
+MIID4TCCAsmgAwIBAgIOYyUAAQACFI0zFQLkbPQwDQYJKoZIhvcNAQEFBQAwezEL
+MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxJDAiBgNV
+BAsTG1RDIFRydXN0Q2VudGVyIFVuaXZlcnNhbCBDQTEoMCYGA1UEAxMfVEMgVHJ1
+c3RDZW50ZXIgVW5pdmVyc2FsIENBIElJSTAeFw0wOTA5MDkwODE1MjdaFw0yOTEy
+MzEyMzU5NTlaMHsxCzAJBgNVBAYTAkRFMRwwGgYDVQQKExNUQyBUcnVzdENlbnRl
+ciBHbWJIMSQwIgYDVQQLExtUQyBUcnVzdENlbnRlciBVbml2ZXJzYWwgQ0ExKDAm
+BgNVBAMTH1RDIFRydXN0Q2VudGVyIFVuaXZlcnNhbCBDQSBJSUkwggEiMA0GCSqG
+SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDC2pxisLlxErALyBpXsq6DFJmzNEubkKLF
+5+cvAqBNLaT6hdqbJYUtQCggbergvbFIgyIpRJ9Og+41URNzdNW88jBmlFPAQDYv
+DIRlzg9uwliT6CwLOunBjvvya8o84pxOjuT5fdMnnxvVZ3iHLX8LR7PH6MlIfK8v
+zArZQe+f/prhsq75U7Xl6UafYOPfjdN/+5Z+s7Vy+EutCHnNaYlAJ/Uqwa1D7KRT
+yGG299J5KmcYdkhtWyUB0SbFt1dpIxVbYYqt8Bst2a9c8SaQaanVDED1M4BDj5yj
+dipFtK+/fz6HP3bFzSreIMUWWMv5G/UPyw0RUmS40nZid4PxWJ//AgMBAAGjYzBh
+MB8GA1UdIwQYMBaAFFbn4VslQ4Dg9ozhcbyO5YAvxEjiMA8GA1UdEwEB/wQFMAMB
+Af8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBRW5+FbJUOA4PaM4XG8juWAL8RI
+4jANBgkqhkiG9w0BAQUFAAOCAQEAg8ev6n9NCjw5sWi+e22JLumzCecYV42Fmhfz
+dkJQEw/HkG8zrcVJYCtsSVgZ1OK+t7+rSbyUyKu+KGwWaODIl0YgoGhnYIg5IFHY
+aAERzqf2EQf27OysGh+yZm5WZ2B6dF7AbZc2rrUNXWZzwCUyRdhKBgePxLcHsU0G
+DeGl6/R1yrqc0L2z0zIkTO5+4nYES0lT2PLpVDP85XEfPRRclkvxOvIAu2y0+pZV
+CIgJwcyRGSmwIC3/yzikQOEXvnlhgP8HA4ZMTnsGnxGGjYnuJ8Tb4rwZjgvDwxPH
+LQNjO9Po5KIqwoIIlBZU8O8fJ5AluA0OKBtHd0e9HKgl8ZS0Zg==
+-----END CERTIFICATE-----
+
+Go Daddy Root Certificate Authority - G2
+========================================
+
+-----BEGIN CERTIFICATE-----
+MIIDxTCCAq2gAwIBAgIBADANBgkqhkiG9w0BAQsFADCBgzELMAkGA1UEBhMCVVMx
+EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAYBgNVBAoT
+EUdvRGFkZHkuY29tLCBJbmMuMTEwLwYDVQQDEyhHbyBEYWRkeSBSb290IENlcnRp
+ZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAwMFoXDTM3MTIzMTIz
+NTk1OVowgYMxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMRMwEQYDVQQH
+EwpTY290dHNkYWxlMRowGAYDVQQKExFHb0RhZGR5LmNvbSwgSW5jLjExMC8GA1UE
+AxMoR28gRGFkZHkgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIw
+DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9xYgjx+lk09xvJGKP3gElY6SKD
+E6bFIEMBO4Tx5oVJnyfq9oQbTqC023CYxzIBsQU+B07u9PpPL1kwIuerGVZr4oAH
+/PMWdYA5UXvl+TW2dE6pjYIT5LY/qQOD+qK+ihVqf94Lw7YZFAXK6sOoBJQ7Rnwy
+DfMAZiLIjWltNowRGLfTshxgtDj6AozO091GB94KPutdfMh8+7ArU6SSYmlRJQVh
+GkSBjCypQ5Yj36w6gZoOKcUcqeldHraenjAKOc7xiID7S13MMuyFYkMlNAJWJwGR
+tDtwKj9useiciAF9n9T521NtYJ2/LOdYq7hfRvzOxBsDPAnrSTFcaUaz4EcCAwEA
+AaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE
+FDqahQcQZyi27/a9BUFuIMGU2g/eMA0GCSqGSIb3DQEBCwUAA4IBAQCZ21151fmX
+WWcDYfF+OwYxdS2hII5PZYe096acvNjpL9DbWu7PdIxztDhC2gV7+AJ1uP2lsdeu
+9tfeE8tTEH6KRtGX+rcuKxGrkLAngPnon1rpN5+r5N9ss4UXnT3ZJE95kTXWXwTr
+gIOrmgIttRD02JDHBHNA7XIloKmf7J6raBKZV8aPEjoJpL1E/QYVN8Gb5DKj7Tjo
+2GTzLH4U/ALqn83/B2gX2yKQOC16jdFU8WnjXzPKej17CuPKf1855eJ1usV2GDPO
+LPAvTK33sefOT6jEm0pUBsV/fdUID+Ic/n4XuKxe9tQWskMJDE32p2u0mYRlynqI
+4uJEvlz36hz1
+-----END CERTIFICATE-----
+
+Starfield Root Certificate Authority - G2
+=========================================
+
+-----BEGIN CERTIFICATE-----
+MIID3TCCAsWgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBjzELMAkGA1UEBhMCVVMx
+EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT
+HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAMTKVN0YXJmaWVs
+ZCBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAw
+MFoXDTM3MTIzMTIzNTk1OVowgY8xCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6
+b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFyZmllbGQgVGVj
+aG5vbG9naWVzLCBJbmMuMTIwMAYDVQQDEylTdGFyZmllbGQgUm9vdCBDZXJ0aWZp
+Y2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
+ggEBAL3twQP89o/8ArFvW59I2Z154qK3A2FWGMNHttfKPTUuiUP3oWmb3ooa/RMg
+nLRJdzIpVv257IzdIvpy3Cdhl+72WoTsbhm5iSzchFvVdPtrX8WJpRBSiUZV9Lh1
+HOZ/5FSuS/hVclcCGfgXcVnrHigHdMWdSL5stPSksPNkN3mSwOxGXn/hbVNMYq/N
+Hwtjuzqd+/x5AJhhdM8mgkBj87JyahkNmcrUDnXMN/uLicFZ8WJ/X7NfZTD4p7dN
+dloedl40wOiWVpmKs/B/pM293DIxfJHP4F8R+GuqSVzRmZTRouNjWwl2tVZi4Ut0
+HZbUJtQIBFnQmA4O5t78w+wfkPECAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAO
+BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFHwMMh+n2TB/xH1oo2Kooc6rB1snMA0G
+CSqGSIb3DQEBCwUAA4IBAQARWfolTwNvlJk7mh+ChTnUdgWUXuEok21iXQnCoKjU
+sHU48TRqneSfioYmUeYs0cYtbpUgSpIB7LiKZ3sx4mcujJUDJi5DnUox9g61DLu3
+4jd/IroAow57UvtruzvE03lRTs2Q9GcHGcg8RnoNAX3FWOdt5oUwF5okxBDgBPfg
+8n/Uqgr/Qh037ZTlZFkSIHc40zI+OIF1lnP6aI+xy84fxez6nH7PfrHxBy22/L/K
+pL/QlwVKvOoYKAKQvVR4CSFx09F9HdkWsKlhPdAKACL8x3vLCWRFCztAgfd9fDL1
+mMpYjn0q7pBZc2T5NnReJaH1ZgUufzkVqSr7UIuOhWn0
+-----END CERTIFICATE-----
+
+Starfield Services Root Certificate Authority - G2
+==================================================
+
+-----BEGIN CERTIFICATE-----
+MIID7zCCAtegAwIBAgIBADANBgkqhkiG9w0BAQsFADCBmDELMAkGA1UEBhMCVVMx
+EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT
+HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xOzA5BgNVBAMTMlN0YXJmaWVs
+ZCBTZXJ2aWNlcyBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5
+MDkwMTAwMDAwMFoXDTM3MTIzMTIzNTk1OVowgZgxCzAJBgNVBAYTAlVTMRAwDgYD
+VQQIEwdBcml6b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFy
+ZmllbGQgVGVjaG5vbG9naWVzLCBJbmMuMTswOQYDVQQDEzJTdGFyZmllbGQgU2Vy
+dmljZXMgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZI
+hvcNAQEBBQADggEPADCCAQoCggEBANUMOsQq+U7i9b4Zl1+OiFOxHz/Lz58gE20p
+OsgPfTz3a3Y4Y9k2YKibXlwAgLIvWX/2h/klQ4bnaRtSmpDhcePYLQ1Ob/bISdm2
+8xpWriu2dBTrz/sm4xq6HZYuajtYlIlHVv8loJNwU4PahHQUw2eeBGg6345AWh1K
+Ts9DkTvnVtYAcMtS7nt9rjrnvDH5RfbCYM8TWQIrgMw0R9+53pBlbQLPLJGmpufe
+hRhJfGZOozptqbXuNC66DQO4M99H67FrjSXZm86B0UVGMpZwh94CDklDhbZsc7tk
+6mFBrMnUVN+HL8cisibMn1lUaJ/8viovxFUcdUBgF4UCVTmLfwUCAwEAAaNCMEAw
+DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFJxfAN+q
+AdcwKziIorhtSpzyEZGDMA0GCSqGSIb3DQEBCwUAA4IBAQBLNqaEd2ndOxmfZyMI
+bw5hyf2E3F/YNoHN2BtBLZ9g3ccaaNnRbobhiCPPE95Dz+I0swSdHynVv/heyNXB
+ve6SbzJ08pGCL72CQnqtKrcgfU28elUSwhXqvfdqlS5sdJ/PHLTyxQGjhdByPq1z
+qwubdQxtRbeOlKyWN7Wg0I8VRw7j6IPdj/3vQQF3zCepYoUz8jcI73HPdwbeyBkd
+iEDPfUYd/x7H4c7/I9vG+o1VTqkC50cRRj70/b17KSa7qWFiNyi2LSr2EIZkyXCn
+0q23KXB56jzaYyWf/Wi3MOxw+3WKt21gZ7IeyLnp2KhvAotnDU0mV3HaIPzBSlCN
+sSi6
+-----END CERTIFICATE-----
+
+AffirmTrust Commercial
+======================
+
+-----BEGIN CERTIFICATE-----
+MIIDTDCCAjSgAwIBAgIId3cGJyapsXwwDQYJKoZIhvcNAQELBQAwRDELMAkGA1UE
+BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz
+dCBDb21tZXJjaWFsMB4XDTEwMDEyOTE0MDYwNloXDTMwMTIzMTE0MDYwNlowRDEL
+MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp
+cm1UcnVzdCBDb21tZXJjaWFsMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
+AQEA9htPZwcroRX1BiLLHwGy43NFBkRJLLtJJRTWzsO3qyxPxkEylFf6EqdbDuKP
+Hx6GGaeqtS25Xw2Kwq+FNXkyLbscYjfysVtKPcrNcV/pQr6U6Mje+SJIZMblq8Yr
+ba0F8PrVC8+a5fBQpIs7R6UjW3p6+DM/uO+Zl+MgwdYoic+U+7lF7eNAFxHUdPAL
+MeIrJmqbTFeurCA+ukV6BfO9m2kVrn1OIGPENXY6BwLJN/3HR+7o8XYdcxXyl6S1
+yHp52UKqK39c/s4mT6NmgTWvRLpUHhwwMmWd5jyTXlBOeuM61G7MGvv50jeuJCqr
+VwMiKA1JdX+3KNp1v47j3A55MQIDAQABo0IwQDAdBgNVHQ4EFgQUnZPGU4teyq8/
+nx4P5ZmVvCT2lI8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ
+KoZIhvcNAQELBQADggEBAFis9AQOzcAN/wr91LoWXym9e2iZWEnStB03TX8nfUYG
+XUPGhi4+c7ImfU+TqbbEKpqrIZcUsd6M06uJFdhrJNTxFq7YpFzUf1GO7RgBsZNj
+vbz4YYCanrHOQnDiqX0GJX0nof5v7LMeJNrjS1UaADs1tDvZ110w/YETifLCBivt
+Z8SOyUOyXGsViQK8YvxO8rUzqrJv0wqiUOP2O+guRMLbZjipM1ZI8W0bM40NjD9g
+N53Tym1+NH4Nn3J2ixufcv1SNUFFApYvHLKac0khsUlHRUe072o0EclNmsxZt9YC
+nlpOZbWUrhvfKbAW8b8Angc6F2S1BLUjIZkKlTuXfO8=
+-----END CERTIFICATE-----
+
+AffirmTrust Networking
+======================
+
+-----BEGIN CERTIFICATE-----
+MIIDTDCCAjSgAwIBAgIIfE8EORzUmS0wDQYJKoZIhvcNAQEFBQAwRDELMAkGA1UE
+BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz
+dCBOZXR3b3JraW5nMB4XDTEwMDEyOTE0MDgyNFoXDTMwMTIzMTE0MDgyNFowRDEL
+MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp
+cm1UcnVzdCBOZXR3b3JraW5nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
+AQEAtITMMxcua5Rsa2FSoOujz3mUTOWUgJnLVWREZY9nZOIG41w3SfYvm4SEHi3y
+YJ0wTsyEheIszx6e/jarM3c1RNg1lho9Nuh6DtjVR6FqaYvZ/Ls6rnla1fTWcbua
+kCNrmreIdIcMHl+5ni36q1Mr3Lt2PpNMCAiMHqIjHNRqrSK6mQEubWXLviRmVSRL
+QESxG9fhwoXA3hA/Pe24/PHxI1Pcv2WXb9n5QHGNfb2V1M6+oF4nI979ptAmDgAp
+6zxG8D1gvz9Q0twmQVGeFDdCBKNwV6gbh+0t+nvujArjqWaJGctB+d1ENmHP4ndG
+yH329JKBNv3bNPFyfvMMFr20FQIDAQABo0IwQDAdBgNVHQ4EFgQUBx/S55zawm6i
+QLSwelAQUHTEyL0wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ
+KoZIhvcNAQEFBQADggEBAIlXshZ6qML91tmbmzTCnLQyFE2npN/svqe++EPbkTfO
+tDIuUFUaNU52Q3Eg75N3ThVwLofDwR1t3Mu1J9QsVtFSUzpE0nPIxBsFZVpikpzu
+QY0x2+c06lkh1QF612S4ZDnNye2v7UsDSKegmQGA3GWjNq5lWUhPgkvIZfFXHeVZ
+Lgo/bNjR9eUJtGxUAArgFU2HdW23WJZa3W3SAKD0m0i+wzekujbgfIeFlxoVot4u
+olu9rxj5kFDNcFn4J2dHy8egBzp90SxdbBk6ZrV9/ZFvgrG+CJPbFEfxojfHRZ48
+x3evZKiT3/Zpg4Jg8klCNO1aAFSFHBY2kgxc+qatv9s=
+-----END CERTIFICATE-----
+
+AffirmTrust Premium
+===================
+
+-----BEGIN CERTIFICATE-----
+MIIFRjCCAy6gAwIBAgIIbYwURrGmCu4wDQYJKoZIhvcNAQEMBQAwQTELMAkGA1UE
+BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1UcnVz
+dCBQcmVtaXVtMB4XDTEwMDEyOTE0MTAzNloXDTQwMTIzMTE0MTAzNlowQTELMAkG
+A1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1U
+cnVzdCBQcmVtaXVtMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAxBLf
+qV/+Qd3d9Z+K4/as4Tx4mrzY8H96oDMq3I0gW64tb+eT2TZwamjPjlGjhVtnBKAQ
+JG9dKILBl1fYSCkTtuG+kU3fhQxTGJoeJKJPj/CihQvL9Cl/0qRY7iZNyaqoe5rZ
++jjeRFcV5fiMyNlI4g0WJx0eyIOFJbe6qlVBzAMiSy2RjYvmia9mx+n/K+k8rNrS
+s8PhaJyJ+HoAVt70VZVs+7pk3WKL3wt3MutizCaam7uqYoNMtAZ6MMgpv+0GTZe5
+HMQxK9VfvFMSF5yZVylmd2EhMQcuJUmdGPLu8ytxjLW6OQdJd/zvLpKQBY0tL3d7
+70O/Nbua2Plzpyzy0FfuKE4mX4+QaAkvuPjcBukumj5Rp9EixAqnOEhss/n/fauG
+V+O61oV4d7pD6kh/9ti+I20ev9E2bFhc8e6kGVQa9QPSdubhjL08s9NIS+LI+H+S
+qHZGnEJlPqQewQcDWkYtuJfzt9WyVSHvutxMAJf7FJUnM7/oQ0dG0giZFmA7mn7S
+5u046uwBHjxIVkkJx0w3AJ6IDsBz4W9m6XJHMD4Q5QsDyZpCAGzFlH5hxIrff4Ia
+C1nEWTJ3s7xgaVY5/bQGeyzWZDbZvUjthB9+pSKPKrhC9IK31FOQeE4tGv2Bb0TX
+OwF0lkLgAOIua+rF7nKsu7/+6qqo+Nz2snmKtmcCAwEAAaNCMEAwHQYDVR0OBBYE
+FJ3AZ6YMItkm9UWrpmVSESfYRaxjMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/
+BAQDAgEGMA0GCSqGSIb3DQEBDAUAA4ICAQCzV00QYk465KzquByvMiPIs0laUZx2
+KI15qldGF9X1Uva3ROgIRL8YhNILgM3FEv0AVQVhh0HctSSePMTYyPtwni94loMg
+Nt58D2kTiKV1NpgIpsbfrM7jWNa3Pt668+s0QNiigfV4Py/VpfzZotReBA4Xrf5B
+8OWycvpEgjNC6C1Y91aMYj+6QrCcDFx+LmUmXFNPALJ4fqENmS2NuB2OosSw/WDQ
+MKSOyARiqcTtNd56l+0OOF6SL5Nwpamcb6d9Ex1+xghIsV5n61EIJenmJWtSKZGc
+0jlzCFfemQa0W50QBuHCAKi4HEoCChTQwUHK+4w1IX2COPKpVJEZNZOUbWo6xbLQ
+u4mGk+ibyQ86p3q4ofB4Rvr8Ny/lioTz3/4E2aFooC8k4gmVBtWVyuEklut89pMF
+u+1z6S3RdTnX5yTb2E5fQ4+e0BQ5v1VwSJlXMbSc7kqYA5YwH2AG7hsj/oFgIxpH
+YoWlzBk0gG+zrBrjn/B7SK3VAdlntqlyk+otZrWyuOQ9PLLvTIzq6we/qzWaVYa8
+GKa1qF60g2xraUDTn9zxw2lrueFtCfTxqlB2Cnp9ehehVZZCmTEJ3WARjQUwfuaO
+RtGdFNrHF+QFlozEJLUbzxQHskD4o55BhrwE0GuWyCqANP2/7waj3VjFhT0+j/6e
+KeC2uAloGRwYQw==
+-----END CERTIFICATE-----
+
+AffirmTrust Premium ECC
+=======================
+
+-----BEGIN CERTIFICATE-----
+MIIB/jCCAYWgAwIBAgIIdJclisc/elQwCgYIKoZIzj0EAwMwRTELMAkGA1UEBhMC
+VVMxFDASBgNVBAoMC0FmZmlybVRydXN0MSAwHgYDVQQDDBdBZmZpcm1UcnVzdCBQ
+cmVtaXVtIEVDQzAeFw0xMDAxMjkxNDIwMjRaFw00MDEyMzExNDIwMjRaMEUxCzAJ
+BgNVBAYTAlVTMRQwEgYDVQQKDAtBZmZpcm1UcnVzdDEgMB4GA1UEAwwXQWZmaXJt
+VHJ1c3QgUHJlbWl1bSBFQ0MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQNMF4bFZ0D
+0KF5Nbc6PJJ6yhUczWLznCZcBz3lVPqj1swS6vQUX+iOGasvLkjmrBhDeKzQN8O9
+ss0s5kfiGuZjuD0uL3jET9v0D6RoTFVya5UdThhClXjMNzyR4ptlKymjQjBAMB0G
+A1UdDgQWBBSaryl6wBE1NSZRMADDav5A1a7WPDAPBgNVHRMBAf8EBTADAQH/MA4G
+A1UdDwEB/wQEAwIBBjAKBggqhkjOPQQDAwNnADBkAjAXCfOHiFBar8jAQr9HX/Vs
+aobgxCd05DhT1wV/GzTjxi+zygk8N53X57hG8f2h4nECMEJZh0PUUd+60wkyWs6I
+flc9nF9Ca/UHLbXwgpP5WW+uZPpY5Yse42O+tYHNbwKMeQ==
+-----END CERTIFICATE-----
+
+StartCom Certification Authority
+================================
+
+-----BEGIN CERTIFICATE-----
+MIIHhzCCBW+gAwIBAgIBLTANBgkqhkiG9w0BAQsFADB9MQswCQYDVQQGEwJJTDEW
+MBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMiU2VjdXJlIERpZ2l0YWwg
+Q2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3RhcnRDb20gQ2VydGlmaWNh
+dGlvbiBBdXRob3JpdHkwHhcNMDYwOTE3MTk0NjM3WhcNMzYwOTE3MTk0NjM2WjB9
+MQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMi
+U2VjdXJlIERpZ2l0YWwgQ2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3Rh
+cnRDb20gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUA
+A4ICDwAwggIKAoICAQDBiNsJvGxGfHiflXu1M5DycmLWwTYgIiRezul38kMKogZk
+pMyONvg45iPwbm2xPN1yo4UcodM9tDMr0y+v/uqwQVlntsQGfQqedIXWeUyAN3rf
+OQVSWff0G0ZDpNKFhdLDcfN1YjS6LIp/Ho/u7TTQEceWzVI9ujPW3U3eCztKS5/C
+Ji/6tRYccjV3yjxd5srhJosaNnZcAdt0FCX+7bWgiA/deMotHweXMAEtcnn6RtYT
+Kqi5pquDSR3l8u/d5AGOGAqPY1MWhWKpDhk6zLVmpsJrdAfkK+F2PrRt2PZE4XNi
+HzvEvqBTViVsUQn3qqvKv3b9bZvzndu/PWa8DFaqr5hIlTpL36dYUNk4dalb6kMM
+Av+Z6+hsTXBbKWWc3apdzK8BMewM69KN6Oqce+Zu9ydmDBpI125C4z/eIT574Q1w
++2OqqGwaVLRcJXrJosmLFqa7LH4XXgVNWG4SHQHuEhANxjJ/GP/89PrNbpHoNkm+
+Gkhpi8KWTRoSsmkXwQqQ1vp5Iki/untp+HDH+no32NgN0nZPV/+Qt+OR0t3vwmC3
+Zzrd/qqc8NSLf3Iizsafl7b4r4qgEKjZ+xjGtrVcUjyJthkqcwEKDwOzEmDyei+B
+26Nu/yYwl/WL3YlXtq09s68rxbd2AvCl1iuahhQqcvbjM4xdCUsT37uMdBNSSwID
+AQABo4ICEDCCAgwwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYD
+VR0OBBYEFE4L7xqkQFulF2mHMMo0aEPQQa7yMB8GA1UdIwQYMBaAFE4L7xqkQFul
+F2mHMMo0aEPQQa7yMIIBWgYDVR0gBIIBUTCCAU0wggFJBgsrBgEEAYG1NwEBATCC
+ATgwLgYIKwYBBQUHAgEWImh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL3BvbGljeS5w
+ZGYwNAYIKwYBBQUHAgEWKGh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL2ludGVybWVk
+aWF0ZS5wZGYwgc8GCCsGAQUFBwICMIHCMCcWIFN0YXJ0IENvbW1lcmNpYWwgKFN0
+YXJ0Q29tKSBMdGQuMAMCAQEagZZMaW1pdGVkIExpYWJpbGl0eSwgcmVhZCB0aGUg
+c2VjdGlvbiAqTGVnYWwgTGltaXRhdGlvbnMqIG9mIHRoZSBTdGFydENvbSBDZXJ0
+aWZpY2F0aW9uIEF1dGhvcml0eSBQb2xpY3kgYXZhaWxhYmxlIGF0IGh0dHA6Ly93
+d3cuc3RhcnRzc2wuY29tL3BvbGljeS5wZGYwEQYJYIZIAYb4QgEBBAQDAgAHMDgG
+CWCGSAGG+EIBDQQrFilTdGFydENvbSBGcmVlIFNTTCBDZXJ0aWZpY2F0aW9uIEF1
+dGhvcml0eTANBgkqhkiG9w0BAQsFAAOCAgEAjo/n3JR5fPGFf59Jb2vKXfuM/gTF
+wWLRfUKKvFO3lANmMD+x5wqnUCBVJX92ehQN6wQOQOY+2IirByeDqXWmN3PH/UvS
+Ta0XQMhGvjt/UfzDtgUx3M2FIk5xt/JxXrAaxrqTi3iSSoX4eA+D/i+tLPfkpLst
+0OcNOrg+zvZ49q5HJMqjNTbOx8aHmNrs++myziebiMMEofYLWWivydsQD032ZGNc
+pRJvkrKTlMeIFw6Ttn5ii5B/q06f/ON1FE8qMt9bDeD1e5MNq6HPh+GlBEXoPBKl
+CcWw0bdT82AUuoVpaiF8H3VhFyAXe2w7QSlc4axa0c2Mm+tgHRns9+Ww2vl5GKVF
+P0lDV9LdJNUso/2RjSe15esUBppMeyG7Oq0wBhjA2MFrLH9ZXF2RsXAiV+uKa0hK
+1Q8p7MZAwC+ITGgBF3f0JBlPvfrhsiAhS90a2Cl9qrjeVOwhVYBsHvUwyKMQ5bLm
+KhQxw4UtjJixhlpPiVktucf3HMiKf8CdBUrmQk9io20ppB+Fq9vlgcitKj1MXVuE
+JnHEhV5xJMqlG2zYYdMa4FTbzrqpMrUi9nNBCV24F10OD5mQ1kfabwo6YigUZ4LZ
+8dCAWZvLMdibD4x3TrVoivJs9iQOLWxwxXPR3hTQcY+203sC9uO41Alua551hDnm
+fyWl8kgAwKQB2j8=
+-----END CERTIFICATE-----
+
+StartCom Certification Authority G2
+===================================
+
+-----BEGIN CERTIFICATE-----
+MIIFYzCCA0ugAwIBAgIBOzANBgkqhkiG9w0BAQsFADBTMQswCQYDVQQGEwJJTDEW
+MBQGA1UEChMNU3RhcnRDb20gTHRkLjEsMCoGA1UEAxMjU3RhcnRDb20gQ2VydGlm
+aWNhdGlvbiBBdXRob3JpdHkgRzIwHhcNMTAwMTAxMDEwMDAxWhcNMzkxMjMxMjM1
+OTAxWjBTMQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20gTHRkLjEsMCoG
+A1UEAxMjU3RhcnRDb20gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgRzIwggIiMA0G
+CSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC2iTZbB7cgNr2Cu+EWIAOVeq8Oo1XJ
+JZlKxdBWQYeQTSFgpBSHO839sj60ZwNq7eEPS8CRhXBF4EKe3ikj1AENoBB5uNsD
+vfOpL9HG4A/LnooUCri99lZi8cVytjIl2bLzvWXFDSxu1ZJvGIsAQRSCb0AgJnoo
+D/Uefyf3lLE3PbfHkffiAez9lInhzG7TNtYKGXmu1zSCZf98Qru23QumNK9LYP5/
+Q0kGi4xDuFby2X8hQxfqp0iVAXV16iulQ5XqFYSdCI0mblWbq9zSOdIxHWDirMxW
+RST1HFSr7obdljKF+ExP6JV2tgXdNiNnvP8V4so75qbsO+wmETRIjfaAKxojAuuK
+HDp2KntWFhxyKrOq42ClAJ8Em+JvHhRYW6Vsi1g8w7pOOlz34ZYrPu8HvKTlXcxN
+nw3h3Kq74W4a7I/htkxNeXJdFzULHdfBR9qWJODQcqhaX2YtENwvKhOuJv4KHBnM
+0D4LnMgJLvlblnpHnOl68wVQdJVznjAJ85eCXuaPOQgeWeU1FEIT/wCc976qUM/i
+UUjXuG+v+E5+M5iSFGI6dWPPe/regjupuznixL0sAA7IF6wT700ljtizkC+p2il9
+Ha90OrInwMEePnWjFqmveiJdnxMaz6eg6+OGCtP95paV1yPIN93EfKo2rJgaErHg
+TuixO/XWb/Ew1wIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQE
+AwIBBjAdBgNVHQ4EFgQUS8W0QGutHLOlHGVuRjaJhwUMDrYwDQYJKoZIhvcNAQEL
+BQADggIBAHNXPyzVlTJ+N9uWkusZXn5T50HsEbZH77Xe7XRcxfGOSeD8bpkTzZ+K
+2s06Ctg6Wgk/XzTQLwPSZh0avZyQN8gMjgdalEVGKua+etqhqaRpEpKwfTbURIfX
+UfEpY9Z1zRbkJ4kd+MIySP3bmdCPX1R0zKxnNBFi2QwKN4fRoxdIjtIXHfbX/dtl
+6/2o1PXWT6RbdejF0mCy2wl+JYt7ulKSnj7oxXehPOBKc2thz4bcQ///If4jXSRK
+9dNtD2IEBVeC2m6kMyV5Sy5UGYvMLD0w6dEG/+gyRr61M3Z3qAFdlsHB1b6uJcDJ
+HgoJIIihDsnzb02CVAAgp9KP5DlUFy6NHrgbuxu9mk47EDTcnIhT76IxW1hPkWLI
+wpqazRVdOKnWvvgTtZ8SafJQYqz7Fzf07rh1Z2AQ+4NQ+US1dZxAF7L+/XldblhY
+XzD8AK6vM8EOTmy6p6ahfzLbOOCxchcKK5HsamMm7YnUeMx0HgX4a/6ManY5Ka5l
+IxKVCCIcl85bBu4M4ru8H0ST9tg4RQUh7eStqxK2A6RCLi3ECToDZ2mEmuFZkIoo
+hdVddLHRDiBYmxOlsGOm7XtH/UVVMKTumtTm4ofvmMkyghEpIrwACjFeLQ/Ajulr
+so8uBtjRkcfGEvRM/TAXw8HaOFvjqermobp573PYtlNXLfbQ4ddI
+-----END CERTIFICATE-----
diff --git a/boto/cloudformation/__init__.py b/boto/cloudformation/__init__.py
index 53a02e5..d5de73e 100644
--- a/boto/cloudformation/__init__.py
+++ b/boto/cloudformation/__init__.py
@@ -30,7 +30,9 @@
     'sa-east-1': 'cloudformation.sa-east-1.amazonaws.com',
     'eu-west-1': 'cloudformation.eu-west-1.amazonaws.com',
     'ap-northeast-1': 'cloudformation.ap-northeast-1.amazonaws.com',
-    'ap-southeast-1': 'cloudformation.ap-southeast-1.amazonaws.com'}
+    'ap-southeast-1': 'cloudformation.ap-southeast-1.amazonaws.com',
+    'ap-southeast-2': 'cloudformation.ap-southeast-2.amazonaws.com',
+}
 
 
 def regions():
diff --git a/boto/cloudformation/connection.py b/boto/cloudformation/connection.py
index 816066c..84c7680 100644
--- a/boto/cloudformation/connection.py
+++ b/boto/cloudformation/connection.py
@@ -19,17 +19,13 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
-try:
-    import simplejson as json
-except:
-    import json
-
 import boto
 from boto.cloudformation.stack import Stack, StackSummary, StackEvent
 from boto.cloudformation.stack import StackResource, StackResourceSummary
 from boto.cloudformation.template import Template
 from boto.connection import AWSQueryConnection
 from boto.regioninfo import RegionInfo
+from boto.compat import json
 
 
 class CloudFormationConnection(AWSQueryConnection):
@@ -42,9 +38,15 @@
     DefaultRegionEndpoint = boto.config.get('Boto', 'cfn_region_endpoint',
                                             'cloudformation.us-east-1.amazonaws.com')
 
-    valid_states = ("CREATE_IN_PROGRESS", "CREATE_FAILED", "CREATE_COMPLETE",
-            "ROLLBACK_IN_PROGRESS", "ROLLBACK_FAILED", "ROLLBACK_COMPLETE",
-            "DELETE_IN_PROGRESS", "DELETE_FAILED", "DELETE_COMPLETE")
+    valid_states = (
+        'CREATE_IN_PROGRESS', 'CREATE_FAILED', 'CREATE_COMPLETE',
+        'ROLLBACK_IN_PROGRESS', 'ROLLBACK_FAILED', 'ROLLBACK_COMPLETE',
+        'DELETE_IN_PROGRESS', 'DELETE_FAILED', 'DELETE_COMPLETE',
+        'UPDATE_IN_PROGRESS', 'UPDATE_COMPLETE_CLEANUP_IN_PROGRESS',
+        'UPDATE_COMPLETE', 'UPDATE_ROLLBACK_IN_PROGRESS',
+        'UPDATE_ROLLBACK_FAILED',
+        'UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS',
+        'UPDATE_ROLLBACK_COMPLETE')
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
diff --git a/boto/cloudformation/stack.py b/boto/cloudformation/stack.py
index 9a9d63b..289e18f 100644
--- a/boto/cloudformation/stack.py
+++ b/boto/cloudformation/stack.py
@@ -41,7 +41,10 @@
 
     def endElement(self, name, value, connection):
         if name == 'CreationTime':
-            self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+            try:
+                self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+            except ValueError:
+                self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
         elif name == "Description":
             self.description = value
         elif name == "DisableRollback":
@@ -200,6 +203,7 @@
         dict.__init__(self)
         self.connection = connection
         self._current_key = None
+        self._current_value = None
 
     def startElement(self, name, attrs, connection):
         return None
@@ -208,10 +212,15 @@
         if name == "Key":
             self._current_key = value
         elif name == "Value":
-            self[self._current_key] = value
+            self._current_value = value
         else:
             setattr(self, name, value)
 
+        if self._current_key and self._current_value:
+            self[self._current_key] = self._current_value
+            self._current_key = None
+            self._current_value = None
+
 
 class NotificationARN(object):
     def __init__(self, connection=None):
@@ -345,7 +354,10 @@
         elif name == "StackName":
             self.stack_name = value
         elif name == "Timestamp":
-            self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+            try:
+                self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+            except ValueError:
+                self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
         else:
             setattr(self, name, value)
 
diff --git a/boto/cloudfront/distribution.py b/boto/cloudfront/distribution.py
index 718f2c2..78b2624 100644
--- a/boto/cloudfront/distribution.py
+++ b/boto/cloudfront/distribution.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -22,10 +22,7 @@
 import uuid
 import base64
 import time
-try:
-    import simplejson as json
-except ImportError:
-    import json
+from boto.compat import json
 from boto.cloudfront.identity import OriginAccessIdentity
 from boto.cloudfront.object import Object, StreamingObject
 from boto.cloudfront.signers import ActiveTrustedSigners, TrustedSigners
@@ -52,23 +49,23 @@
         :param enabled: Whether the distribution is enabled to accept
                         end user requests for content.
         :type enabled: bool
-        
+
         :param caller_reference: A unique number that ensures the
                                  request can't be replayed.  If no
                                  caller_reference is provided, boto
                                  will generate a type 4 UUID for use
                                  as the caller reference.
         :type enabled: str
-        
+
         :param cnames: A CNAME alias you want to associate with this
                        distribution. You can have up to 10 CNAME aliases
                        per distribution.
         :type enabled: array of str
-        
+
         :param comment: Any comments you want to include about the
                         distribution.
         :type comment: str
-        
+
         :param trusted_signers: Specifies any AWS accounts you want to
                                 permit to create signed URLs for private
                                 content. If you want the distribution to
@@ -77,7 +74,7 @@
                                 distribution to use basic URLs, leave
                                 this None.
         :type trusted_signers: :class`boto.cloudfront.signers.TrustedSigners`
-        
+
         :param default_root_object: Designates a default root object.
                                     Only include a DefaultRootObject value
                                     if you are going to assign a default
@@ -89,7 +86,7 @@
                         this should contain a LoggingInfo object; otherwise
                         it should contain None.
         :type logging: :class`boto.cloudfront.logging.LoggingInfo`
-        
+
         """
         self.connection = connection
         self.origin = origin
@@ -281,7 +278,7 @@
 
     def get_distribution(self):
         return self.connection.get_streaming_distribution_info(self.id)
-    
+
 class Distribution:
 
     def __init__(self, connection=None, config=None, domain_name='',
@@ -403,11 +400,11 @@
             return self._bucket
         else:
             raise NotImplementedError('Unable to get_objects on CustomOrigin')
-    
+
     def get_objects(self):
         """
         Return a list of all content objects in this distribution.
-        
+
         :rtype: list of :class:`boto.cloudfront.object.Object`
         :return: The content objects
         """
@@ -643,13 +640,13 @@
     @staticmethod
     def _sign_string(message, private_key_file=None, private_key_string=None):
         """
-        Signs a string for use with Amazon CloudFront.  Requires the M2Crypto
-        library be installed.
+        Signs a string for use with Amazon CloudFront.
+        Requires the rsa library be installed.
         """
         try:
-            from M2Crypto import EVP
+            import rsa
         except ImportError:
-            raise NotImplementedError("Boto depends on the python M2Crypto "
+            raise NotImplementedError("Boto depends on the python rsa "
                                       "library to generate signed URLs for "
                                       "CloudFront")
         # Make sure only one of private_key_file and private_key_string is set
@@ -657,18 +654,16 @@
             raise ValueError("Only specify the private_key_file or the private_key_string not both")
         if not private_key_file and not private_key_string:
             raise ValueError("You must specify one of private_key_file or private_key_string")
-        # if private_key_file is a file object read the key string from there
+        # If private_key_file is a file, read its contents. Otherwise, open it and then read it
         if isinstance(private_key_file, file):
             private_key_string = private_key_file.read()
-        # Now load key and calculate signature
-        if private_key_string:
-            key = EVP.load_key_string(private_key_string)
-        else:
-            key = EVP.load_key(private_key_file)
-        key.reset_context(md='sha1')
-        key.sign_init()
-        key.sign_update(str(message))
-        signature = key.sign_final()
+        elif private_key_file:
+            with open(private_key_file, 'r') as file_handle:
+                private_key_string = file_handle.read()
+
+        # Sign it!
+        private_key = rsa.PrivateKey.load_pkcs1(private_key_string)
+        signature = rsa.sign(str(message), private_key, 'SHA-1')
         return signature
 
     @staticmethod
@@ -746,5 +741,5 @@
 
     def delete(self):
         self.connection.delete_streaming_distribution(self.id, self.etag)
-            
-        
+
+
diff --git a/boto/cloudsearch/__init__.py b/boto/cloudsearch/__init__.py
index 9c8157a..5ba1060 100644
--- a/boto/cloudsearch/__init__.py
+++ b/boto/cloudsearch/__init__.py
@@ -35,6 +35,9 @@
     return [RegionInfo(name='us-east-1',
                        endpoint='cloudsearch.us-east-1.amazonaws.com',
                        connection_cls=boto.cloudsearch.layer1.Layer1),
+            RegionInfo(name='eu-west-1',
+                       endpoint='cloudsearch.eu-west-1.amazonaws.com',
+                       connection_cls=boto.cloudsearch.layer1.Layer1),
             ]
 
 
diff --git a/boto/cloudsearch/document.py b/boto/cloudsearch/document.py
index 64a11e0..c799d70 100644
--- a/boto/cloudsearch/document.py
+++ b/boto/cloudsearch/document.py
@@ -21,12 +21,9 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-try:
-    import simplejson as json
-except ImportError:
-    import json
 
 import boto.exception
+from boto.compat import json
 import requests
 import boto
 
@@ -37,8 +34,50 @@
 class CommitMismatchError(Exception):
     pass
 
+class EncodingError(Exception):
+    """
+    Content sent for Cloud Search indexing was incorrectly encoded.
+
+    This usually happens when a document is marked as unicode but non-unicode
+    characters are present.
+    """
+    pass
+
+class ContentTooLongError(Exception):
+    """
+    Content sent for Cloud Search indexing was too long.
+
+    This will usually happen when documents queued for indexing add up to more
+    than the limit allowed per upload batch (5MB).
+
+    """
+    pass
 
 class DocumentServiceConnection(object):
+    """
+    A CloudSearch document service.
+
+    The DocumentServiceConnection is used to add, remove and update documents in
+    CloudSearch. Commands are uploaded to CloudSearch in SDF (Search Data Format).
+
+    To generate an appropriate SDF, use :func:`add` to add or update documents,
+    as well as :func:`delete` to remove documents.
+
+    Once the set of documents is ready to be indexed, use :func:`commit` to send the
+    commands to CloudSearch.
+
+    If there are a lot of documents to index, it may be preferable to split the
+    generation of SDF data and the actual uploading into CloudSearch. Retrieve
+    the current SDF with :func:`get_sdf`. If this file is then uploaded to S3,
+    it can be retrieved back afterwards for upload into CloudSearch using
+    :func:`add_sdf_from_s3`.
+
+    The SDF is not cleared after a :func:`commit`. If you wish to continue
+    using the DocumentServiceConnection for another batch upload of commands,
+    you will need to :func:`clear_sdf` first to stop the previous batch of
+    commands from being uploaded again.
+
+    """
 
     def __init__(self, domain=None, endpoint=None):
         self.domain = domain
@@ -49,26 +88,95 @@
         self._sdf = None
 
     def add(self, _id, version, fields, lang='en'):
+        """
+        Add a document to be processed by the DocumentService
+
+        The document will not actually be added until :func:`commit` is called
+
+        :type _id: string
+        :param _id: A unique ID used to refer to this document.
+
+        :type version: int
+        :param version: Version of the document being indexed. If a file is
+            being reindexed, the version should be higher than the existing one
+            in CloudSearch.
+
+        :type fields: dict
+        :param fields: A dictionary of key-value pairs to be uploaded.
+
+        :type lang: string
+        :param lang: The language code the data is in. Only 'en' is currently
+            supported.
+        """
+
         d = {'type': 'add', 'id': _id, 'version': version, 'lang': lang,
             'fields': fields}
         self.documents_batch.append(d)
 
     def delete(self, _id, version):
+        """
+        Schedule a document to be removed from the CloudSearch service
+
+        The document will not actually be scheduled for removal until :func:`commit` is called
+
+        :type _id: string
+        :param _id: The unique ID of this document.
+
+        :type version: int
+        :param version: Version of the document to remove. The delete will only
+            occur if this version number is higher than the version currently
+            in the index.
+        """
+
         d = {'type': 'delete', 'id': _id, 'version': version}
         self.documents_batch.append(d)
 
     def get_sdf(self):
+        """
+        Generate the working set of documents in Search Data Format (SDF)
+
+        :rtype: string
+        :returns: JSON-formatted string of the documents in SDF
+        """
+
         return self._sdf if self._sdf else json.dumps(self.documents_batch)
 
     def clear_sdf(self):
+        """
+        Clear the working documents from this DocumentServiceConnection
+
+        This should be used after :func:`commit` if the connection will be reused
+        for another set of documents.
+        """
+
         self._sdf = None
         self.documents_batch = []
 
     def add_sdf_from_s3(self, key_obj):
-        """@todo (lucas) would be nice if this could just take an s3://uri..."""
+        """
+        Load an SDF from S3
+
+        Using this method will result in documents added through
+        :func:`add` and :func:`delete` being ignored.
+
+        :type key_obj: :class:`boto.s3.key.Key`
+        :param key_obj: An S3 key which contains an SDF
+        """
+        # @todo: (lucas) would be nice if this could just take an s3://uri...
+
         self._sdf = key_obj.get_contents_as_string()
 
     def commit(self):
+        """
+        Actually send an SDF to CloudSearch for processing
+
+        If an SDF file has been explicitly loaded it will be used. Otherwise,
+        documents added through :func:`add` and :func:`delete` will be used.
+
+        :rtype: :class:`CommitResponse`
+        :returns: A summary of documents added and deleted
+        """
+
         sdf = self.get_sdf()
 
         if ': null' in sdf:
@@ -79,15 +187,19 @@
 
         url = "http://%s/2011-02-01/documents/batch" % (self.endpoint)
 
-        request_config = {
-            'pool_connections': 20,
-            'keep_alive': True,
-            'max_retries': 5,
-            'pool_maxsize': 50
-        }
-
-        r = requests.post(url, data=sdf, config=request_config,
-            headers={'Content-Type': 'application/json'})
+        # Keep-alive is automatic in a post-1.0 requests world.
+        session = requests.Session()
+        adapter = requests.adapters.HTTPAdapter(
+            pool_connections=20,
+            pool_maxsize=50
+        )
+        # Now kludge in the right number of retries.
+        # Once we're requiring ``requests>=1.2.1``, this can become an
+        # initialization parameter above.
+        adapter.max_retries = 5
+        session.mount('http://', adapter)
+        session.mount('https://', adapter)
+        r = session.post(url, data=sdf, headers={'Content-Type': 'application/json'})
 
         return CommitResponse(r, self, sdf)
 
@@ -98,12 +210,14 @@
     :type response: :class:`requests.models.Response`
     :param response: Response from Cloudsearch /documents/batch API
 
-    :type doc_service: :class:`exfm.cloudsearch.DocumentServiceConnection`
+    :type doc_service: :class:`boto.cloudsearch.document.DocumentServiceConnection`
     :param doc_service: Object containing the documents posted and methods to
         retry
 
     :raises: :class:`boto.exception.BotoServerError`
-    :raises: :class:`exfm.cloudsearch.SearchServiceException`
+    :raises: :class:`boto.cloudsearch.document.SearchServiceException`
+    :raises: :class:`boto.cloudsearch.document.EncodingError`
+    :raises: :class:`boto.cloudsearch.document.ContentTooLongError`
     """
     def __init__(self, response, doc_service, sdf):
         self.response = response
@@ -113,8 +227,8 @@
         try:
             self.content = json.loads(response.content)
         except:
-            boto.log.error('Error indexing documents.\nResponse Content:\n{}\n\n'
-                'SDF:\n{}'.format(response.content, self.sdf))
+            boto.log.error('Error indexing documents.\nResponse Content:\n{0}\n\n'
+                'SDF:\n{1}'.format(response.content, self.sdf))
             raise boto.exception.BotoServerError(self.response.status_code, '',
                 body=response.content)
 
@@ -122,6 +236,11 @@
         if self.status == 'error':
             self.errors = [e.get('message') for e in self.content.get('errors',
                 [])]
+            for e in self.errors:
+                if "Illegal Unicode character" in e:
+                    raise EncodingError("Illegal Unicode character in document")
+                elif e == "The Content-Length is too long":
+                    raise ContentTooLongError("Content was too long")
         else:
             self.errors = []
 
@@ -139,12 +258,12 @@
         :type response_num: int
         :param response_num: Number of adds or deletes in the response.
 
-        :raises: :class:`exfm.cloudsearch.SearchServiceException`
+        :raises: :class:`boto.cloudsearch.document.CommitMismatchError`
         """
         commit_num = len([d for d in self.doc_service.documents_batch
             if d['type'] == type_])
 
         if response_num != commit_num:
             raise CommitMismatchError(
-                'Incorrect number of {}s returned. Commit: {} Respose: {}'\
+                'Incorrect number of {0}s returned. Commit: {1} Response: {2}'\
                 .format(type_, commit_num, response_num))
diff --git a/boto/cloudsearch/domain.py b/boto/cloudsearch/domain.py
index 43fcac8..9497325 100644
--- a/boto/cloudsearch/domain.py
+++ b/boto/cloudsearch/domain.py
@@ -23,10 +23,7 @@
 #
 
 import boto
-try:
-    import simplejson as json
-except ImportError:
-    import json
+from boto.compat import json
 from .optionstatus import OptionStatus
 from .optionstatus import IndexFieldStatus
 from .optionstatus import ServicePoliciesStatus
diff --git a/boto/cloudsearch/layer1.py b/boto/cloudsearch/layer1.py
index 054fc32..ff71293 100644
--- a/boto/cloudsearch/layer1.py
+++ b/boto/cloudsearch/layer1.py
@@ -82,7 +82,7 @@
             if not inner:
                 return None if list_marker == None else []
             if isinstance(inner, list):
-                return [dict(**i) for i in inner]
+                return inner
             else:
                 return dict(**inner)
         else:
diff --git a/boto/cloudsearch/optionstatus.py b/boto/cloudsearch/optionstatus.py
index 869d82f..dddda76 100644
--- a/boto/cloudsearch/optionstatus.py
+++ b/boto/cloudsearch/optionstatus.py
@@ -22,10 +22,9 @@
 # IN THE SOFTWARE.
 #
 
-try:
-    import simplejson as json
-except ImportError:
-    import json
+import time
+from boto.compat import json
+
 
 class OptionStatus(dict):
     """
diff --git a/boto/cloudsearch/search.py b/boto/cloudsearch/search.py
index f1b16e4..69a1981 100644
--- a/boto/cloudsearch/search.py
+++ b/boto/cloudsearch/search.py
@@ -23,8 +23,8 @@
 #
 from math import ceil
 import time
-import json
 import boto
+from boto.compat import json
 import requests
 
 
@@ -51,6 +51,12 @@
         self.query = attrs['query']
         self.search_service = attrs['search_service']
 
+        self.facets = {}
+        if 'facets' in attrs:
+            for (facet, values) in attrs['facets'].iteritems():
+                if 'constraints' in values:
+                    self.facets[facet] = dict((k, v) for (k, v) in map(lambda x: (x['value'], x['count']), values['constraints']))
+
         self.num_pages_needed = ceil(self.hits / self.query.real_size)
 
     def __len__(self):
@@ -62,8 +68,8 @@
     def next_page(self):
         """Call Cloudsearch to get the next page of search results
 
-        :rtype: :class:`exfm.cloudsearch.SearchResults`
-        :return: A cloudsearch SearchResults object
+        :rtype: :class:`boto.cloudsearch.search.SearchResults`
+        :return: the following page of search results
         """
         if self.query.page <= self.num_pages_needed:
             self.query.start += self.query.real_size
@@ -161,43 +167,105 @@
                size=10, start=0, facet=None, facet_constraints=None,
                facet_sort=None, facet_top_n=None, t=None):
         """
-        Query Cloudsearch
+        Send a query to CloudSearch
 
-        :type q:
-        :param q:
+        Each search query should use at least the q or bq argument to specify
+        the search parameter. The other options are used to specify the
+        criteria of the search.
 
-        :type bq:
-        :param bq:
+        :type q: string
+        :param q: A string to search the default search fields for.
 
-        :type rank:
-        :param rank:
+        :type bq: string
+        :param bq: A string to perform a Boolean search. This can be used to
+            create advanced searches.
 
-        :type return_fields:
-        :param return_fields:
+        :type rank: List of strings
+        :param rank: A list of fields or rank expressions used to order the
+            search results. A field can be reversed by using the - operator.
+            ``['-year', 'author']``
 
-        :type size:
-        :param size:
+        :type return_fields: List of strings
+        :param return_fields: A list of fields which should be returned by the
+            search. If this field is not specified, only IDs will be returned.
+            ``['headline']``
 
-        :type start:
-        :param start:
+        :type size: int
+        :param size: Number of search results to return
 
-        :type facet:
-        :param facet:
+        :type start: int
+        :param start: Offset of the first search result to return (can be used
+            for paging)
 
-        :type facet_constraints:
-        :param facet_constraints:
+        :type facet: list
+        :param facet: List of fields for which facets should be returned
+            ``['colour', 'size']``
 
-        :type facet_sort:
-        :param facet_sort:
+        :type facet_constraints: dict
+        :param facet_constraints: Use to limit facets to specific values
+            specified as comma-delimited strings in a Dictionary of facets
+            ``{'colour': "'blue','white','red'", 'size': "big"}``
 
-        :type facet_top_n:
-        :param facet_top_n:
+        :type facet_sort: dict
+        :param facet_sort: Rules used to specify the order in which facet
+            values should be returned. Allowed values are *alpha*, *count*,
+            *max*, *sum*. Use *alpha* to sort alphabetically, and *count* to
+            sort the facet by the number of available results.
+            ``{'color': 'alpha', 'size': 'count'}``
 
-        :type t:
-        :param t:
+        :type facet_top_n: dict
+        :param facet_top_n: Dictionary of facets and number of facets to
+            return.
+            ``{'colour': 2}``
 
-        :rtype: :class:`exfm.cloudsearch.SearchResults`
-        :return: A cloudsearch SearchResults object
+        :type t: dict
+        :param t: Specify ranges for specific fields
+            ``{'year': '2000..2005'}``
+
+        :rtype: :class:`boto.cloudsearch.search.SearchResults`
+        :return: Returns the results of this search
+
+        The following examples all assume we have indexed a set of documents
+        with fields: *author*, *date*, *headline*
+
+        A simple search will look for documents whose default text search
+        fields will contain the search word exactly:
+
+        >>> search(q='Tim') # Return documents with the word Tim in them (but not Timothy)
+
+        A simple search with more keywords will return documents whose default
+        text search fields contain the search strings together or separately.
+
+        >>> search(q='Tim apple') # Will match "tim" and "apple"
+
+        More complex searches require the boolean search operator.
+
+        Wildcard searches can be used to search for any words that start with
+        the search string.
+
+        >>> search(bq="'Tim*'") # Return documents with words like Tim or Timothy
+
+        Search terms can also be combined. Allowed operators are "and", "or",
+        "not", "field", "optional", "token", "phrase", or "filter".
+
+        >>> search(bq="(and 'Tim' (field author 'John Smith'))")
+
+        Facets allow you to show classification information about the search
+        results. For example, you can retrieve the authors who have written
+        about Tim:
+
+        >>> search(q='Tim', facet=['Author'])
+
+        With facet_constraints, facet_top_n and facet_sort more complicated
+        constraints can be specified such as returning the top author out of
+        John Smith and Mark Smith who have a document with the word Tim in it.
+
+        >>> search(q='Tim',
+        ...     facet=['author'],
+        ...     facet_constraints={'author': "'John Smith','Mark Smith'"},
+        ...     facet_top_n={'author': 1},
+        ...     facet_sort={'author': 'count'})
         """
 
         query = self.build_query(q=q, bq=bq, rank=rank,
@@ -211,11 +279,11 @@
     def __call__(self, query):
         """Make a call to CloudSearch
 
-        :type query: :class:`exfm.cloudsearch.Query`
-        :param query: A fully specified Query instance
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: A group of search criteria
 
-        :rtype: :class:`exfm.cloudsearch.SearchResults`
-        :return: A cloudsearch SearchResults object
+        :rtype: :class:`boto.cloudsearch.search.SearchResults`
+        :return: search results
         """
         url = "http://%s/2011-02-01/search" % (self.endpoint)
         params = query.to_params()
@@ -239,14 +307,14 @@
     def get_all_paged(self, query, per_page):
         """Get a generator to iterate over all pages of search results
 
-        :type query: :class:`exfm.cloudsearch.Query`
-        :param query: A fully specified Query instance
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: A group of search criteria
 
         :type per_page: int
-        :param per_page: Number of docs in each SearchResults object.
+        :param per_page: Number of docs in each :class:`boto.cloudsearch.search.SearchResults` object.
 
         :rtype: generator
-        :return: Generator containing :class:`exfm.cloudsearch.SearchResults`
+        :return: Generator containing :class:`boto.cloudsearch.search.SearchResults`
         """
         query.update_size(per_page)
         page = 0
@@ -266,8 +334,8 @@
         you can iterate over all results in a reasonably efficient
         manner.
 
-        :type query: :class:`exfm.cloudsearch.Query`
-        :param query: A fully specified Query instance
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: A group of search criteria
 
         :rtype: generator
         :return: All docs matching query
@@ -285,8 +353,8 @@
     def get_num_hits(self, query):
         """Return the total number of hits for query
 
-        :type query: :class:`exfm.cloudsearch.Query`
-        :param query: A fully specified Query instance
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: A group of search criteria
 
         :rtype: int
         :return: Total number of hits for query
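The new ``facets`` handling in ``SearchResults.__init__`` reshapes each facet's constraint list into a ``{value: count}`` mapping; a standalone sketch of that transformation, using made-up response data, is::

    # Illustrative fragment of a CloudSearch response's "facets" section.
    raw_facets = {'author': {'constraints': [{'value': 'John Smith', 'count': 12},
                                             {'value': 'Mark Smith', 'count': 7}]}}

    facets = {}
    for facet, values in raw_facets.iteritems():
        if 'constraints' in values:
            facets[facet] = dict((c['value'], c['count'])
                                 for c in values['constraints'])

    # facets == {'author': {'John Smith': 12, 'Mark Smith': 7}}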
diff --git a/boto/compat.py b/boto/compat.py
new file mode 100644
index 0000000..44fbc3b
--- /dev/null
+++ b/boto/compat.py
@@ -0,0 +1,28 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+# This allows boto modules to say "from boto.compat import json".  This is
+# preferred so that all modules don't have to repeat this idiom.
+try:
+    import simplejson as json
+except ImportError:
+    import json
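With this module in place, callers get the simplejson-or-stdlib fallback from a single import instead of repeating the try/except block, e.g.::

    from boto.compat import json

    body = json.dumps({'type': 'add', 'id': 'doc-1', 'version': 1})
    assert json.loads(body)['id'] == 'doc-1'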
diff --git a/boto/connection.py b/boto/connection.py
index 080ff5e..1f7392c 100644
--- a/boto/connection.py
+++ b/boto/connection.py
@@ -67,8 +67,10 @@
 import boto.cacerts
 
 from boto import config, UserAgent
-from boto.exception import AWSConnectionError, BotoClientError
+from boto.exception import AWSConnectionError
+from boto.exception import BotoClientError
 from boto.exception import BotoServerError
+from boto.exception import PleaseRetryException
 from boto.provider import Provider
 from boto.resultset import ResultSet
 
@@ -487,7 +489,7 @@
         self.handle_proxy(proxy, proxy_port, proxy_user, proxy_pass)
         # define exceptions from httplib that we want to catch and retry
         self.http_exceptions = (httplib.HTTPException, socket.error,
-                                socket.gaierror)
+                                socket.gaierror, httplib.BadStatusLine)
         # define subclasses of the above that are not retryable.
         self.http_unretryable_exceptions = []
         if HAVE_HTTPS_CONNECTION:
@@ -523,9 +525,11 @@
         # timeouts will only be applied if Python is 2.6 or greater.
         self.http_connection_kwargs = {}
         if (sys.version_info[0], sys.version_info[1]) >= (2, 6):
-            if config.has_option('Boto', 'http_socket_timeout'):
-                timeout = config.getint('Boto', 'http_socket_timeout')
-                self.http_connection_kwargs['timeout'] = timeout
+            # If timeout isn't defined in boto config file, use 70 second
+            # default as recommended by
+            # http://docs.aws.amazon.com/amazonswf/latest/apireference/API_PollForActivityTask.html
+            self.http_connection_kwargs['timeout'] = config.getint(
+                'Boto', 'http_socket_timeout', 70)
 
         if isinstance(provider, Provider):
             # Allow overriding Provider
@@ -537,15 +541,19 @@
                                      aws_secret_access_key,
                                      security_token)
 
-        # allow config file to override default host
+        # Allow config file to override default host and port.
         if self.provider.host:
             self.host = self.provider.host
+        if self.provider.port:
+            self.port = self.provider.port
 
         self._pool = ConnectionPool()
         self._connection = (self.server_name(), self.is_secure)
         self._last_rs = None
         self._auth_handler = auth.get_auth_handler(
               host, config, self.provider, self._required_auth_capability())
+        if getattr(self, 'AuthServiceName', None) is not None:
+            self.auth_service_name = self.AuthServiceName
 
     def __repr__(self):
         return '%s:%s' % (self.__class__.__name__, self.host)
@@ -553,6 +561,23 @@
     def _required_auth_capability(self):
         return []
 
+    def _get_auth_service_name(self):
+        return getattr(self._auth_handler, 'service_name')
+
+    # For Sigv4, the auth_service_name/auth_region_name properties allow
+    # the service_name/region_name to be explicitly set instead of being
+    # derived from the endpoint url.
+    def _set_auth_service_name(self, value):
+        self._auth_handler.service_name = value
+    auth_service_name = property(_get_auth_service_name, _set_auth_service_name)
+
+    def _get_auth_region_name(self):
+        return getattr(self._auth_handler, 'region_name')
+
+    def _set_auth_region_name(self, value):
+        self._auth_handler.region_name = value
+    auth_region_name = property(_get_auth_region_name, _set_auth_region_name)
+
     def connection(self):
         return self.get_http_connection(*self._connection)
     connection = property(connection)
@@ -575,7 +600,7 @@
         # https://groups.google.com/forum/#!topic/boto-dev/-ft0XPUy0y8
         # You can override that behavior with the suppress_consec_slashes param.
         if not self.suppress_consec_slashes:
-            return self.path + re.sub('^/*', "", path)
+            return self.path + re.sub('^(/*)/', "\\1", path)
         pos = path.find('?')
         if pos >= 0:
             params = path[pos:]
@@ -658,7 +683,7 @@
             return self.new_http_connection(host, is_secure)
 
     def new_http_connection(self, host, is_secure):
-        if self.use_proxy:
+        if self.use_proxy and not is_secure:
             host = '%s:%d' % (self.proxy, int(self.proxy_port))
         if host is None:
             host = self.server_name()
@@ -667,7 +692,7 @@
                     'establishing HTTPS connection: host=%s, kwargs=%s',
                     host, self.http_connection_kwargs)
             if self.use_proxy:
-                connection = self.proxy_ssl()
+                connection = self.proxy_ssl(host, is_secure and 443 or 80)
             elif self.https_connection_factory:
                 connection = self.https_connection_factory(host)
             elif self.https_validate_certificates and HAVE_HTTPS_CONNECTION:
@@ -703,11 +728,16 @@
     def put_http_connection(self, host, is_secure, connection):
         self._pool.put_http_connection(host, is_secure, connection)
 
-    def proxy_ssl(self):
-        host = '%s:%d' % (self.host, self.port)
+    def proxy_ssl(self, host=None, port=None):
+        if host and port:
+            host = '%s:%d' % (host, port)
+        else:
+            host = '%s:%d' % (self.host, self.port)
         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         try:
             sock.connect((self.proxy, int(self.proxy_port)))
+            if "timeout" in self.http_connection_kwargs:
+                sock.settimeout(self.http_connection_kwargs["timeout"])
         except:
             raise
         boto.log.debug("Proxy connection: CONNECT %s HTTP/1.0\r\n", host)
@@ -789,6 +819,7 @@
         boto.log.debug('Data: %s' % request.body)
         boto.log.debug('Headers: %s' % request.headers)
         boto.log.debug('Host: %s' % request.host)
+        boto.log.debug('Params: %s' % request.params)
         response = None
         body = None
         e = None
@@ -799,7 +830,7 @@
         i = 0
         connection = self.get_http_connection(request.host, self.is_secure)
         while i <= num_retries:
-            # Use binary exponential backoff to desynchronize client requests
+            # Use binary exponential backoff to desynchronize client requests.
             next_sleep = random.random() * (2 ** i)
             try:
                 # we now re-sign each request before it is retried
@@ -849,6 +880,11 @@
                                                           scheme == 'https')
                     response = None
                     continue
+            except PleaseRetryException, e:
+                boto.log.debug('encountered a retry exception: %s' % e)
+                connection = self.new_http_connection(request.host,
+                                                      self.is_secure)
+                response = e.response
             except self.http_exceptions, e:
                 for unretryable in self.http_unretryable_exceptions:
                     if isinstance(e, unretryable):
@@ -865,7 +901,7 @@
         # If we made it here, it's because we have exhausted our retries
         # and stil haven't succeeded.  So, if we have a response object,
         # use it to raise an exception.
-        # Otherwise, raise the exception that must have already h#appened.
+        # Otherwise, raise the exception that must have already happened.
         if response:
             raise BotoServerError(response.status, response.reason, body)
         elif e:
@@ -901,13 +937,14 @@
 
     def make_request(self, method, path, headers=None, data='', host=None,
                      auth_path=None, sender=None, override_num_retries=None,
-                     params=None):
+                     params=None, retry_handler=None):
         """Makes a request to the server, with stock multiple-retry logic."""
         if params is None:
             params = {}
         http_request = self.build_base_http_request(method, path, auth_path,
                                                     params, headers, data, host)
-        return self._mexe(http_request, sender, override_num_retries)
+        return self._mexe(http_request, sender, override_num_retries,
+                          retry_handler=retry_handler)
 
     def close(self):
         """(Optional) Close any open HTTP connections.  This is non-destructive,
@@ -952,11 +989,51 @@
         return self._mexe(http_request)
 
     def build_list_params(self, params, items, label):
-        if isinstance(items, str):
+        if isinstance(items, basestring):
             items = [items]
         for i in range(1, len(items) + 1):
             params['%s.%d' % (label, i)] = items[i - 1]
 
+    def build_complex_list_params(self, params, items, label, names):
+        """Serialize a list of structures.
+
+        For example::
+
+            items = [('foo', 'bar', 'baz'), ('foo2', 'bar2', 'baz2')]
+            label = 'ParamName.member'
+            names = ('One', 'Two', 'Three')
+            self.build_complex_list_params(params, items, label, names)
+
+        would result in the params dict being updated with these params::
+
+            ParamName.member.1.One = foo
+            ParamName.member.1.Two = bar
+            ParamName.member.1.Three = baz
+
+            ParamName.member.2.One = foo2
+            ParamName.member.2.Two = bar2
+            ParamName.member.2.Three = baz2
+
+        :type params: dict
+        :param params: The params dict.  The complex list params
+            will be added to this dict.
+
+        :type items: list of tuples
+        :param items: The list to serialize.
+
+        :type label: string
+        :param label: The prefix to apply to the parameter.
+
+        :type names: tuple of strings
+        :param names: The names associated with each tuple element.
+
+        """
+        for i, item in enumerate(items, 1):
+            current_prefix = '%s.%s' % (label, i)
+            for key, value in zip(names, item):
+                full_key = '%s.%s' % (current_prefix, key)
+                params[full_key] = value
+
     # generics
 
     def get_list(self, action, params, markers, path='/',
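The ``build_complex_list_params`` docstring spells out the wire format; the serialization is simple enough to sketch standalone (a re-statement of the logic above for illustration, not a new API)::

    def build_complex_list_params(params, items, label, names):
        # One query parameter per (name, value) pair, indexed from 1.
        for i, item in enumerate(items, 1):
            prefix = '%s.%s' % (label, i)
            for key, value in zip(names, item):
                params['%s.%s' % (prefix, key)] = value

    params = {}
    build_complex_list_params(params,
                              [('foo', 'bar', 'baz'), ('foo2', 'bar2', 'baz2')],
                              'ParamName.member', ('One', 'Two', 'Three'))
    assert params['ParamName.member.1.One'] == 'foo'
    assert params['ParamName.member.2.Three'] == 'baz2'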
diff --git a/boto/core/credentials.py b/boto/core/credentials.py
index 1f315a3..b4b35b5 100644
--- a/boto/core/credentials.py
+++ b/boto/core/credentials.py
@@ -23,8 +23,8 @@
 #
 import os
 from six.moves import configparser
+from boto.compat import json
 import requests
-import json
 
 
 class Credentials(object):
diff --git a/boto/datapipeline/__init__.py b/boto/datapipeline/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/boto/datapipeline/__init__.py
diff --git a/boto/datapipeline/exceptions.py b/boto/datapipeline/exceptions.py
new file mode 100644
index 0000000..c2761e2
--- /dev/null
+++ b/boto/datapipeline/exceptions.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.exception import JSONResponseError
+
+
+class PipelineDeletedException(JSONResponseError):
+    pass
+
+
+class InvalidRequestException(JSONResponseError):
+    pass
+
+
+class TaskNotFoundException(JSONResponseError):
+    pass
+
+
+class PipelineNotFoundException(JSONResponseError):
+    pass
+
+
+class InternalServiceError(JSONResponseError):
+    pass
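These fault classes are wired to the service's error codes through the ``_faults`` table in ``layer1.py`` below; a hedged usage sketch (credentials and region taken from the usual boto config, pipeline id is a placeholder)::

    from boto.datapipeline.layer1 import DataPipelineConnection
    from boto.datapipeline import exceptions

    conn = DataPipelineConnection()
    try:
        conn.describe_pipelines(['df-0000000000000000000'])  # placeholder id
    except exceptions.PipelineNotFoundException, e:
        print 'no such pipeline:', e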
diff --git a/boto/datapipeline/layer1.py b/boto/datapipeline/layer1.py
new file mode 100644
index 0000000..ff0c400
--- /dev/null
+++ b/boto/datapipeline/layer1.py
@@ -0,0 +1,641 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+import boto
+from boto.compat import json
+from boto.connection import AWSQueryConnection
+from boto.regioninfo import RegionInfo
+from boto.exception import JSONResponseError
+from boto.datapipeline import exceptions
+
+
+class DataPipelineConnection(AWSQueryConnection):
+    """
+    This is the AWS Data Pipeline API Reference. This guide provides
+    descriptions and samples of the AWS Data Pipeline API.
+
+    AWS Data Pipeline is a web service that configures and manages a
+    data-driven workflow called a pipeline. AWS Data Pipeline handles
+    the details of scheduling and ensuring that data dependencies are
+    met so your application can focus on processing the data.
+
+    The AWS Data Pipeline API implements two main sets of
+    functionality. The first set of actions configure the pipeline in
+    the web service. You call these actions to create a pipeline and
+    define data sources, schedules, dependencies, and the transforms
+    to be performed on the data.
+
+    The second set of actions are used by a task runner application
+    that calls the AWS Data Pipeline API to receive the next task
+    ready for processing. The logic for performing the task, such as
+    querying the data, running data analysis, or converting the data
+    from one format to another, is contained within the task runner.
+    The task runner performs the task assigned to it by the web
+    service, reporting progress to the web service as it does so. When
+    the task is done, the task runner reports the final success or
+    failure of the task to the web service.
+
+    AWS Data Pipeline provides an open-source implementation of a task
+    runner called AWS Data Pipeline Task Runner. AWS Data Pipeline
+    Task Runner provides logic for common data management scenarios,
+    such as performing database queries and running data analysis
+    using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data
+    Pipeline Task Runner as your task runner, or you can write your
+    own task runner to provide custom data management.
+
+    The AWS Data Pipeline API uses the Signature Version 4 protocol
+    for signing requests. For more information about how to sign a
+    request with this protocol, see `Signature Version 4 Signing
+    Process`_. In the code examples in this reference, the Signature
+    Version 4 Request parameters are represented as AuthParams.
+    """
+    APIVersion = "2012-10-29"
+    DefaultRegionName = "us-east-1"
+    DefaultRegionEndpoint = "datapipeline.us-east-1.amazonaws.com"
+    ServiceName = "DataPipeline"
+    TargetPrefix = "DataPipeline"
+    ResponseError = JSONResponseError
+
+    _faults = {
+        "PipelineDeletedException": exceptions.PipelineDeletedException,
+        "InvalidRequestException": exceptions.InvalidRequestException,
+        "TaskNotFoundException": exceptions.TaskNotFoundException,
+        "PipelineNotFoundException": exceptions.PipelineNotFoundException,
+        "InternalServiceError": exceptions.InternalServiceError,
+    }
+
+
+    def __init__(self, **kwargs):
+        region = kwargs.get('region')
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint)
+        kwargs['host'] = region.endpoint
+        AWSQueryConnection.__init__(self, **kwargs)
+        self.region = region
+
+    def _required_auth_capability(self):
+        return ['hmac-v4']
+
+    def activate_pipeline(self, pipeline_id):
+        """
+        Validates a pipeline and initiates processing. If the pipeline
+        does not pass validation, activation fails.
+
+        Call this action to start processing pipeline tasks of a
+        pipeline you've created using the CreatePipeline and
+        PutPipelineDefinition actions. A pipeline cannot be modified
+        after it has been successfully activated.
+
+        :type pipeline_id: string
+        :param pipeline_id: The identifier of the pipeline to activate.
+
+        """
+        params = {'pipelineId': pipeline_id, }
+        return self.make_request(action='ActivatePipeline',
+                                 body=json.dumps(params))
+
+    def create_pipeline(self, name, unique_id, description=None):
+        """
+        Creates a new empty pipeline. When this action succeeds, you
+        can then use the PutPipelineDefinition action to populate the
+        pipeline.
+
+        :type name: string
+        :param name: The name of the new pipeline. You can use the same name
+            for multiple pipelines associated with your AWS account, because
+            AWS Data Pipeline assigns each new pipeline a unique pipeline
+            identifier.
+
+        :type unique_id: string
+        :param unique_id: A unique identifier that you specify. This identifier
+            is not the same as the pipeline identifier assigned by AWS Data
+            Pipeline. You are responsible for defining the format and ensuring
+            the uniqueness of this identifier. You use this parameter to ensure
+            idempotency during repeated calls to CreatePipeline. For example,
+            if the first call to CreatePipeline does not return a clear
+            success, you can pass in the same unique identifier and pipeline
+            name combination on a subsequent call to CreatePipeline.
+            CreatePipeline ensures that if a pipeline already exists with the
+            same name and unique identifier, a new pipeline will not be
+            created. Instead, you'll receive the pipeline identifier from the
+            previous attempt. The uniqueness of the name and unique identifier
+            combination is scoped to the AWS account or IAM user credentials.
+
+        :type description: string
+        :param description: The description of the new pipeline.
+
+        """
+        params = {'name': name, 'uniqueId': unique_id, }
+        if description is not None:
+            params['description'] = description
+        return self.make_request(action='CreatePipeline',
+                                 body=json.dumps(params))
+
+    def delete_pipeline(self, pipeline_id):
+        """
+        Permanently deletes a pipeline, its pipeline definition and
+        its run history. You cannot query or restore a deleted
+        pipeline. AWS Data Pipeline will attempt to cancel instances
+        associated with the pipeline that are currently being
+        processed by task runners. Deleting a pipeline cannot be
+        undone.
+
+        To temporarily pause a pipeline instead of deleting it, call
+        SetStatus with the status set to Pause on individual
+        components. Components that are paused by SetStatus can be
+        resumed.
+
+        :type pipeline_id: string
+        :param pipeline_id: The identifier of the pipeline to be deleted.
+
+        """
+        params = {'pipelineId': pipeline_id, }
+        return self.make_request(action='DeletePipeline',
+                                 body=json.dumps(params))
+
+    def describe_objects(self, object_ids, pipeline_id, marker=None,
+                         evaluate_expressions=None):
+        """
+        Returns the object definitions for a set of objects associated
+        with the pipeline. Object definitions are composed of a set of
+        fields that define the properties of the object.
+
+        :type pipeline_id: string
+        :param pipeline_id: Identifier of the pipeline that contains the object
+            definitions.
+
+        :type object_ids: list
+        :param object_ids: Identifiers of the pipeline objects that contain the
+            definitions to be described. You can pass as many as 25 identifiers
+            in a single call to DescribeObjects.
+
+        :type evaluate_expressions: boolean
+        :param evaluate_expressions: Indicates whether any expressions in the
+            object should be evaluated when the object descriptions are
+            returned.
+
+        :type marker: string
+        :param marker: The starting point for the results to be returned. The
+            first time you call DescribeObjects, this value should be empty. As
+            long as the action returns `HasMoreResults` as `True`, you can call
+            DescribeObjects again and pass the marker value from the response
+            to retrieve the next set of results.
+
+        """
+        params = {
+            'pipelineId': pipeline_id,
+            'objectIds': object_ids,
+        }
+        if evaluate_expressions is not None:
+            params['evaluateExpressions'] = evaluate_expressions
+        if marker is not None:
+            params['marker'] = marker
+        return self.make_request(action='DescribeObjects',
+                                 body=json.dumps(params))
+
+    def describe_pipelines(self, pipeline_ids):
+        """
+        Retrieve metadata about one or more pipelines. The information
+        retrieved includes the name of the pipeline, the pipeline
+        identifier, its current state, and the user account that owns
+        the pipeline. Using account credentials, you can retrieve
+        metadata about pipelines that you or your IAM users have
+        created. If you are using an IAM user account, you can
+        retrieve metadata about only those pipelines you have read
+        permission for.
+
+        To retrieve the full pipeline definition instead of metadata
+        about the pipeline, call the GetPipelineDefinition action.
+
+        :type pipeline_ids: list
+        :param pipeline_ids: Identifiers of the pipelines to describe. You can
+            pass as many as 25 identifiers in a single call to
+            DescribePipelines. You can obtain pipeline identifiers by calling
+            ListPipelines.
+
+        """
+        params = {'pipelineIds': pipeline_ids, }
+        return self.make_request(action='DescribePipelines',
+                                 body=json.dumps(params))
+
+    def evaluate_expression(self, pipeline_id, expression, object_id):
+        """
+        Evaluates a string in the context of a specified object. A
+        task runner can use this action to evaluate SQL queries stored
+        in Amazon S3.
+
+        :type pipeline_id: string
+        :param pipeline_id: The identifier of the pipeline.
+
+        :type object_id: string
+        :param object_id: The identifier of the object.
+
+        :type expression: string
+        :param expression: The expression to evaluate.
+
+        """
+        params = {
+            'pipelineId': pipeline_id,
+            'objectId': object_id,
+            'expression': expression,
+        }
+        return self.make_request(action='EvaluateExpression',
+                                 body=json.dumps(params))
+
+    def get_pipeline_definition(self, pipeline_id, version=None):
+        """
+        Returns the definition of the specified pipeline. You can call
+        GetPipelineDefinition to retrieve the pipeline definition you
+        provided using PutPipelineDefinition.
+
+        :type pipeline_id: string
+        :param pipeline_id: The identifier of the pipeline.
+
+        :type version: string
+        :param version: The version of the pipeline definition to retrieve.
+            This parameter accepts the values `latest` (default) and `active`.
+            Where `latest` indicates the last definition saved to the pipeline
+            and `active` indicates the last definition of the pipeline that was
+            activated.
+
+        """
+        params = {'pipelineId': pipeline_id, }
+        if version is not None:
+            params['version'] = version
+        return self.make_request(action='GetPipelineDefinition',
+                                 body=json.dumps(params))
+
+    def list_pipelines(self, marker=None):
+        """
+        Returns a list of pipeline identifiers for all active
+        pipelines. Identifiers are returned only for pipelines you
+        have permission to access.
+
+        :type marker: string
+        :param marker: The starting point for the results to be returned. The
+            first time you call ListPipelines, this value should be empty. As
+            long as the action returns `HasMoreResults` as `True`, you can call
+            ListPipelines again and pass the marker value from the response to
+            retrieve the next set of results.
+
+        """
+        params = {}
+        if marker is not None:
+            params['marker'] = marker
+        return self.make_request(action='ListPipelines',
+                                 body=json.dumps(params))
+
+    def poll_for_task(self, worker_group, hostname=None,
+                      instance_identity=None):
+        """
+        Task runners call this action to receive a task to perform
+        from AWS Data Pipeline. The task runner specifies which tasks
+        it can perform by setting a value for the workerGroup
+        parameter of the PollForTask call. The task returned by
+        PollForTask may come from any of the pipelines that match the
+        workerGroup value passed in by the task runner and that was
+        launched using the IAM user credentials specified by the task
+        runner.
+
+        If tasks are ready in the work queue, PollForTask returns a
+        response immediately. If no tasks are available in the queue,
+        PollForTask uses long-polling and holds on to a poll
+        connection for up to 90 seconds, during which time the first
+        newly scheduled task is handed to the task runner. To
+        accommodate this, set the socket timeout in your task runner to
+        90 seconds. The task runner should not call PollForTask again
+        on the same `workerGroup` until it receives a response, and
+        this may take up to 90 seconds.
+
+        :type worker_group: string
+        :param worker_group: Indicates the type of task the task runner is
+            configured to accept and process. The worker group is set as a
+            field on objects in the pipeline when they are created. You can
+            only specify a single value for `workerGroup` in the call to
+            PollForTask. There are no wildcard values permitted in
+            `workerGroup`, the string must be an exact, case-sensitive, match.
+
+        :type hostname: string
+        :param hostname: The public DNS name of the calling task runner.
+
+        :type instance_identity: dict
+        :param instance_identity: Identity information for the Amazon EC2
+            instance that is hosting the task runner. You can get this value by
+            calling the URI, `http://169.254.169.254/latest/meta-data/instance-
+            id`, from the EC2 instance. For more information, go to `Instance
+            Metadata`_ in the Amazon Elastic Compute Cloud User Guide. Passing
+            in this value proves that your task runner is running on an EC2
+            instance, and ensures the proper AWS Data Pipeline service charges
+            are applied to your pipeline.
+
+        """
+        params = {'workerGroup': worker_group, }
+        if hostname is not None:
+            params['hostname'] = hostname
+        if instance_identity is not None:
+            params['instanceIdentity'] = instance_identity
+        return self.make_request(action='PollForTask',
+                                 body=json.dumps(params))
+
+    def put_pipeline_definition(self, pipeline_objects, pipeline_id):
+        """
+        Adds tasks, schedules, and preconditions that control the
+        behavior of the pipeline. You can use PutPipelineDefinition to
+        populate a new pipeline or to update an existing pipeline that
+        has not yet been activated.
+
+        PutPipelineDefinition also validates the configuration as it
+        adds it to the pipeline. Changes to the pipeline are saved
+        unless one of the following three validation errors exists in
+        the pipeline.
+
+        #. An object is missing a name or identifier field.
+        #. A string or reference field is empty.
+        #. The number of objects in the pipeline exceeds the maximum
+           allowed objects.
+
+
+
+        Pipeline object definitions are passed to the
+        PutPipelineDefinition action and returned by the
+        GetPipelineDefinition action.
+
+        :type pipeline_id: string
+        :param pipeline_id: The identifier of the pipeline to be configured.
+
+        :type pipeline_objects: list
+        :param pipeline_objects: The objects that define the pipeline. These
+            will overwrite the existing pipeline definition.
+
+        """
+        params = {
+            'pipelineId': pipeline_id,
+            'pipelineObjects': pipeline_objects,
+        }
+        return self.make_request(action='PutPipelineDefinition',
+                                 body=json.dumps(params))
+
+    def query_objects(self, pipeline_id, sphere, marker=None, query=None,
+                      limit=None):
+        """
+        Queries a pipeline for the names of objects that match a
+        specified set of conditions.
+
+        The objects returned by QueryObjects are paginated and then
+        filtered by the value you set for query. This means the action
+        may return an empty result set with a value set for marker. If
+        `HasMoreResults` is set to `True`, you should continue to call
+        QueryObjects, passing in the returned value for marker, until
+        `HasMoreResults` returns `False`.
+
+        :type pipeline_id: string
+        :param pipeline_id: Identifier of the pipeline to be queried for object
+            names.
+
+        :type query: dict
+        :param query: Query that defines the objects to be returned. The Query
+            object can contain a maximum of ten selectors. The conditions in
+            the query are limited to top-level String fields in the object.
+            These filters can be applied to components, instances, and
+            attempts.
+
+        :type sphere: string
+        :param sphere: Specifies whether the query applies to components or
+            instances. Allowable values: `COMPONENT`, `INSTANCE`, `ATTEMPT`.
+
+        :type marker: string
+        :param marker: The starting point for the results to be returned. The
+            first time you call QueryObjects, this value should be empty. As
+            long as the action returns `HasMoreResults` as `True`, you can call
+            QueryObjects again and pass the marker value from the response to
+            retrieve the next set of results.
+
+        :type limit: integer
+        :param limit: Specifies the maximum number of object names that
+            QueryObjects will return in a single call. The default value is
+            100.
+
+        """
+        params = {'pipelineId': pipeline_id, 'sphere': sphere, }
+        if query is not None:
+            params['query'] = query
+        if marker is not None:
+            params['marker'] = marker
+        if limit is not None:
+            params['limit'] = limit
+        return self.make_request(action='QueryObjects',
+                                 body=json.dumps(params))
+
+    def report_task_progress(self, task_id):
+        """
+        Updates the AWS Data Pipeline service on the progress of the
+        calling task runner. When the task runner is assigned a task,
+        it should call ReportTaskProgress to acknowledge that it has
+        the task within 2 minutes. If the web service does not receive
+        this acknowledgement within the 2 minute window, it will
+        assign the task in a subsequent PollForTask call. After this
+        initial acknowledgement, the task runner only needs to report
+        progress every 15 minutes to maintain its ownership of the
+        task. You can change this reporting time from 15 minutes by
+        specifying a `reportProgressTimeout` field in your pipeline.
+        If a task runner does not report its status after 5 minutes,
+        AWS Data Pipeline will assume that the task runner is unable
+        to process the task and will reassign the task in a subsequent
+        response to PollForTask. Task runners should call
+        ReportTaskProgress every 60 seconds.
+
+        :type task_id: string
+        :param task_id: Identifier of the task assigned to the task runner.
+            This value is provided in the TaskObject that the service returns
+            with the response for the PollForTask action.
+
+        """
+        params = {'taskId': task_id, }
+        return self.make_request(action='ReportTaskProgress',
+                                 body=json.dumps(params))
+
+    def report_task_runner_heartbeat(self, taskrunner_id, worker_group=None,
+                                     hostname=None):
+        """
+        Task runners call ReportTaskRunnerHeartbeat every 15 minutes
+        to indicate that they are operational. In the case of AWS Data
+        Pipeline Task Runner launched on a resource managed by AWS
+        Data Pipeline, the web service can use this call to detect
+        when the task runner application has failed and restart a new
+        instance.
+
+        :type taskrunner_id: string
+        :param taskrunner_id: The identifier of the task runner. This value
+            should be unique across your AWS account. In the case of AWS Data
+            Pipeline Task Runner launched on a resource managed by AWS Data
+            Pipeline, the web service provides a unique identifier when it
+            launches the application. If you have written a custom task runner,
+            you should assign a unique identifier for the task runner.
+
+        :type worker_group: string
+        :param worker_group: Indicates the type of task the task runner is
+            configured to accept and process. The worker group is set as a
+            field on objects in the pipeline when they are created. You can
+            only specify a single value for `workerGroup` in the call to
+            ReportTaskRunnerHeartbeat. There are no wildcard values permitted
+            in `workerGroup`, the string must be an exact, case-sensitive,
+            match.
+
+        :type hostname: string
+        :param hostname: The public DNS name of the calling task runner.
+
+        """
+        params = {'taskrunnerId': taskrunner_id, }
+        if worker_group is not None:
+            params['workerGroup'] = worker_group
+        if hostname is not None:
+            params['hostname'] = hostname
+        return self.make_request(action='ReportTaskRunnerHeartbeat',
+                                 body=json.dumps(params))
+
+    def set_status(self, object_ids, status, pipeline_id):
+        """
+        Requests that the status of an array of physical or logical
+        pipeline objects be updated in the pipeline. This update may
+        not occur immediately, but is eventually consistent. The
+        status that can be set depends on the type of object.
+
+        :type pipeline_id: string
+        :param pipeline_id: Identifies the pipeline that contains the objects.
+
+        :type object_ids: list
+        :param object_ids: Identifies an array of objects. The corresponding
+            objects can be either physical objects (instances) or components,
+            but not a mix of both types.
+
+        :type status: string
+        :param status: Specifies the status to be set on all the objects in
+            `objectIds`. For components, this can be either `PAUSE` or
+            `RESUME`. For instances, this can be either `CANCEL`, `RERUN`, or
+            `MARK_FINISHED`.
+
+        """
+        params = {
+            'pipelineId': pipeline_id,
+            'objectIds': object_ids,
+            'status': status,
+        }
+        return self.make_request(action='SetStatus',
+                                 body=json.dumps(params))
+
+    def set_task_status(self, task_id, task_status, error_id=None,
+                        error_message=None, error_stack_trace=None):
+        """
+        Notifies AWS Data Pipeline that a task is completed and
+        provides information about the final status. The task runner
+        calls this action regardless of whether the task was
+        successful. The task runner does not need to call SetTaskStatus
+        for tasks that are canceled by the web service during a call
+        to ReportTaskProgress.
+
+        :type task_id: string
+        :param task_id: Identifies the task assigned to the task runner. This
+            value is set in the TaskObject that is returned by the PollForTask
+            action.
+
+        :type task_status: string
+        :param task_status: If `FINISHED`, the task successfully completed. If
+            `FAILED`, the task ended unsuccessfully. The `FALSE` value is used
+            by preconditions.
+
+        :type error_id: string
+        :param error_id: If an error occurred during the task, this value
+            specifies an id value that represents the error. This value is set
+            on the physical attempt object. It is used to display error
+            information to the user. It should not start with the string
+            "Service_", which is reserved by the system.
+
+        :type error_message: string
+        :param error_message: If an error occurred during the task, this value
+            specifies a text description of the error. This value is set on the
+            physical attempt object. It is used to display error information to
+            the user. The web service does not parse this value.
+
+        :type error_stack_trace: string
+        :param error_stack_trace: If an error occurred during the task, this
+            value specifies the stack trace associated with the error. This
+            value is set on the physical attempt object. It is used to display
+            error information to the user. The web service does not parse this
+            value.
+
+        """
+        params = {'taskId': task_id, 'taskStatus': task_status, }
+        if error_id is not None:
+            params['errorId'] = error_id
+        if error_message is not None:
+            params['errorMessage'] = error_message
+        if error_stack_trace is not None:
+            params['errorStackTrace'] = error_stack_trace
+        return self.make_request(action='SetTaskStatus',
+                                 body=json.dumps(params))
+
+    def validate_pipeline_definition(self, pipeline_objects, pipeline_id):
+        """
+        Tests the pipeline definition with a set of validation checks
+        to ensure that it is well formed and can run without error.
+
+        :type pipeline_id: string
+        :param pipeline_id: Identifies the pipeline whose definition is to be
+            validated.
+
+        :type pipeline_objects: list
+        :param pipeline_objects: A list of objects that define the pipeline
+            changes to validate against the pipeline.
+
+        """
+        params = {
+            'pipelineId': pipeline_id,
+            'pipelineObjects': pipeline_objects,
+        }
+        return self.make_request(action='ValidatePipelineDefinition',
+                                 body=json.dumps(params))
+
+    def make_request(self, action, body):
+        headers = {
+            'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action),
+            'Host': self.region.endpoint,
+            'Content-Type': 'application/x-amz-json-1.1',
+            'Content-Length': str(len(body)),
+        }
+        http_request = self.build_base_http_request(
+            method='POST', path='/', auth_path='/', params={},
+            headers=headers, data=body)
+        response = self._mexe(http_request, sender=None,
+                              override_num_retries=10)
+        response_body = response.read()
+        boto.log.debug(response_body)
+        if response.status == 200:
+            if response_body:
+                return json.loads(response_body)
+        else:
+            json_body = json.loads(response_body)
+            fault_name = json_body.get('__type', None)
+            exception_class = self._faults.get(fault_name, self.ResponseError)
+            raise exception_class(response.status, response.reason,
+                                  body=json_body)
+
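
The calls above are thin JSON wrappers around `make_request`, and together they cover the lifecycle of a custom task runner: heartbeat, progress report, final status. A minimal sketch of that flow, assuming this module's connection class is ``DataPipelineConnection`` and that ``do_work()`` and the identifiers are placeholders::

    from boto.datapipeline.layer1 import DataPipelineConnection

    conn = DataPipelineConnection()  # credentials from the usual boto config/env

    # Tell the service this (custom) task runner is alive.
    conn.report_task_runner_heartbeat('my-runner-id', worker_group='wg-1')

    task_id = 'df-EXAMPLE-TASK-ID'   # normally returned by PollForTask
    try:
        do_work()                            # placeholder for the real work
        conn.report_task_progress(task_id)   # keep ownership of the task
        conn.set_task_status(task_id, 'FINISHED')
    except Exception, e:
        conn.set_task_status(task_id, 'FAILED',
                             error_id='MyRunnerError',
                             error_message=str(e))
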
diff --git a/boto/dynamodb/__init__.py b/boto/dynamodb/__init__.py
index c60b5c3..1220436 100644
--- a/boto/dynamodb/__init__.py
+++ b/boto/dynamodb/__init__.py
@@ -47,9 +47,15 @@
             RegionInfo(name='ap-southeast-1',
                        endpoint='dynamodb.ap-southeast-1.amazonaws.com',
                        connection_cls=boto.dynamodb.layer2.Layer2),
+            RegionInfo(name='ap-southeast-2',
+                       endpoint='dynamodb.ap-southeast-2.amazonaws.com',
+                       connection_cls=boto.dynamodb.layer2.Layer2),
             RegionInfo(name='eu-west-1',
                        endpoint='dynamodb.eu-west-1.amazonaws.com',
                        connection_cls=boto.dynamodb.layer2.Layer2),
+            RegionInfo(name='sa-east-1',
+                   endpoint='dynamodb.sa-east-1.amazonaws.com',
+                   connection_cls=boto.dynamodb.layer2.Layer2),
             ]
 
 
diff --git a/boto/dynamodb/batch.py b/boto/dynamodb/batch.py
index 87c84fc..6a755a9 100644
--- a/boto/dynamodb/batch.py
+++ b/boto/dynamodb/batch.py
@@ -41,12 +41,18 @@
     :ivar attributes_to_get: A list of attribute names.
         If supplied, only the specified attribute names will
         be returned.  Otherwise, all attributes will be returned.
+
+    :ivar consistent_read: Specify whether or not to use a
+        consistent read. Defaults to False.
+
     """
 
-    def __init__(self, table, keys, attributes_to_get=None):
+    def __init__(self, table, keys, attributes_to_get=None,
+                 consistent_read=False):
         self.table = table
         self.keys = keys
         self.attributes_to_get = attributes_to_get
+        self.consistent_read = consistent_read
 
     def to_dict(self):
         """
@@ -66,8 +72,13 @@
         batch_dict['Keys'] = key_list
         if self.attributes_to_get:
             batch_dict['AttributesToGet'] = self.attributes_to_get
+        if self.consistent_read:
+            batch_dict['ConsistentRead'] = True
+        else:
+            batch_dict['ConsistentRead'] = False
         return batch_dict
 
+
 class BatchWrite(object):
     """
     Used to construct a BatchWrite request.  Each BatchWrite object
@@ -126,7 +137,8 @@
         self.unprocessed = None
         self.layer2 = layer2
 
-    def add_batch(self, table, keys, attributes_to_get=None):
+    def add_batch(self, table, keys, attributes_to_get=None,
+                  consistent_read=False):
         """
         Add a Batch to this BatchList.
 
@@ -149,7 +161,7 @@
             If supplied, only the specified attribute names will
             be returned.  Otherwise, all attributes will be returned.
         """
-        self.append(Batch(table, keys, attributes_to_get))
+        self.append(Batch(table, keys, attributes_to_get, consistent_read))
 
     def resubmit(self):
         """
@@ -202,6 +214,7 @@
                 d[batch.table.name] = b
         return d
 
+
 class BatchWriteList(list):
     """
     A subclass of a list object that contains a collection of
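
With the new ``consistent_read`` flag, batch reads can request strong consistency; the flag is carried from ``add_batch`` into the ``ConsistentRead`` field of the request body. A rough usage sketch (the table name and keys are illustrative)::

    import boto
    from boto.dynamodb.batch import BatchList

    dynamodb = boto.connect_dynamodb()
    table = dynamodb.get_table('my-table')

    batch_list = BatchList(dynamodb)
    batch_list.add_batch(table, keys=['key1', 'key2'], consistent_read=True)
    result = dynamodb.batch_get_item(batch_list)
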
diff --git a/boto/dynamodb/condition.py b/boto/dynamodb/condition.py
index 0b76790..f5db538 100644
--- a/boto/dynamodb/condition.py
+++ b/boto/dynamodb/condition.py
@@ -92,7 +92,7 @@
         self.values = values
 
     def __repr__(self):
-        return '{}({})'.format(self.__class__.__name__,
+        return '{0}({1})'.format(self.__class__.__name__,
                                ', '.join(self.values))
 
     def to_dict(self):
diff --git a/boto/dynamodb/exceptions.py b/boto/dynamodb/exceptions.py
index b60d5aa..12be2d7 100644
--- a/boto/dynamodb/exceptions.py
+++ b/boto/dynamodb/exceptions.py
@@ -4,6 +4,7 @@
 from boto.exception import BotoServerError, BotoClientError
 from boto.exception import DynamoDBResponseError
 
+
 class DynamoDBExpiredTokenError(BotoServerError):
     """
     Raised when a DynamoDB security token expires. This is generally boto's
@@ -28,6 +29,13 @@
     pass
 
 
+class DynamoDBNumberError(BotoClientError):
+    """
+    Raised in the event of incompatible numeric type casting.
+    """
+    pass
+
+
 class DynamoDBConditionalCheckFailedError(DynamoDBResponseError):
     """
     Raised when a ConditionalCheckFailedException response is received.
@@ -36,6 +44,7 @@
     """
     pass
 
+
 class DynamoDBValidationError(DynamoDBResponseError):
     """
     Raised when a ValidationException response is received. This happens
@@ -43,3 +52,13 @@
     has exceeded the 64Kb size limit.
     """
     pass
+
+
+class DynamoDBThroughputExceededError(DynamoDBResponseError):
+    """
+    Raised when the provisioned throughput has been exceeded.
+    Normally, when provisioned throughput is exceeded the operation
+    is retried.  If the retries are exhausted then this exception
+    will be raised.
+    """
+    pass
diff --git a/boto/dynamodb/item.py b/boto/dynamodb/item.py
index 4d4abda..b2b444d 100644
--- a/boto/dynamodb/item.py
+++ b/boto/dynamodb/item.py
@@ -194,3 +194,9 @@
         if self._updates is not None:
             self.delete_attribute(key)
         dict.__delitem__(self, key)
+
+    # Allow this item to still be pickled
+    def __getstate__(self):
+        return self.__dict__
+    def __setstate__(self, d):
+        self.__dict__.update(d)
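
The ``__getstate__``/``__setstate__`` pair lets ``Item`` objects round-trip through ``pickle`` again. A quick illustration, assuming ``table`` is an existing ``boto.dynamodb.table.Table``::

    import pickle

    item = table.new_item(hash_key='a', range_key=1,
                          attrs={'key1': 'val1'})
    restored = pickle.loads(pickle.dumps(item))
    assert restored == item
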
diff --git a/boto/dynamodb/layer1.py b/boto/dynamodb/layer1.py
index 40dac5c..95c96a7 100644
--- a/boto/dynamodb/layer1.py
+++ b/boto/dynamodb/layer1.py
@@ -20,25 +20,15 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
+import time
+from binascii import crc32
 
 import boto
 from boto.connection import AWSAuthConnection
 from boto.exception import DynamoDBResponseError
 from boto.provider import Provider
 from boto.dynamodb import exceptions as dynamodb_exceptions
-
-import time
-try:
-    import simplejson as json
-except ImportError:
-    import json
-
-#
-# To get full debug output, uncomment the following line and set the
-# value of Debug to be 2
-#
-#boto.set_stream_logger('dynamodb')
-Debug = 0
+from boto.compat import json
 
 
 class Layer1(AWSAuthConnection):
@@ -78,10 +68,13 @@
 
     ResponseError = DynamoDBResponseError
 
+    NumberRetries = 10
+    """The number of times an error is retried."""
+
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  debug=0, security_token=None, region=None,
-                 validate_certs=True):
+                 validate_certs=True, validate_checksums=True):
         if not region:
             region_name = boto.config.get('DynamoDB', 'region',
                                           self.DefaultRegionName)
@@ -98,6 +91,8 @@
                                    debug=debug, security_token=security_token,
                                    validate_certs=validate_certs)
         self.throughput_exceeded_events = 0
+        self._validate_checksums = boto.config.getbool(
+            'DynamoDB', 'validate_checksums', validate_checksums)
 
     def _get_session_token(self):
         self.provider = Provider(self._provider_type)
@@ -119,13 +114,13 @@
                                                     {}, headers, body, None)
         start = time.time()
         response = self._mexe(http_request, sender=None,
-                              override_num_retries=10,
+                              override_num_retries=self.NumberRetries,
                               retry_handler=self._retry_handler)
-        elapsed = (time.time() - start)*1000
+        elapsed = (time.time() - start) * 1000
         request_id = response.getheader('x-amzn-RequestId')
         boto.log.debug('RequestId: %s' % request_id)
-        boto.perflog.info('%s: id=%s time=%sms',
-                          headers['X-Amz-Target'], request_id, int(elapsed))
+        boto.perflog.debug('%s: id=%s time=%sms',
+                           headers['X-Amz-Target'], request_id, int(elapsed))
         response_body = response.read()
         boto.log.debug(response_body)
         return json.loads(response_body, object_hook=object_hook)
@@ -139,12 +134,15 @@
             if self.ThruputError in data.get('__type'):
                 self.throughput_exceeded_events += 1
                 msg = "%s, retry attempt %s" % (self.ThruputError, i)
-                if i == 0:
-                    next_sleep = 0
-                else:
-                    next_sleep = 0.05 * (2 ** i)
+                next_sleep = self._exponential_time(i)
                 i += 1
                 status = (msg, i, next_sleep)
+                if i == self.NumberRetries:
+                    # If this was our last retry attempt, raise
+                    # a specific error saying that the throughput
+                    # was exceeded.
+                    raise dynamodb_exceptions.DynamoDBThroughputExceededError(
+                        response.status, response.reason, data)
             elif self.SessionExpiredError in data.get('__type'):
                 msg = 'Renewing Session Token'
                 self._get_session_token()
@@ -158,8 +156,25 @@
             else:
                 raise self.ResponseError(response.status, response.reason,
                                          data)
+        expected_crc32 = response.getheader('x-amz-crc32')
+        if self._validate_checksums and expected_crc32 is not None:
+            boto.log.debug('Validating crc32 checksum for body: %s',
+                           response.read())
+            actual_crc32 = crc32(response.read()) & 0xffffffff
+            expected_crc32 = int(expected_crc32)
+            if actual_crc32 != expected_crc32:
+                msg = ("The calculated checksum %s did not match the expected "
+                       "checksum %s" % (actual_crc32, expected_crc32))
+                status = (msg, i + 1, self._exponential_time(i))
         return status
 
+    def _exponential_time(self, i):
+        if i == 0:
+            next_sleep = 0
+        else:
+            next_sleep = 0.05 * (2 ** i)
+        return next_sleep
+
     def list_tables(self, limit=None, start_table=None):
         """
         Returns a dictionary of results.  The dictionary contains
@@ -447,7 +462,7 @@
     def query(self, table_name, hash_key_value, range_key_conditions=None,
               attributes_to_get=None, limit=None, consistent_read=False,
               scan_index_forward=True, exclusive_start_key=None,
-              object_hook=None):
+              object_hook=None, count=False):
         """
         Perform a query of DynamoDB.  This version is currently punting
         and expecting you to provide a full and correct JSON body
@@ -471,6 +486,11 @@
         :type limit: int
         :param limit: The maximum number of items to return.
 
+        :type count: bool
+        :param count: If True, Amazon DynamoDB returns a total
+            number of items for the Query operation, even if the
+            operation has no matching items for the assigned filter.
+
         :type consistent_read: bool
         :param consistent_read: If True, a consistent read
             request is issued.  Otherwise, an eventually consistent
@@ -493,6 +513,8 @@
             data['AttributesToGet'] = attributes_to_get
         if limit:
             data['Limit'] = limit
+        if count:
+            data['Count'] = True
         if consistent_read:
             data['ConsistentRead'] = True
         if scan_index_forward:
@@ -507,8 +529,7 @@
 
     def scan(self, table_name, scan_filter=None,
              attributes_to_get=None, limit=None,
-             count=False, exclusive_start_key=None,
-             object_hook=None):
+             exclusive_start_key=None, object_hook=None, count=False):
         """
         Perform a scan of DynamoDB.  This version is currently punting
         and expecting you to provide a full and correct JSON body
@@ -527,7 +548,7 @@
             be returned.  Otherwise, all attributes will be returned.
 
         :type limit: int
-        :param limit: The maximum number of items to return.
+        :param limit: The maximum number of items to evaluate.
 
         :type count: bool
         :param count: If True, Amazon DynamoDB returns a total
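
Response bodies now carry a crc32 header that Layer1 verifies by default; the check can be disabled per connection or through the boto config file. A small sketch of both options, assuming credentials are picked up in the usual way::

    from boto.dynamodb.layer1 import Layer1

    # Per-connection: skip crc32 validation of response bodies.
    conn = Layer1(validate_checksums=False)

    # Or globally, via the boto config file (~/.boto):
    #
    #   [DynamoDB]
    #   validate_checksums = False
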
diff --git a/boto/dynamodb/layer2.py b/boto/dynamodb/layer2.py
index 45fd069..16fcdbb 100644
--- a/boto/dynamodb/layer2.py
+++ b/boto/dynamodb/layer2.py
@@ -20,92 +20,124 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-import base64
-
 from boto.dynamodb.layer1 import Layer1
 from boto.dynamodb.table import Table
 from boto.dynamodb.schema import Schema
 from boto.dynamodb.item import Item
 from boto.dynamodb.batch import BatchList, BatchWriteList
-from boto.dynamodb.types import get_dynamodb_type, dynamize_value, \
-        convert_num, convert_binary
+from boto.dynamodb.types import get_dynamodb_type, Dynamizer, \
+        LossyFloatDynamizer
 
 
-def item_object_hook(dct):
-    """
-    A custom object hook for use when decoding JSON item bodys.
-    This hook will transform Amazon DynamoDB JSON responses to something
-    that maps directly to native Python types.
-    """
-    if len(dct.keys()) > 1:
-        return dct
-    if 'S' in dct:
-        return dct['S']
-    if 'N' in dct:
-        return convert_num(dct['N'])
-    if 'SS' in dct:
-        return set(dct['SS'])
-    if 'NS' in dct:
-        return set(map(convert_num, dct['NS']))
-    if 'B' in dct:
-        return base64.b64decode(dct['B'])
-    if 'BS' in dct:
-        return set(map(convert_binary, dct['BS']))
-    return dct
-
-
-def table_generator(tgen):
-    """
-    A low-level generator used to page through results from
-    query and scan operations.  This is used by
-    :class:`boto.dynamodb.layer2.TableGenerator` and is not intended
-    to be used outside of that context.
-    """
-    response = True
-    n = 0
-    while response:
-        if tgen.max_results and n == tgen.max_results:
-            break
-        if response is True:
-            pass
-        elif 'LastEvaluatedKey' in response:
-            lek = response['LastEvaluatedKey']
-            esk = tgen.table.layer2.dynamize_last_evaluated_key(lek)
-            tgen.kwargs['exclusive_start_key'] = esk
-        else:
-            break
-        response = tgen.callable(**tgen.kwargs)
-        if 'ConsumedCapacityUnits' in response:
-            tgen.consumed_units += response['ConsumedCapacityUnits']
-        for item in response['Items']:
-            if tgen.max_results and n == tgen.max_results:
-                break
-            yield tgen.item_class(tgen.table, attrs=item)
-            n += 1
-
-
-class TableGenerator:
+class TableGenerator(object):
     """
     This is an object that wraps up the table_generator function.
     The only real reason to have this is that we want to be able
     to accumulate and return the ConsumedCapacityUnits element that
     is part of each response.
 
-    :ivar consumed_units: An integer that holds the number of
-        ConsumedCapacityUnits accumulated thus far for this
-        generator.
+    :ivar last_evaluated_key: A sequence representing the key(s)
+        of the item last evaluated, or None if no additional
+        results are available.
+
+    :ivar remaining: The remaining quantity of results requested.
+
+    :ivar table: The table to which the call was made.
     """
 
-    def __init__(self, table, callable, max_results, item_class, kwargs):
+    def __init__(self, table, callable, remaining, item_class, kwargs):
         self.table = table
         self.callable = callable
-        self.max_results = max_results
+        self.remaining = -1 if remaining is None else remaining
         self.item_class = item_class
         self.kwargs = kwargs
-        self.consumed_units = 0
+        self._consumed_units = 0.0
+        self.last_evaluated_key = None
+        self._count = 0
+        self._scanned_count = 0
+        self._response = None
+
+    @property
+    def count(self):
+        """
+        The total number of items retrieved thus far.  This value changes
+        with iteration; even when a call is issued with count=True, the
+        iteration must be completed before the count value is accurate.
+        """
+        self.response
+        return self._count
+
+    @property
+    def scanned_count(self):
+        """
+        As above, but representing the total number of items scanned by
+        DynamoDB, without regard to any filters.
+        """
+        self.response
+        return self._scanned_count
+
+    @property
+    def consumed_units(self):
+        """
+        Returns a float representing the ConsumedCapacityUnits accumulated.
+        """
+        self.response
+        return self._consumed_units
+
+    @property
+    def response(self):
+        """
+        The current response to the call from DynamoDB.
+        """
+        return self.next_response() if self._response is None else self._response
+
+    def next_response(self):
+        """
+        Issue a call and return the result.  You can invoke this method
+        while iterating over the TableGenerator in order to skip to the
+        next "page" of results.
+        """
+        # preserve any existing limit in case the user alters self.remaining
+        limit = self.kwargs.get('limit')
+        if (self.remaining > 0 and (limit is None or limit > self.remaining)):
+            self.kwargs['limit'] = self.remaining
+        self._response = self.callable(**self.kwargs)
+        self.kwargs['limit'] = limit
+        self._consumed_units += self._response.get('ConsumedCapacityUnits', 0.0)
+        self._count += self._response.get('Count', 0)
+        self._scanned_count += self._response.get('ScannedCount', 0)
+        # at the expense of a possibly gratuitous dynamize, ensure that
+        # early generator termination won't result in bad LEK values
+        if 'LastEvaluatedKey' in self._response:
+            lek = self._response['LastEvaluatedKey']
+            esk = self.table.layer2.dynamize_last_evaluated_key(lek)
+            self.kwargs['exclusive_start_key'] = esk
+            lektuple = (lek['HashKeyElement'],)
+            if 'RangeKeyElement' in lek:
+                lektuple += (lek['RangeKeyElement'],)
+            self.last_evaluated_key = lektuple
+        else:
+            self.last_evaluated_key = None
+        return self._response
 
     def __iter__(self):
-        return table_generator(self)
+        while self.remaining != 0:
+            response = self.response
+            for item in response.get('Items', []):
+                self.remaining -= 1
+                yield self.item_class(self.table, attrs=item)
+                if self.remaining == 0:
+                    break
+                if response is not self._response:
+                    break
+            else:
+                if self.last_evaluated_key is not None:
+                    self.next_response()
+                    continue
+                break
+            if response is not self._response:
+                continue
+            break
 
 
 class Layer2(object):
@@ -113,11 +145,24 @@
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  debug=0, security_token=None, region=None,
-                 validate_certs=True):
+                 validate_certs=True, dynamizer=LossyFloatDynamizer):
         self.layer1 = Layer1(aws_access_key_id, aws_secret_access_key,
                              is_secure, port, proxy, proxy_port,
                              debug, security_token, region,
                              validate_certs=validate_certs)
+        self.dynamizer = dynamizer()
+
+    def use_decimals(self):
+        """
+        Use the ``decimal.Decimal`` type for encoding/decoding numeric types.
+
+        By default, ints/floats are used to represent numeric types
+        ('N', 'NS') received from DynamoDB.  Using the ``Decimal``
+        type is recommended to prevent loss of precision.
+
+        """
+        # Eventually this should be made the default dynamizer.
+        self.dynamizer = Dynamizer()
 
     def dynamize_attribute_updates(self, pending_updates):
         """
@@ -132,13 +177,13 @@
                 d[attr_name] = {"Action": action}
             else:
                 d[attr_name] = {"Action": action,
-                                "Value": dynamize_value(value)}
+                                "Value": self.dynamizer.encode(value)}
         return d
 
     def dynamize_item(self, item):
         d = {}
         for attr_name in item:
-            d[attr_name] = dynamize_value(item[attr_name])
+            d[attr_name] = self.dynamizer.encode(item[attr_name])
         return d
 
     def dynamize_range_key_condition(self, range_key_condition):
@@ -176,7 +221,7 @@
                 elif attr_value is False:
                     attr_value = {'Exists': False}
                 else:
-                    val = dynamize_value(expected_value[attr_name])
+                    val = self.dynamizer.encode(expected_value[attr_name])
                     attr_value = {'Value': val}
                 d[attr_name] = attr_value
         return d
@@ -189,10 +234,10 @@
         d = None
         if last_evaluated_key:
             hash_key = last_evaluated_key['HashKeyElement']
-            d = {'HashKeyElement': dynamize_value(hash_key)}
+            d = {'HashKeyElement': self.dynamizer.encode(hash_key)}
             if 'RangeKeyElement' in last_evaluated_key:
                 range_key = last_evaluated_key['RangeKeyElement']
-                d['RangeKeyElement'] = dynamize_value(range_key)
+                d['RangeKeyElement'] = self.dynamizer.encode(range_key)
         return d
 
     def build_key_from_values(self, schema, hash_key, range_key=None):
@@ -204,25 +249,25 @@
         Otherwise, a Python dict version of a Amazon DynamoDB Key
         data structure is returned.
 
-        :type hash_key: int, float, str, or unicode
+        :type hash_key: int|float|str|unicode|Binary
         :param hash_key: The hash key of the item you are looking for.
             The type of the hash key should match the type defined in
             the schema.
 
-        :type range_key: int, float, str or unicode
+        :type range_key: int|float|str|unicode|Binary
         :param range_key: The range key of the item your are looking for.
             This should be supplied only if the schema requires a
             range key.  The type of the range key should match the
             type defined in the schema.
         """
         dynamodb_key = {}
-        dynamodb_value = dynamize_value(hash_key)
+        dynamodb_value = self.dynamizer.encode(hash_key)
         if dynamodb_value.keys()[0] != schema.hash_key_type:
             msg = 'Hashkey must be of type: %s' % schema.hash_key_type
             raise TypeError(msg)
         dynamodb_key['HashKeyElement'] = dynamodb_value
         if range_key is not None:
-            dynamodb_value = dynamize_value(range_key)
+            dynamodb_value = self.dynamizer.encode(range_key)
             if dynamodb_value.keys()[0] != schema.range_key_type:
                 msg = 'RangeKey must be of type: %s' % schema.range_key_type
                 raise TypeError(msg)
@@ -275,6 +320,33 @@
         """
         return self.layer1.describe_table(name)
 
+    def table_from_schema(self, name, schema):
+        """
+        Create a Table object from a schema.
+
+        This method will create a Table object without
+        making any API calls.  If you know the name and schema
+        of the table, you can use this method instead of
+        ``get_table``.
+
+        Example usage::
+
+            table = layer2.table_from_schema(
+                'tablename',
+                Schema.create(hash_key=('foo', 'N')))
+
+        :type name: str
+        :param name: The name of the table.
+
+        :type schema: :class:`boto.dynamodb.schema.Schema`
+        :param schema: The schema associated with the table.
+
+        :rtype: :class:`boto.dynamodb.table.Table`
+        :return: A Table object representing the table.
+
+        """
+        return Table.create_from_schema(self, name, schema)
+
     def get_table(self, name):
         """
         Retrieve the Table object for an existing table.
@@ -286,7 +358,7 @@
         :return: A Table object representing the table.
         """
         response = self.layer1.describe_table(name)
-        return Table(self,  response)
+        return Table(self, response)
 
     lookup = get_table
 
@@ -352,7 +424,7 @@
         :type hash_key_name: str
         :param hash_key_name: The name of the HashKey for the schema.
 
-        :type hash_key_proto_value: int|long|float|str|unicode
+        :type hash_key_proto_value: int|long|float|str|unicode|Binary
         :param hash_key_proto_value: A sample or prototype of the type
             of value you want to use for the HashKey.  Alternatively,
             you can also just pass in the Python type (e.g. int, float, etc.).
@@ -361,25 +433,19 @@
         :param range_key_name: The name of the RangeKey for the schema.
             This parameter is optional.
 
-        :type range_key_proto_value: int|long|float|str|unicode
+        :type range_key_proto_value: int|long|float|str|unicode|Binary
         :param range_key_proto_value: A sample or prototype of the type
             of value you want to use for the RangeKey.  Alternatively,
             you can also pass in the Python type (e.g. int, float, etc.)
             This parameter is optional.
         """
-        schema = {}
-        hash_key = {}
-        hash_key['AttributeName'] = hash_key_name
-        hash_key_type = get_dynamodb_type(hash_key_proto_value)
-        hash_key['AttributeType'] = hash_key_type
-        schema['HashKeyElement'] = hash_key
+        hash_key = (hash_key_name, get_dynamodb_type(hash_key_proto_value))
         if range_key_name and range_key_proto_value is not None:
-            range_key = {}
-            range_key['AttributeName'] = range_key_name
-            range_key_type = get_dynamodb_type(range_key_proto_value)
-            range_key['AttributeType'] = range_key_type
-            schema['RangeKeyElement'] = range_key
-        return Schema(schema)
+            range_key = (range_key_name,
+                         get_dynamodb_type(range_key_proto_value))
+        else:
+            range_key = None
+        return Schema.create(hash_key, range_key)
 
     def get_item(self, table, hash_key, range_key=None,
                  attributes_to_get=None, consistent_read=False,
@@ -390,12 +456,12 @@
         :type table: :class:`boto.dynamodb.table.Table`
         :param table: The Table object from which the item is retrieved.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the requested item.  The
             type of the value must match the type defined in the
             schema for the table.
 
-        :type range_key: int|long|float|str|unicode
+        :type range_key: int|long|float|str|unicode|Binary
         :param range_key: The optional RangeKey of the requested item.
             The type of the value must match the type defined in the
             schema for the table.
@@ -418,7 +484,7 @@
         key = self.build_key_from_values(table.schema, hash_key, range_key)
         response = self.layer1.get_item(table.name, key,
                                         attributes_to_get, consistent_read,
-                                        object_hook=item_object_hook)
+                                        object_hook=self.dynamizer.decode)
         item = item_class(table, hash_key, range_key, response['Item'])
         if 'ConsumedCapacityUnits' in response:
             item.consumed_units = response['ConsumedCapacityUnits']
@@ -438,7 +504,7 @@
         """
         request_items = batch_list.to_dict()
         return self.layer1.batch_get_item(request_items,
-                                          object_hook=item_object_hook)
+                                          object_hook=self.dynamizer.decode)
 
     def batch_write_item(self, batch_list):
         """
@@ -452,7 +518,7 @@
         """
         request_items = batch_list.to_dict()
         return self.layer1.batch_write_item(request_items,
-                                            object_hook=item_object_hook)
+                                            object_hook=self.dynamizer.decode)
 
     def put_item(self, item, expected_value=None, return_values=None):
         """
@@ -480,7 +546,7 @@
         response = self.layer1.put_item(item.table.name,
                                         self.dynamize_item(item),
                                         expected_value, return_values,
-                                        object_hook=item_object_hook)
+                                        object_hook=self.dynamizer.decode)
         if 'ConsumedCapacityUnits' in response:
             item.consumed_units = response['ConsumedCapacityUnits']
         return response
@@ -521,7 +587,7 @@
         response = self.layer1.update_item(item.table.name, key,
                                            attr_updates,
                                            expected_value, return_values,
-                                           object_hook=item_object_hook)
+                                           object_hook=self.dynamizer.decode)
         item._updates.clear()
         if 'ConsumedCapacityUnits' in response:
             item.consumed_units = response['ConsumedCapacityUnits']
@@ -554,20 +620,20 @@
         return self.layer1.delete_item(item.table.name, key,
                                        expected=expected_value,
                                        return_values=return_values,
-                                       object_hook=item_object_hook)
+                                       object_hook=self.dynamizer.decode)
 
     def query(self, table, hash_key, range_key_condition=None,
               attributes_to_get=None, request_limit=None,
               max_results=None, consistent_read=False,
               scan_index_forward=True, exclusive_start_key=None,
-              item_class=Item):
+              item_class=Item, count=False):
         """
         Perform a query on the table.
 
         :type table: :class:`boto.dynamodb.table.Table`
         :param table: The Table object that is being queried.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the requested item.  The
             type of the value must match the type defined in the
             schema for the table.
@@ -611,6 +677,14 @@
         :param scan_index_forward: Specified forward or backward
             traversal of the index.  Default is forward (True).
 
+        :type count: bool
+        :param count: If True, Amazon DynamoDB returns a total
+            number of items for the Query operation, even if the
+            operation has no matching items for the assigned filter.
+            If count is True, the actual items are not returned and
+            the count is accessible as the ``count`` attribute of
+            the returned object.
+
         :type exclusive_start_key: list or tuple
         :param exclusive_start_key: Primary key of the item from
             which to continue an earlier query.  This would be
@@ -633,20 +707,21 @@
         else:
             esk = None
         kwargs = {'table_name': table.name,
-                  'hash_key_value': dynamize_value(hash_key),
+                  'hash_key_value': self.dynamizer.encode(hash_key),
                   'range_key_conditions': rkc,
                   'attributes_to_get': attributes_to_get,
                   'limit': request_limit,
+                  'count': count,
                   'consistent_read': consistent_read,
                   'scan_index_forward': scan_index_forward,
                   'exclusive_start_key': esk,
-                  'object_hook': item_object_hook}
+                  'object_hook': self.dynamizer.decode}
         return TableGenerator(table, self.layer1.query,
                               max_results, item_class, kwargs)
 
     def scan(self, table, scan_filter=None,
              attributes_to_get=None, request_limit=None, max_results=None,
-             count=False, exclusive_start_key=None, item_class=Item):
+             exclusive_start_key=None, item_class=Item, count=False):
         """
         Perform a scan of DynamoDB.
 
@@ -697,6 +772,9 @@
         :param count: If True, Amazon DynamoDB returns a total
             number of items for the Scan operation, even if the
             operation has no matching items for the assigned filter.
+            If count is True, the actual items are not returned and
+            the count is accessible as the ``count`` attribute of
+            the returned object.
 
         :type exclusive_start_key: list or tuple
         :param exclusive_start_key: Primary key of the item from
@@ -721,6 +799,6 @@
                   'limit': request_limit,
                   'count': count,
                   'exclusive_start_key': esk,
-                  'object_hook': item_object_hook}
+                  'object_hook': self.dynamizer.decode}
         return TableGenerator(table, self.layer1.scan,
                               max_results, item_class, kwargs)
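
Since ``query`` and ``scan`` both return a ``TableGenerator`` now, per-call statistics are read off the result object rather than tallied by hand. A hedged sketch against an existing table (the table name and filter are illustrative)::

    import boto
    from boto.dynamodb.condition import BEGINS_WITH

    dynamodb = boto.connect_dynamodb()
    dynamodb.use_decimals()            # opt in to Decimal-based numbers
    table = dynamodb.get_table('my-table')

    # Count-only query: no items are materialized, only counted.
    results = table.query(hash_key='a', count=True)
    list(results)                      # drive the iteration to completion
    print results.count, results.consumed_units

    # Regular scan; if it spans multiple pages and is abandoned early,
    # last_evaluated_key records where to resume with exclusive_start_key.
    scan = table.scan(scan_filter={'key1': BEGINS_WITH('val')})
    for item in scan:
        break
    print scan.last_evaluated_key
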
diff --git a/boto/dynamodb/schema.py b/boto/dynamodb/schema.py
index 34ff212..4a697a8 100644
--- a/boto/dynamodb/schema.py
+++ b/boto/dynamodb/schema.py
@@ -47,6 +47,38 @@
             s = 'Schema(%s)' % self.hash_key_name
         return s
 
+    @classmethod
+    def create(cls, hash_key, range_key=None):
+        """Convenience method to create a schema object.
+
+        Example usage::
+
+            schema = Schema.create(hash_key=('foo', 'N'))
+            schema2 = Schema.create(hash_key=('foo', 'N'),
+                                    range_key=('bar', 'S'))
+
+        :type hash_key: tuple
+        :param hash_key: A tuple of (hash_key_name, hash_key_type)
+
+        :type range_key: tuple
+        :param range_key: A tuple of (range_key_name, range_key_type)
+
+        """
+        reconstructed = {
+            'HashKeyElement': {
+                'AttributeName': hash_key[0],
+                'AttributeType': hash_key[1],
+            }
+        }
+        if range_key is not None:
+            reconstructed['RangeKeyElement'] = {
+                'AttributeName': range_key[0],
+                'AttributeType': range_key[1],
+            }
+        instance = cls(None)
+        instance._dict = reconstructed
+        return instance
+
     @property
     def dict(self):
         return self._dict
@@ -72,3 +104,9 @@
         if 'RangeKeyElement' in self._dict:
             type = self._dict['RangeKeyElement']['AttributeType']
         return type
+
+    def __eq__(self, other):
+        return (self.hash_key_name == other.hash_key_name and
+                self.hash_key_type == other.hash_key_type and
+                self.range_key_name == other.range_key_name and
+                self.range_key_type == other.range_key_type)
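
``Schema.create`` together with the new ``__eq__`` makes it straightforward to assert that a table uses the key schema the application expects; a small sketch, assuming an existing Layer2 connection named ``dynamodb``::

    from boto.dynamodb.schema import Schema

    expected = Schema.create(hash_key=('foo', 'N'), range_key=('bar', 'S'))
    table = dynamodb.get_table('my-table')
    assert table.schema == expected
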
diff --git a/boto/dynamodb/table.py b/boto/dynamodb/table.py
index ee73b1a..129b079 100644
--- a/boto/dynamodb/table.py
+++ b/boto/dynamodb/table.py
@@ -27,6 +27,7 @@
 from boto.dynamodb import exceptions as dynamodb_exceptions
 import time
 
+
 class TableBatchGenerator(object):
     """
     A low-level generator used to page through results from
@@ -37,11 +38,13 @@
         generator.
     """
 
-    def __init__(self, table, keys, attributes_to_get=None):
+    def __init__(self, table, keys, attributes_to_get=None,
+                 consistent_read=False):
         self.table = table
         self.keys = keys
         self.consumed_units = 0
         self.attributes_to_get = attributes_to_get
+        self.consistent_read = consistent_read
 
     def _queue_unprocessed(self, res):
         if not u'UnprocessedKeys' in res:
@@ -60,7 +63,8 @@
         while self.keys:
             # Build the next batch
             batch = BatchList(self.table.layer2)
-            batch.add_batch(self.table, self.keys[:100], self.attributes_to_get)
+            batch.add_batch(self.table, self.keys[:100],
+                            self.attributes_to_get)
             res = batch.submit()
 
             # parse the results
@@ -99,10 +103,53 @@
     """
 
     def __init__(self, layer2, response):
+        """
+
+        :type layer2: :class:`boto.dynamodb.layer2.Layer2`
+        :param layer2: A `Layer2` api object.
+
+        :type response: dict
+        :param response: The output of
+            `boto.dynamodb.layer1.Layer1.describe_table`.
+
+        """
         self.layer2 = layer2
         self._dict = {}
         self.update_from_response(response)
 
+    @classmethod
+    def create_from_schema(cls, layer2, name, schema):
+        """Create a Table object.
+
+        If you know the name and schema of your table, you can
+        create a ``Table`` object without having to make any
+        API calls (normally an API call is made to retrieve
+        the schema of a table).
+
+        Example usage::
+
+            table = Table.create_from_schema(
+                boto.connect_dynamodb(),
+                'tablename',
+                Schema.create(hash_key=('keyname', 'N')))
+
+        :type layer2: :class:`boto.dynamodb.layer2.Layer2`
+        :param layer2: A ``Layer2`` api object.
+
+        :type name: str
+        :param name: The name of the table.
+
+        :type schema: :class:`boto.dynamodb.schema.Schema`
+        :param schema: The schema associated with the table.
+
+        :rtype: :class:`boto.dynamodb.table.Table`
+        :return: A Table object representing the table.
+
+        """
+        table = cls(layer2, {'Table': {'TableName': name}})
+        table._schema = schema
+        return table
+
     def __repr__(self):
         return 'Table(%s)' % self.name
 
@@ -112,11 +159,11 @@
 
     @property
     def create_time(self):
-        return self._dict['CreationDateTime']
+        return self._dict.get('CreationDateTime', None)
 
     @property
     def status(self):
-        return self._dict['TableStatus']
+        return self._dict.get('TableStatus', None)
 
     @property
     def item_count(self):
@@ -132,19 +179,27 @@
 
     @property
     def read_units(self):
-        return self._dict['ProvisionedThroughput']['ReadCapacityUnits']
+        try:
+            return self._dict['ProvisionedThroughput']['ReadCapacityUnits']
+        except KeyError:
+            return None
 
     @property
     def write_units(self):
-        return self._dict['ProvisionedThroughput']['WriteCapacityUnits']
+        try:
+            return self._dict['ProvisionedThroughput']['WriteCapacityUnits']
+        except KeyError:
+            return None
 
     def update_from_response(self, response):
         """
         Update the state of the Table object based on the response
         data received from Amazon DynamoDB.
         """
+        # 'Table' is from a describe_table call.
         if 'Table' in response:
             self._dict.update(response['Table'])
+        # 'TableDescription' is from a create_table call.
         elif 'TableDescription' in response:
             self._dict.update(response['TableDescription'])
         if 'KeySchema' in self._dict:
@@ -202,12 +257,12 @@
         """
         Retrieve an existing item from the table.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the requested item.  The
             type of the value must match the type defined in the
             schema for the table.
 
-        :type range_key: int|long|float|str|unicode
+        :type range_key: int|long|float|str|unicode|Binary
         :param range_key: The optional RangeKey of the requested item.
             The type of the value must match the type defined in the
             schema for the table.
@@ -240,12 +295,12 @@
         the data that is returned, since this method specifically tells
         Amazon not to return anything but the Item's key.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the requested item.  The
             type of the value must match the type defined in the
             schema for the table.
 
-        :type range_key: int|long|float|str|unicode
+        :type range_key: int|long|float|str|unicode|Binary
         :param range_key: The optional RangeKey of the requested item.
             The type of the value must match the type defined in the
             schema for the table.
@@ -280,7 +335,7 @@
         the hash_key and range_key values of the item.  You can use
         these explicit parameters when calling the method, such as::
 
-        >>> my_item = my_table.new_item(hash_key='a', range_key=1,
+            >>> my_item = my_table.new_item(hash_key='a', range_key=1,
                                         attrs={'key1': 'val1', 'key2': 'val2'})
             >>> my_item
             {u'bar': 1, u'foo': 'a', 'key1': 'val1', 'key2': 'val2'}
@@ -302,12 +357,12 @@
            the explicit parameters, the values in the attrs will be
            ignored.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the new item.  The
             type of the value must match the type defined in the
             schema for the table.
 
-        :type range_key: int|long|float|str|unicode
+        :type range_key: int|long|float|str|unicode|Binary
         :param range_key: The optional RangeKey of the new item.
             The type of the value must match the type defined in the
             schema for the table.
@@ -323,15 +378,11 @@
         """
         return item_class(self, hash_key, range_key, attrs)
 
-    def query(self, hash_key, range_key_condition=None,
-              attributes_to_get=None, request_limit=None,
-              max_results=None, consistent_read=False,
-              scan_index_forward=True, exclusive_start_key=None,
-              item_class=Item):
+    def query(self, hash_key, *args, **kw):
         """
         Perform a query on the table.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the requested item.  The
             type of the value must match the type defined in the
             schema for the table.
@@ -380,31 +431,33 @@
             which to continue an earlier query.  This would be
             provided as the LastEvaluatedKey in that query.
 
+        :type count: bool
+        :param count: If True, Amazon DynamoDB returns a total
+            number of items for the Query operation, even if the
+            operation has no matching items for the assigned filter.
+            If count is True, the actual items are not returned and
+            the count is accessible as the ``count`` attribute of
+            the returned object.
+
+
         :type item_class: Class
         :param item_class: Allows you to override the class used
             to generate the items. This should be a subclass of
             :class:`boto.dynamodb.item.Item`
         """
-        return self.layer2.query(self, hash_key, range_key_condition,
-                                 attributes_to_get, request_limit,
-                                 max_results, consistent_read,
-                                 scan_index_forward, exclusive_start_key,
-                                 item_class=item_class)
+        return self.layer2.query(self, hash_key, *args, **kw)
 
-    def scan(self, scan_filter=None,
-             attributes_to_get=None, request_limit=None, max_results=None,
-             count=False, exclusive_start_key=None, item_class=Item):
+    def scan(self, *args, **kw):
         """
         Scan through this table, this is a very long
         and expensive operation, and should be avoided if
         at all possible.
 
-        :type scan_filter: A list of tuples
-        :param scan_filter: A list of tuples where each tuple consists
-            of an attribute name, a comparison operator, and either
-            a scalar or tuple consisting of the values to compare
-            the attribute to.  Valid comparison operators are shown below
-            along with the expected number of values that should be supplied.
+        :type scan_filter: A dict
+        :param scan_filter: A dictionary where the key is the
+            attribute name and the value is a
+            :class:`boto.dynamodb.condition.Condition` object.
+            Valid Condition objects include:
 
              * EQ - equal (1)
              * NE - not equal (1)
@@ -444,6 +497,9 @@
         :param count: If True, Amazon DynamoDB returns a total
             number of items for the Scan operation, even if the
             operation has no matching items for the assigned filter.
+            If count is True, the actual items are not returned and
+            the count is accessible as the ``count`` attribute of
+            the returned object.
 
         :type exclusive_start_key: list or tuple
         :param exclusive_start_key: Primary key of the item from
@@ -455,12 +511,11 @@
             to generate the items. This should be a subclass of
             :class:`boto.dynamodb.item.Item`
 
-        :return: A TableGenerator (generator) object which will iterate over all results
+        :return: A TableGenerator (generator) object which will iterate
+            over all results
         :rtype: :class:`boto.dynamodb.layer2.TableGenerator`
         """
-        return self.layer2.scan(self, scan_filter, attributes_to_get,
-                                request_limit, max_results, count,
-                                exclusive_start_key, item_class=item_class)
+        return self.layer2.scan(self, *args, **kw)
 
     def batch_get_item(self, keys, attributes_to_get=None):
         """
@@ -484,7 +539,8 @@
             If supplied, only the specified attribute names will
             be returned.  Otherwise, all attributes will be returned.
 
-        :return: A TableBatchGenerator (generator) object which will iterate over all results
+        :return: A TableBatchGenerator (generator) object which will
+            iterate over all results
         :rtype: :class:`boto.dynamodb.table.TableBatchGenerator`
         """
         return TableBatchGenerator(self, keys, attributes_to_get)
diff --git a/boto/dynamodb/types.py b/boto/dynamodb/types.py
index 5b33076..e3b4958 100644
--- a/boto/dynamodb/types.py
+++ b/boto/dynamodb/types.py
@@ -25,10 +25,33 @@
 Python types and vice-versa.
 """
 import base64
+from decimal import (Decimal, DecimalException, Context,
+                     Clamped, Overflow, Inexact, Underflow, Rounded)
+from exceptions import DynamoDBNumberError
+
+
+DYNAMODB_CONTEXT = Context(
+    Emin=-128, Emax=126, rounding=None, prec=38,
+    traps=[Clamped, Overflow, Inexact, Rounded, Underflow])
+
+
+# python2.6 cannot convert floats directly to
+# Decimals.  This is taken from:
+# http://docs.python.org/release/2.6.7/library/decimal.html#decimal-faq
+def float_to_decimal(f):
+    n, d = f.as_integer_ratio()
+    numerator, denominator = Decimal(n), Decimal(d)
+    ctx = DYNAMODB_CONTEXT
+    result = ctx.divide(numerator, denominator)
+    while ctx.flags[Inexact]:
+        ctx.flags[Inexact] = False
+        ctx.prec *= 2
+        result = ctx.divide(numerator, denominator)
+    return result
 
 
 def is_num(n):
-    types = (int, long, float, bool)
+    types = (int, long, float, bool, Decimal)
     return isinstance(n, types) or n in types
 
 
@@ -41,6 +64,15 @@
     return isinstance(n, Binary)
 
 
+def serialize_num(val):
+    """Cast a number to a string and perform
+       validation to ensure no loss of precision.
+    """
+    if isinstance(val, bool):
+        return str(int(val))
+    return str(val)
+
+
 def convert_num(s):
     if '.' in s:
         n = float(s)
@@ -86,23 +118,13 @@
     needs to be sent to Amazon DynamoDB.  If the type of the value
     is not supported, raise a TypeError
     """
-    def _str(val):
-        """
-        DynamoDB stores booleans as numbers. True is 1, False is 0.
-        This function converts Python booleans into DynamoDB friendly
-        representation.
-        """
-        if isinstance(val, bool):
-            return str(int(val))
-        return str(val)
-
     dynamodb_type = get_dynamodb_type(val)
     if dynamodb_type == 'N':
-        val = {dynamodb_type: _str(val)}
+        val = {dynamodb_type: serialize_num(val)}
     elif dynamodb_type == 'S':
         val = {dynamodb_type: val}
     elif dynamodb_type == 'NS':
-        val = {dynamodb_type: [str(n) for n in val]}
+        val = {dynamodb_type: map(serialize_num, val)}
     elif dynamodb_type == 'SS':
         val = {dynamodb_type: [n for n in val]}
     elif dynamodb_type == 'B':
@@ -136,3 +158,169 @@
 
     def __hash__(self):
         return hash(self.value)
+
+
+def item_object_hook(dct):
+    """
+    A custom object hook for use when decoding JSON item bodies.
+    This hook will transform Amazon DynamoDB JSON responses to something
+    that maps directly to native Python types.
+    """
+    if len(dct.keys()) > 1:
+        return dct
+    if 'S' in dct:
+        return dct['S']
+    if 'N' in dct:
+        return convert_num(dct['N'])
+    if 'SS' in dct:
+        return set(dct['SS'])
+    if 'NS' in dct:
+        return set(map(convert_num, dct['NS']))
+    if 'B' in dct:
+        return convert_binary(dct['B'])
+    if 'BS' in dct:
+        return set(map(convert_binary, dct['BS']))
+    return dct
+
+
+class Dynamizer(object):
+    """Control serialization/deserialization of types.
+
+    This class controls the encoding of python types to the
+    format that is expected by the DynamoDB API, as well as
+    taking DynamoDB types and constructing the appropriate
+    python types.
+
+    If you want to customize this process, you can subclass
+    this class and override the encoding/decoding of
+    specific types.  For example::
+
+        'foo'      (Python type)
+            |
+            v
+        encode('foo')
+            |
+            v
+        _encode_s('foo')
+            |
+            v
+        {'S': 'foo'}  (Encoding sent to/received from DynamoDB)
+            |
+            V
+        decode({'S': 'foo'})
+            |
+            v
+        _decode_s({'S': 'foo'})
+            |
+            v
+        'foo'     (Python type)
+
+    """
+    def _get_dynamodb_type(self, attr):
+        return get_dynamodb_type(attr)
+
+    def encode(self, attr):
+        """
+        Encodes a python type to the format expected
+        by DynamoDB.
+
+        """
+        dynamodb_type = self._get_dynamodb_type(attr)
+        try:
+            encoder = getattr(self, '_encode_%s' % dynamodb_type.lower())
+        except AttributeError:
+            raise ValueError("Unable to encode dynamodb type: %s" %
+                             dynamodb_type)
+        return {dynamodb_type: encoder(attr)}
+
+    def _encode_n(self, attr):
+        try:
+            if isinstance(attr, float) and not hasattr(Decimal, 'from_float'):
+                # python2.6 does not support creating Decimals directly
+                # from floats so we have to do this ourselves.
+                n = str(float_to_decimal(attr))
+            else:
+                n = str(DYNAMODB_CONTEXT.create_decimal(attr))
+            if filter(lambda x: x in n, ('Infinity', 'NaN')):
+                raise TypeError('Infinity and NaN not supported')
+            return n
+        except (TypeError, DecimalException), e:
+            msg = '{0} numeric for `{1}`\n{2}'.format(
+                e.__class__.__name__, attr, str(e) or '')
+        raise DynamoDBNumberError(msg)
+
+    def _encode_s(self, attr):
+        if isinstance(attr, unicode):
+            attr = attr.encode('utf-8')
+        elif not isinstance(attr, str):
+            attr = str(attr)
+        return attr
+
+    def _encode_ns(self, attr):
+        return map(self._encode_n, attr)
+
+    def _encode_ss(self, attr):
+        return [self._encode_s(n) for n in attr]
+
+    def _encode_b(self, attr):
+        return attr.encode()
+
+    def _encode_bs(self, attr):
+        return [self._encode_b(n) for n in attr]
+
+    def decode(self, attr):
+        """
+        Takes the format returned by DynamoDB and constructs
+        the appropriate python type.
+
+        """
+        if len(attr) > 1 or not attr:
+            return attr
+        dynamodb_type = attr.keys()[0]
+        try:
+            decoder = getattr(self, '_decode_%s' % dynamodb_type.lower())
+        except AttributeError:
+            return attr
+        return decoder(attr[dynamodb_type])
+
+    def _decode_n(self, attr):
+        return DYNAMODB_CONTEXT.create_decimal(attr)
+
+    def _decode_s(self, attr):
+        return attr
+
+    def _decode_ns(self, attr):
+        return set(map(self._decode_n, attr))
+
+    def _decode_ss(self, attr):
+        return set(map(self._decode_s, attr))
+
+    def _decode_b(self, attr):
+        return convert_binary(attr)
+
+    def _decode_bs(self, attr):
+        return set(map(self._decode_b, attr))
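+
+    # Illustrative round trip (the literal values here are only examples):
+    #
+    #     >>> d = Dynamizer()
+    #     >>> d.encode('foo')
+    #     {'S': 'foo'}
+    #     >>> d.decode({'N': '3'})
+    #     Decimal('3')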
+
+
+class LossyFloatDynamizer(Dynamizer):
+    """Use float/int instead of Decimal for numeric types.
+
+    This class is provided for backwards compatibility.  Instead of
+    using Decimals for the 'N', 'NS' types it uses ints/floats.
+
+    This class is deprecated and its usage is not encouraged,
+    as doing so may result in loss of precision.  Use the
+    `Dynamizer` class instead.
+
+    """
+    def _encode_n(self, attr):
+        return serialize_num(attr)
+
+    def _encode_ns(self, attr):
+        return [str(i) for i in attr]
+
+    def _decode_n(self, attr):
+        return convert_num(attr)
+
+    def _decode_ns(self, attr):
+        return set(map(self._decode_n, attr))
diff --git a/boto/dynamodb2/__init__.py b/boto/dynamodb2/__init__.py
new file mode 100644
index 0000000..8cdfcac
--- /dev/null
+++ b/boto/dynamodb2/__init__.py
@@ -0,0 +1,63 @@
+# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from boto.regioninfo import RegionInfo
+
+
+def regions():
+    """
+    Get all available regions for the Amazon DynamoDB service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo`
+    """
+    from boto.dynamodb2.layer1 import DynamoDBConnection
+    return [RegionInfo(name='us-east-1',
+                       endpoint='dynamodb.us-east-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='us-west-1',
+                       endpoint='dynamodb.us-west-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='us-west-2',
+                       endpoint='dynamodb.us-west-2.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='eu-west-1',
+                       endpoint='dynamodb.eu-west-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='ap-northeast-1',
+                       endpoint='dynamodb.ap-northeast-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='ap-southeast-1',
+                       endpoint='dynamodb.ap-southeast-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='sa-east-1',
+                       endpoint='dynamodb.sa-east-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            ]
+
+
+def connect_to_region(region_name, **kw_params):
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
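+
+
+# Illustrative usage (the region name is only an example; credentials are
+# resolved through boto's usual configuration lookup):
+#
+#     >>> from boto.dynamodb2 import connect_to_region
+#     >>> conn = connect_to_region('us-west-2')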
diff --git a/boto/dynamodb2/exceptions.py b/boto/dynamodb2/exceptions.py
new file mode 100644
index 0000000..a9fcf75
--- /dev/null
+++ b/boto/dynamodb2/exceptions.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.exception import JSONResponseError
+
+
+class ProvisionedThroughputExceededException(JSONResponseError):
+    pass
+
+
+class LimitExceededException(JSONResponseError):
+    pass
+
+
+class ConditionalCheckFailedException(JSONResponseError):
+    pass
+
+
+class ResourceInUseException(JSONResponseError):
+    pass
+
+
+class ResourceNotFoundException(JSONResponseError):
+    pass
+
+
+class InternalServerError(JSONResponseError):
+    pass
+
+
+class ValidationException(JSONResponseError):
+    pass
+
+
+class ItemCollectionSizeLimitExceededException(JSONResponseError):
+    pass
+
+
+class DynamoDBError(Exception):
+    pass
+
+
+class UnknownSchemaFieldError(DynamoDBError):
+    pass
+
+
+class UnknownIndexFieldError(DynamoDBError):
+    pass
+
+
+class UnknownFilterTypeError(DynamoDBError):
+    pass
+
+
+class QueryError(DynamoDBError):
+    pass
diff --git a/boto/dynamodb2/fields.py b/boto/dynamodb2/fields.py
new file mode 100644
index 0000000..25abffd
--- /dev/null
+++ b/boto/dynamodb2/fields.py
@@ -0,0 +1,212 @@
+from boto.dynamodb2.types import STRING
+
+
+class BaseSchemaField(object):
+    """
+    An abstract class for defining schema fields.
+
+    Contains most of the core functionality for the field. Subclasses must
+    define an ``attr_type`` to pass to DynamoDB.
+    """
+    attr_type = None
+
+    def __init__(self, name, data_type=STRING):
+        """
+        Creates a Python schema field, to represent the data to pass to
+        DynamoDB.
+
+        Requires a ``name`` parameter, which should be a string name of the
+        field.
+
+        Optionally accepts a ``data_type`` parameter, which should be a
+        constant from ``boto.dynamodb2.types``. (Default: ``STRING``)
+        """
+        self.name = name
+        self.data_type = data_type
+
+    def definition(self):
+        """
+        Returns the attribute definition structure DynamoDB expects.
+
+        Example::
+
+            >>> field.definition()
+            {
+                'AttributeName': 'username',
+                'AttributeType': 'S',
+            }
+
+        """
+        return {
+            'AttributeName': self.name,
+            'AttributeType': self.data_type,
+        }
+
+    def schema(self):
+        """
+        Returns the schema structure DynamoDB expects.
+
+        Example::
+
+            >>> field.schema()
+            {
+                'AttributeName': 'username',
+                'KeyType': 'HASH',
+            }
+
+        """
+        return {
+            'AttributeName': self.name,
+            'KeyType': self.attr_type,
+        }
+
+
+class HashKey(BaseSchemaField):
+    """
+    A field representing a hash key.
+
+    Example::
+
+        >>> from boto.dynamodb2.types import NUMBER
+        >>> HashKey('username')
+        >>> HashKey('date_joined', data_type=NUMBER)
+
+    """
+    attr_type = 'HASH'
+
+
+class RangeKey(BaseSchemaField):
+    """
+    A field representing a range key.
+
+    Example::
+
+        >>> from boto.dynamodb2.types import NUMBER
+        >>> RangeKey('date_joined')
+        >>> RangeKey('date_joined', data_type=NUMBER)
+
+    """
+    attr_type = 'RANGE'
+
+
+class BaseIndexField(object):
+    """
+    An abstract class for defining schema indexes.
+
+    Contains most of the core functionality for the index. Subclasses must
+    define a ``projection_type`` to pass to DynamoDB.
+    """
+    def __init__(self, name, parts):
+        self.name = name
+        self.parts = parts
+
+    def definition(self):
+        """
+        Returns the attribute definition structure DynamoDB expects.
+
+        Example::
+
+            >>> index.definition()
+            {
+                'AttributeName': 'username',
+                'AttributeType': 'S',
+            }
+
+        """
+        definition = []
+
+        for part in self.parts:
+            definition.append({
+                'AttributeName': part.name,
+                'AttributeType': part.data_type,
+            })
+
+        return definition
+
+    def schema(self):
+        """
+        Returns the schema structure DynamoDB expects.
+
+        Example::
+
+            >>> index.schema()
+            {
+                'IndexName': 'LastNameIndex',
+                'KeySchema': [
+                    {
+                        'AttributeName': 'username',
+                        'KeyType': 'HASH',
+                    },
+                ],
+                'Projection': {
+                    'ProjectionType': 'KEYS_ONLY',
+                }
+            }
+
+        """
+        key_schema = []
+
+        for part in self.parts:
+            key_schema.append(part.schema())
+
+        return {
+            'IndexName': self.name,
+            'KeySchema': key_schema,
+            'Projection': {
+                'ProjectionType': self.projection_type,
+            }
+        }
+
+
+class AllIndex(BaseIndexField):
+    """
+    An index signifying all fields should be in the index.
+
+    Example::
+
+        >>> AllIndex('MostRecentlyJoined', parts=[
+        ...     HashKey('username'),
+        ...     RangeKey('date_joined')
+        ... ])
+
+    """
+    projection_type = 'ALL'
+
+
+class KeysOnlyIndex(BaseIndexField):
+    """
+    An index signifying only key fields should be in the index.
+
+    Example::
+
+        >>> KeysOnlyIndex('MostRecentlyJoined', parts=[
+        ...     HashKey('username'),
+        ...     RangeKey('date_joined')
+        ... ])
+
+    """
+    projection_type = 'KEYS_ONLY'
+
+
+class IncludeIndex(BaseIndexField):
+    """
+    An index signifying only certain fields should be in the index.
+
+    Example::
+
+        >>> IncludeIndex('GenderIndex', parts=[
+        ...     HashKey('username'),
+        ...     RangeKey('date_joined')
+        ... ], includes=['gender'])
+
+    """
+    projection_type = 'INCLUDE'
+
+    def __init__(self, *args, **kwargs):
+        self.includes_fields = kwargs.pop('includes', [])
+        super(IncludeIndex, self).__init__(*args, **kwargs)
+
+    def schema(self):
+        schema_data = super(IncludeIndex, self).schema()
+        schema_data['Projection']['NonKeyAttributes'] = self.includes_fields
+        return schema_data
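+
+
+# Illustrative usage (field names are only examples):
+#
+#     >>> index = KeysOnlyIndex('MostRecentlyJoined', parts=[
+#     ...     HashKey('username'),
+#     ...     RangeKey('date_joined')
+#     ... ])
+#     >>> index.schema()['Projection']
+#     {'ProjectionType': 'KEYS_ONLY'}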
diff --git a/boto/dynamodb2/items.py b/boto/dynamodb2/items.py
new file mode 100644
index 0000000..8df5102
--- /dev/null
+++ b/boto/dynamodb2/items.py
@@ -0,0 +1,390 @@
+from boto.dynamodb2.types import Dynamizer
+
+
+class NEWVALUE(object):
+    # A marker for new data added.
+    pass
+
+
+class Item(object):
+    """
+    An object representing the item data within a DynamoDB table.
+
+    An item is largely schema-free, meaning it can contain any data. The only
+    limitation is that it must have data for the fields in the ``Table``'s
+    schema.
+
+    This object presents a dictionary-like interface for accessing/storing
+    data. It also tries to intelligently track how data has changed throughout
+    the life of the instance, to be as efficient as possible about updates.
+    """
+    def __init__(self, table, data=None):
+        """
+        Constructs an (unsaved) ``Item`` instance.
+
+        To persist the data in DynamoDB, you'll need to call the ``Item.save``
+        (or ``Item.partial_save``) on the instance.
+
+        Requires a ``table`` parameter, which should be a ``Table`` instance.
+        This is required, as DynamoDB's API is focused around all operations
+        being table-level. It also lets the schema be shared across many objects.
+
+        Optionally accepts a ``data`` parameter, which should be a dictionary
+        of the fields & values of the item.
+
+        Example::
+
+            >>> users = Table('users')
+            >>> user = Item(users, data={
+            ...     'username': 'johndoe',
+            ...     'first_name': 'John',
+            ...     'date_joined': 1248061592,
+            ... })
+
+            # Change existing data.
+            >>> user['first_name'] = 'Johann'
+            # Add more data.
+            >>> user['last_name'] = 'Doe'
+            # Delete data.
+            >>> del user['date_joined']
+
+            # Iterate over all the data.
+            >>> for field, val in user.items():
+            ...     print "%s: %s" % (field, val)
+            username: johndoe
+            first_name: Johann
+            last_name: Doe
+
+        """
+        self.table = table
+        self._data = {}
+        self._orig_data = {}
+        self._is_dirty = False
+        self._dynamizer = Dynamizer()
+
+        if data:
+            self._data = data
+            self._is_dirty = True
+
+            for key in data.keys():
+                self._orig_data[key] = NEWVALUE
+
+    def __getitem__(self, key):
+        return self._data.get(key, None)
+
+    def __setitem__(self, key, value):
+        # Stow the original value if present, so we can track what's changed.
+        if key in self._data:
+            self._orig_data[key] = self._data[key]
+        else:
+            # Use a marker to indicate we've never seen a value for this key.
+            self._orig_data[key] = NEWVALUE
+
+        self._data[key] = value
+        self._is_dirty = True
+
+    def __delitem__(self, key):
+        if key not in self._data:
+            return
+
+        # Stow the original value, so we can track what's changed.
+        value = self._data[key]
+        del self._data[key]
+        self._orig_data[key] = value
+        self._is_dirty = True
+
+    def keys(self):
+        return self._data.keys()
+
+    def values(self):
+        return self._data.values()
+
+    def items(self):
+        return self._data.items()
+
+    def get(self, key, default=None):
+        return self._data.get(key, default)
+
+    def __iter__(self):
+        for key in self._data:
+            yield self._data[key]
+
+    def __contains__(self, key):
+        return key in self._data
+
+    def needs_save(self):
+        """
+        Returns whether or not the data has changed on the ``Item``.
+
+        Example:
+
+            >>> user.needs_save()
+            False
+            >>> user['first_name'] = 'Johann'
+            >>> user.needs_save()
+            True
+
+        """
+        return self._is_dirty
+
+    def mark_clean(self):
+        """
+        Marks an ``Item`` instance as no longer needing to be saved.
+
+        Example:
+
+            >>> user.needs_save()
+            False
+            >>> user['first_name'] = 'Johann'
+            >>> user.needs_save()
+            True
+            >>> user.mark_clean()
+            >>> user.needs_save()
+            False
+
+        """
+        self._orig_data = {}
+        self._is_dirty = False
+
+    def mark_dirty(self):
+        """
+        Marks an ``Item`` instance as needing to be saved.
+
+        Example:
+
+            >>> user.needs_save()
+            False
+            >>> user.mark_dirty()
+            >>> user.needs_save()
+            True
+
+        """
+        self._is_dirty = True
+
+    def load(self, data):
+        """
+        This is only useful when being handed raw data from DynamoDB directly.
+        If you have a Python datastructure already, use the ``__init__`` or
+        manually set the data instead.
+
+        Largely internal, unless you know what you're doing or are trying to
+        mix the low-level & high-level APIs.
+        """
+        self._data = {}
+
+        for field_name, field_value in data.get('Item', {}).items():
+            self[field_name] = self._dynamizer.decode(field_value)
+
+        self.mark_clean()
+
+    def get_keys(self):
+        """
+        Returns a Python-style dict of the keys/values.
+
+        Largely internal.
+        """
+        key_fields = self.table.get_key_fields()
+        key_data = {}
+
+        for key in key_fields:
+            key_data[key] = self[key]
+
+        return key_data
+
+    def get_raw_keys(self):
+        """
+        Returns a DynamoDB-style dict of the keys/values.
+
+        Largely internal.
+        """
+        raw_key_data = {}
+
+        for key, value in self.get_keys().items():
+            raw_key_data[key] = self._dynamizer.encode(value)
+
+        return raw_key_data
+
+    def build_expects(self, fields=None):
+        """
+        Builds up a dict of expectations to hand off to DynamoDB on save.
+
+        Largely internal.
+        """
+        expects = {}
+
+        if fields is None:
+            fields = self._data.keys() + self._orig_data.keys()
+
+        # Only uniques.
+        fields = set(fields)
+
+        for key in fields:
+            expects[key] = {
+                'Exists': True,
+            }
+            value = None
+
+            # Check for invalid keys.
+            if key not in self._orig_data and key not in self._data:
+                raise ValueError("Unknown key %s provided." % key)
+
+            # States:
+            # * New field (_data & _orig_data w/ marker)
+            # * Unchanged field (only _data)
+            # * Modified field (_data & _orig_data)
+            # * Deleted field (only _orig_data)
+            if key not in self._orig_data:
+                # Existing field unchanged.
+                value = self._data[key]
+            else:
+                if key in self._data:
+                    if self._orig_data[key] is NEWVALUE:
+                        # New field.
+                        expects[key]['Exists'] = False
+                    else:
+                        # Existing field modified.
+                        value = self._orig_data[key]
+                else:
+                    # Existing field deleted.
+                    value = self._orig_data[key]
+
+            if value is not None:
+                expects[key]['Value'] = self._dynamizer.encode(value)
+
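+        # Illustrative result: a brand-new field ends up as ``{'Exists': False}``,
+        # while any pre-existing field ends up as
+        # ``{'Exists': True, 'Value': <its encoded value>}``.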
+        return expects
+
+    def prepare_full(self):
+        """
+        Runs through all fields & encodes them to be handed off to DynamoDB
+        as part of a ``save`` (``put_item``) call.
+
+        Largely internal.
+        """
+        # This doesn't save on its own. Rather, we prepare the datastructure
+        # and hand-off to the table to handle creation/update.
+        final_data = {}
+
+        for key, value in self._data.items():
+            final_data[key] = self._dynamizer.encode(value)
+
+        return final_data
+
+    def prepare_partial(self):
+        """
+        Runs through **ONLY** the changed/deleted fields & encodes them to be
+        handed off to DynamoDB as part of a ``partial_save`` (``update_item``)
+        call.
+
+        Largely internal.
+        """
+        # This doesn't save on its own. Rather, we prepare the datastructure
+        # and hand-off to the table to handle creation/update.
+        final_data = {}
+
+        # Loop over ``_orig_data`` so that we only build up data that's changed.
+        for key, value in self._orig_data.items():
+            if key in self._data:
+                # It changed.
+                final_data[key] = {
+                    'Action': 'PUT',
+                    'Value': self._dynamizer.encode(self._data[key])
+                }
+            else:
+                # It was deleted.
+                final_data[key] = {
+                    'Action': 'DELETE',
+                }
+
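+        # Illustrative result (field names/values are only examples):
+        #
+        #     {'last_name': {'Action': 'PUT', 'Value': {'S': 'Doe'}},
+        #      'date_joined': {'Action': 'DELETE'}}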
+        return final_data
+
+    def partial_save(self):
+        """
+        Saves only the changed data to DynamoDB.
+
+        Extremely useful for high-volume/high-write data sets, this allows
+        you to update only a handful of fields rather than having to push
+        entire items. This prevents many accidental overwrite situations and
+        reduces the amount of data to transfer over the wire.
+
+        Returns ``True`` on success, ``False`` if no save was performed or
+        the write failed.
+
+        Example::
+
+            >>> user['last_name'] = 'Doh!'
+            # Only the last name field will be sent to DynamoDB.
+            >>> user.partial_save()
+
+        """
+        if not self.needs_save():
+            return False
+
+        key = self.get_keys()
+        # Build a new dict of only the data we're changing.
+        final_data = self.prepare_partial()
+        # Build expectations of only the fields we're planning to update.
+        expects = self.build_expects(fields=self._orig_data.keys())
+        returned = self.table._update_item(key, final_data, expects=expects)
+        # Mark the object as clean.
+        self.mark_clean()
+        return returned
+
+    def save(self, overwrite=False):
+        """
+        Saves all data to DynamoDB.
+
+        By default, this attempts to ensure that none of the underlying
+        data has changed. If any fields have changed in between when the
+        ``Item`` was constructed & when it is saved, this call will fail so
+        as not to cause any data loss.
+
+        If you're sure possibly overwriting data is acceptable, you can pass
+        ``overwrite=True``. If that's not acceptable, you may be able to use
+        ``Item.partial_save`` to only write the changed field data.
+
+        Optionally accepts an ``overwrite`` parameter, which should be a
+        boolean. If you provide ``True``, the item will be forcibly overwritten
+        within DynamoDB, even if another process changed the data in the
+        meantime. (Default: ``False``)
+
+        Returns ``True`` on success, ``False`` if no save was performed.
+
+        Example::
+
+            >>> user['last_name'] = 'Doh!'
+            # All data on the Item is sent to DynamoDB.
+            >>> user.save()
+
+            # If it fails, you can overwrite.
+            >>> user.save(overwrite=True)
+
+        """
+        if not self.needs_save():
+            return False
+
+        final_data = self.prepare_full()
+        expects = None
+
+        if overwrite is False:
+            # Build expectations about *all* of the data.
+            expects = self.build_expects()
+
+        returned = self.table._put_item(final_data, expects=expects)
+        # Mark the object as clean.
+        self.mark_clean()
+        return returned
+
+    def delete(self):
+        """
+        Deletes the item's data from DynamoDB.
+
+        Returns ``True`` on success.
+
+        Example::
+
+            # Buh-bye now.
+            >>> user.delete()
+
+        """
+        key_data = self.get_keys()
+        return self.table.delete_item(**key_data)
diff --git a/boto/dynamodb2/layer1.py b/boto/dynamodb2/layer1.py
new file mode 100644
index 0000000..532e2f6
--- /dev/null
+++ b/boto/dynamodb2/layer1.py
@@ -0,0 +1,1539 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from binascii import crc32
+
+import json
+import boto
+from boto.connection import AWSQueryConnection
+from boto.regioninfo import RegionInfo
+from boto.exception import JSONResponseError
+from boto.dynamodb2 import exceptions
+
+
+class DynamoDBConnection(AWSQueryConnection):
+    """
+    Amazon DynamoDB is a fast, highly scalable, highly available,
+    cost-effective non-relational database service. Amazon DynamoDB
+    removes traditional scalability limitations on data storage while
+    maintaining low latency and predictable performance.
+    """
+    APIVersion = "2012-08-10"
+    DefaultRegionName = "us-east-1"
+    DefaultRegionEndpoint = "dynamodb.us-east-1.amazonaws.com"
+    ServiceName = "DynamoDB"
+    TargetPrefix = "DynamoDB_20120810"
+    ResponseError = JSONResponseError
+
+    _faults = {
+        "ProvisionedThroughputExceededException": exceptions.ProvisionedThroughputExceededException,
+        "LimitExceededException": exceptions.LimitExceededException,
+        "ConditionalCheckFailedException": exceptions.ConditionalCheckFailedException,
+        "ResourceInUseException": exceptions.ResourceInUseException,
+        "ResourceNotFoundException": exceptions.ResourceNotFoundException,
+        "InternalServerError": exceptions.InternalServerError,
+        "ItemCollectionSizeLimitExceededException": exceptions.ItemCollectionSizeLimitExceededException,
+        "ValidationException": exceptions.ValidationException,
+    }
+
+    NumberRetries = 10
+
+    def __init__(self, **kwargs):
+        region = kwargs.pop('region', None)
+        validate_checksums = kwargs.pop('validate_checksums', True)
+        if not region:
+            region_name = boto.config.get('DynamoDB', 'region',
+                                          self.DefaultRegionName)
+            for reg in boto.dynamodb2.regions():
+                if reg.name == region_name:
+                    region = reg
+                    break
+        kwargs['host'] = region.endpoint
+        AWSQueryConnection.__init__(self, **kwargs)
+        self.region = region
+        self._validate_checksums = boto.config.getbool(
+            'DynamoDB', 'validate_checksums', validate_checksums)
+        self.throughput_exceeded_events = 0
+
+    def _required_auth_capability(self):
+        return ['hmac-v4']
+
+    def batch_get_item(self, request_items, return_consumed_capacity=None):
+        """
+        The BatchGetItem operation returns the attributes of one or
+        more items from one or more tables. You identify requested
+        items by primary key.
+
+        A single operation can retrieve up to 1 MB of data, which can
+        comprise as many as 100 items. BatchGetItem will return a
+        partial result if the response size limit is exceeded, the
+        table's provisioned throughput is exceeded, or an internal
+        processing failure occurs. If a partial result is returned,
+        the operation returns a value for UnprocessedKeys . You can
+        use this value to retry the operation starting with the next
+        item to get.
+
+        For example, if you ask to retrieve 100 items, but each
+        individual item is 50 KB in size, the system returns 20 items
+        (1 MB) and an appropriate UnprocessedKeys value so you can get
+        the next page of results. If desired, your application can
+        include its own logic to assemble the pages of results into
+        one dataset.
+
+        If no items can be processed because of insufficient
+        provisioned throughput on each of the tables involved in the
+        request, BatchGetItem throws
+        ProvisionedThroughputExceededException .
+
+        By default, BatchGetItem performs eventually consistent reads
+        on every table in the request. If you want strongly consistent
+        reads instead, you can set ConsistentRead to `True` for any or
+        all tables.
+
+        In order to minimize response latency, BatchGetItem fetches
+        items in parallel.
+
+        When designing your application, keep in mind that Amazon
+        DynamoDB does not return attributes in any particular order.
+        To help parse the response by item, include the primary key
+        values for the items in your request in the AttributesToGet
+        parameter.
+
+        If a requested item does not exist, it is not returned in the
+        result. Requests for nonexistent items consume the minimum
+        read capacity units according to the type of read. For more
+        information, see `Capacity Units Calculations`_ in the Amazon
+        DynamoDB Developer Guide .
+
+        :type request_items: map
+        :param request_items:
+        A map of one or more table names and, for each table, the corresponding
+            primary keys for the items to retrieve. Each table name can be
+            invoked only once.
+
+        Each element in the map consists of the following:
+
+
+        + Keys - An array of primary key attribute values that define specific
+              items in the table.
+        + AttributesToGet - One or more attributes to be retrieved from the
+              table or index. By default, all attributes are returned. If a
+              specified attribute is not found, it does not appear in the result.
+        + ConsistentRead - If `True`, a strongly consistent read is used; if
+              `False` (the default), an eventually consistent read is used.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity:
+
+        """
+        params = {'RequestItems': request_items, }
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        return self.make_request(action='BatchGetItem',
+                                 body=json.dumps(params))
+
+    def batch_write_item(self, request_items, return_consumed_capacity=None,
+                         return_item_collection_metrics=None):
+        """
+        The BatchWriteItem operation puts or deletes multiple items in
+        one or more tables. A single call to BatchWriteItem can write
+        up to 1 MB of data, which can comprise as many as 25 put or
+        delete requests. Individual items to be written can be as
+        large as 64 KB.
+
+        BatchWriteItem cannot update items. To update items, use the
+        UpdateItem API.
+
+        The individual PutItem and DeleteItem operations specified in
+        BatchWriteItem are atomic; however BatchWriteItem as a whole
+        is not. If any requested operations fail because the table's
+        provisioned throughput is exceeded or an internal processing
+        failure occurs, the failed operations are returned in the
+        UnprocessedItems response parameter. You can investigate and
+        optionally resend the requests. Typically, you would call
+        BatchWriteItem in a loop. Each iteration would check for
+        unprocessed items and submit a new BatchWriteItem request with
+        those unprocessed items until all items have been processed.
+
+        To write one item, you can use the PutItem operation; to
+        delete one item, you can use the DeleteItem operation.
+
+        With BatchWriteItem , you can efficiently write or delete
+        large amounts of data, such as from Amazon Elastic MapReduce
+        (EMR), or copy data from another database into Amazon
+        DynamoDB. In order to improve performance with these large-
+        scale operations, BatchWriteItem does not behave in the same
+        way as individual PutItem and DeleteItem calls would. For
+        example, you cannot specify conditions on individual put and
+        delete requests, and BatchWriteItem does not return deleted
+        items in the response.
+
+        If you use a programming language that supports concurrency,
+        such as Java, you can use threads to write items in parallel.
+        Your application must include the necessary logic to manage
+        the threads.
+
+        With languages that don't support threading, such as PHP,
+        BatchWriteItem will write or delete the specified items one at
+        a time. In both situations, BatchWriteItem provides an
+        alternative where the API performs the specified put and
+        delete operations in parallel, giving you the power of the
+        thread pool approach without having to introduce complexity
+        into your application.
+
+        Parallel processing reduces latency, but each specified put
+        and delete request consumes the same number of write capacity
+        units whether it is processed in parallel or not. Delete
+        operations on nonexistent items consume one write capacity
+        unit.
+
+        If one or more of the following is true, Amazon DynamoDB
+        rejects the entire batch write operation:
+
+
+        + One or more tables specified in the BatchWriteItem request
+          do not exist.
+        + Primary key attributes specified on an item in the request
+          do not match those in the corresponding table's primary key
+          schema.
+        + You try to perform multiple operations on the same item in
+          the same BatchWriteItem request. For example, you cannot put
+          and delete the same item in the same BatchWriteItem request.
+        + The total request size exceeds 1 MB.
+        + Any individual item in a batch exceeds 64 KB.
+
+        :type request_items: map
+        :param request_items:
+        A map of one or more table names and, for each table, a list of
+            operations to be performed ( DeleteRequest or PutRequest ). Each
+            element in the map consists of the following:
+
+
+        + DeleteRequest - Perform a DeleteItem operation on the specified item.
+              The item to be deleted is identified by a Key subelement:
+
+            + Key - A map of primary key attribute values that uniquely identify
+                  the item. Each entry in this map consists of an attribute name and
+                  an attribute value.
+
+        + PutRequest - Perform a PutItem operation on the specified item. The
+              item to be put is identified by an Item subelement:
+
+            + Item - A map of attributes and their values. Each entry in this map
+                  consists of an attribute name and an attribute value. Attribute
+                  values must not be null; string and binary type attributes must
+                  have lengths greater than zero; and set type attributes must not be
+                  empty. Requests that contain empty values will be rejected with a
+                  ValidationException . If you specify any attributes that are part
+                  of an index key, then the data types for those attributes must
+                  match those of the schema in the table's attribute definition.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity:
+
+        :type return_item_collection_metrics: string
+        :param return_item_collection_metrics: If set to `SIZE`, statistics
+            about item collections, if any, that were modified during the
+            operation are returned in the response. If set to `NONE` (the
+            default), no statistics are returned.
+
+        """
+        params = {'RequestItems': request_items, }
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        if return_item_collection_metrics is not None:
+            params['ReturnItemCollectionMetrics'] = return_item_collection_metrics
+        return self.make_request(action='BatchWriteItem',
+                                 body=json.dumps(params))
+
+    def create_table(self, attribute_definitions, table_name, key_schema,
+                     provisioned_throughput, local_secondary_indexes=None):
+        """
+        The CreateTable operation adds a new table to your account. In
+        an AWS account, table names must be unique within each region.
+        That is, you can have two tables with same name if you create
+        the tables in different regions.
+
+        CreateTable is an asynchronous operation. Upon receiving a
+        CreateTable request, Amazon DynamoDB immediately returns a
+        response with a TableStatus of `CREATING`. After the table is
+        created, Amazon DynamoDB sets the TableStatus to `ACTIVE`. You
+        can perform read and write operations only on an `ACTIVE`
+        table.
+
+        If you want to create multiple tables with local secondary
+        indexes on them, you must create them sequentially. Only one
+        table with local secondary indexes can be in the `CREATING`
+        state at any given time.
+
+        You can use the DescribeTable API to check the table status.
+
+        :type attribute_definitions: list
+        :param attribute_definitions: An array of attributes that describe the
+            key schema for the table and indexes.
+
+        :type table_name: string
+        :param table_name: The name of the table to create.
+
+        :type key_schema: list
+        :param key_schema: Specifies the attributes that make up the primary
+            key for the table. The attributes in KeySchema must also be defined
+            in the AttributeDefinitions array. For more information, see `Data
+            Model`_ in the Amazon DynamoDB Developer Guide .
+        Each KeySchemaElement in the array is composed of:
+
+
+        + AttributeName - The name of this key attribute.
+        + KeyType - Determines whether the key attribute is `HASH` or `RANGE`.
+
+
+        For a primary key that consists of a hash attribute, you must specify
+            exactly one element with a KeyType of `HASH`.
+
+        For a primary key that consists of hash and range attributes, you must
+            specify exactly two elements, in this order: The first element must
+            have a KeyType of `HASH`, and the second element must have a
+            KeyType of `RANGE`.
+
+        For more information, see `Specifying the Primary Key`_ in the Amazon
+            DynamoDB Developer Guide .
+
+        :type local_secondary_indexes: list
+        :param local_secondary_indexes:
+        One or more secondary indexes (the maximum is five) to be created on
+            the table. Each index is scoped to a given hash key value. There is
+            a 10 gigabyte size limit per hash key; otherwise, the size of a
+            local secondary index is unconstrained.
+
+        Each secondary index in the array includes the following:
+
+
+        + IndexName - The name of the secondary index. Must be unique only for
+              this table.
+        + KeySchema - Specifies the key schema for the index. The key schema
+              must begin with the same hash key attribute as the table.
+        + Projection - Specifies attributes that are copied (projected) from
+              the table into the index. These are in addition to the primary key
+              attributes and index key attributes, which are automatically
+              projected. Each attribute specification is composed of:
+
+            + ProjectionType - One of the following:
+
+                + `KEYS_ONLY` - Only the index and primary keys are projected into the
+                      index.
+                + `INCLUDE` - Only the specified table attributes are projected into
+                      the index. The list of projected attributes are in NonKeyAttributes
+                      .
+                + `ALL` - All of the table attributes are projected into the index.
+
+            + NonKeyAttributes - A list of one or more non-key attribute names that
+                  are projected into the index. The total count of attributes
+                  specified in NonKeyAttributes , summed across all of the local
+                  secondary indexes, must not exceed 20. If you project the same
+                  attribute into two different indexes, this counts as two distinct
+                  attributes when determining the total.
+
+        :type provisioned_throughput: dict
+        :param provisioned_throughput:
+
+        """
+        params = {
+            'AttributeDefinitions': attribute_definitions,
+            'TableName': table_name,
+            'KeySchema': key_schema,
+            'ProvisionedThroughput': provisioned_throughput,
+        }
+        if local_secondary_indexes is not None:
+            params['LocalSecondaryIndexes'] = local_secondary_indexes
+        return self.make_request(action='CreateTable',
+                                 body=json.dumps(params))
+
+    def delete_item(self, table_name, key, expected=None, return_values=None,
+                    return_consumed_capacity=None,
+                    return_item_collection_metrics=None):
+        """
+        Deletes a single item in a table by primary key. You can
+        perform a conditional delete operation that deletes the item
+        if it exists, or if it has an expected attribute value.
+
+        In addition to deleting an item, you can also return the
+        item's attribute values in the same operation, using the
+        ReturnValues parameter.
+
+        Unless you specify conditions, the DeleteItem is an idempotent
+        operation; running it multiple times on the same item or
+        attribute does not result in an error response.
+
+        Conditional deletes are useful for only deleting items if
+        specific conditions are met. If those conditions are met,
+        Amazon DynamoDB performs the delete. Otherwise, the item is
+        not deleted.
+
+        :type table_name: string
+        :param table_name: The name of the table from which to delete the item.
+
+        :type key: map
+        :param key: A map of attribute names to AttributeValue objects,
+            representing the primary key of the item to delete.
+
+        :type expected: map
+        :param expected: A map of attribute/condition pairs. This is the
+            conditional block for the DeleteItem operation. All the conditions
+            must be met for the operation to succeed.
+        Expected allows you to provide an attribute name, and whether or not
+            Amazon DynamoDB should check to see if the attribute value already
+            exists; or if the attribute value exists and has a particular value
+            before changing it.
+
+        Each item in Expected represents an attribute name for Amazon DynamoDB
+            to check, along with the following:
+
+
+        + Value - The attribute value for Amazon DynamoDB to check.
+        + Exists - Causes Amazon DynamoDB to evaluate the value before
+              attempting a conditional operation:
+
+            + If Exists is `True`, Amazon DynamoDB will check to see if that
+                  attribute value already exists in the table. If it is found, then
+                  the operation succeeds. If it is not found, the operation fails
+                  with a ConditionalCheckFailedException .
+            + If Exists is `False`, Amazon DynamoDB assumes that the attribute
+                  value does not exist in the table. If in fact the value does not
+                  exist, then the assumption is valid and the operation succeeds. If
+                  the value is found, despite the assumption that it does not exist,
+                  the operation fails with a ConditionalCheckFailedException .
+          The default setting for Exists is `True`. If you supply a Value all by
+              itself, Amazon DynamoDB assumes the attribute exists: You don't
+              have to set Exists to `True`, because it is implied. Amazon
+              DynamoDB returns a ValidationException if:
+
+            + Exists is `True` but there is no Value to check. (You expect a value
+                  to exist, but don't specify what that value is.)
+            + Exists is `False` but you also specify a Value . (You cannot expect
+                  an attribute to have a value, while also expecting it not to
+                  exist.)
+
+
+
+        If you specify more than one condition for Exists , then all of the
+            conditions must evaluate to true. (In other words, the conditions
+            are ANDed together.) Otherwise, the conditional operation will
+            fail.
+
+        :type return_values: string
+        :param return_values:
+        Use ReturnValues if you want to get the item attributes as they
+            appeared before they were deleted. For DeleteItem , the valid
+            values are:
+
+
+        + `NONE` - If ReturnValues is not specified, or if its value is `NONE`,
+              then nothing is returned. (This is the default for ReturnValues .)
+        + `ALL_OLD` - The content of the old item is returned.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity:
+
+        :type return_item_collection_metrics: string
+        :param return_item_collection_metrics: If set to `SIZE`, statistics
+            about item collections, if any, that were modified during the
+            operation are returned in the response. If set to `NONE` (the
+            default), no statistics are returned.
+
+        """
+        params = {'TableName': table_name, 'Key': key, }
+        if expected is not None:
+            params['Expected'] = expected
+        if return_values is not None:
+            params['ReturnValues'] = return_values
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        if return_item_collection_metrics is not None:
+            params['ReturnItemCollectionMetrics'] = return_item_collection_metrics
+        return self.make_request(action='DeleteItem',
+                                 body=json.dumps(params))
+
+    def delete_table(self, table_name):
+        """
+        The DeleteTable operation deletes a table and all of its
+        items. After a DeleteTable request, the specified table is in
+        the `DELETING` state until Amazon DynamoDB completes the
+        deletion. If the table is in the `ACTIVE` state, you can
+        delete it. If a table is in `CREATING` or `UPDATING` states,
+        then Amazon DynamoDB returns a ResourceInUseException . If the
+        specified table does not exist, Amazon DynamoDB returns a
+        ResourceNotFoundException . If the table is already in the
+        `DELETING` state, no error is returned.
+
+        Amazon DynamoDB might continue to accept data read and write
+        operations, such as GetItem and PutItem , on a table in the
+        `DELETING` state until the table deletion is complete.
+
+        Tables are unique among those associated with the AWS Account
+        issuing the request, and the AWS region that receives the
+        request (such as dynamodb.us-east-1.amazonaws.com). Each
+        Amazon DynamoDB endpoint is entirely independent. For example,
+        if you have two tables called "MyTable," one in dynamodb.us-
+        east-1.amazonaws.com and one in dynamodb.us-
+        west-1.amazonaws.com, they are completely independent and do
+        not share any data; deleting one does not delete the other.
+
+        When you delete a table, any local secondary indexes on that
+        table are also deleted.
+
+        Use the DescribeTable API to check the status of the table.
+
+        :type table_name: string
+        :param table_name: The name of the table to delete.
+
+        """
+        params = {'TableName': table_name, }
+        return self.make_request(action='DeleteTable',
+                                 body=json.dumps(params))
+
+    def describe_table(self, table_name):
+        """
+        Returns information about the table, including the current
+        status of the table, when it was created, the primary key
+        schema, and any indexes on the table.
+
+        :type table_name: string
+        :param table_name: The name of the table to describe.
+
+        """
+        params = {'TableName': table_name, }
+        return self.make_request(action='DescribeTable',
+                                 body=json.dumps(params))
+
+    def get_item(self, table_name, key, attributes_to_get=None,
+                 consistent_read=None, return_consumed_capacity=None):
+        """
+        The GetItem operation returns a set of attributes for the item
+        with the given primary key. If there is no matching item,
+        GetItem does not return any data.
+
+        GetItem provides an eventually consistent read by default. If
+        your application requires a strongly consistent read, set
+        ConsistentRead to `True`. Although a strongly consistent read
+        might take more time than an eventually consistent read, it
+        always returns the last updated value.
+
+        :type table_name: string
+        :param table_name: The name of the table containing the requested item.
+
+        :type key: map
+        :param key: A map of attribute names to AttributeValue objects,
+            representing the primary key of the item to retrieve.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: The names of one or more attributes to
+            retrieve. If no attribute names are specified, then all attributes
+            will be returned. If any of the requested attributes are not found,
+            they will not appear in the result.
+
+        :type consistent_read: boolean
+        :param consistent_read: If set to `True`, then the operation uses
+            strongly consistent reads; otherwise, eventually consistent reads
+            are used.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity:
+
+        """
+        params = {'TableName': table_name, 'Key': key, }
+        if attributes_to_get is not None:
+            params['AttributesToGet'] = attributes_to_get
+        if consistent_read is not None:
+            params['ConsistentRead'] = consistent_read
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        return self.make_request(action='GetItem',
+                                 body=json.dumps(params))
+
+    def list_tables(self, exclusive_start_table_name=None, limit=None):
+        """
+        Returns an array of all the tables associated with the current
+        account and endpoint.
+
+        Each Amazon DynamoDB endpoint is entirely independent. For
+        example, if you have two tables called "MyTable," one in
+        dynamodb.us-east-1.amazonaws.com and one in dynamodb.us-
+        west-1.amazonaws.com , they are completely independent and do
+        not share any data. The ListTables operation returns all of
+        the table names associated with the account making the
+        request, for the endpoint that receives the request.
+
+        :type exclusive_start_table_name: string
+        :param exclusive_start_table_name: The name of the table that starts
+            the list. If you already ran a ListTables operation and received a
+            LastEvaluatedTableName value in the response, use that value here
+            to continue the list.
+
+        :type limit: integer
+        :param limit: A maximum number of table names to return.
+
+        """
+        params = {}
+        if exclusive_start_table_name is not None:
+            params['ExclusiveStartTableName'] = exclusive_start_table_name
+        if limit is not None:
+            params['Limit'] = limit
+        return self.make_request(action='ListTables',
+                                 body=json.dumps(params))
+
+    def put_item(self, table_name, item, expected=None, return_values=None,
+                 return_consumed_capacity=None,
+                 return_item_collection_metrics=None):
+        """
+        Creates a new item, or replaces an old item with a new item.
+        If an item already exists in the specified table with the same
+        primary key, the new item completely replaces the existing
+        item. You can perform a conditional put (insert a new item if
+        one with the specified primary key doesn't exist), or replace
+        an existing item if it has certain attribute values.
+
+        In addition to putting an item, you can also return the item's
+        attribute values in the same operation, using the ReturnValues
+        parameter.
+
+        When you add an item, the primary key attribute(s) are the
+        only required attributes. Attribute values cannot be null.
+        String and binary type attributes must have lengths greater
+        than zero. Set type attributes cannot be empty. Requests with
+        empty values will be rejected with a ValidationException .
+
+        You can request that PutItem return either a copy of the old
+        item (before the update) or a copy of the new item (after the
+        update). For more information, see the ReturnValues
+        description.
+
+        To prevent a new item from replacing an existing item, use a
+        conditional put operation with Exists set to `False` for the
+        primary key attribute, or attributes.
+
+        For more information about using this API, see `Working with
+        Items`_ in the Amazon DynamoDB Developer Guide .
+
+        :type table_name: string
+        :param table_name: The name of the table to contain the item.
+
+        :type item: map
+        :param item: A map of attribute name/value pairs, one for each
+            attribute. Only the primary key attributes are required; you can
+            optionally provide other attribute name-value pairs for the item.
+        If you specify any attributes that are part of an index key, then the
+            data types for those attributes must match those of the schema in
+            the table's attribute definition.
+
+        For more information about primary keys, see `Primary Key`_ in the
+            Amazon DynamoDB Developer Guide .
+
+        Each element in the Item map is an AttributeValue object.
+
+        :type expected: map
+        :param expected: A map of attribute/condition pairs. This is the
+            conditional block for the PutItem operation. All the conditions
+            must be met for the operation to succeed.
+        Expected allows you to provide an attribute name, and whether or not
+            Amazon DynamoDB should check to see if the attribute value already
+            exists; or if the attribute value exists and has a particular value
+            before changing it.
+
+        Each item in Expected represents an attribute name for Amazon DynamoDB
+            to check, along with the following:
+
+
+        + Value - The attribute value for Amazon DynamoDB to check.
+        + Exists - Causes Amazon DynamoDB to evaluate the value before
+              attempting a conditional operation:
+
+            + If Exists is `True`, Amazon DynamoDB will check to see if that
+                  attribute value already exists in the table. If it is found, then
+                  the operation succeeds. If it is not found, the operation fails
+                  with a ConditionalCheckFailedException .
+            + If Exists is `False`, Amazon DynamoDB assumes that the attribute
+                  value does not exist in the table. If in fact the value does not
+                  exist, then the assumption is valid and the operation succeeds. If
+                  the value is found, despite the assumption that it does not exist,
+                  the operation fails with a ConditionalCheckFailedException .
+          The default setting for Exists is `True`. If you supply a Value all by
+              itself, Amazon DynamoDB assumes the attribute exists: You don't
+              have to set Exists to `True`, because it is implied. Amazon
+              DynamoDB returns a ValidationException if:
+
+            + Exists is `True` but there is no Value to check. (You expect a value
+                  to exist, but don't specify what that value is.)
+            + Exists is `False` but you also specify a Value . (You cannot expect
+                  an attribute to have a value, while also expecting it not to
+                  exist.)
+
+
+
+        If you specify more than one condition for Exists , then all of the
+            conditions must evaluate to true. (In other words, the conditions
+            are ANDed together.) Otherwise, the conditional operation will
+            fail.
+
+        :type return_values: string
+        :param return_values:
+        Use ReturnValues if you want to get the item attributes as they
+            appeared before they were updated with the PutItem request. For
+            PutItem , the valid values are:
+
+
+        + `NONE` - If ReturnValues is not specified, or if its value is `NONE`,
+              then nothing is returned. (This is the default for ReturnValues .)
+        + `ALL_OLD` - If PutItem overwrote an attribute name-value pair, then
+              the content of the old item is returned.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity: If set to `TOTAL`, the response
+            includes the capacity units consumed by the operation; if set to
+            `NONE` (the default), no consumed capacity data is returned.
+
+        :type return_item_collection_metrics: string
+        :param return_item_collection_metrics: If set to `SIZE`, statistics
+            about item collections, if any, that were modified during the
+            operation are returned in the response. If set to `NONE` (the
+            default), no statistics are returned.
+
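+        Example (an illustrative sketch only; the table, attribute names &
+            values below are hypothetical). The conditional put succeeds only
+            if no item with that `username` already exists::
+
+            >>> conn = DynamoDBConnection()
+            >>> conn.put_item(
+            ...     table_name='users',
+            ...     item={'username': {'S': 'johndoe'},
+            ...           'date_joined': {'N': '1366056668'}},
+            ...     expected={'username': {'Exists': False}})
+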
+        """
+        params = {'TableName': table_name, 'Item': item, }
+        if expected is not None:
+            params['Expected'] = expected
+        if return_values is not None:
+            params['ReturnValues'] = return_values
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        if return_item_collection_metrics is not None:
+            params['ReturnItemCollectionMetrics'] = return_item_collection_metrics
+        return self.make_request(action='PutItem',
+                                 body=json.dumps(params))
+
+    def query(self, table_name, index_name=None, select=None,
+              attributes_to_get=None, limit=None, consistent_read=None,
+              key_conditions=None, scan_index_forward=None,
+              exclusive_start_key=None, return_consumed_capacity=None):
+        """
+        A Query operation directly accesses items from a table using
+        the table primary key, or from an index using the index key.
+        You must provide a specific hash key value. You can narrow the
+        scope of the query by using comparison operators on the range
+        key value, or on the index key. You can use the
+        ScanIndexForward parameter to get results in forward or
+        reverse order, by range key or by index key.
+
+        Queries that do not return results consume the minimum read
+        capacity units according to the type of read.
+
+        If the total number of items meeting the query criteria
+        exceeds the result set size limit of 1 MB, the query stops and
+        results are returned to the user with a LastEvaluatedKey to
+        continue the query in a subsequent operation. Unlike a Scan
+        operation, a Query operation never returns both an empty result set
+        and a LastEvaluatedKey . The LastEvaluatedKey is only provided
+        if the results exceed 1 MB, or if you have used Limit .
+
+        To request a strongly consistent result, set ConsistentRead to
+        true.
+
+        :type table_name: string
+        :param table_name: The name of the table containing the requested
+            items.
+
+        :type index_name: string
+        :param index_name: The name of an index on the table to query.
+
+        :type select: string
+        :param select: The attributes to be returned in the result. You can
+            retrieve all item attributes, specific item attributes, the count
+            of matching items, or in the case of an index, some or all of the
+            attributes projected into the index.
+
+        + `ALL_ATTRIBUTES`: Returns all of the item attributes. For a table,
+              this is the default. For an index, this mode causes Amazon DynamoDB
+              to fetch the full item from the table for each matching item in the
+              index. If the index is configured to project all item attributes,
+              the matching items will not be fetched from the table. Fetching
+              items from the table incurs additional throughput cost and latency.
+        + `ALL_PROJECTED_ATTRIBUTES`: Allowed only when querying an index.
+              Retrieves all attributes which have been projected into the index.
+              If the index is configured to project all attributes, this is
+              equivalent to specifying ALL_ATTRIBUTES .
+        + `COUNT`: Returns the number of matching items, rather than the
+              matching items themselves.
+        + `SPECIFIC_ATTRIBUTES` : Returns only the attributes listed in
+              AttributesToGet . This is equivalent to specifying AttributesToGet
+              without specifying any value for Select . If you are querying an
+              index and request only attributes that are projected into that
+              index, the operation will read only the index and not the table. If
+              any of the requested attributes are not projected into the index,
+              Amazon DynamoDB will need to fetch each matching item from the
+              table. This extra fetching incurs additional throughput cost and
+              latency.
+
+
+        When neither Select nor AttributesToGet are specified, Amazon DynamoDB
+            defaults to `ALL_ATTRIBUTES` when accessing a table, and
+            `ALL_PROJECTED_ATTRIBUTES` when accessing an index. You cannot use
+            both Select and AttributesToGet together in a single request,
+            unless the value for Select is `SPECIFIC_ATTRIBUTES`. (This usage
+            is equivalent to specifying AttributesToGet without any value for
+            Select .)
+
+        :type attributes_to_get: list
+        :param attributes_to_get: The names of one or more attributes to
+            retrieve. If no attribute names are specified, then all attributes
+            will be returned. If any of the requested attributes are not found,
+            they will not appear in the result.
+        If you are querying an index and request only attributes that are
+            projected into that index, the operation will read only the index
+            and not the table. If any of the requested attributes are not
+            projected into the index, Amazon DynamoDB will need to fetch each
+            matching item from the table. This extra fetching incurs additional
+            throughput cost and latency.
+
+        You cannot use both AttributesToGet and Select together in a Query
+            request, unless the value for Select is `SPECIFIC_ATTRIBUTES`.
+            (This usage is equivalent to specifying AttributesToGet without any
+            value for Select .)
+
+        :type limit: integer
+        :param limit: The maximum number of items to evaluate (not necessarily
+            the number of matching items). If Amazon DynamoDB processes the
+            number of items up to the limit while processing the results, it
+            stops the operation and returns the matching values up to that
+            point, and a LastEvaluatedKey to apply in a subsequent operation,
+            so that you can pick up where you left off. Also, if the processed
+            data set size exceeds 1 MB before Amazon DynamoDB reaches this
+            limit, it stops the operation and returns the matching values up to
+            the limit, and a LastEvaluatedKey to apply in a subsequent
+            operation to continue the operation. For more information see
+            `Query and Scan`_ in the Amazon DynamoDB Developer Guide .
+
+        :type consistent_read: boolean
+        :param consistent_read: If set to `True`, then the operation uses
+            strongly consistent reads; otherwise, eventually consistent reads
+            are used.
+
+        :type key_conditions: map
+        :param key_conditions:
+        The selection criteria for the query.
+
+        For a query on a table, you can only have conditions on the table
+            primary key attributes. You must specify the hash key attribute
+            name and value as an `EQ` condition. You can optionally specify a
+            second condition, referring to the range key attribute.
+
+        For a query on a secondary index, you can only have conditions on the
+            index key attributes. You must specify the index hash attribute
+            name and value as an EQ condition. You can optionally specify a
+            second condition, referring to the index key range attribute.
+
+        Multiple conditions are evaluated using "AND"; in other words, all of
+            the conditions must be met in order for an item to appear in the
+            results.
+
+        Each KeyConditions element consists of an attribute name to compare,
+            along with the following:
+
+
+        + AttributeValueList - One or more values to evaluate against the
+              supplied attribute. This list contains exactly one value, except
+              for a `BETWEEN` or `IN` comparison, in which case the list contains
+              two values. For type Number, value comparisons are numeric. String
+              value comparisons for greater than, equals, or less than are based
+              on ASCII character code values. For example, `a` is greater than
+              `A`, and `aa` is greater than `B`. For a list of code values, see
+              `http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters`_.
+              For Binary, Amazon DynamoDB treats each byte of the binary data as
+              unsigned when it compares binary values, for example when
+              evaluating query expressions.
+        + ComparisonOperator - A comparator for evaluating attributes. For
+              example, equals, greater than, less than, etc. Valid comparison
+              operators for Query: `EQ | LE | LT | GE | GT | BEGINS_WITH |
+              BETWEEN` For information on specifying data types in JSON, see
+              `JSON Data Format`_ in the Amazon DynamoDB Developer Guide . The
+              following are descriptions of each comparison operator.
+
+            + `EQ` : Equal. AttributeValueList can contain only one AttributeValue
+                  of type String, Number, or Binary (not a set). If an item contains
+                  an AttributeValue of a different type than the one specified in the
+                  request, the value does not match. For example, `{"S":"6"}` does
+                  not equal `{"N":"6"}`. Also, `{"N":"6"}` does not equal
+                  `{"NS":["6", "2", "1"]}`.
+            + `LE` : Less than or equal. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `LT` : Less than. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `GE` : Greater than or equal. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `GT` : Greater than. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `BEGINS_WITH` : checks for a prefix. AttributeValueList can contain
+                  only one AttributeValue of type String or Binary (not a Number or a
+                  set). The target attribute of the comparison must be a String or
+                  Binary (not a Number or a set).
+            + `BETWEEN` : Greater than or equal to the first value, and less than
+                  or equal to the second value. AttributeValueList must contain two
+                  AttributeValue elements of the same type, either String, Number, or
+                  Binary (not a set). A target attribute matches if the target value
+                  is greater than, or equal to, the first element and less than, or
+                  equal to, the second element. If an item contains an AttributeValue
+                  of a different type than the one specified in the request, the
+                  value does not match. For example, `{"S":"6"}` does not compare to
+                  `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6",
+                  "2", "1"]}`
+
+        :type scan_index_forward: boolean
+        :param scan_index_forward: Specifies ascending (true) or descending
+            (false) traversal of the index. Amazon DynamoDB returns results
+            reflecting the requested order determined by the range key. If the
+            data type is Number, the results are returned in numeric order. For
+            String, the results are returned in order of ASCII character code
+            values. For Binary, Amazon DynamoDB treats each byte of the binary
+            data as unsigned when it compares binary values.
+        If ScanIndexForward is not specified, the results are returned in
+            ascending order.
+
+        :type exclusive_start_key: map
+        :param exclusive_start_key: The primary key of the item from which to
+            continue an earlier operation. An earlier operation might provide
+            this value as the LastEvaluatedKey if that operation was
+            interrupted before completion; either because of the result set
+            size or because of the setting for Limit . The LastEvaluatedKey can
+            be passed back in a new request to continue the operation from that
+            point.
+        The data type for ExclusiveStartKey must be String, Number or Binary.
+            No set data types are allowed.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity: If set to `TOTAL`, the response
+            includes the capacity units consumed by the operation; if set to
+            `NONE` (the default), no consumed capacity data is returned.
+
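+        Example (an illustrative sketch only; the table, attribute names &
+            values below are hypothetical)::
+
+            >>> conn = DynamoDBConnection()
+            >>> conn.query(
+            ...     table_name='messages',
+            ...     key_conditions={
+            ...         'forum_name': {
+            ...             'AttributeValueList': [{'S': 'Amazon DynamoDB'}],
+            ...             'ComparisonOperator': 'EQ',
+            ...         },
+            ...     },
+            ...     consistent_read=True,
+            ...     limit=10)
+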
+        """
+        params = {'TableName': table_name, }
+        if index_name is not None:
+            params['IndexName'] = index_name
+        if select is not None:
+            params['Select'] = select
+        if attributes_to_get is not None:
+            params['AttributesToGet'] = attributes_to_get
+        if limit is not None:
+            params['Limit'] = limit
+        if consistent_read is not None:
+            params['ConsistentRead'] = consistent_read
+        if key_conditions is not None:
+            params['KeyConditions'] = key_conditions
+        if scan_index_forward is not None:
+            params['ScanIndexForward'] = scan_index_forward
+        if exclusive_start_key is not None:
+            params['ExclusiveStartKey'] = exclusive_start_key
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        return self.make_request(action='Query',
+                                 body=json.dumps(params))
+
+    def scan(self, table_name, attributes_to_get=None, limit=None,
+             select=None, scan_filter=None, exclusive_start_key=None,
+             return_consumed_capacity=None, total_segments=None,
+             segment=None):
+        """
+        The Scan operation returns one or more items and item
+        attributes by accessing every item in the table. To have
+        Amazon DynamoDB return fewer items, you can provide a
+        ScanFilter .
+
+        If the total number of scanned items exceeds the maximum data
+        set size limit of 1 MB, the scan stops and results are
+        returned to the user with a LastEvaluatedKey to continue the
+        scan in a subsequent operation. The results also include the
+        number of items exceeding the limit. A scan can result in no
+        table data meeting the filter criteria.
+
+        The result set is eventually consistent.
+
+        By default, Scan operations proceed sequentially; however, for
+        faster performance on large tables, applications can perform a
+        parallel Scan by specifying the Segment and TotalSegments
+        parameters. For more information, see `Parallel Scan`_ in the
+        Amazon DynamoDB Developer Guide .
+
+        :type table_name: string
+        :param table_name: The name of the table containing the requested
+            items.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: The names of one or more attributes to
+            retrieve. If no attribute names are specified, then all attributes
+            will be returned. If any of the requested attributes are not found,
+            they will not appear in the result.
+
+        :type limit: integer
+        :param limit: The maximum number of items to evaluate (not necessarily
+            the number of matching items). If Amazon DynamoDB processes the
+            number of items up to the limit while processing the results, it
+            stops the operation and returns the matching values up to that
+            point, and a LastEvaluatedKey to apply in a subsequent operation,
+            so that you can pick up where you left off. Also, if the processed
+            data set size exceeds 1 MB before Amazon DynamoDB reaches this
+            limit, it stops the operation and returns the matching values up to
+            the limit, and a LastEvaluatedKey to apply in a subsequent
+            operation to continue the operation. For more information see
+            `Query and Scan`_ in the Amazon DynamoDB Developer Guide .
+
+        :type select: string
+        :param select: The attributes to be returned in the result. You can
+            retrieve all item attributes, specific item attributes, the count
+            of matching items, or in the case of an index, some or all of the
+            attributes projected into the index.
+
+        + `ALL_ATTRIBUTES`: Returns all of the item attributes. For a table,
+              this is the default. For an index, this mode causes Amazon DynamoDB
+              to fetch the full item from the table for each matching item in the
+              index. If the index is configured to project all item attributes,
+              the matching items will not be fetched from the table. Fetching
+              items from the table incurs additional throughput cost and latency.
+        + `ALL_PROJECTED_ATTRIBUTES`: Retrieves all attributes which have been
+              projected into the index. If the index is configured to project all
+              attributes, this is equivalent to specifying ALL_ATTRIBUTES .
+        + `COUNT`: Returns the number of matching items, rather than the
+              matching items themselves.
+        + `SPECIFIC_ATTRIBUTES` : Returns only the attributes listed in
+              AttributesToGet . This is equivalent to specifying AttributesToGet
+              without specifying any value for Select . If you are querying an
+              index and request only attributes that are projected into that
+              index, the operation will read only the index and not the table. If
+              any of the requested attributes are not projected into the index,
+              Amazon DynamoDB will need to fetch each matching item from the
+              table. This extra fetching incurs additional throughput cost and
+              latency.
+
+
+        When neither Select nor AttributesToGet are specified, Amazon DynamoDB
+            defaults to `ALL_ATTRIBUTES` when accessing a table, and
+            `ALL_PROJECTED_ATTRIBUTES` when accessing an index. You cannot use
+            both Select and AttributesToGet together in a single request,
+            unless the value for Select is `SPECIFIC_ATTRIBUTES`. (This usage
+            is equivalent to specifying AttributesToGet without any value for
+            Select .)
+
+        :type scan_filter: map
+        :param scan_filter:
+        Evaluates the scan results and returns only the desired values.
+            Multiple conditions are treated as "AND" operations: all conditions
+            must be met to be included in the results.
+
+        Each ScanConditions element consists of an attribute name to compare,
+            along with the following:
+
+
+        + AttributeValueList - One or more values to evaluate against the
+              supplied attribute. This list contains exactly one value, except
+              for a `BETWEEN` or `IN` comparison, in which case the list contains
+              two values. For type Number, value comparisons are numeric. String
+              value comparisons for greater than, equals, or less than are based
+              on ASCII character code values. For example, `a` is greater than
+              `A`, and `aa` is greater than `B`. For a list of code values, see
+              `http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters`_.
+              For Binary, Amazon DynamoDB treats each byte of the binary data as
+              unsigned when it compares binary values, for example when
+              evaluating query expressions.
+        + ComparisonOperator - A comparator for evaluating attributes. For
+              example, equals, greater than, less than, etc. Valid comparison
+              operators for Scan: `EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL
+              | CONTAINS | NOT_CONTAINS | BEGINS_WITH | IN | BETWEEN` For
+              information on specifying data types in JSON, see `JSON Data
+              Format`_ in the Amazon DynamoDB Developer Guide . The following are
+              descriptions of each comparison operator.
+
+            + `EQ` : Equal. AttributeValueList can contain only one AttributeValue
+                  of type String, Number, or Binary (not a set). If an item contains
+                  an AttributeValue of a different type than the one specified in the
+                  request, the value does not match. For example, `{"S":"6"}` does
+                  not equal `{"N":"6"}`. Also, `{"N":"6"}` does not equal
+                  `{"NS":["6", "2", "1"]}`.
+            + `NE` : Not equal. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  equal `{"NS":["6", "2", "1"]}`.
+            + `LE` : Less than or equal. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `LT` : Less than. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `GE` : Greater than or equal. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `GT` : Greater than. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `NOT_NULL` : The attribute exists.
+            + `NULL` : The attribute does not exist.
+            + `CONTAINS` : checks for a subsequence, or value in a set.
+                  AttributeValueList can contain only one AttributeValue of type
+                  String, Number, or Binary (not a set). If the target attribute of
+                  the comparison is a String, then the operation checks for a
+                  substring match. If the target attribute of the comparison is
+                  Binary, then the operation looks for a subsequence of the target
+                  that matches the input. If the target attribute of the comparison
+                  is a set ("SS", "NS", or "BS"), then the operation checks for a
+                  member of the set (not as a substring).
+            + `NOT_CONTAINS` : checks for absence of a subsequence, or absence of a
+                  value in a set. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If
+                  the target attribute of the comparison is a String, then the
+                  operation checks for the absence of a substring match. If the
+                  target attribute of the comparison is Binary, then the operation
+                  checks for the absence of a subsequence of the target that matches
+                  the input. If the target attribute of the comparison is a set
+                  ("SS", "NS", or "BS"), then the operation checks for the absence of
+                  a member of the set (not as a substring).
+            + `BEGINS_WITH` : checks for a prefix. AttributeValueList can contain
+                  only one AttributeValue of type String or Binary (not a Number or a
+                  set). The target attribute of the comparison must be a String or
+                  Binary (not a Number or a set).
+            + `IN` : checks for exact matches. AttributeValueList can contain more
+                  than one AttributeValue of type String, Number, or Binary (not a
+                  set). The target attribute of the comparison must be of the same
+                  type and exact value to match. A String never matches a String set.
+            + `BETWEEN` : Greater than or equal to the first value, and less than
+                  or equal to the second value. AttributeValueList must contain two
+                  AttributeValue elements of the same type, either String, Number, or
+                  Binary (not a set). A target attribute matches if the target value
+                  is greater than, or equal to, the first element and less than, or
+                  equal to, the second element. If an item contains an AttributeValue
+                  of a different type than the one specified in the request, the
+                  value does not match. For example, `{"S":"6"}` does not compare to
+                  `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6",
+                  "2", "1"]}`
+
+        :type exclusive_start_key: map
+        :param exclusive_start_key: The primary key of the item from which to
+            continue an earlier operation. An earlier operation might provide
+            this value as the LastEvaluatedKey if that operation was
+            interrupted before completion; either because of the result set
+            size or because of the setting for Limit . The LastEvaluatedKey can
+            be passed back in a new request to continue the operation from that
+            point.
+        The data type for ExclusiveStartKey must be String, Number or Binary.
+            No set data types are allowed.
+
+        If you are performing a parallel scan, the value of ExclusiveStartKey
+            must fall into the key space of the Segment being scanned. For
+            example, suppose that there are two application threads scanning a
+            table using the following Scan parameters
+
+
+        + Thread 0: Segment =0; TotalSegments =2
+        + Thread 1: Segment =1; TotalSegments =2
+
+
+        Now suppose that the Scan request for Thread 0 completed and returned a
+            LastEvaluatedKey of "X". Because "X" is part of Segment 0's key
+            space, it cannot be used anywhere else in the table. If Thread 1
+            were to issue another Scan request with an ExclusiveStartKey of
+            "X", Amazon DynamoDB would throw an InputValidationError because
+            hash key "X" cannot be in Segment 1.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity: If set to `TOTAL`, the response
+            includes the capacity units consumed by the operation; if set to
+            `NONE` (the default), no consumed capacity data is returned.
+
+        :type total_segments: integer
+        :param total_segments: For parallel Scan requests, TotalSegments
+            represents the total number of segments for a table that is being
+            scanned. Segments are a way to logically divide a table into
+            equally sized portions, for the duration of the Scan request. The
+            value of TotalSegments corresponds to the number of application
+            "workers" (such as threads or processes) that will perform the
+            parallel Scan . For example, if you want to scan a table using four
+            application threads, you would specify a TotalSegments value of 4.
+        The value for TotalSegments must be greater than or equal to 1, and
+            less than or equal to 4096. If you specify a TotalSegments value of
+            1, the Scan will be sequential rather than parallel.
+
+        If you specify TotalSegments , you must also specify Segment .
+
+        :type segment: integer
+        :param segment: For parallel Scan requests, Segment identifies an
+            individual segment to be scanned by an application "worker" (such
+            as a thread or a process). Each worker issues a Scan request with a
+            distinct value for the segment it will scan.
+        Segment IDs are zero-based, so the first segment is always 0. For
+            example, if you want to scan a table using four application
+            threads, the first thread would specify a Segment value of 0, the
+            second thread would specify 1, and so on.
+
+        The value for Segment must be greater than or equal to 0, and less than
+            the value provided for TotalSegments .
+
+        If you specify Segment , you must also specify TotalSegments .
+
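+        Example (an illustrative sketch only; the table & attribute names
+            below are hypothetical). The second call shows one worker's share
+            of a two-segment parallel scan::
+
+            >>> conn = DynamoDBConnection()
+            >>> conn.scan(
+            ...     table_name='users',
+            ...     scan_filter={
+            ...         'date_joined': {
+            ...             'AttributeValueList': [{'N': '1366056668'}],
+            ...             'ComparisonOperator': 'GT',
+            ...         },
+            ...     },
+            ...     limit=25)
+            >>> conn.scan(table_name='users', segment=0, total_segments=2)
+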
+        """
+        params = {'TableName': table_name, }
+        if attributes_to_get is not None:
+            params['AttributesToGet'] = attributes_to_get
+        if limit is not None:
+            params['Limit'] = limit
+        if select is not None:
+            params['Select'] = select
+        if scan_filter is not None:
+            params['ScanFilter'] = scan_filter
+        if exclusive_start_key is not None:
+            params['ExclusiveStartKey'] = exclusive_start_key
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        if total_segments is not None:
+            params['TotalSegments'] = total_segments
+        if segment is not None:
+            params['Segment'] = segment
+        return self.make_request(action='Scan',
+                                 body=json.dumps(params))
+
+    def update_item(self, table_name, key, attribute_updates=None,
+                    expected=None, return_values=None,
+                    return_consumed_capacity=None,
+                    return_item_collection_metrics=None):
+        """
+        Edits an existing item's attributes, or inserts a new item if
+        it does not already exist. You can put, delete, or add
+        attribute values. You can also perform a conditional update
+        (insert a new attribute name-value pair if it doesn't exist,
+        or replace an existing name-value pair if it has certain
+        expected attribute values).
+
+        In addition to updating an item, you can also return the
+        item's attribute values in the same operation, using the
+        ReturnValues parameter.
+
+        :type table_name: string
+        :param table_name: The name of the table containing the item to update.
+
+        :type key: map
+        :param key: The primary key that defines the item. Each element
+            consists of an attribute name and a value for that attribute.
+
+        :type attribute_updates: map
+        :param attribute_updates: The names of attributes to be modified, the
+            action to perform on each, and the new value for each. If you are
+            updating an attribute that is an index key attribute for any
+            indexes on that table, the attribute type must match the index key
+            type defined in the AttributeDefinitions of the table description.
+            You can use UpdateItem to update any non-key attributes.
+        Attribute values cannot be null. String and binary type attributes must
+            have lengths greater than zero. Set type attributes must not be
+            empty. Requests with empty values will be rejected with a
+            ValidationException .
+
+        Each AttributeUpdates element consists of an attribute name to modify,
+            along with the following:
+
+
+        + Value - The new value, if applicable, for this attribute.
+        + Action - Specifies how to perform the update. Valid values for Action
+              are `PUT`, `DELETE`, and `ADD`. The behavior depends on whether the
+              specified primary key already exists in the table. **If an item
+              with the specified Key is found in the table:**
+
+            + `PUT` - Adds the specified attribute to the item. If the attribute
+                  already exists, it is replaced by the new value.
+            + `DELETE` - If no value is specified, the attribute and its value are
+                  removed from the item. The data type of the specified value must
+                  match the existing value's data type. If a set of values is
+                  specified, then those values are subtracted from the old set. For
+                  example, if the attribute value was the set `[a,b,c]` and the
+                  DELETE action specified `[a,c]`, then the final attribute value
+                  would be `[b]`. Specifying an empty set is an error.
+            + `ADD` - If the attribute does not already exist, then the attribute
+                  and its values are added to the item. If the attribute does exist,
+                  then the behavior of `ADD` depends on the data type of the
+                  attribute:
+
+                + If the existing attribute is a number, and if Value is also a number,
+                      then the Value is mathematically added to the existing attribute.
+                      If Value is a negative number, then it is subtracted from the
+                      existing attribute. If you use `ADD` to increment or decrement a
+                      number value for an item that doesn't exist before the update,
+                      Amazon DynamoDB uses 0 as the initial value. In addition, if you
+                      use `ADD` to update an existing item, and intend to increment or
+                      decrement an attribute value which does not yet exist, Amazon
+                      DynamoDB uses `0` as the initial value. For example, suppose that
+                      the item you want to update does not yet have an attribute named
+                      itemcount , but you decide to `ADD` the number `3` to this
+                      attribute anyway, even though it currently does not exist. Amazon
+                      DynamoDB will create the itemcount attribute, set its initial value
+                      to `0`, and finally add `3` to it. The result will be a new
+                      itemcount attribute in the item, with a value of `3`.
+                + If the existing data type is a set, and if the Value is also a set,
+                      then the Value is added to the existing set. (This is a set
+                      operation, not mathematical addition.) For example, if the
+                      attribute value was the set `[1,2]`, and the `ADD` action specified
+                      `[3]`, then the final attribute value would be `[1,2,3]`. An error
+                      occurs if an Add action is specified for a set attribute and the
+                      attribute type specified does not match the existing set type. Both
+                      sets must have the same primitive data type. For example, if the
+                      existing data type is a set of strings, the Value must also be a
+                      set of strings. The same holds true for number sets and binary
+                      sets.
+              This action is only valid for an existing attribute whose data type is
+                  number or is a set. Do not use `ADD` for any other data types.
+          **If no item with the specified Key is found:**
+
+            + `PUT` - Amazon DynamoDB creates a new item with the specified primary
+                  key, and then adds the attribute.
+            + `DELETE` - Nothing happens; there is no attribute to delete.
+            + `ADD` - Amazon DynamoDB creates an item with the supplied primary key
+                  and number (or set of numbers) for the attribute value. The only
+                  data types allowed are number and number set; no other data types
+                  can be specified.
+
+
+
+        If you specify any attributes that are part of an index key, then the
+            data types for those attributes must match those of the schema in
+            the table's attribute definition.
+
+        :type expected: map
+        :param expected: A map of attribute/condition pairs. This is the
+            conditional block for the UpdateItem operation. All the conditions
+            must be met for the operation to succeed.
+        Expected allows you to provide an attribute name, and whether or not
+            Amazon DynamoDB should check to see if the attribute value already
+            exists; or if the attribute value exists and has a particular value
+            before changing it.
+
+        Each item in Expected represents an attribute name for Amazon DynamoDB
+            to check, along with the following:
+
+
+        + Value - The attribute value for Amazon DynamoDB to check.
+        + Exists - Causes Amazon DynamoDB to evaluate the value before
+              attempting a conditional operation:
+
+            + If Exists is `True`, Amazon DynamoDB will check to see if that
+                  attribute value already exists in the table. If it is found, then
+                  the operation succeeds. If it is not found, the operation fails
+                  with a ConditionalCheckFailedException .
+            + If Exists is `False`, Amazon DynamoDB assumes that the attribute
+                  value does not exist in the table. If in fact the value does not
+                  exist, then the assumption is valid and the operation succeeds. If
+                  the value is found, despite the assumption that it does not exist,
+                  the operation fails with a ConditionalCheckFailedException .
+          The default setting for Exists is `True`. If you supply a Value all by
+              itself, Amazon DynamoDB assumes the attribute exists: You don't
+              have to set Exists to `True`, because it is implied. Amazon
+              DynamoDB returns a ValidationException if:
+
+            + Exists is `True` but there is no Value to check. (You expect a value
+                  to exist, but don't specify what that value is.)
+            + Exists is `False` but you also specify a Value . (You cannot expect
+                  an attribute to have a value, while also expecting it not to
+                  exist.)
+
+
+
+        If you specify more than one condition for Exists , then all of the
+            conditions must evaluate to true. (In other words, the conditions
+            are ANDed together.) Otherwise, the conditional operation will
+            fail.
+
+        :type return_values: string
+        :param return_values:
+        Use ReturnValues if you want to get the item attributes as they
+            appeared either before or after they were updated. For UpdateItem ,
+            the valid values are:
+
+
+        + `NONE` - If ReturnValues is not specified, or if its value is `NONE`,
+              then nothing is returned. (This is the default for ReturnValues .)
+        + `ALL_OLD` - If UpdateItem overwrote an attribute name-value pair,
+              then the content of the old item is returned.
+        + `UPDATED_OLD` - The old versions of only the updated attributes are
+              returned.
+        + `ALL_NEW` - All of the attributes of the new version of the item are
+              returned.
+        + `UPDATED_NEW` - The new versions of only the updated attributes are
+              returned.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity: If set to `TOTAL`, the response
+            includes the capacity units consumed by the operation; if set to
+            `NONE` (the default), no consumed capacity data is returned.
+
+        :type return_item_collection_metrics: string
+        :param return_item_collection_metrics: If set to `SIZE`, statistics
+            about item collections, if any, that were modified during the
+            operation are returned in the response. If set to `NONE` (the
+            default), no statistics are returned.
+
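+        Example (an illustrative sketch only; the table, key & attribute
+            names below are hypothetical). This increments a numeric `visits`
+            attribute and overwrites `last_seen`::
+
+            >>> conn = DynamoDBConnection()
+            >>> conn.update_item(
+            ...     table_name='users',
+            ...     key={'username': {'S': 'johndoe'}},
+            ...     attribute_updates={
+            ...         'visits': {'Action': 'ADD', 'Value': {'N': '1'}},
+            ...         'last_seen': {'Action': 'PUT',
+            ...                       'Value': {'S': '2013-05-25'}},
+            ...     },
+            ...     return_values='UPDATED_NEW')
+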
+        """
+        params = {'TableName': table_name, 'Key': key, }
+        if attribute_updates is not None:
+            params['AttributeUpdates'] = attribute_updates
+        if expected is not None:
+            params['Expected'] = expected
+        if return_values is not None:
+            params['ReturnValues'] = return_values
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        if return_item_collection_metrics is not None:
+            params['ReturnItemCollectionMetrics'] = return_item_collection_metrics
+        return self.make_request(action='UpdateItem',
+                                 body=json.dumps(params))
+
+    def update_table(self, table_name, provisioned_throughput):
+        """
+        Updates the provisioned throughput for the given table.
+        Setting the throughput for a table helps you manage
+        performance and is part of the provisioned throughput feature
+        of Amazon DynamoDB.
+
+        The provisioned throughput values can be upgraded or
+        downgraded based on the maximums and minimums listed in the
+        `Limits`_ section in the Amazon DynamoDB Developer Guide .
+
+        The table must be in the `ACTIVE` state for this operation to
+        succeed. UpdateTable is an asynchronous operation; while
+        executing the operation, the table is in the `UPDATING` state.
+        While the table is in the `UPDATING` state, the table still
+        has the provisioned throughput from before the call. The new
+        provisioned throughput setting is in effect only when the
+        table returns to the `ACTIVE` state after the UpdateTable
+        operation.
+
+        You cannot add, modify or delete local secondary indexes using
+        UpdateTable . Local secondary indexes can only be defined at
+        table creation time.
+
+        :type table_name: string
+        :param table_name: The name of the table to be updated.
+
+        :type provisioned_throughput: dict
+        :param provisioned_throughput: The new provisioned throughput settings
+            for the table, given as a dict with `ReadCapacityUnits` and
+            `WriteCapacityUnits` keys.
+
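+        Example (an illustrative sketch only; the table name & capacity
+            values below are hypothetical)::
+
+            >>> conn = DynamoDBConnection()
+            >>> conn.update_table(
+            ...     table_name='users',
+            ...     provisioned_throughput={'ReadCapacityUnits': 20,
+            ...                             'WriteCapacityUnits': 10})
+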
+        """
+        params = {
+            'TableName': table_name,
+            'ProvisionedThroughput': provisioned_throughput,
+        }
+        return self.make_request(action='UpdateTable',
+                                 body=json.dumps(params))
+
+    def make_request(self, action, body):
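+        # Every DynamoDB operation is a POST to '/', with the operation name
+        # carried in the X-Amz-Target header and the parameters JSON-encoded
+        # in the request body.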
+        headers = {
+            'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action),
+            'Host': self.region.endpoint,
+            'Content-Type': 'application/x-amz-json-1.0',
+            'Content-Length': str(len(body)),
+        }
+        http_request = self.build_base_http_request(
+            method='POST', path='/', auth_path='/', params={},
+            headers=headers, data=body)
+        response = self._mexe(http_request, sender=None,
+                              override_num_retries=self.NumberRetries,
+                              retry_handler=self._retry_handler)
+        response_body = response.read()
+        boto.log.debug(response_body)
+        if response.status == 200:
+            if response_body:
+                return json.loads(response_body)
+        else:
+            json_body = json.loads(response_body)
+            fault_name = json_body.get('__type', None)
+            exception_class = self._faults.get(fault_name, self.ResponseError)
+            raise exception_class(response.status, response.reason,
+                                  body=json_body)
+
+    def _retry_handler(self, response, i, next_sleep):
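+        # Passed to _mexe() as the retry handler: returns a
+        # (msg, attempts, next_sleep) tuple when the request should be
+        # retried, None to accept the response, and raises immediately for
+        # non-retryable client errors.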
+        status = None
+        if response.status == 400:
+            response_body = response.read()
+            boto.log.debug(response_body)
+            data = json.loads(response_body)
+            if 'ProvisionedThroughputExceededException' in data.get('__type', ''):
+                self.throughput_exceeded_events += 1
+                msg = "%s, retry attempt %s" % (
+                    'ProvisionedThroughputExceededException',
+                    i
+                )
+                next_sleep = self._exponential_time(i)
+                i += 1
+                status = (msg, i, next_sleep)
+                if i == self.NumberRetries:
+                    # If this was our last retry attempt, raise
+                    # a specific error saying that the throughput
+                    # was exceeded.
+                    raise exceptions.ProvisionedThroughputExceededException(
+                        response.status, response.reason, data)
+            elif 'ConditionalCheckFailedException' in data.get('__type', ''):
+                raise exceptions.ConditionalCheckFailedException(
+                    response.status, response.reason, data)
+            elif 'ValidationException' in data.get('__type', ''):
+                raise exceptions.ValidationException(
+                    response.status, response.reason, data)
+            else:
+                raise self.ResponseError(response.status, response.reason,
+                                         data)
+        expected_crc32 = response.getheader('x-amz-crc32')
+        if self._validate_checksums and expected_crc32 is not None:
+            # Read the body only once; a second read() would return ''.
+            response_body = response.read()
+            boto.log.debug('Validating crc32 checksum for body: %s',
+                           response_body)
+            actual_crc32 = crc32(response_body) & 0xffffffff
+            expected_crc32 = int(expected_crc32)
+            if actual_crc32 != expected_crc32:
+                msg = ("The calculated checksum %s did not match the expected "
+                       "checksum %s" % (actual_crc32, expected_crc32))
+                status = (msg, i + 1, self._exponential_time(i))
+        return status
+
+    def _exponential_time(self, i):
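+        # Exponential backoff: no delay for the first attempt, then
+        # 0.05 * 2 ** i seconds (0.1s, 0.2s, 0.4s, ...) for later retries.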
+        if i == 0:
+            next_sleep = 0
+        else:
+            next_sleep = 0.05 * (2 ** i)
+        return next_sleep
diff --git a/boto/dynamodb2/results.py b/boto/dynamodb2/results.py
new file mode 100644
index 0000000..bcd855c
--- /dev/null
+++ b/boto/dynamodb2/results.py
@@ -0,0 +1,160 @@
+class ResultSet(object):
+    """
+    A class used to lazily handle page-to-page navigation through a set of
+    results.
+
+    It presents a transparent iterator interface, so that all the user has
+    to do is use it in a typical ``for`` loop (or list comprehension, etc.)
+    to fetch results, even if they weren't present in the current page of
+    results.
+
+    This is used by the ``Table.query`` & ``Table.scan`` methods.
+
+    Example::
+
+        >>> users = Table('users')
+        >>> results = ResultSet()
+        >>> results.to_call(users.query, username__gte='johndoe')
+        # Now iterate. When it runs out of results, it'll fetch the next page.
+        >>> for res in results:
+        ...     print res['username']
+
+    """
+    def __init__(self):
+        super(ResultSet, self).__init__()
+        self.the_callable = None
+        self.call_args = []
+        self.call_kwargs = {}
+        self._results = []
+        self._offset = -1
+        self._results_left = True
+        self._last_key_seen = None
+
+    @property
+    def first_key(self):
+        return 'exclusive_start_key'
+
+    def _reset(self):
+        """
+        Resets the internal state of the ``ResultSet``.
+
+        This prevents results from being cached long-term & consuming
+        excess memory.
+
+        Largely internal.
+        """
+        self._results = []
+        self._offset = 0
+
+    def __iter__(self):
+        return self
+
+    def next(self):
+        self._offset += 1
+
+        if self._offset >= len(self._results):
+            if self._results_left is False:
+                raise StopIteration()
+
+            self.fetch_more()
+
+        if self._offset < len(self._results):
+            return self._results[self._offset]
+        else:
+            raise StopIteration()
+
+    def to_call(self, the_callable, *args, **kwargs):
+        """
+        Sets up the callable & any arguments to run it with.
+
+        This is stored for subsequent calls so that those queries can be
+        run without requiring user intervention.
+
+        Example::
+
+            # Just an example callable.
+            >>> def squares_to(y):
+            ...     for x in range(1, y):
+            ...         yield x**2
+            >>> rs = ResultSet()
+            # Set up what to call & arguments.
+            >>> rs.to_call(squares_to, y=3)
+
+        """
+        if not callable(the_callable):
+            raise ValueError(
+                'You must supply an object or function to be called.'
+            )
+
+        self.the_callable = the_callable
+        self.call_args = args
+        self.call_kwargs = kwargs
+
+    def fetch_more(self):
+        """
+        When the iterator runs out of results, this method is run to re-execute
+        the callable (& arguments) to fetch the next page.
+
+        Largely internal.
+        """
+        self._reset()
+
+        args = self.call_args[:]
+        kwargs = self.call_kwargs.copy()
+
+        if self._last_key_seen is not None:
+            kwargs[self.first_key] = self._last_key_seen
+
+        results = self.the_callable(*args, **kwargs)
+
+        if not len(results.get('results', [])):
+            self._results_left = False
+            return
+
+        self._results.extend(results['results'])
+        self._last_key_seen = results.get('last_key', None)
+
+        if self._last_key_seen is None:
+            self._results_left = False
+
+        # Decrease the limit, if it's present.
+        if self.call_kwargs.get('limit'):
+            self.call_kwargs['limit'] -= len(results['results'])
+
+
+class BatchGetResultSet(ResultSet):
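+    """
+    A ``ResultSet`` subclass for batch get calls. Each ``fetch_more`` feeds
+    the stored callable at most ``max_batch_get`` keys & re-queues any keys
+    that DynamoDB returns as unprocessed.
+    """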
+    def __init__(self, *args, **kwargs):
+        self._keys_left = kwargs.pop('keys', [])
+        self._max_batch_get = kwargs.pop('max_batch_get', 100)
+        super(BatchGetResultSet, self).__init__(*args, **kwargs)
+
+    def fetch_more(self):
+        self._reset()
+
+        args = self.call_args[:]
+        kwargs = self.call_kwargs.copy()
+
+        # Slice off the max we can fetch.
+        kwargs['keys'] = self._keys_left[:self._max_batch_get]
+        self._keys_left = self._keys_left[self._max_batch_get:]
+
+        results = self.the_callable(*args, **kwargs)
+
+        if not len(results.get('results', [])):
+            self._results_left = False
+            return
+
+        self._results.extend(results['results'])
+
+        for offset, key_data in enumerate(results.get('unprocessed_keys', [])):
+            # We've got an unprocessed key. Reinsert it into the list.
+            # DynamoDB only returns valid keys, so there should be no risk of
+            # missing keys ever making it here.
+            self._keys_left.insert(offset, key_data)
+
+        if len(self._keys_left) <= 0:
+            self._results_left = False
+
+        # Decrease the limit, if it's present.
+        if self.call_kwargs.get('limit'):
+            self.call_kwargs['limit'] -= len(results['results'])
diff --git a/boto/dynamodb2/table.py b/boto/dynamodb2/table.py
new file mode 100644
index 0000000..36f918e
--- /dev/null
+++ b/boto/dynamodb2/table.py
@@ -0,0 +1,1061 @@
+from boto.dynamodb2 import exceptions
+from boto.dynamodb2.fields import (HashKey, RangeKey,
+                                   AllIndex, KeysOnlyIndex, IncludeIndex)
+from boto.dynamodb2.items import Item
+from boto.dynamodb2.layer1 import DynamoDBConnection
+from boto.dynamodb2.results import ResultSet, BatchGetResultSet
+from boto.dynamodb2.types import Dynamizer, FILTER_OPERATORS, QUERY_OPERATORS
+
+
+class Table(object):
+    """
+    Interacts & models the behavior of a DynamoDB table.
+
+    The ``Table`` object represents a set (or rough categorization) of
+    records within DynamoDB. The important part is that all records within the
+    table, while largely-schema-free, share the same schema & are essentially
+    namespaced for use in your application. For example, you might have a
+    ``users`` table or a ``forums`` table.
+    """
+    max_batch_get = 100
+
+    def __init__(self, table_name, schema=None, throughput=None, indexes=None,
+                 connection=None):
+        """
+        Sets up a new in-memory ``Table``.
+
+        This is useful if the table already exists within DynamoDB & you simply
+        want to use it for additional interactions. The only required parameter
+        is the ``table_name``. However, under the hood, the object will call
+        ``describe_table`` to determine the schema/indexes/throughput. You
+        can avoid this extra call by passing in ``schema`` & ``indexes``.
+
+        **IMPORTANT** - If you're creating a new ``Table`` for the first time,
+        you should use the ``Table.create`` method instead, as it will
+        persist the table structure to DynamoDB.
+
+        Requires a ``table_name`` parameter, which should be a simple string
+        of the name of the table.
+
+        Optionally accepts a ``schema`` parameter, which should be a list of
+        ``BaseSchemaField`` subclasses representing the desired schema.
+
+        Optionally accepts a ``throughput`` parameter, which should be a
+        dictionary. If provided, it should specify a ``read`` & ``write`` key,
+        both of which should have an integer value associated with them.
+
+        Optionally accepts an ``indexes`` parameter, which should be a list of
+        ``BaseIndexField`` subclasses representing the desired indexes.
+
+        Optionally accepts a ``connection`` parameter, which should be a
+        ``DynamoDBConnection`` instance (or subclass). This is primarily useful
+        for specifying alternate connection parameters.
+
+        Example::
+
+            # The simple, it-already-exists case.
+            >>> users = Table('users')
+
+            # The full, minimum-extra-calls case.
+            >>> from boto.dynamodb2.layer1 import DynamoDBConnection
+            >>> users = Table('users', schema=[
+            ...     HashKey('username'),
+            ...     RangeKey('date_joined', data_type=NUMBER)
+            ... ], throughput={
+            ...     'read': 20,
+            ...     'write': 10,
+            ... }, indexes=[
+            ...     KeysOnlyIndex('MostRecentlyJoined', parts=[
+            ...         RangeKey('date_joined')
+            ...     ]),
+            ... ],
+            ... connection=DynamoDBConnection(
+            ...     aws_access_key_id='key',
+            ...     aws_secret_access_key='key',
+            ...     region='us-west-2'
+            ... ))
+
+        """
+        self.table_name = table_name
+        self.connection = connection
+        self.throughput = {
+            'read': 5,
+            'write': 5,
+        }
+        self.schema = schema
+        self.indexes = indexes
+
+        if self.connection is None:
+            self.connection = DynamoDBConnection()
+
+        if throughput is not None:
+            self.throughput = throughput
+
+        self._dynamizer = Dynamizer()
+
+    @classmethod
+    def create(cls, table_name, schema, throughput=None, indexes=None,
+               connection=None):
+        """
+        Creates a new table in DynamoDB & returns an in-memory ``Table`` object.
+
+        This will set up a brand new table within DynamoDB. The ``table_name``
+        must be unique for your AWS account. The ``schema`` is also required
+        to define the key structure of the table.
+
+        **IMPORTANT** - You should consider the usage pattern of your table
+        up-front, as the schema & indexes can **NOT** be modified once the
+        table is created, requiring the creation of a new table & migrating
+        the data should you wish to revise it.
+
+        **IMPORTANT** - If the table already exists in DynamoDB, additional
+        calls to this method will result in an error. If you just need
+        a ``Table`` object to interact with the existing table, you should
+        just initialize a new ``Table`` object, which requires only the
+        ``table_name``.
+
+        Requires a ``table_name`` parameter, which should be a simple string
+        of the name of the table.
+
+        Requires a ``schema`` parameter, which should be a list of
+        ``BaseSchemaField`` subclasses representing the desired schema.
+
+        Optionally accepts a ``throughput`` parameter, which should be a
+        dictionary. If provided, it should specify a ``read`` & ``write`` key,
+        both of which should have an integer value associated with them.
+
+        Optionally accepts an ``indexes`` parameter, which should be a list of
+        ``BaseIndexField`` subclasses representing the desired indexes.
+
+        Optionally accepts a ``connection`` parameter, which should be a
+        ``DynamoDBConnection`` instance (or subclass). This is primarily useful
+        for specifying alternate connection parameters.
+
+        Example::
+
+            >>> users = Table.create('users', schema=[
+            ...     HashKey('username'),
+            ...     RangeKey('date_joined', data_type=NUMBER)
+            ... ], throughput={
+            ...     'read': 20,
+            ...     'write': 10,
+            ... }, indexes=[
+            ...     KeysOnlyIndex('MostRecentlyJoined', parts=[
+            ...         RangeKey('date_joined')
+            ...     ]),
+            ... ])
+
+        """
+        table = cls(table_name=table_name, connection=connection)
+        table.schema = schema
+
+        if throughput is not None:
+            table.throughput = throughput
+
+        if indexes is not None:
+            table.indexes = indexes
+
+        # Prep the schema.
+        raw_schema = []
+        attr_defs = []
+
+        for field in table.schema:
+            raw_schema.append(field.schema())
+            # Build the attributes off what we know.
+            attr_defs.append(field.definition())
+
+        raw_throughput = {
+            'ReadCapacityUnits': int(table.throughput['read']),
+            'WriteCapacityUnits': int(table.throughput['write']),
+        }
+        kwargs = {}
+
+        if table.indexes:
+            # Prep the LSIs.
+            raw_lsi = []
+
+            for index_field in table.indexes:
+                raw_lsi.append(index_field.schema())
+                # Again, build the attributes off what we know.
+                # HOWEVER, only add attributes *NOT* already seen.
+                attr_define = index_field.definition()
+
+                for part in attr_define:
+                    attr_names = [attr['AttributeName'] for attr in attr_defs]
+
+                    if not part['AttributeName'] in attr_names:
+                        attr_defs.append(part)
+
+            kwargs['local_secondary_indexes'] = raw_lsi
+
+        table.connection.create_table(
+            table_name=table.table_name,
+            attribute_definitions=attr_defs,
+            key_schema=raw_schema,
+            provisioned_throughput=raw_throughput,
+            **kwargs
+        )
+        return table
+
+    def _introspect_schema(self, raw_schema):
+        """
+        Given a raw schema structure back from a DynamoDB response, parse
+        out & build the high-level Python objects that represent them.
+        """
+        schema = []
+
+        for field in raw_schema:
+            if field['KeyType'] == 'HASH':
+                schema.append(HashKey(field['AttributeName']))
+            elif field['KeyType'] == 'RANGE':
+                schema.append(RangeKey(field['AttributeName']))
+            else:
+                raise exceptions.UnknownSchemaFieldError(
+                    "%s was seen, but is unknown. Please report this at "
+                    "https://github.com/boto/boto/issues." % field['KeyType']
+                )
+
+        return schema
+
+    def _introspect_indexes(self, raw_indexes):
+        """
+        Given a raw index structure back from a DynamoDB response, parse
+        out & build the high-level Python objects that represent them.
+        """
+        indexes = []
+
+        for field in raw_indexes:
+            index_klass = AllIndex
+            kwargs = {
+                'parts': []
+            }
+
+            if field['Projection']['ProjectionType'] == 'ALL':
+                index_klass = AllIndex
+            elif field['Projection']['ProjectionType'] == 'KEYS_ONLY':
+                index_klass = KeysOnlyIndex
+            elif field['Projection']['ProjectionType'] == 'INCLUDE':
+                index_klass = IncludeIndex
+                kwargs['includes'] = field['Projection']['NonKeyAttributes']
+            else:
+                raise exceptions.UnknownIndexFieldError(
+                    "%s was seen, but is unknown. Please report this at "
+                    "https://github.com/boto/boto/issues." % \
+                    field['Projection']['ProjectionType']
+                )
+
+            name = field['IndexName']
+            kwargs['parts'] = self._introspect_schema(field['KeySchema'])
+            indexes.append(index_klass(name, **kwargs))
+
+        return indexes
+
+    def describe(self):
+        """
+        Describes the current structure of the table in DynamoDB.
+
+        This information will be used to update the ``schema``, ``indexes``
+        and ``throughput`` information on the ``Table``. Some calls, such as
+        those involving creating keys or querying, will require this
+        information to be populated.
+
+        It also returns the full raw datastructure from DynamoDB, in the
+        event you'd like to parse out additional information (such as the
+        ``ItemCount`` or usage information).
+
+        Example::
+
+            >>> users.describe()
+            {
+                # Lots of keys here...
+            }
+            >>> len(users.schema)
+            2
+
+        """
+        result = self.connection.describe_table(self.table_name)
+
+        # Blindly update throughput, since what's on DynamoDB's end is likely
+        # more correct.
+        raw_throughput = result['Table']['ProvisionedThroughput']
+        self.throughput['read'] = int(raw_throughput['ReadCapacityUnits'])
+        self.throughput['write'] = int(raw_throughput['WriteCapacityUnits'])
+
+        if not self.schema:
+            # Since we have the data, build the schema.
+            raw_schema = result['Table'].get('KeySchema', [])
+            self.schema = self._introspect_schema(raw_schema)
+
+        if not self.indexes:
+            # Build the index information as well.
+            raw_indexes = result['Table'].get('LocalSecondaryIndexes', [])
+            self.indexes = self._introspect_indexes(raw_indexes)
+
+        # This is leaky.
+        return result
+
+    def update(self, throughput):
+        """
+        Updates table attributes in DynamoDB.
+
+        Currently, the only thing you can modify about a table after it has
+        been created is the throughput.
+
+        Requires a ``throughput`` parameter, which should be a
+        dictionary. If provided, it should specify a ``read`` & ``write`` key,
+        both of which should have an integer value associated with them.
+
+        Returns ``True`` on success.
+
+        Example::
+
+            # For a read-heavier application...
+            >>> users.update(throughput={
+            ...     'read': 20,
+            ...     'write': 10,
+            ... })
+            True
+
+        """
+        self.throughput = throughput
+        self.connection.update_table(self.table_name, {
+            'ReadCapacityUnits': int(self.throughput['read']),
+            'WriteCapacityUnits': int(self.throughput['write']),
+        })
+        return True
+
+    def delete(self):
+        """
+        Deletes a table in DynamoDB.
+
+        **IMPORTANT** - Be careful when using this method, there is no undo.
+
+        Returns ``True`` on success.
+
+        Example::
+
+            >>> users.delete()
+            True
+
+        """
+        self.connection.delete_table(self.table_name)
+        return True
+
+    def _encode_keys(self, keys):
+        """
+        Given a flat Python dictionary of keys/values, converts it into the
+        nested dictionary DynamoDB expects.
+
+        Converts::
+
+            {
+                'username': 'john',
+                'tags': [1, 2, 5],
+            }
+
+        ...to...::
+
+            {
+                'username': {'S': 'john'},
+                'tags': {'NS': ['1', '2', '5']},
+            }
+
+        """
+        raw_key = {}
+
+        for key, value in keys.items():
+            raw_key[key] = self._dynamizer.encode(value)
+
+        return raw_key
+
+    def get_item(self, consistent=False, **kwargs):
+        """
+        Fetches an item (record) from a table in DynamoDB.
+
+        To specify the key of the item you'd like to get, you can specify the
+        key attributes as kwargs.
+
+        Optionally accepts a ``consistent`` parameter, which should be a
+        boolean. If you provide ``True``, it will perform
+        a consistent (but more expensive) read from DynamoDB.
+        (Default: ``False``)
+
+        Returns an ``Item`` instance containing all the data for that record.
+
+        Example::
+
+            # A simple hash key.
+            >>> john = users.get_item(username='johndoe')
+            >>> john['first_name']
+            'John'
+
+            # A complex hash+range key.
+            >>> john = users.get_item(username='johndoe', last_name='Doe')
+            >>> john['first_name']
+            'John'
+
+            # A consistent read (assuming the data might have just changed).
+            >>> john = users.get_item(username='johndoe', consistent=True)
+            >>> john['first_name']
+            'Johann'
+
+            # With a key that is an invalid variable name in Python.
+            # Also, assumes a different schema than previous examples.
+            >>> john = users.get_item(**{
+            ...     'date-joined': 127549192,
+            ... })
+            >>> john['first_name']
+            'John'
+
+        """
+        raw_key = self._encode_keys(kwargs)
+        item_data = self.connection.get_item(
+            self.table_name,
+            raw_key,
+            consistent_read=consistent
+        )
+        item = Item(self)
+        item.load(item_data)
+        return item
+
+    def put_item(self, data, overwrite=False):
+        """
+        Saves an entire item to DynamoDB.
+
+        By default, if any part of the ``Item``'s original data doesn't match
+        what's currently in DynamoDB, this request will fail. This prevents
+        other processes from updating the data in between when you read the
+        item & when your request to update the item's data is processed, which
+        would typically result in some data loss.
+
+        Requires a ``data`` parameter, which should be a dictionary of the data
+        you'd like to store in DynamoDB.
+
+        Optionally accepts an ``overwrite`` parameter, which should be a
+        boolean. If you provide ``True``, this will tell DynamoDB to blindly
+        overwrite whatever data is present, if any.
+
+        Returns ``True`` on success.
+
+        Example::
+
+            >>> users.put_item(data={
+            ...     'username': 'jane',
+            ...     'first_name': 'Jane',
+            ...     'last_name': 'Doe',
+            ...     'date_joined': 126478915,
+            ... })
+            True
+
+        """
+        item = Item(self, data=data)
+        return item.save(overwrite=overwrite)
+
+    def _put_item(self, item_data, expects=None):
+        """
+        The internal variant of ``put_item`` (full data). This is used by the
+        ``Item`` objects, since that operation is represented at the
+        table-level by the API, but conceptually maps better to telling an
+        individual ``Item`` to save itself.
+        """
+        kwargs = {}
+
+        if expects is not None:
+            kwargs['expected'] = expects
+
+        self.connection.put_item(self.table_name, item_data, **kwargs)
+        return True
+
+    def _update_item(self, key, item_data, expects=None):
+        """
+        The internal variant of ``put_item`` (partial data). This is used by the
+        ``Item`` objects, since that operation is represented at the
+        table-level by the API, but conceptually maps better to telling an
+        individual ``Item`` to save itself.
+        """
+        raw_key = self._encode_keys(key)
+        kwargs = {}
+
+        if expects is not None:
+            kwargs['expected'] = expects
+
+        self.connection.update_item(self.table_name, raw_key, item_data, **kwargs)
+        return True
+
+    def delete_item(self, **kwargs):
+        """
+        Deletes an item in DynamoDB.
+
+        **IMPORTANT** - Be careful when using this method, there is no undo.
+
+        Specify the key of the item you'd like to delete by passing the key
+        attributes as kwargs.
+
+        Returns ``True`` on success.
+
+        Example::
+
+            # A simple hash key.
+            >>> users.delete_item(username='johndoe')
+            True
+
+            # A complex hash+range key.
+            >>> users.delete_item(username='jane', last_name='Doe')
+            True
+
+            # With a key that is an invalid variable name in Python.
+            # Also, assumes a different schema than previous examples.
+            >>> users.delete_item(**{
+            ...     'date-joined': 127549192,
+            ... })
+            True
+
+        """
+        raw_key = self._encode_keys(kwargs)
+        self.connection.delete_item(self.table_name, raw_key)
+        return True
+
+    def get_key_fields(self):
+        """
+        Returns the fields necessary to make a key for a table.
+
+        If the ``Table`` does not already have a populated ``schema``,
+        this will request it via a ``Table.describe`` call.
+
+        Returns a list of fieldnames (strings).
+
+        Example::
+
+            # A simple hash key.
+            >>> users.get_key_fields()
+            ['username']
+
+            # A complex hash+range key.
+            >>> users.get_key_fields()
+            ['username', 'last_name']
+
+        """
+        if not self.schema:
+            # We don't know the structure of the table. Get a description to
+            # populate the schema.
+            self.describe()
+
+        return [field.name for field in self.schema]
+
+    def batch_write(self):
+        """
+        Allows the batching of writes to DynamoDB.
+
+        Since each write/delete call to DynamoDB has a cost associated with it,
+        when loading lots of data, it makes sense to batch them, creating as
+        few calls as possible.
+
+        This returns a context manager that will transparently handle creating
+        these batches. The object you get back lightly-resembles a ``Table``
+        object, sharing just the ``put_item`` & ``delete_item`` methods
+        (which are all that DynamoDB can batch in terms of writing data).
+
+        DynamoDB's maximum batch size is 25 items per request. If you attempt
+        to put/delete more than that, the context manager will batch as many
+        as it can up to that number, then flush them to DynamoDB & continue
+        batching as more calls come in.
+
+        Example::
+
+            # Assuming a table with one record...
+            >>> with users.batch_write() as batch:
+            ...     batch.put_item(data={
+            ...         'username': 'johndoe',
+            ...         'first_name': 'John',
+            ...         'last_name': 'Doe',
+            ...         'owner': 1,
+            ...     })
+            ...     # Nothing across the wire yet.
+            ...     batch.delete_item(username='bob')
+            ...     # Still no requests sent.
+            ...     batch.put_item(data={
+            ...         'username': 'jane',
+            ...         'first_name': 'Jane',
+            ...         'last_name': 'Doe',
+            ...         'date_joined': 127436192,
+            ...     })
+            ...     # Nothing yet, but once we leave the context, the
+            ...     # put/deletes will be sent.
+
+        """
+        # PHENOMENAL COSMIC DOCS!!! itty-bitty code.
+        return BatchTable(self)
+
+    def _build_filters(self, filter_kwargs, using=QUERY_OPERATORS):
+        """
+        An internal method for taking query/scan-style ``**kwargs`` & turning
+        them into the raw structure DynamoDB expects for filtering.
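+
+        Example (illustrative; exact encoded values come from ``Dynamizer``)::
+
+            >>> users._build_filters({'username__eq': 'johndoe'})
+            {
+                'username': {
+                    'AttributeValueList': [{'S': 'johndoe'}],
+                    'ComparisonOperator': 'EQ',
+                }
+            }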
+        """
+        filters = {}
+
+        for field_and_op, value in filter_kwargs.items():
+            field_bits = field_and_op.split('__')
+            fieldname = '__'.join(field_bits[:-1])
+
+            try:
+                op = using[field_bits[-1]]
+            except KeyError:
+                raise exceptions.UnknownFilterTypeError(
+                    "Operator '%s' from '%s' is not recognized." % (
+                        field_bits[-1],
+                        field_and_op
+                    )
+                )
+
+            lookup = {
+                'AttributeValueList': [],
+                'ComparisonOperator': op,
+            }
+
+            # Special-case the ``NULL/NOT_NULL`` case.
+            if field_bits[-1] == 'null':
+                del lookup['AttributeValueList']
+
+                if value is False:
+                    lookup['ComparisonOperator'] = 'NOT_NULL'
+                else:
+                    lookup['ComparisonOperator'] = 'NULL'
+            # Special-case the ``BETWEEN`` case.
+            elif field_bits[-1] == 'between':
+                if len(value) == 2 and isinstance(value, (list, tuple)):
+                    lookup['AttributeValueList'].append(
+                        self._dynamizer.encode(value[0])
+                    )
+                    lookup['AttributeValueList'].append(
+                        self._dynamizer.encode(value[1])
+                    )
+            else:
+                # Fix up the value for encoding, because it was built to only work
+                # with ``set``s.
+                if isinstance(value, (list, tuple)):
+                    value = set(value)
+                lookup['AttributeValueList'].append(
+                    self._dynamizer.encode(value)
+                )
+
+            # Finally, insert it into the filters.
+            filters[fieldname] = lookup
+
+        return filters
+
+    def query(self, limit=None, index=None, reverse=False, consistent=False,
+              **filter_kwargs):
+        """
+        Queries for a set of matching items in a DynamoDB table.
+
+        Queries can be performed against a hash key, a hash+range key or
+        against any data stored in your local secondary indexes.
+
+        **Note** - You can not query against arbitrary fields within the data
+        stored in DynamoDB.
+
+        To specify the filters of the items you'd like to get, you can specify
+        the filters as kwargs. Each filter kwarg should follow the pattern
+        ``<fieldname>__<filter_operation>=<value_to_look_for>``.
+
+        Optionally accepts a ``limit`` parameter, which should be an integer
+        count of the total number of items to return. (Default: ``None`` -
+        all results)
+
+        Optionally accepts an ``index`` parameter, which should be a string of
+        name of the local secondary index you want to query against.
+        (Default: ``None``)
+
+        Optionally accepts a ``reverse`` parameter, which will present the
+        results in reverse order. (Default: ``False`` - normal order)
+
+        Optionally accepts a ``consistent`` parameter, which should be a
+        boolean. If you provide ``True``, it will force a consistent read of
+        the data (more expensive). (Default: ``False`` - use eventually
+        consistent reads)
+
+        Returns a ``ResultSet``, which transparently handles the pagination of
+        results you get back.
+
+        Example::
+
+            # Look for last names equal to "Doe".
+            >>> results = users.query(last_name__eq='Doe')
+            >>> for res in results:
+            ...     print res['first_name']
+            'John'
+            'Jane'
+
+            # Look for last names beginning with "D", in reverse order, limit 3.
+            >>> results = users.query(
+            ...     last_name__beginswith='D',
+            ...     reverse=True,
+            ...     limit=3
+            ... )
+            >>> for res in results:
+            ...     print res['first_name']
+            'Alice'
+            'Jane'
+            'John'
+
+            # Use an LSI & a consistent read.
+            >>> results = users.query(
+            ...     date_joined__gte=1236451000,
+            ...     owner__eq=1,
+            ...     index='DateJoinedIndex',
+            ...     consistent=True
+            ... )
+            >>> for res in results:
+            ...     print res['first_name']
+            'Alice'
+            'Bob'
+            'John'
+            'Fred'
+
+        """
+        if self.schema:
+            if len(self.schema) == 1 and len(filter_kwargs) <= 1:
+                raise exceptions.QueryError(
+                    "You must specify more than one key to filter on."
+                )
+
+        results = ResultSet()
+        kwargs = filter_kwargs.copy()
+        kwargs.update({
+            'limit': limit,
+            'index': index,
+            'reverse': reverse,
+            'consistent': consistent,
+        })
+        results.to_call(self._query, **kwargs)
+        return results
+
+    def _query(self, limit=None, index=None, reverse=False, consistent=False,
+               exclusive_start_key=None, **filter_kwargs):
+        """
+        The internal method that performs the actual queries. Used extensively
+        by ``ResultSet`` to perform each (paginated) request.
+        """
+        kwargs = {
+            'limit': limit,
+            'index_name': index,
+            'scan_index_forward': reverse,
+            'consistent_read': consistent,
+        }
+
+        if exclusive_start_key:
+            kwargs['exclusive_start_key'] = {}
+
+            for key, value in exclusive_start_key.items():
+                kwargs['exclusive_start_key'][key] = \
+                    self._dynamizer.encode(value)
+
+        # Convert the filters into something we can actually use.
+        kwargs['key_conditions'] = self._build_filters(
+            filter_kwargs,
+            using=QUERY_OPERATORS
+        )
+
+        raw_results = self.connection.query(
+            self.table_name,
+            **kwargs
+        )
+        results = []
+        last_key = None
+
+        for raw_item in raw_results.get('Items', []):
+            item = Item(self)
+            item.load({
+                'Item': raw_item,
+            })
+            results.append(item)
+
+        if raw_results.get('LastEvaluatedKey', None):
+            last_key = {}
+
+            for key, value in raw_results['LastEvaluatedKey'].items():
+                last_key[key] = self._dynamizer.decode(value)
+
+        return {
+            'results': results,
+            'last_key': last_key,
+        }
+
+    def scan(self, limit=None, segment=None, total_segments=None,
+             **filter_kwargs):
+        """
+        Scans across all items within a DynamoDB table.
+
+        Scans can be performed against a hash key or a hash+range key. You can
+        additionally filter the results after the table has been read but
+        before the response is returned.
+
+        To specify the filters of the items you'd like to get, you can specify
+        the filters as kwargs. Each filter kwarg should follow the pattern
+        ``<fieldname>__<filter_operation>=<value_to_look_for>``.
+
+        Optionally accepts a ``limit`` parameter, which should be an integer
+        count of the total number of items to return. (Default: ``None`` -
+        all results)
+
+        Returns a ``ResultSet``, which transparently handles the pagination of
+        results you get back.
+
+        Example::
+
+            # All results.
+            >>> everything = users.scan()
+
+            # Look for last names beginning with "D".
+            >>> results = users.scan(last_name__beginswith='D')
+            >>> for res in results:
+            ...     print res['first_name']
+            'Alice'
+            'John'
+            'Jane'
+
+            # Use an ``IN`` filter & limit.
+            >>> results = users.scan(
+            ...     age__in=[25, 26, 27, 28, 29],
+            ...     limit=1
+            ... )
+            >>> for res in results:
+            ...     print res['first_name']
+            'Alice'
+
+        """
+        results = ResultSet()
+        kwargs = filter_kwargs.copy()
+        kwargs.update({
+            'limit': limit,
+            'segment': segment,
+            'total_segments': total_segments,
+        })
+        results.to_call(self._scan, **kwargs)
+        return results
+
+    def _scan(self, limit=None, exclusive_start_key=None, segment=None,
+              total_segments=None, **filter_kwargs):
+        """
+        The internal method that performs the actual scan. Used extensively
+        by ``ResultSet`` to perform each (paginated) request.
+        """
+        kwargs = {
+            'limit': limit,
+            'segment': segment,
+            'total_segments': total_segments,
+        }
+
+        if exclusive_start_key:
+            kwargs['exclusive_start_key'] = {}
+
+            for key, value in exclusive_start_key.items():
+                kwargs['exclusive_start_key'][key] = \
+                    self._dynamizer.encode(value)
+
+        # Convert the filters into something we can actually use.
+        kwargs['scan_filter'] = self._build_filters(
+            filter_kwargs,
+            using=FILTER_OPERATORS
+        )
+
+        raw_results = self.connection.scan(
+            self.table_name,
+            **kwargs
+        )
+        results = []
+        last_key = None
+
+        for raw_item in raw_results.get('Items', []):
+            item = Item(self)
+            item.load({
+                'Item': raw_item,
+            })
+            results.append(item)
+
+        if raw_results.get('LastEvaluatedKey', None):
+            last_key = {}
+
+            for key, value in raw_results['LastEvaluatedKey'].items():
+                last_key[key] = self._dynamizer.decode(value)
+
+        return {
+            'results': results,
+            'last_key': last_key,
+        }
+
+    def batch_get(self, keys, consistent=False):
+        """
+        Fetches many specific items in batch from a table.
+
+        Requires a ``keys`` parameter, which should be a list of dictionaries.
+        Each dictionary should consist of the keys values to specify.
+
+        Optionally accepts a ``consistent`` parameter, which should be a
+        boolean. If you provide ``True``, a strongly consistent read will be
+        used. (Default: False)
+
+        Returns a ``ResultSet``, which transparently handles the pagination of
+        results you get back.
+
+        Example::
+
+            >>> results = users.batch_get(keys=[
+            ...     {
+            ...         'username': 'johndoe',
+            ...     },
+            ...     {
+            ...         'username': 'jane',
+            ...     },
+            ...     {
+            ...         'username': 'fred',
+            ...     },
+            ... ])
+            >>> for res in results:
+            ...     print res['first_name']
+            'John'
+            'Jane'
+            'Fred'
+
+        """
+        # We pass the keys to the constructor instead, so it can maintain its
+        # own internal state as to what keys have been processed.
+        results = BatchGetResultSet(keys=keys, max_batch_get=self.max_batch_get)
+        results.to_call(self._batch_get, consistent=consistent)
+        return results
+
+    def _batch_get(self, keys, consistent=False):
+        """
+        The internal method that performs the actual batch get. Used extensively
+        by ``BatchGetResultSet`` to perform each (paginated) request.
+        """
+        items = {
+            self.table_name: {
+                'Keys': [],
+            },
+        }
+
+        if consistent:
+            items[self.table_name]['ConsistentRead'] = True
+
+        for key_data in keys:
+            raw_key = {}
+
+            for key, value in key_data.items():
+                raw_key[key] = self._dynamizer.encode(value)
+
+            items[self.table_name]['Keys'].append(raw_key)
+
+        raw_results = self.connection.batch_get_item(request_items=items)
+        results = []
+        unprocessed_keys = []
+
+        for raw_item in raw_results['Responses'].get(self.table_name, []):
+            item = Item(self)
+            item.load({
+                'Item': raw_item,
+            })
+            results.append(item)
+
+        raw_unprocessed = raw_results.get('UnprocessedKeys', {})
+
+        for raw_key in raw_unprocessed.get('Keys', []):
+            py_key = {}
+
+            for key, value in raw_key.items():
+                py_key[key] = self._dynamizer.decode(value)
+
+            unprocessed_keys.append(py_key)
+
+        return {
+            'results': results,
+            # NEVER return a ``last_key``. Just in-case any part of
+            # ``ResultSet`` peeks through, since much of the
+            # original underlying implementation is based on this key.
+            'last_key': None,
+            'unprocessed_keys': unprocessed_keys,
+        }
+
+    def count(self):
+        """
+        Returns a (very) eventually consistent count of the number of items
+        in a table.
+
+        Lag time is about 6 hours, so don't expect a high degree of accuracy.
+
+        Example::
+
+            >>> users.count()
+            6
+
+        """
+        info = self.describe()
+        return info['Table'].get('ItemCount', 0)
+
+
+class BatchTable(object):
+    """
+    Used by ``Table`` as the context manager for batch writes.
+
+    You likely don't want to try to use this object directly.
+    """
+    def __init__(self, table):
+        self.table = table
+        self._to_put = []
+        self._to_delete = []
+
+    def __enter__(self):
+        return self
+
+    def __exit__(self, type, value, traceback):
+        if not self._to_put and not self._to_delete:
+            return False
+
+        # Flush anything that's left.
+        self.flush()
+        return True
+
+    def put_item(self, data, overwrite=False):
+        self._to_put.append(data)
+
+        if self.should_flush():
+            self.flush()
+
+    def delete_item(self, **kwargs):
+        self._to_delete.append(kwargs)
+
+        if self.should_flush():
+            self.flush()
+
+    def should_flush(self):
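+        # DynamoDB's ``BatchWriteItem`` accepts at most 25 put/delete
+        # requests per call, so flush once we've queued that many.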
+        if len(self._to_put) + len(self._to_delete) == 25:
+            return True
+
+        return False
+
+    def flush(self):
+        batch_data = {
+            self.table.table_name: [
+                # We'll insert data here shortly.
+            ],
+        }
+
+        for put in self._to_put:
+            item = Item(self.table, data=put)
+            batch_data[self.table.table_name].append({
+                'PutRequest': {
+                    'Item': item.prepare_full(),
+                }
+            })
+
+        for delete in self._to_delete:
+            batch_data[self.table.table_name].append({
+                'DeleteRequest': {
+                    'Key': self.table._encode_keys(delete),
+                }
+            })
+
+        self.table.connection.batch_write_item(batch_data)
+        self._to_put = []
+        self._to_delete = []
+        return True
diff --git a/boto/dynamodb2/types.py b/boto/dynamodb2/types.py
new file mode 100644
index 0000000..fc67aa0
--- /dev/null
+++ b/boto/dynamodb2/types.py
@@ -0,0 +1,40 @@
+# Shadow the DynamoDB v1 bits.
+# This way, no end user should have to cross-import between versions & we
+# reserve the namespace to extend v2 if it's ever needed.
+from boto.dynamodb.types import Dynamizer
+
+
+# Some constants for our use.
+STRING = 'S'
+NUMBER = 'N'
+BINARY = 'B'
+STRING_SET = 'SS'
+NUMBER_SET = 'NS'
+BINARY_SET = 'BS'
+
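+# Maps the ``<fieldname>__<operator>`` suffixes accepted by ``Table.query`` &
+# ``Table.scan`` filters onto DynamoDB's ``ComparisonOperator`` names.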
+QUERY_OPERATORS = {
+    'eq': 'EQ',
+    'lte': 'LE',
+    'lt': 'LT',
+    'gte': 'GE',
+    'gt': 'GT',
+    'beginswith': 'BEGINS_WITH',
+    'between': 'BETWEEN',
+}
+
+FILTER_OPERATORS = {
+    'eq': 'EQ',
+    'ne': 'NE',
+    'lte': 'LE',
+    'lt': 'LT',
+    'gte': 'GE',
+    'gt': 'GT',
+    # FIXME: Is this necessary? i.e. ``whatever__null=False``
+    'nnull': 'NOT_NULL',
+    'null': 'NULL',
+    'contains': 'CONTAINS',
+    'ncontains': 'NOT_CONTAINS',
+    'beginswith': 'BEGINS_WITH',
+    'in': 'IN',
+    'between': 'BETWEEN',
+}
diff --git a/boto/ec2/__init__.py b/boto/ec2/__init__.py
index 963b6d9..cc1e582 100644
--- a/boto/ec2/__init__.py
+++ b/boto/ec2/__init__.py
@@ -24,6 +24,19 @@
 service from AWS.
 """
 from boto.ec2.connection import EC2Connection
+from boto.regioninfo import RegionInfo
+
+
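+# Static map of region name -> endpoint, used by ``regions()`` below so the
+# full region list can be built without a describe-regions API call.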
+RegionData = {
+    'us-east-1': 'ec2.us-east-1.amazonaws.com',
+    'us-west-1': 'ec2.us-west-1.amazonaws.com',
+    'us-west-2': 'ec2.us-west-2.amazonaws.com',
+    'sa-east-1': 'ec2.sa-east-1.amazonaws.com',
+    'eu-west-1': 'ec2.eu-west-1.amazonaws.com',
+    'ap-northeast-1': 'ec2.ap-northeast-1.amazonaws.com',
+    'ap-southeast-1': 'ec2.ap-southeast-1.amazonaws.com',
+    'ap-southeast-2': 'ec2.ap-southeast-2.amazonaws.com',
+}
 
 
 def regions(**kw_params):
@@ -36,8 +49,13 @@
     :rtype: list
     :return: A list of :class:`boto.ec2.regioninfo.RegionInfo`
     """
-    c = EC2Connection(**kw_params)
-    return c.get_all_regions()
+    regions = []
+    for region_name in RegionData:
+        region = RegionInfo(name=region_name,
+                            endpoint=RegionData[region_name],
+                            connection_cls=EC2Connection)
+        regions.append(region)
+    return regions
 
 
 def connect_to_region(region_name, **kw_params):
diff --git a/boto/ec2/attributes.py b/boto/ec2/attributes.py
new file mode 100644
index 0000000..d76e5c5
--- /dev/null
+++ b/boto/ec2/attributes.py
@@ -0,0 +1,71 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+
+class AccountAttribute(object):
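+    """
+    Parses a single EC2 account attribute (an ``attributeName`` & its
+    ``attributeValueSet``) out of a describe-account-attributes response.
+    """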
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.attribute_name = None
+        self.attribute_values = None
+
+    def startElement(self, name, attrs, connection):
+        if name == 'attributeValueSet':
+            self.attribute_values = AttributeValues()
+            return self.attribute_values
+
+    def endElement(self, name, value, connection):
+        if name == 'attributeName':
+            self.attribute_name = value
+
+
+class AttributeValues(list):
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'attributeValue':
+            self.append(value)
+
+
+class VPCAttribute(object):
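+    """
+    Parses a VPC attribute response (``enableDnsSupport`` /
+    ``enableDnsHostnames``) & converts the reported values to booleans.
+    """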
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.vpc_id = None
+        self.enable_dns_hostnames = None
+        self.enable_dns_support = None
+        self._current_attr = None
+
+    def startElement(self, name, attrs, connection):
+        if name in ('enableDnsHostnames', 'enableDnsSupport'):
+            self._current_attr = name
+
+    def endElement(self, name, value, connection):
+        if name == 'vpcId':
+            self.vpc_id = value
+        elif name == 'value':
+            if value == 'true':
+                value = True
+            else:
+                value = False
+            if self._current_attr == 'enableDnsHostnames':
+                self.enable_dns_hostnames = value
+            elif self._current_attr == 'enableDnsSupport':
+                self.enable_dns_support = value
diff --git a/boto/ec2/autoscale/__init__.py b/boto/ec2/autoscale/__init__.py
index 80c3c85..17a89e1 100644
--- a/boto/ec2/autoscale/__init__.py
+++ b/boto/ec2/autoscale/__init__.py
@@ -40,6 +40,7 @@
 from boto.ec2.autoscale.policy import AdjustmentType
 from boto.ec2.autoscale.policy import MetricCollectionTypes
 from boto.ec2.autoscale.policy import ScalingPolicy
+from boto.ec2.autoscale.policy import TerminationPolicies
 from boto.ec2.autoscale.instance import Instance
 from boto.ec2.autoscale.scheduled import ScheduledUpdateGroupAction
 from boto.ec2.autoscale.tag import Tag
@@ -51,7 +52,9 @@
     'sa-east-1': 'autoscaling.sa-east-1.amazonaws.com',
     'eu-west-1': 'autoscaling.eu-west-1.amazonaws.com',
     'ap-northeast-1': 'autoscaling.ap-northeast-1.amazonaws.com',
-    'ap-southeast-1': 'autoscaling.ap-southeast-1.amazonaws.com'}
+    'ap-southeast-1': 'autoscaling.ap-southeast-1.amazonaws.com',
+    'ap-southeast-2': 'autoscaling.ap-southeast-2.amazonaws.com',
+}
 
 
 def regions():
@@ -171,6 +174,9 @@
             params['DefaultCooldown'] = as_group.default_cooldown
         if as_group.placement_group:
             params['PlacementGroup'] = as_group.placement_group
+        if as_group.termination_policies:
+            self.build_list_params(params, as_group.termination_policies,
+                                   'TerminationPolicies')
         if op.startswith('Create'):
             # you can only associate load balancers with an autoscale
             # group at creation time
@@ -231,6 +237,10 @@
             params['SpotPrice'] = str(launch_config.spot_price)
         if launch_config.instance_profile_name is not None:
             params['IamInstanceProfile'] = launch_config.instance_profile_name
+        if launch_config.ebs_optimized:
+            params['EbsOptimized'] = 'true'
+        else:
+            params['EbsOptimized'] = 'false'
         return self.get_object('CreateLaunchConfiguration', params,
                                Request, verb='POST')
 
@@ -361,6 +371,15 @@
         return self.get_list('DescribeScalingActivities',
                              params, [('member', Activity)])
 
+    def get_termination_policies(self):
+        """Gets all valid termination policies.
+
+        These values can then be used as the termination_policies arg
+        when creating and updating autoscale groups.
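+
+        Example (illustrative; ``conn`` is an ``AutoScaleConnection``)::
+
+            >>> conn.get_termination_policies()
+            ['ClosestToNextInstanceHour', 'Default', 'NewestInstance',
+             'OldestInstance', 'OldestLaunchConfiguration']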
+        """
+        return self.get_object('DescribeTerminationPolicyTypes',
+                               {}, TerminationPolicies)
+
     def delete_scheduled_action(self, scheduled_action_name,
                                 autoscale_group=None):
         """
@@ -533,9 +552,11 @@
                                    'ScalingProcesses')
         return self.get_status('ResumeProcesses', params)
 
-    def create_scheduled_group_action(self, as_group, name, time,
+    def create_scheduled_group_action(self, as_group, name, time=None,
                                       desired_capacity=None,
-                                      min_size=None, max_size=None):
+                                      min_size=None, max_size=None,
+                                      start_time=None, end_time=None,
+                                      recurrence=None):
         """
         Creates a scheduled scaling action for a Auto Scaling group. If you
         leave a parameter unspecified, the corresponding value remains
@@ -548,7 +569,7 @@
         :param name: Scheduled action name.
 
         :type time: datetime.datetime
-        :param time: The time for this action to start.
+         :param time: The time for this action to start. (Deprecated)
 
         :type desired_capacity: int
         :param desired_capacity: The number of EC2 instances that should
@@ -559,10 +580,26 @@
 
         :type max_size: int
         :param max_size: The minimum size for the new auto scaling group.
+
+        :type start_time: datetime.datetime
+        :param start_time: The time for this action to start. When StartTime
+            and EndTime are specified with Recurrence, they form the
+            boundaries of when the recurring action will start and stop.
+
+        :type end_time: datetime.datetime
+        :param end_time: The time for this action to end. When StartTime and
+            EndTime are specified with Recurrence, they form the boundaries
+            of when the recurring action will start and stop.
+
+        :type recurrence: string
+        :param recurrence: The schedule for recurring future actions,
+            specified in Unix cron syntax (e.g. '0 10 * * *'). See the
+            example below.
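+
+        Example (illustrative; ``conn`` is an ``AutoScaleConnection``)::
+
+            # Scale the group up to 8 instances every weekday at 07:00 UTC.
+            >>> conn.create_scheduled_group_action('my-group',
+            ...                                    'scale-up-weekdays',
+            ...                                    desired_capacity=8,
+            ...                                    recurrence='0 7 * * 1-5')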
         """
         params = {'AutoScalingGroupName': as_group,
-                  'ScheduledActionName': name,
-                  'Time': time.isoformat()}
+                  'ScheduledActionName': name}
+        if start_time is not None:
+            params['StartTime'] = start_time.isoformat()
+        if end_time is not None:
+            params['EndTime'] = end_time.isoformat()
+        if recurrence is not None:
+            params['Recurrence'] = recurrence
+        if time:
+            params['Time'] = time.isoformat()
         if desired_capacity is not None:
             params['DesiredCapacity'] = desired_capacity
         if min_size is not None:
@@ -688,6 +725,29 @@
             params['ShouldRespectGracePeriod'] = 'false'
         return self.get_status('SetInstanceHealth', params)
 
+    def set_desired_capacity(self, group_name, desired_capacity, honor_cooldown=False):
+        """
+        Adjusts the desired size of the AutoScalingGroup by initiating scaling
+        activities. When reducing the size of the group, it is not possible to define
+        which Amazon EC2 instances will be terminated. This applies to any Auto Scaling
+        decisions that might result in terminating instances.
+
+        :type group_name: string
+        :param group_name: name of the auto scaling group
+
+        :type desired_capacity: integer
+        :param desired_capacity: new capacity setting for auto scaling group
+
+        :type honor_cooldown: boolean
+        :param honor_cooldown: when True, the cooldown period associated
+            with the group is honored; by default (False), any cooldown
+            period is overridden
+        """
+        params = {'AutoScalingGroupName': group_name,
+                  'DesiredCapacity': desired_capacity}
+        if honor_cooldown:
+            params['HonorCooldown'] = json.dumps(True)
+
+        return self.get_status('SetDesiredCapacity', params)
+
     # Tag methods
 
     def get_all_tags(self, filters=None, max_records=None, next_token=None):
diff --git a/boto/ec2/autoscale/group.py b/boto/ec2/autoscale/group.py
index eb72f6f..e9fadce 100644
--- a/boto/ec2/autoscale/group.py
+++ b/boto/ec2/autoscale/group.py
@@ -98,7 +98,7 @@
                  health_check_type=None, health_check_period=None,
                  placement_group=None, vpc_zone_identifier=None,
                  desired_capacity=None, min_size=None, max_size=None,
-                 tags=None, **kwargs):
+                 tags=None, termination_policies=None, **kwargs):
         """
         Creates a new AutoScalingGroup with the specified name.
 
@@ -136,10 +136,10 @@
         :param load_balancers: List of load balancers.
 
         :type max_size: int
-        :param maxsize: Maximum size of group (required).
+        :param max_size: Maximum size of group (required).
 
         :type min_size: int
-        :param minsize: Minimum size of group (required).
+        :param min_size: Minimum size of group (required).
 
         :type placement_group: str
         :param placement_group: Physical location of your cluster placement
@@ -149,6 +149,12 @@
         :param vpc_zone_identifier: The subnet identifier of the Virtual
             Private Cloud.
 
+        :type termination_policies: list
+        :param termination_policies: A list of termination policies. Valid values
+            are: "OldestInstance", "NewestInstance", "OldestLaunchConfiguration",
+            "ClosestToNextInstanceHour", "Default".  If no value is specified,
+            the "Default" value is used.
+
         :rtype: :class:`boto.ec2.autoscale.group.AutoScalingGroup`
         :return: An autoscale group.
         """
@@ -177,7 +183,8 @@
         self.vpc_zone_identifier = vpc_zone_identifier
         self.instances = None
         self.tags = tags or None
-        self.termination_policies = TerminationPolicies()
+        termination_policies = termination_policies or []
+        self.termination_policies = ListElement(termination_policies)
 
     # backwards compatible access to 'cooldown' param
     def _get_cooldown(self):
diff --git a/boto/ec2/autoscale/launchconfig.py b/boto/ec2/autoscale/launchconfig.py
index e6e38fd..f558041 100644
--- a/boto/ec2/autoscale/launchconfig.py
+++ b/boto/ec2/autoscale/launchconfig.py
@@ -94,7 +94,7 @@
                  instance_type='m1.small', kernel_id=None,
                  ramdisk_id=None, block_device_mappings=None,
                  instance_monitoring=False, spot_price=None,
-                 instance_profile_name=None):
+                 instance_profile_name=None, ebs_optimized=False):
         """
         A launch configuration.
 
@@ -140,6 +140,10 @@
         :param instance_profile_name: The name or the Amazon Resource
             Name (ARN) of the instance profile associated with the IAM
             role for the instance.
+
+        :type ebs_optimized: bool
+        :param ebs_optimized: Specifies whether the instance is optimized
+            for EBS I/O (true) or not (false).
         """
         self.connection = connection
         self.name = name
@@ -158,6 +162,7 @@
         self.spot_price = spot_price
         self.instance_profile_name = instance_profile_name
         self.launch_configuration_arn = None
+        self.ebs_optimized = ebs_optimized
 
     def __repr__(self):
         return 'LaunchConfiguration:%s' % self.name
@@ -201,6 +206,8 @@
             self.spot_price = float(value)
         elif name == 'IamInstanceProfile':
             self.instance_profile_name = value
+        elif name == 'EbsOptimized':
+            self.ebs_optimized = True if value.lower() == 'true' else False
         else:
             setattr(self, name, value)
 
diff --git a/boto/ec2/autoscale/policy.py b/boto/ec2/autoscale/policy.py
index d9d1ac6..adcdbdc 100644
--- a/boto/ec2/autoscale/policy.py
+++ b/boto/ec2/autoscale/policy.py
@@ -153,3 +153,14 @@
     def delete(self):
         return self.connection.delete_policy(self.name, self.as_name)
 
+
+class TerminationPolicies(list):
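+    """
+    A list of termination policy names, built from the ``member`` elements
+    of a ``DescribeTerminationPolicyTypes`` response.
+    """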
+    def __init__(self, connection=None, **kwargs):
+        pass
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'member':
+            self.append(value)
diff --git a/boto/ec2/autoscale/scheduled.py b/boto/ec2/autoscale/scheduled.py
index d8f051c..8e307c2 100644
--- a/boto/ec2/autoscale/scheduled.py
+++ b/boto/ec2/autoscale/scheduled.py
@@ -28,7 +28,11 @@
         self.connection = connection
         self.name = None
         self.action_arn = None
+        self.as_group = None
         self.time = None
+        self.start_time = None
+        self.end_time = None
+        self.recurrence = None
         self.desired_capacity = None
         self.max_size = None
         self.min_size = None
@@ -44,17 +48,31 @@
             self.desired_capacity = value
         elif name == 'ScheduledActionName':
             self.name = value
+        elif name == 'AutoScalingGroupName':
+            self.as_group = value
         elif name == 'MaxSize':
             self.max_size = int(value)
         elif name == 'MinSize':
             self.min_size = int(value)
         elif name == 'ScheduledActionARN':
             self.action_arn = value
+        elif name == 'Recurrence':
+            self.recurrence = value
         elif name == 'Time':
             try:
                 self.time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
             except ValueError:
                 self.time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+        elif name == 'StartTime':
+            try:
+                self.start_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
+            except ValueError:
+                self.start_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+        elif name == 'EndTime':
+            try:
+                self.end_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
+            except ValueError:
+                self.end_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
         else:
             setattr(self, name, value)
 
diff --git a/boto/ec2/blockdevicemapping.py b/boto/ec2/blockdevicemapping.py
index ca0e937..df774ae 100644
--- a/boto/ec2/blockdevicemapping.py
+++ b/boto/ec2/blockdevicemapping.py
@@ -125,17 +125,18 @@
                 params['%s.VirtualName' % pre] = block_dev.ephemeral_name
             else:
                 if block_dev.no_device:
-                    params['%s.Ebs.NoDevice' % pre] = 'true'
-                if block_dev.snapshot_id:
-                    params['%s.Ebs.SnapshotId' % pre] = block_dev.snapshot_id
-                if block_dev.size:
-                    params['%s.Ebs.VolumeSize' % pre] = block_dev.size
-                if block_dev.delete_on_termination:
-                    params['%s.Ebs.DeleteOnTermination' % pre] = 'true'
+                    params['%s.NoDevice' % pre] = ''
                 else:
-                    params['%s.Ebs.DeleteOnTermination' % pre] = 'false'
-                if block_dev.volume_type:
-                    params['%s.Ebs.VolumeType' % pre] = block_dev.volume_type
-                if block_dev.iops is not None:
-                    params['%s.Ebs.Iops' % pre] = block_dev.iops
+                    if block_dev.snapshot_id:
+                        params['%s.Ebs.SnapshotId' % pre] = block_dev.snapshot_id
+                    if block_dev.size:
+                        params['%s.Ebs.VolumeSize' % pre] = block_dev.size
+                    if block_dev.delete_on_termination:
+                        params['%s.Ebs.DeleteOnTermination' % pre] = 'true'
+                    else:
+                        params['%s.Ebs.DeleteOnTermination' % pre] = 'false'
+                    if block_dev.volume_type:
+                        params['%s.Ebs.VolumeType' % pre] = block_dev.volume_type
+                    if block_dev.iops is not None:
+                        params['%s.Ebs.Iops' % pre] = block_dev.iops
             i += 1
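
With this change a mapping marked no_device emits only an empty
<prefix>.NoDevice parameter and suppresses the Ebs.* parameters, instead of
sending both. A sketch of building such a mapping for run_instances; the AMI
id and device names are placeholders and credentials are assumed to come from
the usual boto config or environment::

    import boto.ec2
    from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType

    bdm = BlockDeviceMapping()
    # Suppress the AMI's /dev/sdb mapping entirely -> only ...NoDevice is sent.
    bdm['/dev/sdb'] = BlockDeviceType(no_device=True)
    # A normal EBS root volume still gets the Ebs.* parameters.
    bdm['/dev/sda1'] = BlockDeviceType(size=20, delete_on_termination=True)

    conn = boto.ec2.connect_to_region('us-east-1')
    conn.run_instances('ami-12345678', instance_type='m1.small',
                       block_device_map=bdm)
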
diff --git a/boto/ec2/cloudwatch/__init__.py b/boto/ec2/cloudwatch/__init__.py
index 5b8db5b..dd7b681 100644
--- a/boto/ec2/cloudwatch/__init__.py
+++ b/boto/ec2/cloudwatch/__init__.py
@@ -23,11 +23,7 @@
 This module provides an interface to the Elastic Compute Cloud (EC2)
 CloudWatch service from AWS.
 """
-try:
-    import simplejson as json
-except ImportError:
-    import json
-
+from boto.compat import json
 from boto.connection import AWSQueryConnection
 from boto.ec2.cloudwatch.metric import Metric
 from boto.ec2.cloudwatch.alarm import MetricAlarm, MetricAlarms, AlarmHistoryItem
@@ -42,7 +38,9 @@
     'sa-east-1': 'monitoring.sa-east-1.amazonaws.com',
     'eu-west-1': 'monitoring.eu-west-1.amazonaws.com',
     'ap-northeast-1': 'monitoring.ap-northeast-1.amazonaws.com',
-    'ap-southeast-1': 'monitoring.ap-southeast-1.amazonaws.com'}
+    'ap-southeast-1': 'monitoring.ap-southeast-1.amazonaws.com',
+    'ap-southeast-2': 'monitoring.ap-southeast-2.amazonaws.com',
+}
 
 
 def regions():
@@ -179,16 +177,16 @@
                 metric_data['StatisticValues.SampleCount'] = s['samplecount']
                 metric_data['StatisticValues.Sum'] = s['sum']
                 if value != None:
-                    msg = 'You supplied a value and statistics for a metric.'
-                    msg += 'Posting statistics and not value.'
+                    msg = 'You supplied a value and statistics for a ' + \
+                          'metric. Posting statistics and not value.'
                     boto.log.warn(msg)
             elif value != None:
                 metric_data['Value'] = v
             else:
                 raise Exception('Must specify a value or statistics to put.')
 
-            for key, value in metric_data.iteritems():
-                params['MetricData.member.%d.%s' % (index + 1, key)] = value
+            for key, val in metric_data.iteritems():
+                params['MetricData.member.%d.%s' % (index + 1, key)] = val
 
     def get_metric_statistics(self, period, start_time, end_time, metric_name,
                               namespace, statistics, dimensions=None,
@@ -390,8 +388,12 @@
             params['NextToken'] = next_token
         if state_value:
             params['StateValue'] = state_value
-        return self.get_list('DescribeAlarms', params,
-                             [('MetricAlarms', MetricAlarms)])[0]
+
+        result = self.get_list('DescribeAlarms', params,
+                               [('MetricAlarms', MetricAlarms)])
+        ret = result[0]
+        ret.next_token = result.next_token
+        return ret
 
     def describe_alarm_history(self, alarm_name=None,
                                start_date=None, end_date=None,
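
Exposing next_token on the returned MetricAlarms list makes DescribeAlarms
pageable from the caller's side. A sketch of draining all pages, assuming the
usual boto region/credential setup and the max_records/next_token keywords on
describe_alarms::

    import boto.ec2.cloudwatch

    conn = boto.ec2.cloudwatch.connect_to_region('us-east-1')

    alarms = []
    token = None
    while True:
        page = conn.describe_alarms(max_records=100, next_token=token)
        alarms.extend(page)
        token = page.next_token  # carried over from the result set above
        if not token:
            break
    print('%d alarms total' % len(alarms))
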
diff --git a/boto/ec2/cloudwatch/alarm.py b/boto/ec2/cloudwatch/alarm.py
index b0b9fd0..e0f7242 100644
--- a/boto/ec2/cloudwatch/alarm.py
+++ b/boto/ec2/cloudwatch/alarm.py
@@ -24,11 +24,7 @@
 from boto.resultset import ResultSet
 from boto.ec2.cloudwatch.listelement import ListElement
 from boto.ec2.cloudwatch.dimension import Dimension
-
-try:
-    import simplejson as json
-except ImportError:
-    import json
+from boto.compat import json
 
 
 class MetricAlarms(list):
@@ -312,5 +308,9 @@
         elif name == 'HistorySummary':
             self.summary = value
         elif name == 'Timestamp':
-            self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
+            try:
+                self.timestamp = datetime.strptime(value,
+                                                   '%Y-%m-%dT%H:%M:%S.%fZ')
+            except ValueError:
+                self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
 
diff --git a/boto/ec2/connection.py b/boto/ec2/connection.py
index 029c796..7752d23 100644
--- a/boto/ec2/connection.py
+++ b/boto/ec2/connection.py
@@ -1,6 +1,6 @@
 # Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
-# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -33,7 +33,7 @@
 import boto
 from boto.connection import AWSQueryConnection
 from boto.resultset import ResultSet
-from boto.ec2.image import Image, ImageAttribute
+from boto.ec2.image import Image, ImageAttribute, CopyImage
 from boto.ec2.instance import Reservation, Instance
 from boto.ec2.instance import ConsoleOutput, InstanceAttribute
 from boto.ec2.keypair import KeyPair
@@ -54,9 +54,11 @@
 from boto.ec2.bundleinstance import BundleInstanceTask
 from boto.ec2.placementgroup import PlacementGroup
 from boto.ec2.tag import Tag
+from boto.ec2.vmtype import VmType
 from boto.ec2.instancestatus import InstanceStatusSet
 from boto.ec2.volumestatus import VolumeStatusSet
 from boto.ec2.networkinterface import NetworkInterface
+from boto.ec2.attributes import AccountAttribute, VPCAttribute
 from boto.exception import EC2ResponseError
 
 #boto.set_stream_logger('ec2')
@@ -64,7 +66,7 @@
 
 class EC2Connection(AWSQueryConnection):
 
-    APIVersion = boto.config.get('Boto', 'ec2_version', '2012-08-15')
+    APIVersion = boto.config.get('Boto', 'ec2_version', '2013-02-01')
     DefaultRegionName = boto.config.get('Boto', 'ec2_region_name', 'us-east-1')
     DefaultRegionEndpoint = boto.config.get('Boto', 'ec2_region_endpoint',
                                             'ec2.us-east-1.amazonaws.com')
@@ -146,14 +148,12 @@
                               user ID has explicit launch permissions
 
         :type filters: dict
-        :param filters: Optional filters that can be used to limit
-                        the results returned.  Filters are provided
-                        in the form of a dictionary consisting of
-                        filter names as the key and filter values
-                        as the value.  The set of allowable filter
-                        names/values is dependent on the request
-                        being performed.  Check the EC2 API guide
-                        for details.
+        :param filters: Optional filters that can be used to limit the
+            results returned.  Filters are provided in the form of a
+            dictionary consisting of filter names as the key and
+            filter values as the value.  The set of allowable filter
+            names/values is dependent on the request being performed.
+            Check the EC2 API guide for details.
 
         :rtype: list
         :return: A list of :class:`boto.ec2.image.Image`
@@ -298,8 +298,7 @@
 
         :type delete_snapshot: bool
         :param delete_snapshot: Set to True if we should delete the
-                                snapshot associated with an EBS volume
-                                mounted at /dev/sda1
+            snapshot associated with an EBS volume mounted at /dev/sda1
 
         :rtype: bool
         :return: True if successful
@@ -332,14 +331,14 @@
 
         :type description: string
         :param description: An optional human-readable string describing
-                            the contents and purpose of the AMI.
+            the contents and purpose of the AMI.
 
         :type no_reboot: bool
-        :param no_reboot: An optional flag indicating that the bundling process
-                          should not attempt to shutdown the instance before
-                          bundling.  If this flag is True, the responsibility
-                          of maintaining file system integrity is left to the
-                          owner of the instance.
+        :param no_reboot: An optional flag indicating that the
+            bundling process should not attempt to shutdown the
+            instance before bundling.  If this flag is True, the
+            responsibility of maintaining file system integrity is
+            left to the owner of the instance.
 
         :rtype: string
         :return: The new image id
@@ -364,10 +363,10 @@
 
         :type attribute: string
         :param attribute: The attribute you need information about.
-                          Valid choices are:
-                          * launchPermission
-                          * productCodes
-                          * blockDeviceMapping
+            Valid choices are:
+            * launchPermission
+            * productCodes
+            * blockDeviceMapping
 
         :rtype: :class:`boto.ec2.image.ImageAttribute`
         :return: An ImageAttribute object representing the value of the
@@ -392,7 +391,7 @@
 
         :type operation: string
         :param operation: Either add or remove (this is required for changing
-                          launchPermissions)
+            launchPermissions)
 
         :type user_ids: list
         :param user_ids: The Amazon IDs of users to add/remove attributes
@@ -402,8 +401,8 @@
 
         :type product_codes: list
         :param product_codes: Amazon DevPay product code. Currently only one
-                              product code can be associated with an AMI. Once
-                              set, the product code cannot be changed or reset.
+            product code can be associated with an AMI. Once
+            set, the product code cannot be changed or reset.
         """
         params = {'ImageId': image_id,
                   'Attribute': attribute,
@@ -525,7 +524,7 @@
                       security_group_ids=None,
                       additional_info=None, instance_profile_name=None,
                       instance_profile_arn=None, tenancy=None,
-                      ebs_optimized=False):
+                      ebs_optimized=False, network_interfaces=None):
         """
         Runs an image on EC2.
 
@@ -544,10 +543,11 @@
 
         :type security_groups: list of strings
         :param security_groups: The names of the security groups with which to
-            associate instances
+            associate instances.
 
         :type user_data: string
-        :param user_data: The user data passed to the launched instances
+        :param user_data: The Base64-encoded MIME user data to be made
+            available to the instance(s) in this reservation.
 
         :type instance_type: string
         :param instance_type: The type of instance to run:
@@ -557,18 +557,22 @@
             * m1.medium
             * m1.large
             * m1.xlarge
+            * m3.xlarge
+            * m3.2xlarge
             * c1.medium
             * c1.xlarge
             * m2.xlarge
             * m2.2xlarge
             * m2.4xlarge
+            * cr1.8xlarge
+            * hi1.4xlarge
+            * hs1.8xlarge
             * cc1.4xlarge
             * cg1.4xlarge
             * cc2.8xlarge
 
         :type placement: string
-        :param placement: The availability zone in which to launch
-            the instances.
+        :param placement: The Availability Zone to launch the instance into.
 
         :type kernel_id: string
         :param kernel_id: The ID of the kernel with which to launch the
@@ -594,7 +598,7 @@
 
         :type block_device_map: :class:`boto.ec2.blockdevicemapping.BlockDeviceMapping`
         :param block_device_map: A BlockDeviceMapping data structure
-            describing the EBS volumes associated  with the Image.
+            describing the EBS volumes associated with the Image.
 
         :type disable_api_termination: bool
         :param disable_api_termination: If True, the instances will be locked
@@ -614,7 +618,7 @@
 
         :type client_token: string
         :param client_token: Unique, case-sensitive identifier you provide
-            to ensure idempotency of the request.  Maximum 64 ASCII characters.
+            to ensure idempotency of the request. Maximum 64 ASCII characters.
 
         :type security_group_ids: list of strings
         :param security_group_ids: The ID of the VPC security groups with
@@ -647,6 +651,10 @@
             provide optimal EBS I/O performance.  This optimization
             isn't available with all instance types.
 
+        :type network_interfaces: list
+        :param network_interfaces: A list of
+            :class:`boto.ec2.networkinterface.NetworkInterfaceSpecification`
+
         :rtype: Reservation
         :return: The :class:`boto.ec2.instance.Reservation` associated with
                  the request for machines
@@ -711,6 +719,8 @@
             params['IamInstanceProfile.Arn'] = instance_profile_arn
         if ebs_optimized:
             params['EbsOptimized'] = 'true'
+        if network_interfaces:
+            network_interfaces.build_list_params(params)
         return self.get_object('RunInstances', params, Reservation,
                                verb='POST')
 
@@ -853,6 +863,7 @@
             * userData - Base64 encoded String (None)
             * disableApiTermination - Boolean (true)
             * instanceInitiatedShutdownBehavior - stop|terminate
+            * blockDeviceMapping - List of strings - e.g. ['/dev/sda=false']
             * sourceDestCheck - Boolean (true)
             * groupSet - Set of Security Groups or IDs
             * ebsOptimized - Boolean (false)
@@ -882,6 +893,12 @@
                 if isinstance(sg, SecurityGroup):
                     sg = sg.id
                 params['GroupId.%s' % (idx + 1)] = sg
+        elif attribute.lower() == 'blockdevicemapping':
+            for idx, kv in enumerate(value):
+                dev_name, _, flag = kv.partition('=')
+                pre = 'BlockDeviceMapping.%d' % (idx + 1)
+                params['%s.DeviceName' % pre] = dev_name
+                params['%s.Ebs.DeleteOnTermination' % pre] = flag or 'true'
         else:
             # for backwards compatibility handle lowercase first letter
             attribute = attribute[0].upper() + attribute[1:]
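
The new blockdevicemapping branch turns 'device=flag' strings into
BlockDeviceMapping.N.DeviceName / .Ebs.DeleteOnTermination query parameters,
with a missing flag defaulting to 'true'. A usage sketch with placeholder
instance and device names::

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    # Keep the root volume on termination, drop the data volume; a bare
    # '/dev/sdc' entry would default to 'true' per the partition() above.
    conn.modify_instance_attribute('i-12345678', 'blockDeviceMapping',
                                   ['/dev/sda1=false', '/dev/sdb=true'])
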
@@ -1008,7 +1025,8 @@
                                instance_profile_arn=None,
                                instance_profile_name=None,
                                security_group_ids=None,
-                               ebs_optimized=False):
+                               ebs_optimized=False,
+                               network_interfaces=None):
         """
         Request instances on the spot market at a particular price.
 
@@ -1111,6 +1129,10 @@
             provide optimal EBS I/O performance.  This optimization
             isn't available with all instance types.
 
+        :type network_interfaces: list
+        :param network_interfaces: A list of
+            :class:`boto.ec2.networkinterface.NetworkInterfaceSpecification`
+
         :rtype: Reservation
         :return: The :class:`boto.ec2.spotinstancerequest.SpotInstanceRequest`
                  associated with the request for machines
@@ -1174,6 +1196,8 @@
             params['%s.IamInstanceProfile.Arn' % ls] = instance_profile_arn
         if ebs_optimized:
             params['%s.EbsOptimized' % ls] = 'true'
+        if network_interfaces:
+            network_interfaces.build_list_params(params, prefix=ls + '.')
         return self.get_list('RequestSpotInstances', params,
                              [('item', SpotInstanceRequest)],
                              verb='POST')
@@ -1313,6 +1337,11 @@
         """
         Allocate a new Elastic IP address and associate it with your account.
 
+        :type domain: string
+        :param domain: Optional string. If domain is set to "vpc" the
+            address will be allocated to VPC.  The returned Address
+            object will have an allocation_id attribute.
+
         :rtype: :class:`boto.ec2.address.Address`
         :return: The newly allocated Address
         """
@@ -1809,8 +1838,8 @@
         :param description: A description of the snapshot.
                             Limited to 255 characters.
 
-        :rtype: bool
-        :return: True if successful
+        :rtype: :class:`boto.ec2.snapshot.Snapshot`
+        :return: The created Snapshot object
         """
         params = {'VolumeId': volume_id}
         if description:
@@ -1827,6 +1856,40 @@
         params = {'SnapshotId': snapshot_id}
         return self.get_status('DeleteSnapshot', params, verb='POST')
 
+    def copy_snapshot(self, source_region, source_snapshot_id,
+                      description=None):
+        """
+        Copies a point-in-time snapshot of an Amazon Elastic Block Store
+        (Amazon EBS) volume and stores it in Amazon Simple Storage Service
+        (Amazon S3). You can copy the snapshot within the same region or from
+        one region to another. You can use the snapshot to create new Amazon
+        EBS volumes or Amazon Machine Images (AMIs).
+
+        :type source_region: str
+        :param source_region: The ID of the AWS region that contains the
+            snapshot to be copied (e.g. 'us-east-1', 'us-west-2', etc.).
+
+        :type source_snapshot_id: str
+        :param source_snapshot_id: The ID of the Amazon EBS snapshot to copy
+
+        :type description: str
+        :param description: A description of the new Amazon EBS snapshot.
+
+        :rtype: str
+        :return: The snapshot ID
+
+        """
+        params = {
+            'SourceRegion': source_region,
+            'SourceSnapshotId': source_snapshot_id,
+        }
+        if description is not None:
+            params['Description'] = description
+        snapshot = self.get_object('CopySnapshot', params, Snapshot,
+                                   verb='POST')
+        return snapshot.id
+
     def trim_snapshots(self, hourly_backups=8, daily_backups=7,
                        weekly_backups=4):
         """
@@ -2150,9 +2213,9 @@
                                     it to AWS.
 
         :rtype: :class:`boto.ec2.keypair.KeyPair`
-        :return: The newly created :class:`boto.ec2.keypair.KeyPair`.
-                 The material attribute of the new KeyPair object
-                 will contain the the unencrypted PEM encoded RSA private key.
+        :return: A :class:`boto.ec2.keypair.KeyPair` object representing
+            the newly imported key pair.  This object will contain only
+            the key name and the fingerprint.
         """
         public_key_material = base64.b64encode(public_key_material)
         params = {'KeyName': key_name,
@@ -2216,7 +2279,7 @@
                        if any.
 
         :rtype: :class:`boto.ec2.securitygroup.SecurityGroup`
-        :return: The newly created :class:`boto.ec2.keypair.KeyPair`.
+        :return: The newly created :class:`boto.ec2.securitygroup.SecurityGroup`.
         """
         params = {'GroupName': name,
                   'GroupDescription': description}
@@ -2713,11 +2776,9 @@
             single-tenant hardware and can only be launched within a VPC.
 
         :type offering_type: string
-        :param offering_type: The Reserved Instance offering type.
-            Valid Values:
-                * Heavy Utilization
-                * Medium Utilization
-                * Light Utilization
+        :param offering_type: The Reserved Instance offering type.  Valid
+            Values: `"Heavy Utilization" | "Medium Utilization" | "Light
+            Utilization"`
 
         :type include_marketplace: bool
         :param include_marketplace: Include Marketplace offerings in the
@@ -2785,22 +2846,20 @@
     def get_all_reserved_instances(self, reserved_instances_id=None,
                                    filters=None):
         """
-        Describes Reserved Instance offerings that are available for purchase.
+        Describes one or more of the Reserved Instances that you purchased.
 
         :type reserved_instance_ids: list
         :param reserved_instance_ids: A list of the reserved instance ids that
-                                      will be returned. If not provided, all
-                                      reserved instances will be returned.
+            will be returned. If not provided, all reserved instances
+            will be returned.
 
         :type filters: dict
-        :param filters: Optional filters that can be used to limit
-                        the results returned.  Filters are provided
-                        in the form of a dictionary consisting of
-                        filter names as the key and filter values
-                        as the value.  The set of allowable filter
-                        names/values is dependent on the request
-                        being performed.  Check the EC2 API guide
-                        for details.
+        :param filters: Optional filters that can be used to limit the
+            results returned.  Filters are provided in the form of a
+            dictionary consisting of filter names as the key and
+            filter values as the value.  The set of allowable filter
+            names/values is dependent on the request being performed.
+            Check the EC2 API guide for details.
 
         :rtype: list
         :return: A list of :class:`boto.ec2.reservedinstance.ReservedInstance`
@@ -2825,16 +2884,16 @@
 
         :type reserved_instances_offering_id: string
         :param reserved_instances_offering_id: The offering ID of the Reserved
-                                               Instance to purchase
+            Instance to purchase
 
         :type instance_count: int
         :param instance_count: The number of Reserved Instances to purchase.
-                               Default value is 1.
+            Default value is 1.
 
         :type limit_price: tuple
         :param limit_price: Limit the price on the total order.
-                               Must be a tuple of (amount, currency_code), for example:
-                                   (100.0, 'USD').
+            Must be a tuple of (amount, currency_code), for example:
+            (100.0, 'USD').
 
         :rtype: :class:`boto.ec2.reservedinstance.ReservedInstance`
         :return: The newly created Reserved Instance
@@ -3314,7 +3373,7 @@
                   'DeviceIndex': device_index}
         return self.get_status('AttachNetworkInterface', params, verb='POST')
 
-    def detach_network_interface(self, attachement_id, force=False):
+    def detach_network_interface(self, attachment_id, force=False):
         """
         Detaches a network interface from an instance.
 
@@ -3325,7 +3384,7 @@
         :param force: Set to true to force a detachment.
 
         """
-        params = {'AttachmentId': network_interface_id}
+        params = {'AttachmentId': attachment_id}
         if force:
             params['Force'] = 'true'
         return self.get_status('DetachNetworkInterface', params, verb='POST')
@@ -3340,3 +3399,59 @@
         """
         params = {'NetworkInterfaceId': network_interface_id}
         return self.get_status('DeleteNetworkInterface', params, verb='POST')
+
+    def get_all_vmtypes(self):
+        """
+        Get all VM types available on this cloud (Eucalyptus-specific).
+
+        :rtype: list of :class:`boto.ec2.vmtype.VmType`
+        :return: The requested VmType objects
+        """
+        params = {}
+        return self.get_list('DescribeVmTypes', params,
+                             [('euca:item', VmType)], verb='POST')
+
+    def copy_image(self, source_region, source_image_id, name,
+                   description=None, client_token=None):
+        params = {
+            'SourceRegion': source_region,
+            'SourceImageId': source_image_id,
+            'Name': name
+        }
+        if description is not None:
+            params['Description'] = description
+        if client_token is not None:
+            params['ClientToken'] = client_token
+        image = self.get_object('CopyImage', params, CopyImage,
+                                 verb='POST')
+        return image
+
+    def describe_account_attributes(self, attribute_names=None):
+        params = {}
+        if attribute_names is not None:
+            self.build_list_params(params, attribute_names, 'AttributeName')
+        return self.get_list('DescribeAccountAttributes', params,
+                             [('item', AccountAttribute)], verb='POST')
+
+    def describe_vpc_attribute(self, vpc_id, attribute=None):
+        params = {
+            'VpcId': vpc_id
+        }
+        if attribute is not None:
+            params['Attribute'] = attribute
+        attr = self.get_object('DescribeVpcAttribute', params,
+                               VPCAttribute, verb='POST')
+        return attr
+
+    def modify_vpc_attribute(self, vpc_id, enable_dns_support=None,
+                             enable_dns_hostnames=None):
+        params = {
+            'VpcId': vpc_id
+        }
+        if enable_dns_support is not None:
+            params['EnableDnsSupport.Value'] = (
+                'true' if enable_dns_support else 'false')
+        if enable_dns_hostnames is not None:
+            params['EnableDnsHostnames.Value'] = (
+                'true' if enable_dns_hostnames else 'false')
+        result = self.get_status('ModifyVpcAttribute', params, verb='POST')
+        return result
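
A short sketch of the new VPC attribute calls; the VPC id is a placeholder,
and the exact fields exposed by the returned VPCAttribute come from
boto.ec2.attributes, which is outside this hunk::

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    # ModifyVpcAttribute returns a plain status boolean.
    if conn.modify_vpc_attribute('vpc-12345678', enable_dns_hostnames=True):
        attr = conn.describe_vpc_attribute('vpc-12345678', 'enableDnsHostnames')
        print(attr)  # a boto.ec2.attributes.VPCAttribute instance
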
diff --git a/boto/ec2/elb/__init__.py b/boto/ec2/elb/__init__.py
index 9a5e324..c5e71b9 100644
--- a/boto/ec2/elb/__init__.py
+++ b/boto/ec2/elb/__init__.py
@@ -1,4 +1,6 @@
-# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -25,9 +27,10 @@
 """
 from boto.connection import AWSQueryConnection
 from boto.ec2.instanceinfo import InstanceInfo
-from boto.ec2.elb.loadbalancer import LoadBalancer
+from boto.ec2.elb.loadbalancer import LoadBalancer, LoadBalancerZones
 from boto.ec2.elb.instancestate import InstanceState
 from boto.ec2.elb.healthcheck import HealthCheck
+from boto.ec2.elb.listelement import ListElement
 from boto.regioninfo import RegionInfo
 import boto
 
@@ -38,7 +41,9 @@
     'sa-east-1': 'elasticloadbalancing.sa-east-1.amazonaws.com',
     'eu-west-1': 'elasticloadbalancing.eu-west-1.amazonaws.com',
     'ap-northeast-1': 'elasticloadbalancing.ap-northeast-1.amazonaws.com',
-    'ap-southeast-1': 'elasticloadbalancing.ap-southeast-1.amazonaws.com'}
+    'ap-southeast-1': 'elasticloadbalancing.ap-southeast-1.amazonaws.com',
+    'ap-southeast-2': 'elasticloadbalancing.ap-southeast-2.amazonaws.com',
+}
 
 
 def regions():
@@ -76,7 +81,7 @@
 
 class ELBConnection(AWSQueryConnection):
 
-    APIVersion = boto.config.get('Boto', 'elb_version', '2011-11-15')
+    APIVersion = boto.config.get('Boto', 'elb_version', '2012-06-01')
     DefaultRegionName = boto.config.get('Boto', 'elb_region_name', 'us-east-1')
     DefaultRegionEndpoint = boto.config.get('Boto', 'elb_region_endpoint',
                                             'elasticloadbalancing.us-east-1.amazonaws.com')
@@ -178,9 +183,11 @@
             to an Amazon VPC.
 
         :rtype: :class:`boto.ec2.elb.loadbalancer.LoadBalancer`
-        :return: The newly created :class:`boto.ec2.elb.loadbalancer.LoadBalancer`
+        :return: The newly created
+            :class:`boto.ec2.elb.loadbalancer.LoadBalancer`
         """
-        params = {'LoadBalancerName': name}
+        params = {'LoadBalancerName': name,
+                  'Scheme': scheme}
         for index, listener in enumerate(listeners):
             i = index + 1
             protocol = listener[2].upper()
@@ -286,8 +293,9 @@
         params = {'LoadBalancerName': load_balancer_name}
         self.build_list_params(params, zones_to_add,
                                'AvailabilityZones.member.%d')
-        return self.get_list('EnableAvailabilityZonesForLoadBalancer',
-                             params, None)
+        obj = self.get_object('EnableAvailabilityZonesForLoadBalancer',
+                               params, LoadBalancerZones)
+        return obj.zones
 
     def disable_availability_zones(self, load_balancer_name, zones_to_remove):
         """
@@ -310,8 +318,9 @@
         params = {'LoadBalancerName': load_balancer_name}
         self.build_list_params(params, zones_to_remove,
                                'AvailabilityZones.member.%d')
-        return self.get_list('DisableAvailabilityZonesForLoadBalancer',
-                             params, None)
+        obj = self.get_object('DisableAvailabilityZonesForLoadBalancer',
+                               params, LoadBalancerZones)
+        return obj.zones
 
     def register_instances(self, load_balancer_name, instances):
         """
@@ -450,10 +459,13 @@
         from the same user to that server. The validity of the cookie is based
         on the cookie expiration time, which is specified in the policy
         configuration.
+
+        If None is passed for cookie_expiration_period, the
+        CookieExpirationPeriod parameter is omitted and the stickiness
+        lasts for the duration of the browser session.
         """
-        params = {'CookieExpirationPeriod': cookie_expiration_period,
-                  'LoadBalancerName': lb_name,
+        params = {'LoadBalancerName': lb_name,
                   'PolicyName': policy_name}
+        if cookie_expiration_period is not None:
+            params['CookieExpirationPeriod'] = cookie_expiration_period
         return self.get_status('CreateLBCookieStickinessPolicy', params)
 
     def delete_lb_policy(self, lb_name, policy_name):
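
Two behavioral changes above are worth illustrating: the zone calls now
return the parsed zone list via LoadBalancerZones, and cookie stickiness
policies can be created without an expiration period. A sketch with a
placeholder load balancer name, assuming credentials from the usual boto
config::

    import boto.ec2.elb

    elb = boto.ec2.elb.connect_to_region('us-east-1')

    # Returns the load balancer's resulting AvailabilityZones member list.
    zones = elb.enable_availability_zones('my-lb', ['us-east-1d'])
    print(zones)

    # Passing None omits CookieExpirationPeriod (browser-session stickiness).
    elb.create_lb_cookie_stickiness_policy(cookie_expiration_period=None,
                                           lb_name='my-lb',
                                           policy_name='session-sticky')
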
diff --git a/boto/ec2/elb/healthcheck.py b/boto/ec2/elb/healthcheck.py
index 6661ea1..040f962 100644
--- a/boto/ec2/elb/healthcheck.py
+++ b/boto/ec2/elb/healthcheck.py
@@ -1,4 +1,6 @@
-# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -19,6 +21,7 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+
 class HealthCheck(object):
     """
     Represents an EC2 Access Point Health Check. See
@@ -77,11 +80,10 @@
         if not self.access_point:
             return
 
-        new_hc = self.connection.configure_health_check(self.access_point, self)
+        new_hc = self.connection.configure_health_check(self.access_point,
+                                                        self)
         self.interval = new_hc.interval
         self.target = new_hc.target
         self.healthy_threshold = new_hc.healthy_threshold
         self.unhealthy_threshold = new_hc.unhealthy_threshold
         self.timeout = new_hc.timeout
-
-
diff --git a/boto/ec2/elb/instancestate.py b/boto/ec2/elb/instancestate.py
index 37a4727..40f4cbe 100644
--- a/boto/ec2/elb/instancestate.py
+++ b/boto/ec2/elb/instancestate.py
@@ -60,6 +60,3 @@
             self.reason_code = value
         else:
             setattr(self, name, value)
-
-
-
diff --git a/boto/ec2/elb/listelement.py b/boto/ec2/elb/listelement.py
index 3529041..0fe3a1e 100644
--- a/boto/ec2/elb/listelement.py
+++ b/boto/ec2/elb/listelement.py
@@ -1,4 +1,6 @@
-# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -14,15 +16,16 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+
 class ListElement(list):
     """
-    A :py:class:`list` subclass that has some additional methods for interacting
-    with Amazon's XML API.
+    A :py:class:`list` subclass that has some additional methods
+    for interacting with Amazon's XML API.
     """
 
     def startElement(self, name, attrs, connection):
@@ -31,5 +34,3 @@
     def endElement(self, name, value, connection):
         if name == 'member':
             self.append(value)
-    
-    
diff --git a/boto/ec2/elb/listener.py b/boto/ec2/elb/listener.py
index bbb49d0..a50b02c 100644
--- a/boto/ec2/elb/listener.py
+++ b/boto/ec2/elb/listener.py
@@ -1,4 +1,6 @@
-# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -21,6 +23,7 @@
 
 from boto.ec2.elb.listelement import ListElement
 
+
 class Listener(object):
     """
     Represents an EC2 Load Balancer Listener tuple
@@ -70,7 +73,3 @@
         if key == 2:
             return self.protocol
         raise KeyError
-
-
-
-
diff --git a/boto/ec2/elb/loadbalancer.py b/boto/ec2/elb/loadbalancer.py
index efb7151..7b6afc7 100644
--- a/boto/ec2/elb/loadbalancer.py
+++ b/boto/ec2/elb/loadbalancer.py
@@ -29,6 +29,22 @@
 from boto.resultset import ResultSet
 
 
+class LoadBalancerZones(object):
+    """
+    Used to collect the zones for a Load Balancer when enable_zones
+    or disable_zones are called.
+    """
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.zones = ListElement()
+
+    def startElement(self, name, attrs, connection):
+        if name == 'AvailabilityZones':
+            return self.zones
+
+    def endElement(self, name, value, connection):
+        pass
+
+
 class LoadBalancer(object):
     """
     Represents an EC2 Load Balancer.
diff --git a/boto/ec2/image.py b/boto/ec2/image.py
index f00e55a..376fc86 100644
--- a/boto/ec2/image.py
+++ b/boto/ec2/image.py
@@ -15,7 +15,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -31,12 +31,12 @@
     def endElement(self, name, value, connection):
         if name == 'productCode':
             self.append(value)
-    
+
 class Image(TaggedEC2Object):
     """
     Represents an EC2 Image
     """
-    
+
     def __init__(self, connection=None):
         TaggedEC2Object.__init__(self, connection)
         self.id = None
@@ -94,7 +94,7 @@
             else:
                 raise Exception(
                     'Unexpected value of isPublic %s for image %s'%(
-                        value, 
+                        value,
                         self.id
                     )
                 )
@@ -151,7 +151,7 @@
             raise ValueError('%s is not a valid Image ID' % self.id)
         return self.state
 
-    def run(self, min_count=1, max_count=1, key_name=None, 
+    def run(self, min_count=1, max_count=1, key_name=None,
             security_groups=None, user_data=None,
             addressing_type=None, instance_type='m1.small', placement=None,
             kernel_id=None, ramdisk_id=None,
@@ -166,95 +166,119 @@
 
         """
         Runs this instance.
-        
+
         :type min_count: int
         :param min_count: The minimum number of instances to start
-        
+
         :type max_count: int
         :param max_count: The maximum number of instances to start
-        
+
         :type key_name: string
-        :param key_name: The name of the keypair to run this instance with.
-        
-        :type security_groups: 
-        :param security_groups:
-        
-        :type user_data: 
-        :param user_data:
-        
-        :type addressing_type: 
-        :param daddressing_type:
-        
+        :param key_name: The name of the key pair with which to
+            launch instances.
+
+        :type security_groups: list of strings
+        :param security_groups: The names of the security groups with which to
+            associate instances.
+
+        :type user_data: string
+        :param user_data: The Base64-encoded MIME user data to be made
+            available to the instance(s) in this reservation.
+
         :type instance_type: string
-        :param instance_type: The type of instance to run.  Current choices are:
-                              m1.small | m1.large | m1.xlarge | c1.medium |
-                              c1.xlarge | m2.xlarge | m2.2xlarge |
-                              m2.4xlarge | cc1.4xlarge
-        
+        :param instance_type: The type of instance to run:
+
+            * t1.micro
+            * m1.small
+            * m1.medium
+            * m1.large
+            * m1.xlarge
+            * m3.xlarge
+            * m3.2xlarge
+            * c1.medium
+            * c1.xlarge
+            * m2.xlarge
+            * m2.2xlarge
+            * m2.4xlarge
+            * cr1.8xlarge
+            * hi1.4xlarge
+            * hs1.8xlarge
+            * cc1.4xlarge
+            * cg1.4xlarge
+            * cc2.8xlarge
+
         :type placement: string
-        :param placement: The availability zone in which to launch the instances
+        :param placement: The Availability Zone to launch the instance into.
 
         :type kernel_id: string
-        :param kernel_id: The ID of the kernel with which to launch the instances
-        
+        :param kernel_id: The ID of the kernel with which to launch the
+            instances.
+
         :type ramdisk_id: string
-        :param ramdisk_id: The ID of the RAM disk with which to launch the instances
-        
+        :param ramdisk_id: The ID of the RAM disk with which to launch the
+            instances.
+
         :type monitoring_enabled: bool
-        :param monitoring_enabled: Enable CloudWatch monitoring on the instance.
-        
-        :type subnet_id: string
-        :param subnet_id: The subnet ID within which to launch the instances for VPC.
-        
+        :param monitoring_enabled: Enable CloudWatch monitoring on
+            the instance.
+
+        :type subnet_id: string
+        :param subnet_id: The subnet ID within which to launch the instances
+            for VPC.
+
         :type private_ip_address: string
-        :param private_ip_address: If you're using VPC, you can optionally use
-                                   this parameter to assign the instance a
-                                   specific available IP address from the
-                                   subnet (e.g., 10.0.0.25).
+        :param private_ip_address: If you're using VPC, you can
+            optionally use this parameter to assign the instance a
+            specific available IP address from the subnet (e.g.,
+            10.0.0.25).
 
         :type block_device_map: :class:`boto.ec2.blockdevicemapping.BlockDeviceMapping`
         :param block_device_map: A BlockDeviceMapping data structure
-                                 describing the EBS volumes associated
-                                 with the Image.
+            describing the EBS volumes associated with the Image.
 
         :type disable_api_termination: bool
         :param disable_api_termination: If True, the instances will be locked
-                                        and will not be able to be terminated
-                                        via the API.
+            and will not be able to be terminated via the API.
 
         :type instance_initiated_shutdown_behavior: string
-        :param instance_initiated_shutdown_behavior: Specifies whether the instance
-                                                     stops or terminates on instance-initiated
-                                                     shutdown. Valid values are:
-                                                     stop | terminate
+        :param instance_initiated_shutdown_behavior: Specifies whether the
+            instance stops or terminates on instance-initiated shutdown.
+            Valid values are:
+
+            * stop
+            * terminate
 
         :type placement_group: string
         :param placement_group: If specified, this is the name of the placement
-                                group in which the instance(s) will be launched.
+            group in which the instance(s) will be launched.
 
         :type additional_info: string
-        :param additional_info:  Specifies additional information to make
-            available to the instance(s)
+        :param additional_info: Specifies additional information to make
+            available to the instance(s).
 
-        :type security_group_ids: 
-        :param security_group_ids:
+        :type security_group_ids: list of strings
+        :param security_group_ids: The ID of the VPC security groups with
+            which to associate instances.
 
         :type instance_profile_name: string
-        :param instance_profile_name: The name of an IAM instance profile to use.
+        :param instance_profile_name: The name of
+            the IAM Instance Profile (IIP) to associate with the instances.
 
         :type instance_profile_arn: string
-        :param instance_profile_arn: The ARN of an IAM instance profile to use.
-        
+        :param instance_profile_arn: The Amazon resource name (ARN) of
+            the IAM Instance Profile (IIP) to associate with the instances.
+
         :type tenancy: string
-        :param tenancy: The tenancy of the instance you want to launch. An
-                        instance with a tenancy of 'dedicated' runs on
-                        single-tenant hardware and can only be launched into a
-                        VPC. Valid values are: "default" or "dedicated".
-                        NOTE: To use dedicated tenancy you MUST specify a VPC
-                        subnet-ID as well.
+        :param tenancy: The tenancy of the instance you want to
+            launch. An instance with a tenancy of 'dedicated' runs on
+            single-tenant hardware and can only be launched into a
+            VPC. Valid values are:"default" or "dedicated".
+            NOTE: To use dedicated tenancy you MUST specify a VPC
+            subnet-ID as well.
 
         :rtype: Reservation
-        :return: The :class:`boto.ec2.instance.Reservation` associated with the request for machines
+        :return: The :class:`boto.ec2.instance.Reservation` associated with
+                 the request for machines
 
         """
 
@@ -266,9 +290,9 @@
                                              monitoring_enabled, subnet_id,
                                              block_device_map, disable_api_termination,
                                              instance_initiated_shutdown_behavior,
-                                             private_ip_address, placement_group, 
+                                             private_ip_address, placement_group,
                                              security_group_ids=security_group_ids,
-                                             additional_info=additional_info, 
+                                             additional_info=additional_info,
                                              instance_profile_name=instance_profile_name,
                                              instance_profile_arn=instance_profile_arn,
                                              tenancy=tenancy)
@@ -348,3 +372,16 @@
             self.ramdisk = value
         else:
             setattr(self, name, value)
+
+
+class CopyImage(object):
+    def __init__(self, parent=None):
+        self._parent = parent
+        self.image_id = None
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'imageId':
+            self.image_id = value
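
CopyImage is the thin result wrapper used by the new EC2Connection.copy_image
call earlier in this change; only the new image id comes back. A usage sketch
with placeholder region and AMI values::

    import boto.ec2

    # copy_image is called on the destination region's connection.
    dest = boto.ec2.connect_to_region('us-east-1')
    copy = dest.copy_image('us-west-2', 'ami-12345678', 'my-ami-copy',
                           description='copied for use in us-east-1')
    print(copy.image_id)
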
diff --git a/boto/ec2/instance.py b/boto/ec2/instance.py
index 7435788..5be701f 100644
--- a/boto/ec2/instance.py
+++ b/boto/ec2/instance.py
@@ -188,6 +188,8 @@
     :ivar product_codes: A list of product codes associated with this instance.
     :ivar ami_launch_index: This instance's position within its launch group.
     :ivar monitored: A boolean indicating whether monitoring is enabled or not.
+    :ivar monitoring_state: A string value that contains the actual value
+        of the monitoring element returned by EC2.
     :ivar spot_instance_request_id: The ID of the spot instance request
         if this is a spot instance.
     :ivar subnet_id: The VPC Subnet ID, if running in VPC.
@@ -223,6 +225,7 @@
         self.product_codes = ProductCodes()
         self.ami_launch_index = None
         self.monitored = False
+        self.monitoring_state = None
         self.spot_instance_request_id = None
         self.subnet_id = None
         self.vpc_id = None
@@ -273,10 +276,6 @@
         return 0
 
     @property
-    def state(self):
-        return self._state.name
-
-    @property
     def placement(self):
         return self._placement.zone
 
@@ -310,6 +309,7 @@
             return self.eventsSet
         elif name == 'networkInterfaceSet':
             self.interfaces = ResultSet([('item', NetworkInterface)])
+            return self.interfaces
         elif name == 'iamInstanceProfile':
             self.instance_profile = SubParse('iamInstanceProfile')
             return self.instance_profile
@@ -364,6 +364,7 @@
             self.ramdisk = value
         elif name == 'state':
             if self._in_monitoring_element:
+                self.monitoring_state = value
                 if value == 'enabled':
                     self.monitored = True
                 self._in_monitoring_element = False
@@ -473,6 +474,18 @@
         return self.connection.confirm_product_instance(self.id, product_code)
 
     def use_ip(self, ip_address):
+        """
+        Associates an Elastic IP to the instance.
+
+        :type ip_address: Either an instance of
+            :class:`boto.ec2.address.Address` or a string.
+        :param ip_address: The IP address to associate
+            with the instance.
+
+        :rtype: bool
+        :return: True if successful
+        """
+
         if isinstance(ip_address, Address):
             ip_address = ip_address.public_ip
         return self.connection.associate_address(self.id, ip_address)
@@ -549,6 +562,33 @@
         """
         return self.connection.reset_instance_attribute(self.id, attribute)
 
+    def create_image(self, name, description=None, no_reboot=False):
+        """
+        Will create an AMI from the instance in the running or stopped
+        state.
+
+        :type name: string
+        :param name: The name of the new image
+
+        :type description: string
+        :param description: An optional human-readable string describing
+                            the contents and purpose of the AMI.
+
+        :type no_reboot: bool
+        :param no_reboot: An optional flag indicating that the bundling process
+                          should not attempt to shutdown the instance before
+                          bundling.  If this flag is True, the responsibility
+                          of maintaining file system integrity is left to the
+                          owner of the instance.
+
+        :rtype: string
+        :return: The new image id
+        """
+        return self.connection.create_image(self.id, name, description,
+                                            no_reboot)
+
 
 class ConsoleOutput:
 
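
Instance.create_image simply delegates to the connection-level call, so an
AMI can be created straight from an Instance object. A sketch with a
placeholder instance id::

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    reservation = conn.get_all_instances(instance_ids=['i-12345678'])[0]
    instance = reservation.instances[0]

    ami_id = instance.create_image('nightly-%s' % instance.id,
                                   description='imaged in place',
                                   no_reboot=True)
    print(ami_id)
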
diff --git a/boto/ec2/instancestatus.py b/boto/ec2/instancestatus.py
index 3a9b543..b09b55e 100644
--- a/boto/ec2/instancestatus.py
+++ b/boto/ec2/instancestatus.py
@@ -16,11 +16,12 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+
 class Details(dict):
     """
     A dict object that contains name/value pairs which provide
@@ -38,7 +39,8 @@
             self[self._name] = value
         else:
             setattr(self, name, value)
-    
+
+
 class Event(object):
     """
     A status event for an instance.
@@ -57,7 +59,7 @@
         self.description = description
         self.not_before = not_before
         self.not_after = not_after
-        
+
     def __repr__(self):
         return 'Event:%s' % self.code
 
@@ -76,6 +78,7 @@
         else:
             setattr(self, name, value)
 
+
 class Status(object):
     """
     A generic Status object used for system status and instance status.
@@ -90,7 +93,7 @@
         if not details:
             details = Details()
         self.details = details
-        
+
     def __repr__(self):
         return 'Status:%s' % self.status
 
@@ -105,8 +108,9 @@
         else:
             setattr(self, name, value)
 
+
 class EventSet(list):
-    
+
     def startElement(self, name, attrs, connection):
         if name == 'item':
             event = Event()
@@ -118,6 +122,7 @@
     def endElement(self, name, value, connection):
         setattr(self, name, value)
 
+
 class InstanceStatus(object):
     """
     Represents an EC2 Instance status as reported by
@@ -137,7 +142,7 @@
     :ivar instance_status: A Status object that reports impaired
         functionality that arises from problems internal to the instance.
     """
-    
+
     def __init__(self, id=None, zone=None, events=None,
                  state_code=None, state_name=None):
         self.id = id
@@ -174,6 +179,7 @@
         else:
             setattr(self, name, value)
 
+
 class InstanceStatusSet(list):
     """
     A list object that contains the results of a call to
@@ -191,7 +197,7 @@
         list.__init__(self)
         self.connection = connection
         self.next_token = None
-    
+
     def startElement(self, name, attrs, connection):
         if name == 'item':
             status = InstanceStatus()
@@ -201,7 +207,6 @@
             return None
 
     def endElement(self, name, value, connection):
-        if name == 'NextToken':
+        if name == 'nextToken':
             self.next_token = value
         setattr(self, name, value)
-
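
With next_token now read from the lowercase nextToken tag, pagination of
DescribeInstanceStatus works end to end. A sketch that drains all pages,
assuming the max_results/next_token keywords on get_all_instance_status and
the usual boto credential setup::

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    statuses = []
    token = None
    while True:
        page = conn.get_all_instance_status(max_results=100, next_token=token)
        statuses.extend(page)
        token = page.next_token  # populated by the fix above
        if not token:
            break
    print('%d instance statuses' % len(statuses))
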
diff --git a/boto/ec2/networkinterface.py b/boto/ec2/networkinterface.py
index 2658e3f..5c6088f 100644
--- a/boto/ec2/networkinterface.py
+++ b/boto/ec2/networkinterface.py
@@ -1,4 +1,5 @@
 # Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -14,7 +15,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -26,6 +27,7 @@
 from boto.resultset import ResultSet
 from boto.ec2.group import Group
 
+
 class Attachment(object):
     """
     :ivar id: The ID of the attachment.
@@ -45,13 +47,13 @@
         self.status = None
         self.attach_time = None
         self.delete_on_termination = False
-        
+
     def __repr__(self):
         return 'Attachment:%s' % self.id
 
     def startElement(self, name, attrs, connection):
         return None
-        
+
     def endElement(self, name, value, connection):
         if name == 'attachmentId':
             self.id = value
@@ -71,6 +73,7 @@
         else:
             setattr(self, name, value)
 
+
 class NetworkInterface(TaggedEC2Object):
     """
     An Elastic Network Interface.
@@ -80,7 +83,7 @@
     :ivar vpc_id: The ID of the VPC.
     :ivar description: The description.
     :ivar owner_id: The ID of the owner of the ENI.
-    :ivar requester_managed: 
+    :ivar requester_managed:
     :ivar status: The interface's status (available|in-use).
     :ivar mac_address: The MAC address of the interface.
     :ivar private_ip_address: The IP address of the interface within
@@ -89,8 +92,9 @@
         network traffic to or from this network interface.
     :ivar groups: List of security groups associated with the interface.
     :ivar attachment: The attachment object.
+    :ivar private_ip_addresses: A list of PrivateIPAddress objects.
     """
-    
+
     def __init__(self, connection=None):
         TaggedEC2Object.__init__(self, connection)
         self.id = None
@@ -106,6 +110,7 @@
         self.source_dest_check = None
         self.groups = []
         self.attachment = None
+        self.private_ip_addresses = []
 
     def __repr__(self):
         return 'NetworkInterface:%s' % self.id
@@ -120,9 +125,12 @@
         elif name == 'attachment':
             self.attachment = Attachment()
             return self.attachment
+        elif name == 'privateIpAddressesSet':
+            self.private_ip_addresses = ResultSet([('item', PrivateIPAddress)])
+            return self.private_ip_addresses
         else:
             return None
-        
+
     def endElement(self, name, value, connection):
         if name == 'networkInterfaceId':
             self.id = value
@@ -159,5 +167,81 @@
         return self.connection.delete_network_interface(self.id)
 
 
+class PrivateIPAddress(object):
+    def __init__(self, connection=None, private_ip_address=None,
+                 primary=None):
+        self.connection = connection
+        self.private_ip_address = private_ip_address
+        self.primary = primary
 
-            
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'privateIpAddress':
+            self.private_ip_address = value
+        elif name == 'primary':
+            self.primary = True if value.lower() == 'true' else False
+
+    def __repr__(self):
+        return "PrivateIPAddress(%s, primary=%s)" % (self.private_ip_address,
+                                                     self.primary)
+
+
+class NetworkInterfaceCollection(list):
+    def __init__(self, *interfaces):
+        self.extend(interfaces)
+
+    def build_list_params(self, params, prefix=''):
+        for i, spec in enumerate(self):
+            full_prefix = '%sNetworkInterface.%s.' % (prefix, i+1)
+            if spec.network_interface_id is not None:
+                params[full_prefix + 'NetworkInterfaceId'] = \
+                        str(spec.network_interface_id)
+            if spec.device_index is not None:
+                params[full_prefix + 'DeviceIndex'] = \
+                        str(spec.device_index)
+            if spec.subnet_id is not None:
+                params[full_prefix + 'SubnetId'] = str(spec.subnet_id)
+            if spec.description is not None:
+                params[full_prefix + 'Description'] = str(spec.description)
+            if spec.delete_on_termination is not None:
+                params[full_prefix + 'DeleteOnTermination'] = \
+                        'true' if spec.delete_on_termination else 'false'
+            if spec.secondary_private_ip_address_count is not None:
+                params[full_prefix + 'SecondaryPrivateIpAddressCount'] = \
+                        str(spec.secondary_private_ip_address_count)
+            if spec.private_ip_address is not None:
+                params[full_prefix + 'PrivateIpAddress'] = \
+                        str(spec.private_ip_address)
+            if spec.groups is not None:
+                for j, group_id in enumerate(spec.groups):
+                    query_param_key = '%sSecurityGroupId.%s' % (full_prefix, j+1)
+                    params[query_param_key] = str(group_id)
+            if spec.private_ip_addresses is not None:
+                for k, ip_addr in enumerate(spec.private_ip_addresses):
+                    query_param_key_prefix = (
+                        '%sPrivateIpAddresses.%s' % (full_prefix, k+1))
+                    params[query_param_key_prefix + '.PrivateIpAddress'] = \
+                            str(ip_addr.private_ip_address)
+                    if ip_addr.primary is not None:
+                        params[query_param_key_prefix + '.Primary'] = \
+                                'true' if ip_addr.primary else 'false'
+
+
+class NetworkInterfaceSpecification(object):
+    def __init__(self, network_interface_id=None, device_index=None,
+                 subnet_id=None, description=None, private_ip_address=None,
+                 groups=None, delete_on_termination=None,
+                 private_ip_addresses=None,
+                 secondary_private_ip_address_count=None):
+        self.network_interface_id = network_interface_id
+        self.device_index = device_index
+        self.subnet_id = subnet_id
+        self.description = description
+        self.private_ip_address = private_ip_address
+        self.groups = groups
+        self.delete_on_termination = delete_on_termination
+        self.private_ip_addresses = private_ip_addresses
+        self.secondary_private_ip_address_count = \
+                secondary_private_ip_address_count
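
Example usage of the classes added above (a minimal sketch, not part of the
upstream diff; the subnet, security-group and IP values are placeholders)::

    from boto.ec2.networkinterface import (NetworkInterfaceCollection,
                                           NetworkInterfaceSpecification,
                                           PrivateIPAddress)

    # One interface on device index 0 with a primary and one secondary address.
    spec = NetworkInterfaceSpecification(
        device_index=0,
        subnet_id='subnet-12345678',
        groups=['sg-12345678'],
        delete_on_termination=True,
        private_ip_addresses=[
            PrivateIPAddress(private_ip_address='10.0.0.10', primary=True),
            PrivateIPAddress(private_ip_address='10.0.0.11', primary=False)])

    collection = NetworkInterfaceCollection(spec)

    # build_list_params flattens the specification into EC2 query parameters,
    # e.g. NetworkInterface.1.SubnetId and
    # NetworkInterface.1.PrivateIpAddresses.1.PrivateIpAddress.
    params = {}
    collection.build_list_params(params)
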
diff --git a/boto/ec2/reservedinstance.py b/boto/ec2/reservedinstance.py
index e71c1ad..d92f168 100644
--- a/boto/ec2/reservedinstance.py
+++ b/boto/ec2/reservedinstance.py
@@ -128,6 +128,7 @@
                                            usage_price, description)
         self.instance_count = instance_count
         self.state = state
+        self.start = None
 
     def __repr__(self):
         return 'ReservedInstance:%s' % self.id
@@ -139,6 +140,8 @@
             self.instance_count = int(value)
         elif name == 'state':
             self.state = value
+        elif name == 'start':
+            self.start = value
         else:
             ReservedInstancesOffering.endElement(self, name, value, connection)
 
diff --git a/boto/ec2/spotinstancerequest.py b/boto/ec2/spotinstancerequest.py
index a3562ac..54fba1d 100644
--- a/boto/ec2/spotinstancerequest.py
+++ b/boto/ec2/spotinstancerequest.py
@@ -29,6 +29,12 @@
 
 
 class SpotInstanceStateFault(object):
+    """
+    The fault codes for the Spot Instance request, if any.
+
+    :ivar code: The reason code for the Spot Instance state change.
+    :ivar message: The message for the Spot Instance state change.
+    """
 
     def __init__(self, code=None, message=None):
         self.code = code
@@ -48,7 +54,70 @@
         setattr(self, name, value)
 
 
+class SpotInstanceStatus(object):
+    """
+    Contains the status of a Spot Instance Request.
+
+    :ivar code: Status code of the request.
+    :ivar message: The description for the status code for the Spot request.
+    :ivar update_time: Time the status was last updated.
+    """
+
+    def __init__(self, code=None, update_time=None, message=None):
+        self.code = code
+        self.update_time = update_time
+        self.message = message
+
+    def __repr__(self):
+        return '<Status: %s>' % self.code
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'code':
+            self.code = value
+        elif name == 'message':
+            self.message = value
+        elif name == 'updateTime':
+            self.update_time = value
+
+
 class SpotInstanceRequest(TaggedEC2Object):
+    """
+
+    :ivar id: The ID of the Spot Instance Request.
+    :ivar price: The maximum hourly price for any Spot Instance launched to
+        fulfill the request.
+    :ivar type: The Spot Instance request type.
+    :ivar state: The state of the Spot Instance request.
+    :ivar fault: The fault codes for the Spot Instance request, if any.
+    :ivar valid_from: The start date of the request. If this is a one-time
+        request, the request becomes active at this date and time and remains
+        active until all instances launch, the request expires, or the request is
+        canceled. If the request is persistent, the request becomes active at this
+        date and time and remains active until it expires or is canceled.
+    :ivar valid_until: The end date of the request. If this is a one-time
+        request, the request remains active until all instances launch, the request
+        is canceled, or this date is reached. If the request is persistent, it
+        remains active until it is canceled or this date is reached.
+    :ivar launch_group: The instance launch group. Launch groups are Spot
+        Instances that launch together and terminate together.
+    :ivar launched_availability_zone: The Availability Zone in which the
+        request is launched.
+    :ivar product_description: The product description associated with the
+        Spot Instance.
+    :ivar availability_zone_group: The Availability Zone group. If you specify
+        the same Availability Zone group for all Spot Instance requests, all Spot
+        Instances are launched in the same Availability Zone.
+    :ivar create_time: The time stamp when the Spot Instance request was
+        created.
+    :ivar launch_specification: Additional information for launching instances.
+    :ivar instance_id: The instance ID, if an instance has been launched to
+        fulfill the Spot Instance request.
+    :ivar status: The status code and status message describing the Spot
+        Instance request.
+
+    """
 
     def __init__(self, connection=None):
         TaggedEC2Object.__init__(self, connection)
@@ -66,6 +135,7 @@
         self.create_time = None
         self.launch_specification = None
         self.instance_id = None
+        self.status = None
 
     def __repr__(self):
         return 'SpotInstanceRequest:%s' % self.id
@@ -80,6 +150,9 @@
         elif name == 'fault':
             self.fault = SpotInstanceStateFault()
             return self.fault
+        elif name == 'status':
+            self.status = SpotInstanceStatus()
+            return self.status
         else:
             return None
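
A short sketch of reading the new status attribute; it assumes configured
credentials and uses the existing EC2 connection API
(connect_to_region, get_all_spot_instance_requests), so treat it as
illustrative only::

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    for request in conn.get_all_spot_instance_requests():
        # request.status is a SpotInstanceStatus parsed from the <status>
        # element, or None if the response did not include one.
        if request.status is not None:
            print('%s %s: %s' % (request.id, request.status.code,
                                 request.status.message))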
 
diff --git a/boto/ec2/tag.py b/boto/ec2/tag.py
index 8032e6f..deb2c78 100644
--- a/boto/ec2/tag.py
+++ b/boto/ec2/tag.py
@@ -15,11 +15,12 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+
 class TagSet(dict):
     """
     A TagSet is used to collect the tags associated with a particular
@@ -27,7 +28,7 @@
     can, this dict object will be used to collect those values.  See
     :class:`boto.ec2.ec2object.TaggedEC2Object` for more details.
     """
-    
+
     def __init__(self, connection=None):
         self.connection = connection
         self._current_key = None
@@ -55,7 +56,7 @@
     also the ID of the resource to which the tag is attached
     as well as the type of the resource.
     """
-    
+
     def __init__(self, connection=None, res_id=None, res_type=None,
                  name=None, value=None):
         self.connection = connection
@@ -81,7 +82,3 @@
             self.value = value
         else:
             setattr(self, name, value)
-
-
-            
-
diff --git a/boto/ec2/vmtype.py b/boto/ec2/vmtype.py
new file mode 100644
index 0000000..fdb4f36
--- /dev/null
+++ b/boto/ec2/vmtype.py
@@ -0,0 +1,59 @@
+# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+
+from boto.ec2.ec2object import EC2Object
+
+
+class VmType(EC2Object):
+    """
+    Represents an EC2 VM Type
+
+    :ivar name: The name of the vm type
+    :ivar cores: The number of cpu cores for this vm type
+    :ivar memory: The amount of memory in megabytes for this vm type
+    :ivar disk: The amount of disk space in gigabytes for this vm type
+    """
+
+    def __init__(self, connection=None, name=None, cores=None,
+                 memory=None, disk=None):
+        EC2Object.__init__(self, connection)
+        self.connection = connection
+        self.name = name
+        self.cores = cores
+        self.memory = memory
+        self.disk = disk
+
+    def __repr__(self):
+        return 'VmType:%s-%s,%s,%s' % (self.name, self.cores,
+                                       self.memory, self.disk)
+
+    def endElement(self, name, value, connection):
+        if name == 'euca:name':
+            self.name = value
+        elif name == 'euca:cpu':
+            self.cores = value
+        elif name == 'euca:disk':
+            self.disk = value
+        elif name == 'euca:memory':
+            self.memory = value
+        else:
+            setattr(self, name, value)
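
A small illustration of the new class; in practice instances are populated
from a Eucalyptus describe response rather than constructed by hand::

    from boto.ec2.vmtype import VmType

    vm_type = VmType(name='m1.small', cores=1, memory=256, disk=5)
    print(vm_type)  # VmType:m1.small-1,256,5
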
diff --git a/boto/ec2/volumestatus.py b/boto/ec2/volumestatus.py
index 7bbc173..78de2bb 100644
--- a/boto/ec2/volumestatus.py
+++ b/boto/ec2/volumestatus.py
@@ -16,13 +16,14 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
 from boto.ec2.instancestatus import Status, Details
 
+
 class Event(object):
     """
     A status event for an instance.
@@ -43,7 +44,7 @@
         self.description = description
         self.not_before = not_before
         self.not_after = not_after
-        
+
     def __repr__(self):
         return 'Event:%s' % self.type
 
@@ -64,8 +65,9 @@
         else:
             setattr(self, name, value)
 
+
 class EventSet(list):
-    
+
     def startElement(self, name, attrs, connection):
         if name == 'item':
             event = Event()
@@ -77,6 +79,7 @@
     def endElement(self, name, value, connection):
         setattr(self, name, value)
 
+
 class Action(object):
     """
     An action for an instance.
@@ -92,7 +95,7 @@
         self.id = id
         self.type = type
         self.description = description
-        
+
     def __repr__(self):
         return 'Action:%s' % self.code
 
@@ -111,8 +114,9 @@
         else:
             setattr(self, name, value)
 
+
 class ActionSet(list):
-    
+
     def startElement(self, name, attrs, connection):
         if name == 'item':
             action = Action()
@@ -124,6 +128,7 @@
     def endElement(self, name, value, connection):
         setattr(self, name, value)
 
+
 class VolumeStatus(object):
     """
     Represents an EC2 Volume status as reported by
@@ -136,7 +141,7 @@
     :ivar events: A list of events relevant to the instance.
     :ivar actions: A list of events relevant to the instance.
     """
-    
+
     def __init__(self, id=None, zone=None):
         self.id = id
         self.zone = zone
@@ -167,6 +172,7 @@
         else:
             setattr(self, name, value)
 
+
 class VolumeStatusSet(list):
     """
     A list object that contains the results of a call to
@@ -184,7 +190,7 @@
         list.__init__(self)
         self.connection = connection
         self.next_token = None
-    
+
     def startElement(self, name, attrs, connection):
         if name == 'item':
             status = VolumeStatus()
@@ -197,4 +203,3 @@
         if name == 'NextToken':
             self.next_token = value
         setattr(self, name, value)
-
diff --git a/boto/elasticache/__init__.py b/boto/elasticache/__init__.py
new file mode 100644
index 0000000..fe35d70
--- /dev/null
+++ b/boto/elasticache/__init__.py
@@ -0,0 +1,62 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.regioninfo import RegionInfo
+
+
+def regions():
+    """
+    Get all available regions for the AWS ElastiCache service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo`
+    """
+    from boto.elasticache.layer1 import ElastiCacheConnection
+    return [RegionInfo(name='us-east-1',
+                       endpoint='elasticache.us-east-1.amazonaws.com',
+                       connection_cls=ElastiCacheConnection),
+            RegionInfo(name='us-west-1',
+                       endpoint='elasticache.us-west-1.amazonaws.com',
+                       connection_cls=ElastiCacheConnection),
+            RegionInfo(name='us-west-2',
+                       endpoint='elasticache.us-west-2.amazonaws.com',
+                       connection_cls=ElastiCacheConnection),
+            RegionInfo(name='eu-west-1',
+                       endpoint='elasticache.eu-west-1.amazonaws.com',
+                       connection_cls=ElastiCacheConnection),
+            RegionInfo(name='ap-northeast-1',
+                       endpoint='elasticache.ap-northeast-1.amazonaws.com',
+                       connection_cls=ElastiCacheConnection),
+            RegionInfo(name='ap-southeast-1',
+                       endpoint='elasticache.ap-southeast-1.amazonaws.com',
+                       connection_cls=ElastiCacheConnection),
+            RegionInfo(name='sa-east-1',
+                       endpoint='elasticache.sa-east-1.amazonaws.com',
+                       connection_cls=ElastiCacheConnection),
+            ]
+
+
+def connect_to_region(region_name, **kw_params):
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
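
Usage follows the same pattern as boto's other per-service region helpers; a
sketch assuming AWS credentials are configured through the usual boto
mechanisms::

    import boto.elasticache

    # Enumerate the endpoints known to this module.
    for region in boto.elasticache.regions():
        print('%s -> %s' % (region.name, region.endpoint))

    # Returns an ElastiCacheConnection for a known region name, else None.
    conn = boto.elasticache.connect_to_region('us-east-1')
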
diff --git a/boto/elasticache/layer1.py b/boto/elasticache/layer1.py
new file mode 100644
index 0000000..6c50438
--- /dev/null
+++ b/boto/elasticache/layer1.py
@@ -0,0 +1,1253 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+
+import boto
+from boto.compat import json
+from boto.connection import AWSQueryConnection
+from boto.regioninfo import RegionInfo
+
+
+class ElastiCacheConnection(AWSQueryConnection):
+    """
+    Amazon ElastiCache
+    Amazon ElastiCache is a web service that makes it easier to set
+    up, operate, and scale a distributed cache in the cloud.
+
+    With Amazon ElastiCache, customers gain all of the benefits of a
+    high-performance, in-memory cache with far less of the
+    administrative burden of launching and managing a distributed
+    cache. The service makes set-up, scaling, and cluster failure
+    handling much simpler than in a self-managed cache deployment.
+
+    In addition, through integration with Amazon CloudWatch, customers
+    get enhanced visibility into the key performance statistics
+    associated with their cache and can receive alarms if a part of
+    their cache runs hot.
+    """
+    APIVersion = "2012-11-15"
+    DefaultRegionName = "us-east-1"
+    DefaultRegionEndpoint = "elasticache.us-east-1.amazonaws.com"
+
+    def __init__(self, **kwargs):
+        region = kwargs.get('region')
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint)
+        else:
+            del kwargs['region']
+        kwargs['host'] = region.endpoint
+        AWSQueryConnection.__init__(self, **kwargs)
+        self.region = region
+
+    def _required_auth_capability(self):
+        return ['sign-v2']
+
+    def authorize_cache_security_group_ingress(self,
+                                               cache_security_group_name,
+                                               ec2_security_group_name,
+                                               ec2_security_group_owner_id):
+        """
+        Authorizes ingress to a CacheSecurityGroup using EC2 Security
+        Groups as authorization (therefore the application using the
+        cache must be running on EC2 clusters). This API requires the
+        following parameters: EC2SecurityGroupName and
+        EC2SecurityGroupOwnerId.
+        You cannot authorize ingress from an EC2 security group in one
+        Region to an Amazon Cache Cluster in another.
+
+        :type cache_security_group_name: string
+        :param cache_security_group_name: The name of the Cache Security Group
+            to authorize.
+
+        :type ec2_security_group_name: string
+        :param ec2_security_group_name: Name of the EC2 Security Group to
+            include in the authorization.
+
+        :type ec2_security_group_owner_id: string
+        :param ec2_security_group_owner_id: AWS Account Number of the owner of
+            the security group specified in the EC2SecurityGroupName parameter.
+            The AWS Access Key ID is not an acceptable value.
+
+        """
+        params = {
+            'CacheSecurityGroupName': cache_security_group_name,
+            'EC2SecurityGroupName': ec2_security_group_name,
+            'EC2SecurityGroupOwnerId': ec2_security_group_owner_id,
+        }
+        return self._make_request(
+            action='AuthorizeCacheSecurityGroupIngress',
+            verb='POST',
+            path='/', params=params)
+
+    def create_cache_cluster(self, cache_cluster_id, num_cache_nodes,
+                             cache_node_type, engine, engine_version=None,
+                             cache_parameter_group_name=None,
+                             cache_subnet_group_name=None,
+                             cache_security_group_names=None,
+                             security_group_ids=None,
+                             preferred_availability_zone=None,
+                             preferred_maintenance_window=None, port=None,
+                             notification_topic_arn=None,
+                             auto_minor_version_upgrade=None):
+        """
+        Creates a new Cache Cluster.
+
+        :type cache_cluster_id: string
+        :param cache_cluster_id: The Cache Cluster identifier. This parameter
+            is stored as a lowercase string.
+
+        :type num_cache_nodes: integer
+        :param num_cache_nodes: The number of Cache Nodes the Cache Cluster
+            should have.
+
+        :type cache_node_type: string
+        :param cache_node_type: The compute and memory capacity of nodes in a
+            Cache Cluster.
+
+        :type engine: string
+        :param engine: The name of the cache engine to be used for this Cache
+            Cluster.  Currently, memcached is the only cache engine supported
+            by the service.
+
+        :type engine_version: string
+        :param engine_version: The version of the cache engine to be used for
+            this cluster.
+
+        :type cache_parameter_group_name: string
+        :param cache_parameter_group_name: The name of the cache parameter
+            group to associate with this Cache cluster. If this argument is
+            omitted, the default CacheParameterGroup for the specified engine
+            will be used.
+
+        :type cache_subnet_group_name: string
+        :param cache_subnet_group_name: The name of the Cache Subnet Group to
+            be used for the Cache Cluster.  Use this parameter only when you
+            are creating a cluster in an Amazon Virtual Private Cloud (VPC).
+
+        :type cache_security_group_names: list
+        :param cache_security_group_names: A list of Cache Security Group Names
+            to associate with this Cache Cluster.  Use this parameter only when
+            you are creating a cluster outside of an Amazon Virtual Private
+            Cloud (VPC).
+
+        :type security_group_ids: list
+        :param security_group_ids: Specifies the VPC Security Groups associated
+            with the Cache Cluster.  Use this parameter only when you are
+            creating a cluster in an Amazon Virtual Private Cloud (VPC).
+
+        :type preferred_availability_zone: string
+        :param preferred_availability_zone: The EC2 Availability Zone that the
+            Cache Cluster will be created in.  All cache nodes belonging to a
+            cache cluster are placed in the preferred availability zone.
+            Default: System chosen (random) availability zone.
+
+        :type preferred_maintenance_window: string
+        :param preferred_maintenance_window: The weekly time range (in UTC)
+            during which system maintenance can occur.  Example:
+            `sun:05:00-sun:09:00`
+
+        :type port: integer
+        :param port: The port number on which each of the Cache Nodes will
+            accept connections.
+
+        :type notification_topic_arn: string
+        :param notification_topic_arn: The Amazon Resource Name (ARN) of the
+            Amazon Simple Notification Service (SNS) topic to which
+            notifications will be sent.  The Amazon SNS topic owner must be the
+            same as the Cache Cluster owner.
+
+        :type auto_minor_version_upgrade: boolean
+        :param auto_minor_version_upgrade: Indicates that minor engine upgrades
+            will be applied automatically to the Cache Cluster during the
+            maintenance window.  Default: `True`
+
+        """
+        params = {
+            'CacheClusterId': cache_cluster_id,
+            'NumCacheNodes': num_cache_nodes,
+            'CacheNodeType': cache_node_type,
+            'Engine': engine,
+        }
+        if engine_version is not None:
+            params['EngineVersion'] = engine_version
+        if cache_parameter_group_name is not None:
+            params['CacheParameterGroupName'] = cache_parameter_group_name
+        if cache_subnet_group_name is not None:
+            params['CacheSubnetGroupName'] = cache_subnet_group_name
+        if cache_security_group_names is not None:
+            self.build_list_params(params,
+                                   cache_security_group_names,
+                                   'CacheSecurityGroupNames.member')
+        if security_group_ids is not None:
+            self.build_list_params(params,
+                                   security_group_ids,
+                                   'SecurityGroupIds.member')
+        if preferred_availability_zone is not None:
+            params['PreferredAvailabilityZone'] = preferred_availability_zone
+        if preferred_maintenance_window is not None:
+            params['PreferredMaintenanceWindow'] = preferred_maintenance_window
+        if port is not None:
+            params['Port'] = port
+        if notification_topic_arn is not None:
+            params['NotificationTopicArn'] = notification_topic_arn
+        if auto_minor_version_upgrade is not None:
+            params['AutoMinorVersionUpgrade'] = str(
+                auto_minor_version_upgrade).lower()
+        return self._make_request(
+            action='CreateCacheCluster',
+            verb='POST',
+            path='/', params=params)
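
An illustrative sketch of calling the method above; the cluster identifier and
node type are placeholders, and the return value is whatever _make_request
produces (defined later in this file)::

    from boto.elasticache.layer1 import ElastiCacheConnection

    conn = ElastiCacheConnection()  # defaults to us-east-1; assumes credentials
    response = conn.create_cache_cluster(
        cache_cluster_id='example-cache',
        num_cache_nodes=1,
        cache_node_type='cache.m1.small',
        engine='memcached')
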
+
+    def create_cache_parameter_group(self, cache_parameter_group_name,
+                                     cache_parameter_group_family,
+                                     description):
+        """
+        Creates a new Cache Parameter Group. Cache Parameter groups
+        control the parameters for a Cache Cluster.
+
+        :type cache_parameter_group_name: string
+        :param cache_parameter_group_name: The name of the Cache Parameter
+            Group.
+
+        :type cache_parameter_group_family: string
+        :param cache_parameter_group_family: The name of the Cache Parameter
+            Group Family the Cache Parameter Group can be used with.
+            Currently, memcached1.4 is the only cache parameter group family
+            supported by the service.
+
+        :type description: string
+        :param description: The description for the Cache Parameter Group.
+
+        """
+        params = {
+            'CacheParameterGroupName': cache_parameter_group_name,
+            'CacheParameterGroupFamily': cache_parameter_group_family,
+            'Description': description,
+        }
+        return self._make_request(
+            action='CreateCacheParameterGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def create_cache_security_group(self, cache_security_group_name,
+                                    description):
+        """
+        Creates a new Cache Security Group. Cache Security groups
+        control access to one or more Cache Clusters.
+
+        Only use cache security groups when you are creating a cluster
+        outside of an Amazon Virtual Private Cloud (VPC). Inside of a
+        VPC, use VPC security groups.
+
+        :type cache_security_group_name: string
+        :param cache_security_group_name: The name for the Cache Security
+            Group. This value is stored as a lowercase string.  Constraints:
+            Must contain no more than 255 alphanumeric characters. Must not be
+            "Default".  Example: `mysecuritygroup`
+
+        :type description: string
+        :param description: The description for the Cache Security Group.
+
+        """
+        params = {
+            'CacheSecurityGroupName': cache_security_group_name,
+            'Description': description,
+        }
+        return self._make_request(
+            action='CreateCacheSecurityGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def create_cache_subnet_group(self, cache_subnet_group_name,
+                                  cache_subnet_group_description, subnet_ids):
+        """
+        Creates a new Cache Subnet Group.
+
+        :type cache_subnet_group_name: string
+        :param cache_subnet_group_name: The name for the Cache Subnet Group.
+            This value is stored as a lowercase string.  Constraints: Must
+            contain no more than 255 alphanumeric characters or hyphens.
+            Example: `mysubnetgroup`
+
+        :type cache_subnet_group_description: string
+        :param cache_subnet_group_description: The description for the Cache
+            Subnet Group.
+
+        :type subnet_ids: list
+        :param subnet_ids: The EC2 Subnet IDs for the Cache Subnet Group.
+
+        """
+        params = {
+            'CacheSubnetGroupName': cache_subnet_group_name,
+            'CacheSubnetGroupDescription': cache_subnet_group_description,
+        }
+        self.build_list_params(params,
+                               subnet_ids,
+                               'SubnetIds.member')
+        return self._make_request(
+            action='CreateCacheSubnetGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def delete_cache_cluster(self, cache_cluster_id):
+        """
+        Deletes a previously provisioned Cache Cluster. A successful
+        response from the web service indicates the request was
+        received correctly. This action cannot be canceled or
+        reverted. DeleteCacheCluster deletes all associated Cache
+        Nodes, node endpoints and the Cache Cluster itself.
+
+        :type cache_cluster_id: string
+        :param cache_cluster_id: The Cache Cluster identifier for the Cache
+            Cluster to be deleted. This parameter isn't case sensitive.
+
+        """
+        params = {'CacheClusterId': cache_cluster_id, }
+        return self._make_request(
+            action='DeleteCacheCluster',
+            verb='POST',
+            path='/', params=params)
+
+    def delete_cache_parameter_group(self, cache_parameter_group_name):
+        """
+        Deletes the specified CacheParameterGroup. The
+        CacheParameterGroup cannot be deleted if it is associated with
+        any cache clusters.
+
+        :type cache_parameter_group_name: string
+        :param cache_parameter_group_name: The name of the Cache Parameter
+            Group to delete.  The specified cache security group must not be
+            associated with any Cache clusters.
+
+        """
+        params = {
+            'CacheParameterGroupName': cache_parameter_group_name,
+        }
+        return self._make_request(
+            action='DeleteCacheParameterGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def delete_cache_security_group(self, cache_security_group_name):
+        """
+        Deletes a Cache Security Group.
+        The specified Cache Security Group must not be associated with
+        any Cache Clusters.
+
+        :type cache_security_group_name: string
+        :param cache_security_group_name: The name of the Cache Security Group
+            to delete.  You cannot delete the default security group.
+
+        """
+        params = {
+            'CacheSecurityGroupName': cache_security_group_name,
+        }
+        return self._make_request(
+            action='DeleteCacheSecurityGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def delete_cache_subnet_group(self, cache_subnet_group_name):
+        """
+        Deletes a Cache Subnet Group.
+        The specified Cache Subnet Group must not be associated with
+        any Cache Clusters.
+
+        :type cache_subnet_group_name: string
+        :param cache_subnet_group_name: The name of the Cache Subnet Group to
+            delete.  Constraints: Must contain no more than 255 alphanumeric
+            characters or hyphens.
+
+        """
+        params = {'CacheSubnetGroupName': cache_subnet_group_name, }
+        return self._make_request(
+            action='DeleteCacheSubnetGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_cache_clusters(self, cache_cluster_id=None,
+                                max_records=None, marker=None,
+                                show_cache_node_info=None):
+        """
+        Returns information about all provisioned Cache Clusters if no
+        Cache Cluster identifier is specified, or about a specific
+        Cache Cluster if a Cache Cluster identifier is supplied.
+
+        Cluster information will be returned by default. An optional
+        ShowCacheNodeInfo flag can be used to retrieve detailed information
+        about the Cache Nodes associated with the Cache Cluster.
+        Details include the DNS address and port for the Cache Node
+        endpoint.
+
+        If the cluster is in the CREATING state, only cluster level
+        information will be displayed until all of the nodes are
+        successfully provisioned.
+
+        If the cluster is in the DELETING state, only cluster level
+        information will be displayed.
+
+        While adding Cache Nodes, node endpoint information and
+        creation time for the additional nodes will not be displayed
+        until they are completely provisioned. The cluster lifecycle
+        tells the customer when new nodes are AVAILABLE.
+
+        While removing existing Cache Nodes from a cluster, endpoint
+        information for the removed nodes will not be displayed.
+
+        DescribeCacheClusters supports pagination.
+
+        :type cache_cluster_id: string
+        :param cache_cluster_id: The user-supplied cluster identifier. If this
+            parameter is specified, only information about that specific Cache
+            Cluster is returned. This parameter isn't case sensitive.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified MaxRecords
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+
+        :type marker: string
+        :param marker: An optional marker provided in the previous
+            DescribeCacheClusters request. If this parameter is specified, the
+            response includes only records beyond the marker, up to the value
+            specified by MaxRecords.
+
+        :type show_cache_node_info: boolean
+        :param show_cache_node_info: An optional flag that can be included in
+            the DescribeCacheCluster request to retrieve Cache Nodes
+            information.
+
+        """
+        params = {}
+        if cache_cluster_id is not None:
+            params['CacheClusterId'] = cache_cluster_id
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        if show_cache_node_info is not None:
+            params['ShowCacheNodeInfo'] = str(
+                show_cache_node_info).lower()
+        return self._make_request(
+            action='DescribeCacheClusters',
+            verb='POST',
+            path='/', params=params)
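
A brief sketch of a describe call with node-level details; to page through
more than MaxRecords results, pass the marker from the previous response back
in via the marker argument::

    from boto.elasticache.layer1 import ElastiCacheConnection

    conn = ElastiCacheConnection()  # assumes configured credentials
    # Up to 20 clusters per page, including per-node endpoint information.
    page = conn.describe_cache_clusters(max_records=20,
                                        show_cache_node_info=True)
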
+
+    def describe_cache_engine_versions(self, engine=None,
+                                       engine_version=None,
+                                       cache_parameter_group_family=None,
+                                       max_records=None, marker=None,
+                                       default_only=None):
+        """
+        Returns a list of the available cache engines and their
+        versions.
+
+        :type engine: string
+        :param engine: The cache engine to return.
+
+        :type engine_version: string
+        :param engine_version: The cache engine version to return.  Example:
+            `1.4.14`
+
+        :type cache_parameter_group_family: string
+        :param cache_parameter_group_family: The name of a specific Cache
+            Parameter Group family to return details for.  Constraints: must
+            be 1 to 255 alphanumeric characters; the first character must be
+            a letter; cannot end with a hyphen or contain two consecutive
+            hyphens.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified MaxRecords
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+
+        :type marker: string
+        :param marker: An optional marker provided in the previous
+            DescribeCacheEngineVersions request. If this parameter is
+            specified, the response includes only records beyond the marker,
+            up to the value specified by MaxRecords.
+
+        :type default_only: boolean
+        :param default_only: Indicates that only the default version of the
+            specified engine or engine and major version combination is
+            returned.
+
+        """
+        params = {}
+        if engine is not None:
+            params['Engine'] = engine
+        if engine_version is not None:
+            params['EngineVersion'] = engine_version
+        if cache_parameter_group_family is not None:
+            params['CacheParameterGroupFamily'] = cache_parameter_group_family
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        if default_only is not None:
+            params['DefaultOnly'] = str(
+                default_only).lower()
+        return self._make_request(
+            action='DescribeCacheEngineVersions',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_cache_parameter_groups(self,
+                                        cache_parameter_group_name=None,
+                                        max_records=None, marker=None):
+        """
+        Returns a list of CacheParameterGroup descriptions. If a
+        CacheParameterGroupName is specified, the list will contain
+        only the descriptions of the specified CacheParameterGroup.
+
+        :type cache_parameter_group_name: string
+        :param cache_parameter_group_name: The name of a specific cache
+            parameter group to return details for.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified MaxRecords
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+
+        :type marker: string
+        :param marker: An optional marker provided in the previous
+            DescribeCacheParameterGroups request. If this parameter is
+            specified, the response includes only records beyond the marker, up
+            to the value specified by MaxRecords.
+
+        """
+        params = {}
+        if cache_parameter_group_name is not None:
+            params['CacheParameterGroupName'] = cache_parameter_group_name
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeCacheParameterGroups',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_cache_parameters(self, cache_parameter_group_name,
+                                  source=None, max_records=None, marker=None):
+        """
+        Returns the detailed parameter list for a particular
+        CacheParameterGroup.
+
+        :type cache_parameter_group_name: string
+        :param cache_parameter_group_name: The name of a specific cache
+            parameter group to return details for.
+
+        :type source: string
+        :param source: The parameter types to return.  Valid values: `user` |
+            `system` | `engine-default`
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified MaxRecords
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+
+        :type marker: string
+        :param marker: An optional marker provided in the previous
+            DescribeCacheParameters request. If this parameter is specified,
+            the response includes only records beyond the marker, up to the
+            value specified by MaxRecords.
+
+        """
+        params = {
+            'CacheParameterGroupName': cache_parameter_group_name,
+        }
+        if source is not None:
+            params['Source'] = source
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeCacheParameters',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_cache_security_groups(self, cache_security_group_name=None,
+                                       max_records=None, marker=None):
+        """
+        Returns a list of CacheSecurityGroup descriptions. If a
+        CacheSecurityGroupName is specified, the list will contain
+        only the description of the specified CacheSecurityGroup.
+
+        :type cache_security_group_name: string
+        :param cache_security_group_name: The name of the Cache Security Group
+            to return details for.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified MaxRecords
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+
+        :type marker: string
+        :param marker: An optional marker provided in the previous
+            DescribeCacheSecurityGroups request. If this parameter is
+            specified, the response includes only records beyond the marker,
+            up to the value specified by MaxRecords.
+
+        """
+        params = {}
+        if cache_security_group_name is not None:
+            params['CacheSecurityGroupName'] = cache_security_group_name
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeCacheSecurityGroups',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_cache_subnet_groups(self, cache_subnet_group_name=None,
+                                     max_records=None, marker=None):
+        """
+        Returns a list of CacheSubnetGroup descriptions. If a
+        CacheSubnetGroupName is specified, the list will contain only
+        the description of the specified Cache Subnet Group.
+
+        :type cache_subnet_group_name: string
+        :param cache_subnet_group_name: The name of the Cache Subnet Group to
+            return details for.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified `MaxRecords`
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.  Default: 100  Constraints: minimum 20,
+            maximum 100
+
+        :type marker: string
+        :param marker: An optional marker provided in the previous
+            DescribeCacheSubnetGroups request. If this parameter is specified,
+            the response includes only records beyond the marker, up to the
+            value specified by `MaxRecords`.
+
+        """
+        params = {}
+        if cache_subnet_group_name is not None:
+            params['CacheSubnetGroupName'] = cache_subnet_group_name
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeCacheSubnetGroups',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_engine_default_parameters(self,
+                                           cache_parameter_group_family,
+                                           max_records=None, marker=None):
+        """
+        Returns the default engine and system parameter information
+        for the specified cache engine.
+
+        :type cache_parameter_group_family: string
+        :param cache_parameter_group_family: The name of the Cache Parameter
+            Group Family.  Currently, memcached1.4 is the only cache parameter
+            group family supported by the service.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified MaxRecords
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+
+        :type marker: string
+        :param marker: An optional marker provided in the previous
+            DescribeEngineDefaultParameters request. If this parameter is
+            specified, the response includes only records beyond the marker,
+            up to the value specified by MaxRecords.
+
+        """
+        params = {
+            'CacheParameterGroupFamily': cache_parameter_group_family,
+        }
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeEngineDefaultParameters',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_events(self, source_identifier=None, source_type=None,
+                        start_time=None, end_time=None, duration=None,
+                        max_records=None, marker=None):
+        """
+        Returns events related to Cache Clusters, Cache Security
+        Groups, and Cache Parameter Groups for the past 14 days.
+        Events specific to a particular Cache Cluster, Cache Security
+        Group, or Cache Parameter Group can be obtained by providing
+        the name as a parameter. By default, the past hour of events
+        are returned.
+
+        :type source_identifier: string
+        :param source_identifier: The identifier of the event source for which
+            events will be returned. If not specified, then all sources are
+            included in the response.
+
+        :type source_type: string
+        :param source_type: The event source to retrieve events for. If no
+            value is specified, all events are returned.
+
+        :type start_time: string
+        :param start_time: The beginning of the time interval to retrieve
+            events for, specified in ISO 8601 format.
+
+        :type end_time: string
+        :param end_time: The end of the time interval for which to retrieve
+            events, specified in ISO 8601 format.
+
+        :type duration: integer
+        :param duration: The number of minutes to retrieve events for.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified MaxRecords
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+
+        :type marker: string
+        :param marker: An optional marker provided in the previous
+            DescribeEvents request. If this parameter is specified, the
+            response includes only records beyond the marker, up to the value
+            specified by MaxRecords.
+
+        """
+        params = {}
+        if source_identifier is not None:
+            params['SourceIdentifier'] = source_identifier
+        if source_type is not None:
+            params['SourceType'] = source_type
+        if start_time is not None:
+            params['StartTime'] = start_time
+        if end_time is not None:
+            params['EndTime'] = end_time
+        if duration is not None:
+            params['Duration'] = duration
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeEvents',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_reserved_cache_nodes(self, reserved_cache_node_id=None,
+                                      reserved_cache_nodes_offering_id=None,
+                                      cache_node_type=None, duration=None,
+                                      product_description=None,
+                                      offering_type=None, max_records=None,
+                                      marker=None):
+        """
+        Returns information about reserved Cache Nodes for this
+        account, or about a specified reserved Cache Node.
+
+        :type reserved_cache_node_id: string
+        :param reserved_cache_node_id: The reserved Cache Node identifier
+            filter value. Specify this parameter to show only the reservation
+            that matches the specified reservation ID.
+
+        :type reserved_cache_nodes_offering_id: string
+        :param reserved_cache_nodes_offering_id: The offering identifier filter
+            value. Specify this parameter to show only purchased reservations
+            matching the specified offering identifier.
+
+        :type cache_node_type: string
+        :param cache_node_type: The Cache Node type filter value. Specify this
+            parameter to show only those reservations matching the specified
+            Cache Nodes type.
+
+        :type duration: string
+        :param duration: The duration filter value, specified in years or
+            seconds. Specify this parameter to show only reservations for this
+            duration.  Valid Values: `1 | 3 | 31536000 | 94608000`
+
+        :type product_description: string
+        :param product_description: The product description filter value.
+            Specify this parameter to show only those reservations matching the
+            specified product description.
+
+        :type offering_type: string
+        :param offering_type: The offering type filter value. Specify this
+            parameter to show only the available offerings matching the
+            specified offering type.  Valid Values: `"Light Utilization" |
+            "Medium Utilization" | "Heavy Utilization"`
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more than the `MaxRecords` value is available, a
+            marker is included in the response so that the following results
+            can be retrieved.  Default: 100  Constraints: minimum 20, maximum
+            100
+
+        :type marker: string
+        :param marker: The marker provided in the previous request. If this
+            parameter is specified, the response includes records beyond the
+            marker only, up to `MaxRecords`.
+
+        """
+        params = {}
+        if reserved_cache_node_id is not None:
+            params['ReservedCacheNodeId'] = reserved_cache_node_id
+        if reserved_cache_nodes_offering_id is not None:
+            params['ReservedCacheNodesOfferingId'] = reserved_cache_nodes_offering_id
+        if cache_node_type is not None:
+            params['CacheNodeType'] = cache_node_type
+        if duration is not None:
+            params['Duration'] = duration
+        if product_description is not None:
+            params['ProductDescription'] = product_description
+        if offering_type is not None:
+            params['OfferingType'] = offering_type
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeReservedCacheNodes',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_reserved_cache_nodes_offerings(self,
+                                                reserved_cache_nodes_offering_id=None,
+                                                cache_node_type=None,
+                                                duration=None,
+                                                product_description=None,
+                                                offering_type=None,
+                                                max_records=None,
+                                                marker=None):
+        """
+        Lists available reserved Cache Node offerings.
+
+        :type reserved_cache_nodes_offering_id: string
+        :param reserved_cache_nodes_offering_id: The offering identifier filter
+            value. Specify this parameter to show only the available offering
+            that matches the specified reservation identifier.  Example:
+            `438012d3-4052-4cc7-b2e3-8d3372e0e706`
+
+        :type cache_node_type: string
+        :param cache_node_type: The Cache Node type filter value. Specify this
+            parameter to show only the available offerings matching the
+            specified Cache Node type.
+
+        :type duration: string
+        :param duration: Duration filter value, specified in years or seconds.
+            Specify this parameter to show only reservations for this duration.
+            Valid Values: `1 | 3 | 31536000 | 94608000`
+
+        :type product_description: string
+        :param product_description: Product description filter value. Specify
+            this parameter to show only the available offerings matching the
+            specified product description.
+
+        :type offering_type: string
+        :param offering_type: The offering type filter value. Specify this
+            parameter to show only the available offerings matching the
+            specified offering type.  Valid Values: `"Light Utilization" |
+            "Medium Utilization" | "Heavy Utilization"`
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more than the `MaxRecords` value is available, a
+            marker is included in the response so that the following results
+            can be retrieved.  Default: 100  Constraints: minimum 20, maximum
+            100
+
+        :type marker: string
+        :param marker: The marker provided in the previous request. If this
+            parameter is specified, the response includes records beyond the
+            marker only, up to `MaxRecords`.
+
+        """
+        params = {}
+        if reserved_cache_nodes_offering_id is not None:
+            params['ReservedCacheNodesOfferingId'] = reserved_cache_nodes_offering_id
+        if cache_node_type is not None:
+            params['CacheNodeType'] = cache_node_type
+        if duration is not None:
+            params['Duration'] = duration
+        if product_description is not None:
+            params['ProductDescription'] = product_description
+        if offering_type is not None:
+            params['OfferingType'] = offering_type
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeReservedCacheNodesOfferings',
+            verb='POST',
+            path='/', params=params)
+
+    def modify_cache_cluster(self, cache_cluster_id, num_cache_nodes=None,
+                             cache_node_ids_to_remove=None,
+                             cache_security_group_names=None,
+                             security_group_ids=None,
+                             preferred_maintenance_window=None,
+                             notification_topic_arn=None,
+                             cache_parameter_group_name=None,
+                             notification_topic_status=None,
+                             apply_immediately=None, engine_version=None,
+                             auto_minor_version_upgrade=None):
+        """
+        Modifies the Cache Cluster settings. You can change one or
+        more Cache Cluster configuration parameters by specifying the
+        parameters and the new values in the request.
+
+        :type cache_cluster_id: string
+        :param cache_cluster_id: The Cache Cluster identifier. This value is
+            stored as a lowercase string.
+
+        :type num_cache_nodes: integer
+        :param num_cache_nodes: The number of Cache Nodes the Cache Cluster
+            should have. If NumCacheNodes is greater than the existing number
+            of Cache Nodes, Cache Nodes will be added. If NumCacheNodes is less
+            than the existing number of Cache Nodes, Cache Nodes will be
+            removed. When removing Cache Nodes, the Ids of the specific Cache
+            Nodes to be removed must be supplied using the CacheNodeIdsToRemove
+            parameter.
+
+        :type cache_node_ids_to_remove: list
+        :param cache_node_ids_to_remove: The list of Cache Node IDs to be
+            removed. This parameter is only valid when NumCacheNodes is less
+            than the existing number of Cache Nodes. The number of Cache Node
+            Ids supplied in this parameter must match the difference between
+            the existing number of Cache Nodes in the cluster and the new
+            NumCacheNodes requested.
+
+        :type cache_security_group_names: list
+        :param cache_security_group_names: A list of Cache Security Group Names
+            to authorize on this Cache Cluster. This change is asynchronously
+            applied as soon as possible.  This parameter can be used only with
+            clusters that are created outside of an Amazon Virtual Private
+            Cloud (VPC).  Constraints: Must contain no more than 255
+            alphanumeric characters. Must not be "Default".
+
+        :type security_group_ids: list
+        :param security_group_ids: Specifies the VPC Security Groups associated
+            with the Cache Cluster.  This parameter can be used only with
+            clusters that are created in an Amazon Virtual Private Cloud (VPC).
+
+        :type preferred_maintenance_window: string
+        :param preferred_maintenance_window: The weekly time range (in UTC)
+            during which system maintenance can occur, which may result in an
+            outage. This change is made immediately. If moving this window to
+            the current time, there must be at least 120 minutes between the
+            current time and end of the window to ensure pending changes are
+            applied.
+
+        :type notification_topic_arn: string
+        :param notification_topic_arn: The Amazon Resource Name (ARN) of the
+            SNS topic to which notifications will be sent.  The SNS topic owner
+            must be same as the Cache Cluster owner.
+
+        :type cache_parameter_group_name: string
+        :param cache_parameter_group_name: The name of the Cache Parameter
+            Group to apply to this Cache Cluster. This change is asynchronously
+            applied as soon as possible for parameters when the
+            ApplyImmediately parameter is specified as true for this request.
+
+        :type notification_topic_status: string
+        :param notification_topic_status: The status of the Amazon SNS
+            notification topic. The value can be `active` or `inactive`.
+            Notifications are sent only if the status is `active`.
+
+        :type apply_immediately: boolean
+        :param apply_immediately: Specifies whether or not the modifications in
+            this request and any pending modifications are asynchronously
+            applied as soon as possible, regardless of the
+            PreferredMaintenanceWindow setting for the Cache Cluster.  If this
+            parameter is passed as `False`, changes to the Cache Cluster are
+            applied on the next maintenance reboot, or the next failure reboot,
+            whichever occurs first.  Default: `False`
+
+        :type engine_version: string
+        :param engine_version: The version of the cache engine to upgrade this
+            cluster to.
+
+        :type auto_minor_version_upgrade: boolean
+        :param auto_minor_version_upgrade: Indicates that minor engine upgrades
+            will be applied automatically to the Cache Cluster during the
+            maintenance window.  Default: `True`
+
+        """
+        params = {'CacheClusterId': cache_cluster_id, }
+        if num_cache_nodes is not None:
+            params['NumCacheNodes'] = num_cache_nodes
+        if cache_node_ids_to_remove is not None:
+            self.build_list_params(params,
+                                   cache_node_ids_to_remove,
+                                   'CacheNodeIdsToRemove.member')
+        if cache_security_group_names is not None:
+            self.build_list_params(params,
+                                   cache_security_group_names,
+                                   'CacheSecurityGroupNames.member')
+        if security_group_ids is not None:
+            self.build_list_params(params,
+                                   security_group_ids,
+                                   'SecurityGroupIds.member')
+        if preferred_maintenance_window is not None:
+            params['PreferredMaintenanceWindow'] = preferred_maintenance_window
+        if notification_topic_arn is not None:
+            params['NotificationTopicArn'] = notification_topic_arn
+        if cache_parameter_group_name is not None:
+            params['CacheParameterGroupName'] = cache_parameter_group_name
+        if notification_topic_status is not None:
+            params['NotificationTopicStatus'] = notification_topic_status
+        if apply_immediately is not None:
+            params['ApplyImmediately'] = str(
+                apply_immediately).lower()
+        if engine_version is not None:
+            params['EngineVersion'] = engine_version
+        if auto_minor_version_upgrade is not None:
+            params['AutoMinorVersionUpgrade'] = str(
+                auto_minor_version_upgrade).lower()
+        return self._make_request(
+            action='ModifyCacheCluster',
+            verb='POST',
+            path='/', params=params)
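+    # Illustrative usage sketch (not part of the upstream module): assuming
+    # `conn` is an instance of the connection class defined in this module,
+    # with valid credentials, a cluster resize might look like:
+    #
+    #     conn.modify_cache_cluster('my-cluster', num_cache_nodes=3,
+    #                               apply_immediately=True)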
+
+    def modify_cache_parameter_group(self, cache_parameter_group_name,
+                                     parameter_name_values):
+        """
+        Modifies the parameters of a CacheParameterGroup. To modify
+        more than one parameter, submit a list of ParameterName and
+        ParameterValue parameters. A maximum of 20 parameters can be
+        modified in a single request.
+
+        :type cache_parameter_group_name: string
+        :param cache_parameter_group_name: The name of the cache parameter
+            group to modify.
+
+        :type parameter_name_values: list
+        :param parameter_name_values: An array of parameter names and values
+            for the parameter update. At least one parameter name and value
+            must be supplied; subsequent arguments are optional. A maximum of
+            20 parameters may be modified in a single request.
+
+        """
+        params = {
+            'CacheParameterGroupName': cache_parameter_group_name,
+        }
+        self.build_complex_list_params(
+            params, parameter_name_values,
+            'ParameterNameValues.member',
+            ('ParameterName', 'ParameterValue'))
+        return self._make_request(
+            action='ModifyCacheParameterGroup',
+            verb='POST',
+            path='/', params=params)
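+    # Illustrative usage sketch (not part of the upstream module):
+    # parameter_name_values is a list of (name, value) pairs, serialized by
+    # build_complex_list_params above. The parameter name below is a
+    # placeholder:
+    #
+    #     conn.modify_cache_parameter_group(
+    #         'my-param-group', [('some_parameter_name', 'new-value')])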
+
+    def modify_cache_subnet_group(self, cache_subnet_group_name,
+                                  cache_subnet_group_description=None,
+                                  subnet_ids=None):
+        """
+        Modifies an existing Cache Subnet Group.
+
+        :type cache_subnet_group_name: string
+        :param cache_subnet_group_name: The name for the Cache Subnet Group.
+            This value is stored as a lowercase string.  Constraints: Must
+            contain no more than 255 alphanumeric characters or hyphens.
+            Example: `mysubnetgroup`
+
+        :type cache_subnet_group_description: string
+        :param cache_subnet_group_description: The description for the Cache
+            Subnet Group.
+
+        :type subnet_ids: list
+        :param subnet_ids: The EC2 Subnet IDs for the Cache Subnet Group.
+
+        """
+        params = {'CacheSubnetGroupName': cache_subnet_group_name, }
+        if cache_subnet_group_description is not None:
+            params['CacheSubnetGroupDescription'] = cache_subnet_group_description
+        if subnet_ids is not None:
+            self.build_list_params(params,
+                                   subnet_ids,
+                                   'SubnetIds.member')
+        return self._make_request(
+            action='ModifyCacheSubnetGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def purchase_reserved_cache_nodes_offering(self,
+                                               reserved_cache_nodes_offering_id,
+                                               reserved_cache_node_id=None,
+                                               cache_node_count=None):
+        """
+        Purchases a reserved Cache Node offering.
+
+        :type reserved_cache_nodes_offering_id: string
+        :param reserved_cache_nodes_offering_id: The ID of the Reserved Cache
+            Node offering to purchase.  Example:
+            438012d3-4052-4cc7-b2e3-8d3372e0e706
+
+        :type reserved_cache_node_id: string
+        :param reserved_cache_node_id: Customer-specified identifier to track
+            this reservation.  Example: myreservationID
+
+        :type cache_node_count: integer
+        :param cache_node_count: The number of instances to reserve.  Default:
+            `1`
+
+        """
+        params = {
+            'ReservedCacheNodesOfferingId': reserved_cache_nodes_offering_id,
+        }
+        if reserved_cache_node_id is not None:
+            params['ReservedCacheNodeId'] = reserved_cache_node_id
+        if cache_node_count is not None:
+            params['CacheNodeCount'] = cache_node_count
+        return self._make_request(
+            action='PurchaseReservedCacheNodesOffering',
+            verb='POST',
+            path='/', params=params)
+
+    def reboot_cache_cluster(self, cache_cluster_id,
+                             cache_node_ids_to_reboot):
+        """
+        Reboots some (or all) of the cache cluster nodes within a
+        previously provisioned ElastiCache cluster. This API results
+        in the application of modified CacheParameterGroup parameters
+        to the cache cluster. This action is taken as soon as
+        possible, and results in a momentary outage to the cache
+        cluster during which the cache cluster status is set to
+        rebooting. During that momentary outage, the contents of the
+        cache (for each cache cluster node being rebooted) are lost. A
+        CacheCluster event is created when the reboot is completed.
+
+        :type cache_cluster_id: string
+        :param cache_cluster_id: The Cache Cluster identifier. This parameter
+            is stored as a lowercase string.
+
+        :type cache_node_ids_to_reboot: list
+        :param cache_node_ids_to_reboot: A list of Cache Cluster Node Ids to
+            reboot. To reboot an entire cache cluster, specify all cache
+            cluster node Ids.
+
+        """
+        params = {'CacheClusterId': cache_cluster_id, }
+        self.build_list_params(params,
+                               cache_node_ids_to_reboot,
+                               'CacheNodeIdsToReboot.member')
+        return self._make_request(
+            action='RebootCacheCluster',
+            verb='POST',
+            path='/', params=params)
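+    # Illustrative usage sketch (not part of the upstream module): rebooting
+    # two nodes of a cluster; the cluster and node ids are placeholders.
+    #
+    #     conn.reboot_cache_cluster('my-cluster', ['0001', '0002'])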
+
+    def reset_cache_parameter_group(self, cache_parameter_group_name,
+                                    parameter_name_values,
+                                    reset_all_parameters=None):
+        """
+        Modifies the parameters of a CacheParameterGroup to the engine
+        or system default value. To reset specific parameters submit a
+        list of the parameter names. To reset the entire
+        CacheParameterGroup, specify the CacheParameterGroup name and
+        ResetAllParameters parameters.
+
+        :type cache_parameter_group_name: string
+        :param cache_parameter_group_name: The name of the Cache Parameter
+            Group.
+
+        :type reset_all_parameters: boolean
+        :param reset_all_parameters: Specifies whether (`True`) or not
+            (`False`) to reset all parameters in the Cache Parameter Group to
+            default values.
+
+        :type parameter_name_values: list
+        :param parameter_name_values: An array of parameter names which should
+            be reset. If not resetting the entire CacheParameterGroup, at least
+            one parameter name must be supplied.
+
+        """
+        params = {
+            'CacheParameterGroupName': cache_parameter_group_name,
+        }
+        self.build_complex_list_params(
+            params, parameter_name_values,
+            'ParameterNameValues.member',
+            ('ParameterName', 'ParameterValue'))
+        if reset_all_parameters is not None:
+            params['ResetAllParameters'] = str(
+                reset_all_parameters).lower()
+        return self._make_request(
+            action='ResetCacheParameterGroup',
+            verb='POST',
+            path='/', params=params)
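+    # Illustrative usage sketch (not part of the upstream module): resetting
+    # every parameter in a group to its default; the group name is a
+    # placeholder.
+    #
+    #     conn.reset_cache_parameter_group('my-param-group', [],
+    #                                      reset_all_parameters=True)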
+
+    def revoke_cache_security_group_ingress(self, cache_security_group_name,
+                                            ec2_security_group_name,
+                                            ec2_security_group_owner_id):
+        """
+        Revokes ingress from a CacheSecurityGroup for previously
+        authorized EC2 Security Groups.
+
+        :type cache_security_group_name: string
+        :param cache_security_group_name: The name of the Cache Security Group
+            to revoke ingress from.
+
+        :type ec2_security_group_name: string
+        :param ec2_security_group_name: The name of the EC2 Security Group to
+            revoke access from.
+
+        :type ec2_security_group_owner_id: string
+        :param ec2_security_group_owner_id: The AWS Account Number of the owner
+            of the security group specified in the EC2SecurityGroupName
+            parameter. The AWS Access Key ID is not an acceptable value.
+
+        """
+        params = {
+            'CacheSecurityGroupName': cache_security_group_name,
+            'EC2SecurityGroupName': ec2_security_group_name,
+            'EC2SecurityGroupOwnerId': ec2_security_group_owner_id,
+        }
+        return self._make_request(
+            action='RevokeCacheSecurityGroupIngress',
+            verb='POST',
+            path='/', params=params)
+
+    def _make_request(self, action, verb, path, params):
+        params['ContentType'] = 'JSON'
+        response = self.make_request(action=action, verb='POST',
+                                     path='/', params=params)
+        body = response.read()
+        boto.log.debug(body)
+        if response.status == 200:
+            return json.loads(body)
+        else:
+            raise self.ResponseError(response.status, response.reason, body)
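+    # Illustrative usage sketch (not part of the upstream module): every
+    # action above goes through _make_request, which POSTs the parameters
+    # with ContentType=JSON and returns the decoded body as a dict; a
+    # non-200 status raises ResponseError, e.g.:
+    #
+    #     try:
+    #         offerings = conn.describe_reserved_cache_nodes_offerings()
+    #     except conn.ResponseError, e:
+    #         print e.status, e.reason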
diff --git a/boto/elastictranscoder/__init__.py b/boto/elastictranscoder/__init__.py
new file mode 100644
index 0000000..c53bc0c
--- /dev/null
+++ b/boto/elastictranscoder/__init__.py
@@ -0,0 +1,62 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.regioninfo import RegionInfo
+
+
+def regions():
+    """
+    Get all available regions for the AWS Elastic Transcoder service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo`
+    """
+    from boto.elastictranscoder.layer1 import ElasticTranscoderConnection
+    cls = ElasticTranscoderConnection
+    return [
+        RegionInfo(name='us-east-1',
+                   endpoint='elastictranscoder.us-east-1.amazonaws.com',
+                   connection_cls=cls),
+        RegionInfo(name='us-west-1',
+                   endpoint='elastictranscoder.us-west-1.amazonaws.com',
+                   connection_cls=cls),
+        RegionInfo(name='us-west-2',
+                   endpoint='elastictranscoder.us-west-2.amazonaws.com',
+                   connection_cls=cls),
+        RegionInfo(name='ap-northeast-1',
+                   endpoint='elastictranscoder.ap-northeast-1.amazonaws.com',
+                   connection_cls=cls),
+        RegionInfo(name='ap-southeast-1',
+                   endpoint='elastictranscoder.ap-southeast-1.amazonaws.com',
+                   connection_cls=cls),
+        RegionInfo(name='eu-west-1',
+                   endpoint='elastictranscoder.eu-west-1.amazonaws.com',
+                   connection_cls=cls),
+    ]
+
+
+def connect_to_region(region_name, **kw_params):
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
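+
+# Illustrative usage sketch (not part of the upstream module): with AWS
+# credentials configured in the usual boto ways, a regional connection can
+# be obtained and used like this:
+#
+#     from boto.elastictranscoder import connect_to_region
+#     conn = connect_to_region('us-east-1')
+#     pipelines = conn.list_pipelines()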
+
diff --git a/boto/elastictranscoder/exceptions.py b/boto/elastictranscoder/exceptions.py
new file mode 100644
index 0000000..94b399f
--- /dev/null
+++ b/boto/elastictranscoder/exceptions.py
@@ -0,0 +1,50 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.exception import JSONResponseError
+
+
+class LimitExceededException(JSONResponseError):
+    pass
+
+
+class ResourceInUseException(JSONResponseError):
+    pass
+
+
+class AccessDeniedException(JSONResponseError):
+    pass
+
+
+class ResourceNotFoundException(JSONResponseError):
+    pass
+
+
+class InternalServiceException(JSONResponseError):
+    pass
+
+
+class ValidationException(JSONResponseError):
+    pass
+
+
+class IncompatibleVersionException(JSONResponseError):
+    pass
diff --git a/boto/elastictranscoder/layer1.py b/boto/elastictranscoder/layer1.py
new file mode 100644
index 0000000..0a22510
--- /dev/null
+++ b/boto/elastictranscoder/layer1.py
@@ -0,0 +1,782 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.compat import json
+from boto.exception import JSONResponseError
+from boto.connection import AWSAuthConnection
+from boto.regioninfo import RegionInfo
+from boto.elastictranscoder import exceptions
+
+
+class ElasticTranscoderConnection(AWSAuthConnection):
+    """
+    AWS Elastic Transcoder Service
+    The AWS Elastic Transcoder Service.
+    """
+    APIVersion = "2012-09-25"
+    DefaultRegionName = "us-east-1"
+    DefaultRegionEndpoint = "elastictranscoder.us-east-1.amazonaws.com"
+    ResponseError = JSONResponseError
+
+    _faults = {
+        "IncompatibleVersionException": exceptions.IncompatibleVersionException,
+        "LimitExceededException": exceptions.LimitExceededException,
+        "ResourceInUseException": exceptions.ResourceInUseException,
+        "AccessDeniedException": exceptions.AccessDeniedException,
+        "ResourceNotFoundException": exceptions.ResourceNotFoundException,
+        "InternalServiceException": exceptions.InternalServiceException,
+        "ValidationException": exceptions.ValidationException,
+    }
+
+    def __init__(self, **kwargs):
+        region = kwargs.get('region')
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint)
+        else:
+            del kwargs['region']
+        kwargs['host'] = region.endpoint
+        AWSAuthConnection.__init__(self, **kwargs)
+        self.region = region
+
+    def _required_auth_capability(self):
+        return ['hmac-v4']
+
+    def cancel_job(self, id=None):
+        """
+        To cancel a job, send a DELETE request to the
+        `/2012-09-25/jobs/ [jobId] ` resource.
+        You can only cancel a job that has a status of `Submitted`. To
+        prevent a pipeline from starting to process a job while you're
+        getting the job identifier, use UpdatePipelineStatus to
+        temporarily pause the pipeline.
+
+        :type id: string
+        :param id: The identifier of the job that you want to cancel.
+        To get a list of the jobs (including their `jobId`) that have a status
+            of `Submitted`, use the ListJobsByStatus API action.
+
+        """
+        uri = '/2012-09-25/jobs/{0}'.format(id)
+        return self.make_request('DELETE', uri, expected_status=202)
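+    # Illustrative usage sketch (not part of the upstream module): the job id
+    # below is a placeholder. Only jobs whose status is `Submitted` can be
+    # cancelled.
+    #
+    #     conn.cancel_job(id='1111111111111-abcde1')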
+
+    def create_job(self, pipeline_id=None, input_name=None, output=None,
+                   outputs=None, output_key_prefix=None, playlists=None):
+        """
+        To create a job, send a POST request to the `/2012-09-25/jobs`
+        resource.
+
+        When you create a job, Elastic Transcoder returns JSON data
+        that includes the values that you specified plus information
+        about the job that is created.
+
+        If you have specified more than one output for your jobs (for
+        example, one output for the Kindle Fire and another output for
+        the Apple iPhone 4s), you currently must use the Elastic
+        Transcoder API to list the jobs (as opposed to the AWS
+        Console).
+
+        :type pipeline_id: string
+        :param pipeline_id: The `Id` of the pipeline that you want Elastic
+            Transcoder to use for transcoding. The pipeline determines several
+            settings, including the Amazon S3 bucket from which Elastic
+            Transcoder gets the files to transcode and the bucket into which
+            Elastic Transcoder puts the transcoded files.
+
+        :type input_name: dict
+        :param input_name: A section of the request body that provides
+            information about the file that is being transcoded.
+
+        :type output: dict
+        :param output:
+
+        :type outputs: list
+        :param outputs: A section of the request body that provides information
+            about the transcoded (target) files. We recommend that you use the
+            `Outputs` syntax instead of the `Output` syntax.
+
+        :type output_key_prefix: string
+        :param output_key_prefix: The value, if any, that you want Elastic
+            Transcoder to prepend to the names of all files that this job
+            creates, including output files, thumbnails, and playlists.
+
+        :type playlists: list
+        :param playlists: If you specify a preset in `PresetId` for which the
+            value of `Container` is ts (MPEG-TS), Playlists contains
+            information about the master playlists that you want Elastic
+            Transcoder to create.
+        We recommend that you create only one master playlist. The maximum
+            number of master playlists in a job is 30.
+
+        """
+        uri = '/2012-09-25/jobs'
+        params = {}
+        if pipeline_id is not None:
+            params['PipelineId'] = pipeline_id
+        if input_name is not None:
+            params['Input'] = input_name
+        if output is not None:
+            params['Output'] = output
+        if outputs is not None:
+            params['Outputs'] = outputs
+        if output_key_prefix is not None:
+            params['OutputKeyPrefix'] = output_key_prefix
+        if playlists is not None:
+            params['Playlists'] = playlists
+        return self.make_request('POST', uri, expected_status=201,
+                                 data=json.dumps(params))
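+    # Illustrative usage sketch (not part of the upstream module): input_name
+    # and outputs are passed straight through as JSON, so they mirror the
+    # REST API's Input/Outputs documents. The key names, bucket keys, and
+    # preset id below are placeholders:
+    #
+    #     conn.create_job(
+    #         pipeline_id='1111111111111-abcde1',
+    #         input_name={'Key': 'incoming/movie.mp4',
+    #                     'Container': 'auto'},
+    #         outputs=[{'Key': 'outgoing/movie-720p.mp4',
+    #                   'PresetId': '1351620000001-000010'}])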
+
+    def create_pipeline(self, name=None, input_bucket=None,
+                        output_bucket=None, role=None, notifications=None,
+                        content_config=None, thumbnail_config=None):
+        """
+        To create a pipeline, send a POST request to the
+        `2012-09-25/pipelines` resource.
+
+        :type name: string
+        :param name: The name of the pipeline. We recommend that the name be
+            unique within the AWS account, but uniqueness is not enforced.
+        Constraints: Maximum 40 characters.
+
+        :type input_bucket: string
+        :param input_bucket: The Amazon S3 bucket in which you saved the media
+            files that you want to transcode.
+
+        :type output_bucket: string
+        :param output_bucket: The Amazon S3 bucket in which you want Elastic
+            Transcoder to save the transcoded files. (Use this, or use
+            ContentConfig:Bucket plus ThumbnailConfig:Bucket.)
+        Specify this value when all of the following are true:
+
+        + You want to save transcoded files, thumbnails (if any), and playlists
+              (if any) together in one bucket.
+        + You do not want to specify the users or groups who have access to the
+              transcoded files, thumbnails, and playlists.
+        + You do not want to specify the permissions that Elastic Transcoder
+              grants to the files. When Elastic Transcoder saves files in
+              `OutputBucket`, it grants full control over the files only to the
+              AWS account that owns the role that is specified by `Role`.
+        + You want to associate the transcoded files and thumbnails with the
+              Amazon S3 Standard storage class.
+
+        If you want to save transcoded files and playlists in one bucket and
+            thumbnails in another bucket, to specify which users can access
+            the transcoded files or the permissions the users have, or to
+            change the Amazon S3 storage class, omit `OutputBucket` and
+            specify values for `ContentConfig` and `ThumbnailConfig` instead.
+
+        :type role: string
+        :param role: The IAM Amazon Resource Name (ARN) for the role that you
+            want Elastic Transcoder to use to create the pipeline.
+
+        :type notifications: dict
+        :param notifications:
+        The Amazon Simple Notification Service (Amazon SNS) topic that you want
+            to notify to report job status.
+        To receive notifications, you must also subscribe to the new topic in
+            the Amazon SNS console.
+
+        + **Progressing**: The topic ARN for the Amazon Simple Notification
+              Service (Amazon SNS) topic that you want to notify when Elastic
+              Transcoder has started to process a job in this pipeline. This is
+              the ARN that Amazon SNS returned when you created the topic. For
+              more information, see Create a Topic in the Amazon Simple
+              Notification Service Developer Guide.
+        + **Completed**: The topic ARN for the Amazon SNS topic that you want
+              to notify when Elastic Transcoder has finished processing a job in
+              this pipeline. This is the ARN that Amazon SNS returned when you
+              created the topic.
+        + **Warning**: The topic ARN for the Amazon SNS topic that you want to
+              notify when Elastic Transcoder encounters a warning condition while
+              processing a job in this pipeline. This is the ARN that Amazon SNS
+              returned when you created the topic.
+        + **Error**: The topic ARN for the Amazon SNS topic that you want to
+              notify when Elastic Transcoder encounters an error condition while
+              processing a job in this pipeline. This is the ARN that Amazon SNS
+              returned when you created the topic.
+
+        :type content_config: dict
+        :param content_config:
+        The optional `ContentConfig` object specifies information about the
+            Amazon S3 bucket in which you want Elastic Transcoder to save
+            transcoded files and playlists: which bucket to use, which users
+            you want to have access to the files, the type of access you want
+            users to have, and the storage class that you want to assign to the
+            files.
+
+        If you specify values for `ContentConfig`, you must also specify values
+            for `ThumbnailConfig`.
+
+        If you specify values for `ContentConfig` and `ThumbnailConfig`, omit
+            the `OutputBucket` object.
+
+
+        + **Bucket**: The Amazon S3 bucket in which you want Elastic Transcoder
+              to save transcoded files and playlists.
+        + **Permissions** (Optional): The Permissions object specifies which
+              users you want to have access to transcoded files and the type of
+              access you want them to have. You can grant permissions to a
+              maximum of 30 users and/or predefined Amazon S3 groups.
+        + **Grantee Type**: Specify the type of value that appears in the
+              `Grantee` object:
+
+            + **Canonical**: The value in the `Grantee` object is either the
+                  canonical user ID for an AWS account or an origin access identity
+                  for an Amazon CloudFront distribution. For more information about
+                  canonical user IDs, see Access Control List (ACL) Overview in the
+                  Amazon Simple Storage Service Developer Guide. For more information
+                  about using CloudFront origin access identities to require that
+                  users use CloudFront URLs instead of Amazon S3 URLs, see Using an
+                  Origin Access Identity to Restrict Access to Your Amazon S3
+                  Content. A canonical user ID is not the same as an AWS account
+                  number.
+            + **Email**: The value in the `Grantee` object is the registered email
+                  address of an AWS account.
+            + **Group**: The value in the `Grantee` object is one of the following
+                  predefined Amazon S3 groups: `AllUsers`, `AuthenticatedUsers`, or
+                  `LogDelivery`.
+
+        + **Grantee**: The AWS user or group that you want to have access to
+              transcoded files and playlists. To identify the user or group, you
+              can specify the canonical user ID for an AWS account, an origin
+              access identity for a CloudFront distribution, the registered email
+              address of an AWS account, or a predefined Amazon S3 group
+        + **Access**: The permission that you want to give to the AWS user that
+              you specified in `Grantee`. Permissions are granted on the files
+              that Elastic Transcoder adds to the bucket, including playlists and
+              video files. Valid values include:
+
+            + `READ`: The grantee can read the objects and metadata for objects
+                  that Elastic Transcoder adds to the Amazon S3 bucket.
+            + `READ_ACP`: The grantee can read the object ACL for objects that
+                  Elastic Transcoder adds to the Amazon S3 bucket.
+            + `WRITE_ACP`: The grantee can write the ACL for the objects that
+                  Elastic Transcoder adds to the Amazon S3 bucket.
+            + `FULL_CONTROL`: The grantee has `READ`, `READ_ACP`, and `WRITE_ACP`
+                  permissions for the objects that Elastic Transcoder adds to the
+                  Amazon S3 bucket.
+
+        + **StorageClass**: The Amazon S3 storage class, `Standard` or
+              `ReducedRedundancy`, that you want Elastic Transcoder to assign to
+              the video files and playlists that it stores in your Amazon S3
+              bucket.
+
+        :type thumbnail_config: dict
+        :param thumbnail_config:
+        The `ThumbnailConfig` object specifies several values, including the
+            Amazon S3 bucket in which you want Elastic Transcoder to save
+            thumbnail files, which users you want to have access to the files,
+            the type of access you want users to have, and the storage class
+            that you want to assign to the files.
+
+        If you specify values for `ContentConfig`, you must also specify values
+            for `ThumbnailConfig` even if you don't want to create thumbnails.
+
+        If you specify values for `ContentConfig` and `ThumbnailConfig`, omit
+            the `OutputBucket` object.
+
+
+        + **Bucket**: The Amazon S3 bucket in which you want Elastic Transcoder
+              to save thumbnail files.
+        + **Permissions** (Optional): The `Permissions` object specifies which
+              users and/or predefined Amazon S3 groups you want to have access to
+              thumbnail files, and the type of access you want them to have. You
+              can grant permissions to a maximum of 30 users and/or predefined
+              Amazon S3 groups.
+        + **GranteeType**: Specify the type of value that appears in the
+              Grantee object:
+
+            + **Canonical**: The value in the `Grantee` object is either the
+                  canonical user ID for an AWS account or an origin access identity
+                  for an Amazon CloudFront distribution. A canonical user ID is not
+                  the same as an AWS account number.
+            + **Email**: The value in the `Grantee` object is the registered email
+                  address of an AWS account.
+            + **Group**: The value in the `Grantee` object is one of the following
+                  predefined Amazon S3 groups: `AllUsers`, `AuthenticatedUsers`, or
+                  `LogDelivery`.
+
+        + **Grantee**: The AWS user or group that you want to have access to
+              thumbnail files. To identify the user or group, you can specify the
+              canonical user ID for an AWS account, an origin access identity for
+              a CloudFront distribution, the registered email address of an AWS
+              account, or a predefined Amazon S3 group.
+        + **Access**: The permission that you want to give to the AWS user that
+              you specified in `Grantee`. Permissions are granted on the
+              thumbnail files that Elastic Transcoder adds to the bucket. Valid
+              values include:
+
+            + `READ`: The grantee can read the thumbnails and metadata for objects
+                  that Elastic Transcoder adds to the Amazon S3 bucket.
+            + `READ_ACP`: The grantee can read the object ACL for thumbnails that
+                  Elastic Transcoder adds to the Amazon S3 bucket.
+            + `WRITE_ACP`: The grantee can write the ACL for the thumbnails that
+                  Elastic Transcoder adds to the Amazon S3 bucket.
+            + `FULL_CONTROL`: The grantee has `READ`, `READ_ACP`, and `WRITE_ACP`
+                  permissions for the thumbnails that Elastic Transcoder adds to the
+                  Amazon S3 bucket.
+
+        + **StorageClass**: The Amazon S3 storage class, `Standard` or
+              `ReducedRedundancy`, that you want Elastic Transcoder to assign to
+              the thumbnails that it stores in your Amazon S3 bucket.
+
+        """
+        uri = '/2012-09-25/pipelines'
+        params = {}
+        if name is not None:
+            params['Name'] = name
+        if input_bucket is not None:
+            params['InputBucket'] = input_bucket
+        if output_bucket is not None:
+            params['OutputBucket'] = output_bucket
+        if role is not None:
+            params['Role'] = role
+        if notifications is not None:
+            params['Notifications'] = notifications
+        if content_config is not None:
+            params['ContentConfig'] = content_config
+        if thumbnail_config is not None:
+            params['ThumbnailConfig'] = thumbnail_config
+        return self.make_request('POST', uri, expected_status=201,
+                                 data=json.dumps(params))
+
+    def create_preset(self, name=None, description=None, container=None,
+                      video=None, audio=None, thumbnails=None):
+        """
+        To create a preset, send a POST request to the
+        `/2012-09-25/presets` resource.
+        Elastic Transcoder checks the settings that you specify to
+        ensure that they meet Elastic Transcoder requirements and to
+        determine whether they comply with H.264 standards. If your
+        settings are not valid for Elastic Transcoder, Elastic
+        Transcoder returns an HTTP 400 response
+        (`ValidationException`) and does not create the preset. If the
+        settings are valid for Elastic Transcoder but aren't strictly
+        compliant with the H.264 standard, Elastic Transcoder creates
+        the preset and returns a warning message in the response. This
+        helps you determine whether your settings comply with the
+        H.264 standard while giving you greater flexibility with
+        respect to the video that Elastic Transcoder produces.
+        Elastic Transcoder uses the H.264 video-compression format.
+        For more information, see the International Telecommunication
+        Union publication Recommendation ITU-T H.264: Advanced video
+        coding for generic audiovisual services.
+
+        :type name: string
+        :param name: The name of the preset. We recommend that the name be
+            unique within the AWS account, but uniqueness is not enforced.
+
+        :type description: string
+        :param description: A description of the preset.
+
+        :type container: string
+        :param container: The container type for the output file. This value
+            must be `mp4`.
+
+        :type video: dict
+        :param video: A section of the request body that specifies the video
+            parameters.
+
+        :type audio: dict
+        :param audio: A section of the request body that specifies the audio
+            parameters.
+
+        :type thumbnails: dict
+        :param thumbnails: A section of the request body that specifies the
+            thumbnail parameters, if any.
+
+        """
+        uri = '/2012-09-25/presets'
+        params = {}
+        if name is not None:
+            params['Name'] = name
+        if description is not None:
+            params['Description'] = description
+        if container is not None:
+            params['Container'] = container
+        if video is not None:
+            params['Video'] = video
+        if audio is not None:
+            params['Audio'] = audio
+        if thumbnails is not None:
+            params['Thumbnails'] = thumbnails
+        return self.make_request('POST', uri, expected_status=201,
+                                 data=json.dumps(params))
+
+    def delete_pipeline(self, id=None):
+        """
+        To delete a pipeline, send a DELETE request to the
+        `/2012-09-25/pipelines/ [pipelineId] ` resource.
+
+        You can only delete a pipeline that has never been used or
+        that is not currently in use (doesn't contain any active
+        jobs). If the pipeline is currently in use, `DeletePipeline`
+        returns an error.
+
+        :type id: string
+        :param id: The identifier of the pipeline that you want to delete.
+
+        """
+        uri = '/2012-09-25/pipelines/{0}'.format(id)
+        return self.make_request('DELETE', uri, expected_status=202)
+
+    def delete_preset(self, id=None):
+        """
+        To delete a preset, send a DELETE request to the
+        `/2012-09-25/presets/ [presetId] ` resource.
+
+        If the preset has been used, you cannot delete it.
+
+        :type id: string
+        :param id: The identifier of the preset that you want to delete.
+
+        """
+        uri = '/2012-09-25/presets/{0}'.format(id)
+        return self.make_request('DELETE', uri, expected_status=202)
+
+    def list_jobs_by_pipeline(self, pipeline_id=None, ascending=None,
+                              page_token=None):
+        """
+        To get a list of the jobs currently in a pipeline, send a GET
+        request to the `/2012-09-25/jobsByPipeline/ [pipelineId] `
+        resource.
+
+        Elastic Transcoder returns all of the jobs currently in the
+        specified pipeline. The response body contains one element for
+        each job that satisfies the search criteria.
+
+        :type pipeline_id: string
+        :param pipeline_id: The ID of the pipeline for which you want to get
+            job information.
+
+        :type ascending: string
+        :param ascending: To list jobs in chronological order by the date and
+            time that they were submitted, enter `True`. To list jobs in
+            reverse chronological order, enter `False`.
+
+        :type page_token: string
+        :param page_token: When Elastic Transcoder returns more than one page
+            of results, use `pageToken` in subsequent `GET` requests to get
+            each successive page of results.
+
+        """
+        uri = '/2012-09-25/jobsByPipeline/{0}'.format(pipeline_id)
+        params = {}
+        if pipeline_id is not None:
+            params['PipelineId'] = pipeline_id
+        if ascending is not None:
+            params['Ascending'] = ascending
+        if page_token is not None:
+            params['PageToken'] = page_token
+        return self.make_request('GET', uri, expected_status=200,
+                                 params=params)
+
+    def list_jobs_by_status(self, status=None, ascending=None,
+                            page_token=None):
+        """
+        To get a list of the jobs that have a specified status, send a
+        GET request to the `/2012-09-25/jobsByStatus/ [status] `
+        resource.
+
+        Elastic Transcoder returns all of the jobs that have the
+        specified status. The response body contains one element for
+        each job that satisfies the search criteria.
+
+        :type status: string
+        :param status: To get information about all of the jobs associated with
+            the current AWS account that have a given status, specify the
+            following status: `Submitted`, `Progressing`, `Complete`,
+            `Canceled`, or `Error`.
+
+        :type ascending: string
+        :param ascending: To list jobs in chronological order by the date and
+            time that they were submitted, enter `True`. To list jobs in
+            reverse chronological order, enter `False`.
+
+        :type page_token: string
+        :param page_token: When Elastic Transcoder returns more than one page
+            of results, use `pageToken` in subsequent `GET` requests to get
+            each successive page of results.
+
+        """
+        uri = '/2012-09-25/jobsByStatus/{0}'.format(status)
+        params = {}
+        if status is not None:
+            params['Status'] = status
+        if ascending is not None:
+            params['Ascending'] = ascending
+        if page_token is not None:
+            params['PageToken'] = page_token
+        return self.make_request('GET', uri, expected_status=200,
+                                 params=params)
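+    # Illustrative usage sketch (not part of the upstream module): paging
+    # through all submitted jobs. The response field names follow the REST
+    # API and are assumptions here:
+    #
+    #     page_token = None
+    #     while True:
+    #         page = conn.list_jobs_by_status('Submitted',
+    #                                         page_token=page_token)
+    #         for job in page.get('Jobs', []):
+    #             pass  # inspect each job dict
+    #         page_token = page.get('NextPageToken')
+    #         if not page_token:
+    #             break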
+
+    def list_pipelines(self):
+        """
+        To get a list of the pipelines associated with the current AWS
+        account, send a GET request to the `/2012-09-25/pipelines`
+        resource.
+
+
+        """
+        uri = '/2012-09-25/pipelines'
+        return self.make_request('GET', uri, expected_status=200)
+
+    def list_presets(self):
+        """
+        To get a list of all presets associated with the current AWS
+        account, send a GET request to the `/2012-09-25/presets`
+        resource.
+
+
+        """
+        uri = '/2012-09-25/presets'
+        return self.make_request('GET', uri, expected_status=200)
+
+    def read_job(self, id=None):
+        """
+        To get detailed information about a job, send a GET request to
+        the `/2012-09-25/jobs/ [jobId] ` resource.
+
+        :type id: string
+        :param id: The identifier of the job for which you want to get detailed
+            information.
+
+        """
+        uri = '/2012-09-25/jobs/{0}'.format(id)
+        return self.make_request('GET', uri, expected_status=200)
+
+    def read_pipeline(self, id=None):
+        """
+        To get detailed information about a pipeline, send a GET
+        request to the `/2012-09-25/pipelines/ [pipelineId] `
+        resource.
+
+        :type id: string
+        :param id: The identifier of the pipeline to read.
+
+        """
+        uri = '/2012-09-25/pipelines/{0}'.format(id)
+        return self.make_request('GET', uri, expected_status=200)
+
+    def read_preset(self, id=None):
+        """
+        To get detailed information about a preset, send a GET request
+        to the `/2012-09-25/presets/ [presetId] ` resource.
+
+        :type id: string
+        :param id: The identifier of the preset for which you want to get
+            detailed information.
+
+        """
+        uri = '/2012-09-25/presets/{0}'.format(id)
+        return self.make_request('GET', uri, expected_status=200)
+
+    def test_role(self, role=None, input_bucket=None, output_bucket=None,
+                  topics=None):
+        """
+        To test the IAM role that's used by Elastic Transcoder to
+        create the pipeline, send a POST request to the
+        `/2012-09-25/roleTests` resource.
+
+        The `TestRole` action lets you determine whether the IAM role
+        you are using has sufficient permissions to let Elastic
+        Transcoder perform tasks associated with the transcoding
+        process. The action attempts to assume the specified IAM role,
+        checks read access to the input and output buckets, and tries
+        to send a test notification to Amazon SNS topics that you
+        specify.
+
+        :type role: string
+        :param role: The IAM Amazon Resource Name (ARN) for the role that you
+            want Elastic Transcoder to test.
+
+        :type input_bucket: string
+        :param input_bucket: The Amazon S3 bucket that contains media files to
+            be transcoded. The action attempts to read from this bucket.
+
+        :type output_bucket: string
+        :param output_bucket: The Amazon S3 bucket that Elastic Transcoder will
+            write transcoded media files to. The action attempts to read from
+            this bucket.
+
+        :type topics: list
+        :param topics: The ARNs of one or more Amazon Simple Notification
+            Service (Amazon SNS) topics that you want the action to send a test
+            notification to.
+
+        """
+        uri = '/2012-09-25/roleTests'
+        params = {}
+        if role is not None:
+            params['Role'] = role
+        if input_bucket is not None:
+            params['InputBucket'] = input_bucket
+        if output_bucket is not None:
+            params['OutputBucket'] = output_bucket
+        if topics is not None:
+            params['Topics'] = topics
+        return self.make_request('POST', uri, expected_status=200,
+                                 data=json.dumps(params))
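+    # Illustrative usage sketch (not part of the upstream module): the role
+    # ARN, bucket names, and topic ARN below are placeholders.
+    #
+    #     result = conn.test_role(
+    #         role='arn:aws:iam::111122223333:role/transcoder-role',
+    #         input_bucket='my-input-bucket',
+    #         output_bucket='my-output-bucket',
+    #         topics=['arn:aws:sns:us-east-1:111122223333:transcoder-events'])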
+
+    def update_pipeline(self, id, name=None, input_bucket=None, role=None,
+                        notifications=None, content_config=None,
+                        thumbnail_config=None):
+        """
+
+
+        :type id: string
+        :param id:
+
+        :type name: string
+        :param name:
+
+        :type input_bucket: string
+        :param input_bucket:
+
+        :type role: string
+        :param role:
+
+        :type notifications: dict
+        :param notifications:
+
+        :type content_config: dict
+        :param content_config:
+
+        :type thumbnail_config: dict
+        :param thumbnail_config:
+
+        """
+        uri = '/2012-09-25/pipelines/{0}'.format(id)
+        params = {}
+        if name is not None:
+            params['Name'] = name
+        if input_bucket is not None:
+            params['InputBucket'] = input_bucket
+        if role is not None:
+            params['Role'] = role
+        if notifications is not None:
+            params['Notifications'] = notifications
+        if content_config is not None:
+            params['ContentConfig'] = content_config
+        if thumbnail_config is not None:
+            params['ThumbnailConfig'] = thumbnail_config
+        return self.make_request('PUT', uri, expected_status=200,
+                                 data=json.dumps(params))
+
+    def update_pipeline_notifications(self, id=None, notifications=None):
+        """
+        To update Amazon Simple Notification Service (Amazon SNS)
+        notifications for a pipeline, send a POST request to the
+        `/2012-09-25/pipelines/ [pipelineId] /notifications` resource.
+
+        When you update notifications for a pipeline, Elastic
+        Transcoder returns the values that you specified in the
+        request.
+
+        :type id: string
+        :param id: The identifier of the pipeline for which you want to change
+            notification settings.
+
+        :type notifications: dict
+        :param notifications:
+        The topic ARN for the Amazon Simple Notification Service (Amazon SNS)
+            topic that you want to notify to report job status.
+        To receive notifications, you must also subscribe to the new topic in
+            the Amazon SNS console.
+
+        + **Progressing**: The topic ARN for the Amazon Simple Notification
+              Service (Amazon SNS) topic that you want to notify when Elastic
+              Transcoder has started to process jobs that are added to this
+              pipeline. This is the ARN that Amazon SNS returned when you created
+              the topic.
+        + **Completed**: The topic ARN for the Amazon SNS topic that you want
+              to notify when Elastic Transcoder has finished processing a job.
+              This is the ARN that Amazon SNS returned when you created the
+              topic.
+        + **Warning**: The topic ARN for the Amazon SNS topic that you want to
+              notify when Elastic Transcoder encounters a warning condition. This
+              is the ARN that Amazon SNS returned when you created the topic.
+        + **Error**: The topic ARN for the Amazon SNS topic that you want to
+              notify when Elastic Transcoder encounters an error condition. This
+              is the ARN that Amazon SNS returned when you created the topic.
+
+        """
+        uri = '/2012-09-25/pipelines/{0}/notifications'.format(id)
+        params = {}
+        if id is not None:
+            params['Id'] = id
+        if notifications is not None:
+            params['Notifications'] = notifications
+        return self.make_request('POST', uri, expected_status=200,
+                                 data=json.dumps(params))
+
+    def update_pipeline_status(self, id=None, status=None):
+        """
+        To pause or reactivate a pipeline, so the pipeline stops or
+        restarts processing jobs, update the status for the pipeline.
+        Send a POST request to the `/2012-09-25/pipelines/
+        [pipelineId] /status` resource.
+
+        Changing the pipeline status is useful if you want to cancel
+        one or more jobs. You can't cancel jobs after Elastic
+        Transcoder has started processing them; if you pause the
+        pipeline to which you submitted the jobs, you have more time
+        to get the job IDs for the jobs that you want to cancel, and
+        to send a CancelJob request.
+
+        :type id: string
+        :param id: The identifier of the pipeline to update.
+
+        :type status: string
+        :param status:
+        The desired status of the pipeline:
+
+
+        + `Active`: The pipeline is processing jobs.
+        + `Paused`: The pipeline is not currently processing jobs.
+
+        """
+        uri = '/2012-09-25/pipelines/{0}/status'.format(id)
+        params = {}
+        if id is not None:
+            params['Id'] = id
+        if status is not None:
+            params['Status'] = status
+        return self.make_request('POST', uri, expected_status=200,
+                                 data=json.dumps(params))
+
+    def make_request(self, verb, resource, headers=None, data='',
+                     expected_status=None, params=None):
+        if headers is None:
+            headers = {}
+        response = AWSAuthConnection.make_request(
+            self, verb, resource, headers=headers, data=data)
+        body = json.load(response)
+        if response.status == expected_status:
+            return body
+        else:
+            error_type = response.getheader('x-amzn-ErrorType').split(':')[0]
+            error_class = self._faults.get(error_type, self.ResponseError)
+            raise error_class(response.status, response.reason, body)
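
As a usage sketch (not part of this change), the pipeline-update calls added above can be driven as follows; the connect_to_region helper, region, pipeline id and topic ARNs are assumptions, while the method signatures come from the code above.

    # Hypothetical usage of the new Elastic Transcoder pipeline calls.
    import boto.elastictranscoder

    conn = boto.elastictranscoder.connect_to_region('us-east-1')  # assumed helper

    # Pause the pipeline so queued jobs can still be cancelled.
    conn.update_pipeline_status(id='1111111111111-abcde1', status='Paused')

    # Point job-status notifications at existing SNS topics (placeholder ARNs).
    conn.update_pipeline_notifications(
        id='1111111111111-abcde1',
        notifications={
            'Progressing': 'arn:aws:sns:us-east-1:111122223333:transcode-progress',
            'Completed': 'arn:aws:sns:us-east-1:111122223333:transcode-done',
            'Warning': 'arn:aws:sns:us-east-1:111122223333:transcode-warning',
            'Error': 'arn:aws:sns:us-east-1:111122223333:transcode-error',
        })
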
diff --git a/boto/emr/__init__.py b/boto/emr/__init__.py
index 09ad2b4..562c582 100644
--- a/boto/emr/__init__.py
+++ b/boto/emr/__init__.py
@@ -54,6 +54,9 @@
             RegionInfo(name='ap-southeast-1',
                        endpoint='ap-southeast-1.elasticmapreduce.amazonaws.com',
                        connection_cls=EmrConnection),
+            RegionInfo(name='ap-southeast-2',
+                       endpoint='ap-southeast-2.elasticmapreduce.amazonaws.com',
+                       connection_cls=EmrConnection),
             RegionInfo(name='eu-west-1',
                        endpoint='eu-west-1.elasticmapreduce.amazonaws.com',
                        connection_cls=EmrConnection),
diff --git a/boto/emr/connection.py b/boto/emr/connection.py
index cae8ed1..95083ab 100644
--- a/boto/emr/connection.py
+++ b/boto/emr/connection.py
@@ -214,7 +214,9 @@
                     instance_groups=None,
                     additional_info=None,
                     ami_version=None,
-                    api_params=None):
+                    api_params=None,
+                    visible_to_all_users=None,
+                    job_flow_role=None):
         """
         Runs a job flow
         :type name: str
@@ -251,7 +253,7 @@
 
         :type hadoop_version: str
         :param hadoop_version: Version of Hadoop to use. This no longer
-        defaults to '0.20' and now uses the AMI default.
+            defaults to '0.20' and now uses the AMI default.
 
         :type steps: list(boto.emr.Step)
         :param steps: List of steps to add with the job
@@ -281,6 +283,21 @@
             use new EMR features). You can also delete an API parameter
             by setting it to None.
 
+        :type visible_to_all_users: bool
+        :param visible_to_all_users: Whether the job flow is visible to all IAM
+            users of the AWS account associated with the job flow. If this
+            value is set to ``True``, all IAM users of that AWS
+            account can view and (if they have the proper policy permissions
+            set) manage the job flow. If it is set to ``False``, only
+            the IAM user that created the job flow can view and manage
+            it.
+
+        :type job_flow_role: str
+        :param job_flow_role: An IAM role for the job flow. The EC2
+            instances of the job flow assume this role. The default role is
+            ``EMRJobflowDefault``. In order to use the default role,
+            you must have already created it using the CLI.
+
         :rtype: str
         :return: The jobflow id
         """
@@ -349,6 +366,15 @@
                 else:
                     params[key] = value
 
+        if visible_to_all_users is not None:
+            if visible_to_all_users:
+                params['VisibleToAllUsers'] = 'true'
+            else:
+                params['VisibleToAllUsers'] = 'false'
+
+        if job_flow_role is not None:
+            params['JobFlowRole'] = job_flow_role
+
         response = self.get_object(
             'RunJobFlow', params, RunJobFlowResponse, verb='POST')
         return response.jobflowid
@@ -372,6 +398,24 @@
 
         return self.get_status('SetTerminationProtection', params, verb='POST')
 
+    def set_visible_to_all_users(self, jobflow_id, visibility):
+        """
+        Set whether the specified Elastic MapReduce job flow is visible to
+        all IAM users of the AWS account that owns it.
+
+        :type jobflow_id: str
+        :param jobflow_id: The ID of the job flow to update.
+
+        :type visibility: bool
+        :param visibility: Whether the job flow should be visible to all
+            IAM users of the account.
+        """
+        assert visibility in (True, False)
+
+        params = {}
+        params['VisibleToAllUsers'] = (visibility and "true") or "false"
+        self.build_list_params(params, [jobflow_id], 'JobFlowIds.member')
+
+        return self.get_status('SetVisibleToAllUsers', params, verb='POST')
+
     def _build_bootstrap_action_args(self, bootstrap_action):
         bootstrap_action_params = {}
         bootstrap_action_params['ScriptBootstrapAction.Path'] = bootstrap_action.path
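
A minimal sketch of the new run_jobflow keyword arguments and the SetVisibleToAllUsers call added above; the region, bucket name and step list are placeholders, and the default role name comes from the docstring above.

    # Hypothetical EMR job flow using the new visibility/role options.
    import boto.emr
    from boto.emr.step import InstallHiveStep

    conn = boto.emr.connect_to_region('ap-southeast-2')   # new region above
    jobflow_id = conn.run_jobflow(
        name='nightly-report',
        log_uri='s3://my-bucket/emr-logs',     # placeholder bucket
        steps=[InstallHiveStep()],
        visible_to_all_users=True,             # new keyword argument
        job_flow_role='EMRJobflowDefault')     # new keyword argument

    # Visibility of an existing job flow can also be flipped later on.
    conn.set_visible_to_all_users(jobflow_id, False)
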
diff --git a/boto/emr/emrobject.py b/boto/emr/emrobject.py
index c088812..95ca7e6 100644
--- a/boto/emr/emrobject.py
+++ b/boto/emr/emrobject.py
@@ -153,6 +153,7 @@
         'TerminationProtected',
         'Type',
         'Value',
+        'VisibleToAllUsers',
     ])
 
     def __init__(self, connection=None):
diff --git a/boto/emr/step.py b/boto/emr/step.py
index a538903..b17defb 100644
--- a/boto/emr/step.py
+++ b/boto/emr/step.py
@@ -20,6 +20,7 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+
 class Step(object):
     """
     Jobflow Step base class
@@ -62,7 +63,8 @@
         :type main_class: str
         :param main_class: The class to execute in the jar
         :type action_on_failure: str
-        :param action_on_failure: An action, defined in the EMR docs to take on failure.
+        :param action_on_failure: An action, defined in the EMR docs, to
+            take on failure.
         :type step_args: list(str)
         :param step_args: A list of arguments to pass to the step
         """
@@ -110,13 +112,16 @@
         :type reducer: str
         :param reducer: The reducer URI
         :type combiner: str
-        :param combiner: The combiner URI. Only works for Hadoop 0.20 and later!
+        :param combiner: The combiner URI. Only works for Hadoop 0.20
+            and later!
         :type action_on_failure: str
-        :param action_on_failure: An action, defined in the EMR docs to take on failure.
+        :param action_on_failure: An action, defined in the EMR docs, to
+            take on failure.
         :type cache_files: list(str)
         :param cache_files: A list of cache files to be bundled with the job
         :type cache_archives: list(str)
-        :param cache_archives: A list of jar archives to be bundled with the job
+        :param cache_archives: A list of jar archives to be bundled with
+            the job
         :type step_args: list(str)
         :param step_args: A list of arguments to pass to the step
         :type input: str or a list of str
@@ -124,7 +129,8 @@
         :type output: str
         :param output: The output uri
         :type jar: str
-        :param jar: The hadoop streaming jar. This can be either a local path on the master node, or an s3:// URI.
+        :param jar: The hadoop streaming jar. This can be either a local
+            path on the master node, or an s3:// URI.
         """
         self.name = name
         self.mapper = mapper
@@ -180,7 +186,7 @@
                 args.extend(('-cacheFile', cache_file))
 
         if self.cache_archives:
-           for cache_archive in self.cache_archives:
+            for cache_archive in self.cache_archives:
                 args.extend(('-cacheArchive', cache_archive))
 
         return args
@@ -192,6 +198,7 @@
             self.cache_files, self.cache_archives, self.step_args,
             self.input, self.output, self._jar)
 
+
 class ScriptRunnerStep(JarStep):
 
     ScriptRunnerJar = 's3n://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar'
@@ -199,11 +206,13 @@
     def __init__(self, name, **kw):
         JarStep.__init__(self, name, self.ScriptRunnerJar, **kw)
 
+
 class PigBase(ScriptRunnerStep):
 
     BaseArgs = ['s3n://us-east-1.elasticmapreduce/libs/pig/pig-script',
                 '--base-path', 's3n://us-east-1.elasticmapreduce/libs/pig/']
 
+
 class InstallPigStep(PigBase):
     """
     Install pig on emr step
@@ -218,6 +227,7 @@
         step_args.extend(['--pig-versions', pig_versions])
         ScriptRunnerStep.__init__(self, self.InstallPigName, step_args=step_args)
 
+
 class PigStep(PigBase):
     """
     Pig script step
@@ -231,25 +241,28 @@
         step_args.extend(pig_args)
         ScriptRunnerStep.__init__(self, name, step_args=step_args)
 
+
 class HiveBase(ScriptRunnerStep):
 
     BaseArgs = ['s3n://us-east-1.elasticmapreduce/libs/hive/hive-script',
                 '--base-path', 's3n://us-east-1.elasticmapreduce/libs/hive/']
 
+
 class InstallHiveStep(HiveBase):
     """
     Install Hive on EMR step
     """
     InstallHiveName = 'Install Hive'
 
-    def __init__(self, hive_versions = 'latest', hive_site = None):
+    def __init__(self, hive_versions='latest', hive_site=None):
         step_args = []
         step_args.extend(self.BaseArgs)
         step_args.extend(['--install-hive'])
         step_args.extend(['--hive-versions', hive_versions])
         if hive_site is not None:
             step_args.extend(['--hive-site=%s' % hive_site])
-        ScriptRunnerStep.__init__(self, self.InstallHiveName, step_args = step_args)
+        ScriptRunnerStep.__init__(self, self.InstallHiveName,
+                                  step_args=step_args)
 
 
 class HiveStep(HiveBase):
@@ -257,13 +270,12 @@
     Hive script step
     """
 
-    def __init__(self, name, hive_file, hive_versions = 'latest',
-                 hive_args = None):
+    def __init__(self, name, hive_file, hive_versions='latest',
+                 hive_args=None):
         step_args = []
         step_args.extend(self.BaseArgs)
         step_args.extend(['--hive-versions', hive_versions])
-        step_args.extend(['--hive-script', '--args', '-f', hive_file])
+        step_args.extend(['--run-hive-script', '--args', '-f', hive_file])
         if hive_args is not None:
             step_args.extend(hive_args)
-        ScriptRunnerStep.__init__(self, name, step_args = step_args)
-
+        ScriptRunnerStep.__init__(self, name, step_args=step_args)
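
For reference, a sketch of composing the Hive steps touched above (note the corrected `--run-hive-script` argument); the script location and Hive arguments are placeholders.

    # Hypothetical Hive job composed of the steps defined above.
    from boto.emr.step import InstallHiveStep, HiveStep

    steps = [
        InstallHiveStep(hive_versions='latest'),
        HiveStep('daily-report',
                 hive_file='s3://my-bucket/scripts/report.q',   # placeholder
                 hive_args=['-d', 'DAY=2013-05-30']),
    ]
    # These steps can then be passed to EmrConnection.run_jobflow(steps=steps, ...)
    # or added to a running job flow with add_jobflow_steps.
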
diff --git a/boto/exception.py b/boto/exception.py
index 590f3d1..6461836 100644
--- a/boto/exception.py
+++ b/boto/exception.py
@@ -83,8 +83,8 @@
         # then just ignore the error response.
         if self.body:
             try:
-                h = handler.XmlHandler(self, self)
-                xml.sax.parseString(self.body, h)
+                h = handler.XmlHandlerWrapper(self, self)
+                h.parseString(self.body)
             except (TypeError, xml.sax.SAXParseException), pe:
                 # Remove unparsable message body so we don't include garbage
                 # in exception. But first, save self.body in self.error_message
@@ -298,7 +298,7 @@
         for p in ('errors'):
             setattr(self, p, None)
 
-class DynamoDBResponseError(BotoServerError):
+class JSONResponseError(BotoServerError):
     """
     This exception expects the fully parsed and decoded JSON response
     body to be passed as the body parameter.
@@ -311,7 +311,6 @@
     :ivar error_code: A short string that identifies the AWS error
         (e.g. ConditionalCheckFailedException)
     """
-
     def __init__(self, status, reason, body=None, *args):
         self.status = status
         self.reason = reason
@@ -323,29 +322,12 @@
                 self.error_code = self.error_code.split('#')[-1]
 
 
-class SWFResponseError(BotoServerError):
-    """
-    This exception expects the fully parsed and decoded JSON response
-    body to be passed as the body parameter.
+class DynamoDBResponseError(JSONResponseError):
+    pass
 
-    :ivar status: The HTTP status code.
-    :ivar reason: The HTTP reason message.
-    :ivar body: The Python dict that represents the decoded JSON
-        response body.
-    :ivar error_message: The full description of the AWS error encountered.
-    :ivar error_code: A short string that identifies the AWS error
-        (e.g. ConditionalCheckFailedException)
-    """
 
-    def __init__(self, status, reason, body=None, *args):
-        self.status = status
-        self.reason = reason
-        self.body = body
-        if self.body:
-            self.error_message = self.body.get('message', None)
-            self.error_code = self.body.get('__type', None)
-            if self.error_code:
-                self.error_code = self.error_code.split('#')[-1]
+class SWFResponseError(JSONResponseError):
+    pass
 
 
 class EmrResponseError(BotoServerError):
@@ -427,17 +409,6 @@
     """Is raised when no auth handlers were found ready to authenticate."""
     pass
 
-class TooManyAuthHandlerReadyToAuthenticate(Exception):
-    """Is raised when there are more than one auth handler ready.
-
-    In normal situation there should only be one auth handler that is ready to
-    authenticate. In case where more than one auth handler is ready to
-    authenticate, we raise this exception, to prevent unpredictable behavior
-    when multiple auth handlers can handle a particular case and the one chosen
-    depends on the order they were checked.
-    """
-    pass
-
 # Enum class for resumable upload failure disposition.
 class ResumableTransferDisposition(object):
     # START_OVER means an attempt to resume an existing transfer failed,
@@ -493,3 +464,28 @@
     def __repr__(self):
         return 'ResumableDownloadException("%s", %s)' % (
             self.message, self.disposition)
+
+class TooManyRecordsException(Exception):
+    """
+    Exception raised when a search of Route53 records returns more
+    records than requested.
+    """
+
+    def __init__(self, message):
+        Exception.__init__(self, message)
+        self.message = message
+
+
+class PleaseRetryException(Exception):
+    """
+    Indicates a request should be retried.
+    """
+    def __init__(self, message, response=None):
+        self.message = message
+        self.response = response
+
+    def __repr__(self):
+        return 'PleaseRetryException("%s", %s)' % (
+            self.message,
+            self.response
+        )
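
With DynamoDBResponseError and SWFResponseError now sharing the JSONResponseError base class introduced above, both can be handled uniformly. A small self-contained sketch (the failing call is simulated):

    # Illustrative only: raise a JSON-style error the way boto would and
    # catch it through the new shared base class.
    from boto.exception import JSONResponseError, DynamoDBResponseError

    def fake_dynamodb_call():
        raise DynamoDBResponseError(
            400, 'Bad Request',
            {'__type': 'x#ConditionalCheckFailedException',
             'message': 'The conditional request failed'})

    try:
        fake_dynamodb_call()
    except JSONResponseError, e:
        # error_code has the '#' prefix stripped, as in __init__ above.
        print e.status, e.error_code, e.error_message
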
diff --git a/boto/file/key.py b/boto/file/key.py
index d39c8c6..2f20cae 100755
--- a/boto/file/key.py
+++ b/boto/file/key.py
@@ -37,8 +37,10 @@
         self.full_path = name
         if name == '-':
             self.name = None
+            self.size = None
         else:
             self.name = name
+            self.size = os.stat(name).st_size
         self.key_type = key_type
         if key_type == self.KEY_STREAM_READABLE:
             self.fp = sys.stdin
@@ -68,9 +70,9 @@
         :type cb: int
         :param num_cb: ignored in this subclass.
         """
-        if self.key_type & self.KEY_STREAM_READABLE:
-            raise BotoClientError('Stream is not Readable')
-        elif self.key_type & self.KEY_STREAM_WRITABLE:
+        if self.key_type & self.KEY_STREAM_WRITABLE:
+            raise BotoClientError('Stream is not readable')
+        elif self.key_type & self.KEY_STREAM_READABLE:
             key_file = self.fp
         else:
             key_file = open(self.full_path, 'rb')
@@ -114,9 +116,9 @@
                    This is the same format returned by the compute_md5 method.
         :param md5: ignored in this subclass.
         """
-        if self.key_type & self.KEY_STREAM_WRITABLE:
+        if self.key_type & self.KEY_STREAM_READABLE:
             raise BotoClientError('Stream is not writable')
-        elif self.key_type & self.KEY_STREAM_READABLE:
+        elif self.key_type & self.KEY_STREAM_WRITABLE:
             key_file = self.fp
         else:
             if not replace and os.path.exists(self.full_path):
@@ -127,6 +129,35 @@
         finally:
             key_file.close()
 
+    def get_contents_to_file(self, fp, headers=None, cb=None, num_cb=None,
+                             torrent=False, version_id=None,
+                             res_download_handler=None, response_headers=None):
+        """
+        Copy contents from the current file to the file pointed to by 'fp'.
+
+        :type fp: File-like object
+        :param fp: The file object to copy this key's contents to.
+
+        :type headers: dict
+        :param headers: Unused in this subclass.
+
+        :type cb: function
+        :param cb: Unused in this subclass.
+
+        :type num_cb: int
+        :param num_cb: Unused in this subclass.
+
+        :type torrent: bool
+        :param torrent: Unused in this subclass.
+
+        :type res_download_handler: ResumableDownloadHandler
+        :param res_download_handler: Unused in this subclass.
+
+        :type response_headers: dict
+        :param response_headers: Unused in this subclass.
+        """
+        shutil.copyfileobj(self.fp, fp)
+
     def get_contents_as_string(self, headers=None, cb=None, num_cb=10,
                                torrent=False):
         """
diff --git a/boto/fps/connection.py b/boto/fps/connection.py
index 3b9057e..8f2aaee 100644
--- a/boto/fps/connection.py
+++ b/boto/fps/connection.py
@@ -120,58 +120,65 @@
                'SenderTokenId',      'SettlementAmount.CurrencyCode'])
     @api_action()
     def settle_debt(self, action, response, **kw):
-        """Allows a caller to initiate a transaction that atomically
-           transfers money from a sender's payment instrument to the
-           recipient, while decreasing corresponding debt balance.
+        """
+        Allows a caller to initiate a transaction that atomically transfers
+        money from a sender's payment instrument to the recipient, while
+        decreasing corresponding debt balance.
         """
         return self.get_object(action, kw, response)
 
     @requires(['TransactionId'])
     @api_action()
     def get_transaction_status(self, action, response, **kw):
-        """Gets the latest status of a transaction.
+        """
+        Gets the latest status of a transaction.
         """
         return self.get_object(action, kw, response)
 
     @requires(['StartDate'])
     @api_action()
     def get_account_activity(self, action, response, **kw):
-        """Returns transactions for a given date range.
+        """
+        Returns transactions for a given date range.
         """
         return self.get_object(action, kw, response)
 
     @requires(['TransactionId'])
     @api_action()
     def get_transaction(self, action, response, **kw):
-        """Returns all details of a transaction.
+        """
+        Returns all details of a transaction.
         """
         return self.get_object(action, kw, response)
 
     @api_action()
     def get_outstanding_debt_balance(self, action, response):
-        """Returns the total outstanding balance for all the credit
-           instruments for the given creditor account.
+        """
+        Returns the total outstanding balance for all the credit instruments
+        for the given creditor account.
         """
         return self.get_object(action, {}, response)
 
     @requires(['PrepaidInstrumentId'])
     @api_action()
     def get_prepaid_balance(self, action, response, **kw):
-        """Returns the balance available on the given prepaid instrument.
+        """
+        Returns the balance available on the given prepaid instrument.
         """
         return self.get_object(action, kw, response)
 
     @api_action()
     def get_total_prepaid_liability(self, action, response):
-        """Returns the total liability held by the given account
-           corresponding to all the prepaid instruments owned by the
-           account.
+        """
+        Returns the total liability held by the given account corresponding to
+        all the prepaid instruments owned by the account.
         """
         return self.get_object(action, {}, response)
 
     @api_action()
     def get_account_balance(self, action, response):
-        """Returns the account balance for an account in real time.
+        """
+        Returns the account balance for an account in real time.
         """
         return self.get_object(action, {}, response)
 
@@ -179,15 +186,17 @@
     @requires(['PaymentInstruction', 'TokenType'])
     @api_action()
     def install_payment_instruction(self, action, response, **kw):
-        """Installs a payment instruction for caller.
+        """
+        Installs a payment instruction for caller.
         """
         return self.get_object(action, kw, response)
 
     @needs_caller_reference
     @requires(['returnURL', 'pipelineName'])
     def cbui_url(self, **kw):
-        """Generate a signed URL for the Co-Branded service API given
-           arguments as payload.
+        """
+        Generate a signed URL for the Co-Branded service API given arguments as
+        payload.
         """
         sandbox = 'sandbox' in self.host and 'payments-sandbox' or 'payments'
         endpoint = 'authorize.{0}.amazon.com'.format(sandbox)
@@ -220,9 +229,10 @@
                                 'TransactionAmount.CurrencyCode'])
     @api_action()
     def reserve(self, action, response, **kw):
-        """Reserve API is part of the Reserve and Settle API conjunction
-           that serve the purpose of a pay where the authorization and
-           settlement have a timing difference.
+        """
+        The Reserve API is part of the Reserve and Settle API conjunction,
+        which serves the purpose of a payment where the authorization and
+        settlement have a timing difference.
         """
         return self.get_object(action, kw, response)
 
@@ -232,15 +242,16 @@
                                 'TransactionAmount.CurrencyCode'])
     @api_action()
     def pay(self, action, response, **kw):
-        """Allows calling applications to move money from a sender to
-           a recipient.
+        """
+        Allows calling applications to move money from a sender to a recipient.
         """
         return self.get_object(action, kw, response)
 
     @requires(['TransactionId'])
     @api_action()
     def cancel(self, action, response, **kw):
-        """Cancels an ongoing transaction and puts it in cancelled state.
+        """
+        Cancels an ongoing transaction and puts it in cancelled state.
         """
         return self.get_object(action, kw, response)
 
@@ -249,8 +260,9 @@
                                        'TransactionAmount.CurrencyCode'])
     @api_action()
     def settle(self, action, response, **kw):
-        """The Settle API is used in conjunction with the Reserve API and
-           is used to settle previously reserved transaction.
+        """
+        The Settle API is used in conjunction with the Reserve API and is
+        used to settle a previously reserved transaction.
         """
         return self.get_object(action, kw, response)
 
@@ -259,50 +271,57 @@
                'CallerReference', 'RefundAmount.CurrencyCode'])
     @api_action()
     def refund(self, action, response, **kw):
-        """Refunds a previously completed transaction.
+        """
+        Refunds a previously completed transaction.
         """
         return self.get_object(action, kw, response)
 
     @requires(['RecipientTokenId'])
     @api_action()
     def get_recipient_verification_status(self, action, response, **kw):
-        """Returns the recipient status.
+        """
+        Returns the recipient status.
         """
         return self.get_object(action, kw, response)
 
     @requires(['CallerReference'], ['TokenId'])
     @api_action()
     def get_token_by_caller(self, action, response, **kw):
-        """Returns the details of a particular token installed by this
-           calling application using the subway co-branded UI.
+        """
+        Returns the details of a particular token installed by this calling
+        application using the subway co-branded UI.
         """
         return self.get_object(action, kw, response)
 
     @requires(['UrlEndPoint', 'HttpParameters'])
     @api_action()
     def verify_signature(self, action, response, **kw):
-        """Verify the signature that FPS sent in IPN or callback urls.
+        """
+        Verify the signature that FPS sent in IPN or callback urls.
         """
         return self.get_object(action, kw, response)
 
     @api_action()
     def get_tokens(self, action, response, **kw):
-        """Returns a list of tokens installed on the given account.
+        """
+        Returns a list of tokens installed on the given account.
         """
         return self.get_object(action, kw, response)
 
     @requires(['TokenId'])
     @api_action()
     def get_token_usage(self, action, response, **kw):
-        """Returns the usage of a token.
+        """
+        Returns the usage of a token.
         """
         return self.get_object(action, kw, response)
 
     @requires(['TokenId'])
     @api_action()
     def cancel_token(self, action, response, **kw):
-        """Cancels any token installed by the calling application on
-           its own account.
+        """
+        Cancels any token installed by the calling application on its own
+        account.
         """
         return self.get_object(action, kw, response)
 
@@ -312,14 +331,16 @@
                'SenderTokenId',       'FundingAmount.CurrencyCode'])
     @api_action()
     def fund_prepaid(self, action, response, **kw):
-        """Funds the prepaid balance on the given prepaid instrument.
+        """
+        Funds the prepaid balance on the given prepaid instrument.
         """
         return self.get_object(action, kw, response)
 
     @requires(['CreditInstrumentId'])
     @api_action()
     def get_debt_balance(self, action, response, **kw):
-        """Returns the balance corresponding to the given credit instrument.
+        """
+        Returns the balance corresponding to the given credit instrument.
         """
         return self.get_object(action, kw, response)
 
@@ -329,22 +350,25 @@
                                      'AdjustmentAmount.CurrencyCode'])
     @api_action()
     def write_off_debt(self, action, response, **kw):
-        """Allows a creditor to write off the debt balance accumulated
-           partially or fully at any time.
+        """
+        Allows a creditor to write off the debt balance accumulated partially
+        or fully at any time.
         """
         return self.get_object(action, kw, response)
 
     @requires(['SubscriptionId'])
     @api_action()
     def get_transactions_for_subscription(self, action, response, **kw):
-        """Returns the transactions for a given subscriptionID.
+        """
+        Returns the transactions for a given subscriptionID.
         """
         return self.get_object(action, kw, response)
 
     @requires(['SubscriptionId'])
     @api_action()
     def get_subscription_details(self, action, response, **kw):
-        """Returns the details of Subscription for a given subscriptionID.
+        """
+        Returns the details of Subscription for a given subscriptionID.
         """
         return self.get_object(action, kw, response)
 
@@ -353,7 +377,8 @@
     @requires(['SubscriptionId'])
     @api_action()
     def cancel_subscription_and_refund(self, action, response, **kw):
-        """Cancels a subscription.
+        """
+        Cancels a subscription.
         """
         message = "If you specify a RefundAmount, " \
                   "you must specify CallerReference."
@@ -364,6 +389,7 @@
     @requires(['TokenId'])
     @api_action()
     def get_payment_instruction(self, action, response, **kw):
-        """Gets the payment instruction of a token.
+        """
+        Gets the payment instruction of a token.
         """
         return self.get_object(action, kw, response)
diff --git a/boto/glacier/concurrent.py b/boto/glacier/concurrent.py
index b993c67..af727ec 100644
--- a/boto/glacier/concurrent.py
+++ b/boto/glacier/concurrent.py
@@ -26,17 +26,53 @@
 import time
 import logging
 from Queue import Queue, Empty
+import binascii
 
-from .writer import chunk_hashes, tree_hash, bytes_to_hex
-from .exceptions import UploadArchiveError
+from .utils import DEFAULT_PART_SIZE, minimum_part_size, chunk_hashes, \
+        tree_hash, bytes_to_hex
+from .exceptions import UploadArchiveError, DownloadArchiveError, \
+        TreeHashDoesNotMatchError
 
 
-DEFAULT_PART_SIZE = 4 * 1024 * 1024
 _END_SENTINEL = object()
 log = logging.getLogger('boto.glacier.concurrent')
 
 
-class ConcurrentUploader(object):
+class ConcurrentTransferer(object):
+    def __init__(self, part_size=DEFAULT_PART_SIZE, num_threads=10):
+        self._part_size = part_size
+        self._num_threads = num_threads
+        self._threads = []
+
+    def _calculate_required_part_size(self, total_size):
+        min_part_size_required = minimum_part_size(total_size)
+        if self._part_size >= min_part_size_required:
+            part_size = self._part_size
+        else:
+            part_size = min_part_size_required
+            log.debug("The part size specified (%s) is smaller than "
+                      "the minimum required part size.  Using a part "
+                      "size of: %s", self._part_size, part_size)
+        total_parts = int(math.ceil(total_size / float(part_size)))
+        return total_parts, part_size
+
+    def _shutdown_threads(self):
+        log.debug("Shutting down threads.")
+        for thread in self._threads:
+            thread.should_continue = False
+        for thread in self._threads:
+            thread.join()
+        log.debug("Threads have exited.")
+
+    def _add_work_items_to_queue(self, total_parts, worker_queue, part_size):
+        log.debug("Adding work items to queue.")
+        for i in xrange(total_parts):
+            worker_queue.put((i, part_size))
+        for i in xrange(self._num_threads):
+            worker_queue.put(_END_SENTINEL)
+
+
+class ConcurrentUploader(ConcurrentTransferer):
     """Concurrently upload an archive to glacier.
 
     This class uses a thread pool to concurrently upload an archive
@@ -60,16 +96,25 @@
             the archive parts.  The part size must be a megabyte multiplied by
             a power of two.
 
+        :type num_threads: int
+        :param num_threads: The number of threads to spawn for the thread pool.
+            The number of threads controls how many parts are uploaded
+            concurrently.
+
         """
+        super(ConcurrentUploader, self).__init__(part_size, num_threads)
         self._api = api
         self._vault_name = vault_name
-        self._part_size = part_size
-        self._num_threads = num_threads
-        self._threads = []
 
     def upload(self, filename, description=None):
         """Concurrently create an archive.
 
+        The part_size value specified when the class was constructed
+        will be used *unless* it is smaller than the minimum required
+        part size needed for the size of the given file.  In that case,
+        the part size used will be the minimum part size required
+        to properly upload the given file.
+
         :type file: str
         :param file: The filename to upload
 
@@ -80,28 +125,28 @@
         :return: The archive id of the newly created archive.
 
         """
-        fileobj = open(filename, 'rb')
-        total_size = os.fstat(fileobj.fileno()).st_size
-        total_parts = int(math.ceil(total_size / float(self._part_size)))
+        total_size = os.stat(filename).st_size
+        total_parts, part_size = self._calculate_required_part_size(total_size)
         hash_chunks = [None] * total_parts
         worker_queue = Queue()
         result_queue = Queue()
         response = self._api.initiate_multipart_upload(self._vault_name,
-                                                       self._part_size,
+                                                       part_size,
                                                        description)
         upload_id = response['UploadId']
         # The basic idea is to add the chunks (the offsets not the actual
         # contents) to a work queue, start up a thread pool, let the crank
         # through the items in the work queue, and then place their results
         # in a result queue which we use to complete the multipart upload.
-        self._add_work_items_to_queue(total_parts, worker_queue)
+        self._add_work_items_to_queue(total_parts, worker_queue, part_size)
         self._start_upload_threads(result_queue, upload_id,
                                    worker_queue, filename)
         try:
-            self._wait_for_upload_threads(hash_chunks, result_queue, total_parts)
+            self._wait_for_upload_threads(hash_chunks, result_queue,
+                                          total_parts)
         except UploadArchiveError, e:
-            log.debug("An error occurred while uploading an archive, aborting "
-                      "multipart upload.")
+            log.debug("An error occurred while uploading an archive, "
+                      "aborting multipart upload.")
             self._api.abort_multipart_upload(self._vault_name, upload_id)
             raise e
         log.debug("Completing upload.")
@@ -127,15 +172,8 @@
             hash_chunks[part_number] = tree_sha256
         self._shutdown_threads()
 
-    def _shutdown_threads(self):
-        log.debug("Shutting down threads.")
-        for thread in self._threads:
-            thread.should_continue = False
-        for thread in self._threads:
-            thread.join()
-        log.debug("Threads have exited.")
-
-    def _start_upload_threads(self, result_queue, upload_id, worker_queue, filename):
+    def _start_upload_threads(self, result_queue, upload_id, worker_queue,
+                              filename):
         log.debug("Starting threads.")
         for _ in xrange(self._num_threads):
             thread = UploadWorkerThread(self._api, self._vault_name, filename,
@@ -144,30 +182,14 @@
             thread.start()
             self._threads.append(thread)
 
-    def _add_work_items_to_queue(self, total_parts, worker_queue):
-        log.debug("Adding work items to queue.")
-        for i in xrange(total_parts):
-            worker_queue.put((i, self._part_size))
-        for i in xrange(self._num_threads):
-            worker_queue.put(_END_SENTINEL)
 
-
-class UploadWorkerThread(threading.Thread):
-    def __init__(self, api, vault_name, filename, upload_id,
-                 worker_queue, result_queue, num_retries=5,
-                 time_between_retries=5,
-                 retry_exceptions=Exception):
-        threading.Thread.__init__(self)
-        self._api = api
-        self._vault_name = vault_name
-        self._filename = filename
-        self._fileobj = open(filename, 'rb')
+class TransferThread(threading.Thread):
+    def __init__(self, worker_queue, result_queue):
+        super(TransferThread, self).__init__()
         self._worker_queue = worker_queue
         self._result_queue = result_queue
-        self._upload_id = upload_id
-        self._num_retries = num_retries
-        self._time_between_retries = time_between_retries
-        self._retry_exceptions = retry_exceptions
+        # This value can be set externally by other objects
+        # to indicate that the thread should be shut down.
         self.should_continue = True
 
     def run(self):
@@ -177,20 +199,46 @@
             except Empty:
                 continue
             if work is _END_SENTINEL:
+                self._cleanup()
                 return
             result = self._process_chunk(work)
             self._result_queue.put(result)
+        self._cleanup()
+
+    def _process_chunk(self, work):
+        pass
+
+    def _cleanup(self):
+        pass
+
+
+class UploadWorkerThread(TransferThread):
+    def __init__(self, api, vault_name, filename, upload_id,
+                 worker_queue, result_queue, num_retries=5,
+                 time_between_retries=5,
+                 retry_exceptions=Exception):
+        super(UploadWorkerThread, self).__init__(worker_queue, result_queue)
+        self._api = api
+        self._vault_name = vault_name
+        self._filename = filename
+        self._fileobj = open(filename, 'rb')
+        self._upload_id = upload_id
+        self._num_retries = num_retries
+        self._time_between_retries = time_between_retries
+        self._retry_exceptions = retry_exceptions
 
     def _process_chunk(self, work):
         result = None
-        for _ in xrange(self._num_retries):
+        for i in xrange(self._num_retries + 1):
             try:
                 result = self._upload_chunk(work)
                 break
             except self._retry_exceptions, e:
                 log.error("Exception caught uploading part number %s for "
-                          "vault %s, filename: %s", work[0], self._vault_name,
-                          self._filename)
+                          "vault %s, attempt: (%s / %s), filename: %s, "
+                          "exception: %s, msg: %s",
+                          work[0], self._vault_name, i + 1, self._num_retries + 1,
+                          self._filename, e.__class__, e)
                 time.sleep(self._time_between_retries)
                 result = e
         return result
@@ -211,3 +259,166 @@
         # Reading the response allows the connection to be reused.
         response.read()
         return (part_number, tree_hash_bytes)
+
+    def _cleanup(self):
+        self._fileobj.close()
+
+
+class ConcurrentDownloader(ConcurrentTransferer):
+    """
+    Concurrently download an archive from glacier.
+
+    This class uses a thread pool to concurrently download an archive
+    from glacier.
+
+    The threadpool is completely managed by this class and is
+    transparent to the users of this class.
+
+    """
+    def __init__(self, job, part_size=DEFAULT_PART_SIZE,
+                 num_threads=10):
+        """
+        :param job: A layer2 Job object for the archive retrieval.
+
+        :param part_size: The size, in bytes, of the chunks to use when
+            downloading the archive parts.  The part size must be a megabyte
+            multiplied by a power of two.
+
+        """
+        super(ConcurrentDownloader, self).__init__(part_size, num_threads)
+        self._job = job
+
+    def download(self, filename):
+        """
+        Concurrently download an archive.
+
+        :param filename: The filename to download the archive to
+        :type filename: str
+
+        """
+        total_size = self._job.archive_size
+        total_parts, part_size = self._calculate_required_part_size(total_size)
+        worker_queue = Queue()
+        result_queue = Queue()
+        self._add_work_items_to_queue(total_parts, worker_queue, part_size)
+        self._start_download_threads(result_queue, worker_queue)
+        try:
+            self._wait_for_download_threads(filename, result_queue, total_parts)
+        except DownloadArchiveError, e:
+            log.debug("An error occurred while downloading an archive: %s", e)
+            raise e
+        log.debug("Download completed.")
+
+    def _wait_for_download_threads(self, filename, result_queue, total_parts):
+        """
+        Wait until the result_queue holds every downloaded part (i.e. all
+        part downloads have completed) and write the parts into ``filename``.
+
+        :param filename: Path of the file the archive is written to.
+        :param result_queue: Queue that worker threads put their results on.
+        :param total_parts: Total number of parts being downloaded.
+        """
+        hash_chunks = [None] * total_parts
+        with open(filename, "wb") as f:
+            for _ in xrange(total_parts):
+                result = result_queue.get()
+                if isinstance(result, Exception):
+                    log.debug("An error was found in the result queue, "
+                              "terminating threads: %s", result)
+                    self._shutdown_threads()
+                    raise DownloadArchiveError(
+                        "An error occurred while uploading "
+                        "an archive: %s" % result)
+                part_number, part_size, actual_hash, data = result
+                hash_chunks[part_number] = actual_hash
+                start_byte = part_number * part_size
+                f.seek(start_byte)
+                f.write(data)
+                f.flush()
+        final_hash = bytes_to_hex(tree_hash(hash_chunks))
+        log.debug("Verifying final tree hash of archive, expecting: %s, "
+                  "actual: %s", self._job.sha256_treehash, final_hash)
+        if self._job.sha256_treehash != final_hash:
+            self._shutdown_threads()
+            raise TreeHashDoesNotMatchError(
+                "Tree hash for entire archive does not match, "
+                "expected: %s, got: %s" % (self._job.sha256_treehash,
+                                           final_hash))
+        self._shutdown_threads()
+
+    def _start_download_threads(self, result_queue, worker_queue):
+        log.debug("Starting threads.")
+        for _ in xrange(self._num_threads):
+            thread = DownloadWorkerThread(self._job, worker_queue, result_queue)
+            time.sleep(0.2)
+            thread.start()
+            self._threads.append(thread)
+
+
+class DownloadWorkerThread(TransferThread):
+    def __init__(self, job,
+                 worker_queue, result_queue,
+                 num_retries=5,
+                 time_between_retries=5,
+                 retry_exceptions=Exception):
+        """
+        Individual download thread that downloads parts of the archive from
+        Glacier.  The parts to download are read from the worker queue and
+        each downloaded part is placed on the result queue.
+
+        :param job: Glacier job object.
+        :param worker_queue: A queue of (part_number, part_size) tuples
+            describing the parts to download.
+        :param result_queue: A queue on which (part_number, part_size,
+            tree_hash, data) tuples are placed for each downloaded part.
+
+        """
+        super(DownloadWorkerThread, self).__init__(worker_queue, result_queue)
+        self._job = job
+        self._num_retries = num_retries
+        self._time_between_retries = time_between_retries
+        self._retry_exceptions = retry_exceptions
+
+    def _process_chunk(self, work):
+        """
+        Attempt to download a part of the archive from Glacier, retrying on
+        failure, and return the result (or the last exception caught).
+
+        :param work: A (part_number, part_size) tuple from the worker queue.
+        """
+        result = None
+        for _ in xrange(self._num_retries):
+            try:
+                result = self._download_chunk(work)
+                break
+            except self._retry_exceptions, e:
+                log.error("Exception caught downloading part number %s for "
+                          "job %s", work[0], self._job,)
+                time.sleep(self._time_between_retries)
+                result = e
+        return result
+
+    def _download_chunk(self, work):
+        """
+        Download a chunk of the archive from Glacier, verify its tree hash,
+        and return a (part_number, part_size, tree_hash, data) tuple.
+
+        :param work: A (part_number, part_size) tuple from the worker queue.
+        """
+        part_number, part_size = work
+        start_byte = part_number * part_size
+        byte_range = (start_byte, start_byte + part_size - 1)
+        log.debug("Downloading chunk %s of size %s", part_number, part_size)
+        response = self._job.get_output(byte_range)
+        data = response.read()
+        actual_hash = bytes_to_hex(tree_hash(chunk_hashes(data)))
+        if response['TreeHash'] != actual_hash:
+            raise TreeHashDoesNotMatchError(
+                "Tree hash for part number %s does not match, "
+                "expected: %s, got: %s" % (part_number, response['TreeHash'],
+                                           actual_hash))
+        return (part_number, part_size, binascii.unhexlify(actual_hash), data)
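
A short sketch of the concurrent transfer classes above; the vault name and file names are placeholders, and the Layer1 connection is assumed to be constructed elsewhere (its constructor is not part of this hunk).

    # Hypothetical helpers around ConcurrentUploader / ConcurrentDownloader.
    from boto.glacier.concurrent import ConcurrentUploader, ConcurrentDownloader

    def upload_backup(api, vault_name, filename):
        """api: a boto.glacier.layer1.Layer1 connection created elsewhere."""
        uploader = ConcurrentUploader(api, vault_name, num_threads=4)
        # The part size is bumped automatically if the file needs more than
        # 10,000 parts at the default 4 MB.
        return uploader.upload(filename, description='nightly backup')

    def download_archive(job, filename):
        """job: a completed archive-retrieval job object."""
        downloader = ConcurrentDownloader(job, num_threads=4)
        downloader.download(filename)
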
diff --git a/boto/glacier/exceptions.py b/boto/glacier/exceptions.py
index e525880..c8bce1f 100644
--- a/boto/glacier/exceptions.py
+++ b/boto/glacier/exceptions.py
@@ -20,7 +20,7 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-import json
+from boto.compat import json
 
 
 class UnexpectedHTTPResponseError(Exception):
@@ -42,13 +42,17 @@
         super(UnexpectedHTTPResponseError, self).__init__(msg)
 
 
-class UploadArchiveError(Exception):
+class ArchiveError(Exception):
     pass
 
 
-class DownloadArchiveError(Exception):
+class UploadArchiveError(ArchiveError):
     pass
 
 
-class TreeHashDoesNotMatchError(DownloadArchiveError):
+class DownloadArchiveError(ArchiveError):
+    pass
+
+
+class TreeHashDoesNotMatchError(ArchiveError):
     pass
diff --git a/boto/glacier/job.py b/boto/glacier/job.py
index 62f0758..c740174 100644
--- a/boto/glacier/job.py
+++ b/boto/glacier/job.py
@@ -20,11 +20,12 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
+from __future__ import with_statement
 import math
 import socket
 
 from .exceptions import TreeHashDoesNotMatchError, DownloadArchiveError
-from .writer import bytes_to_hex, chunk_hashes, tree_hash
+from .utils import tree_hash_from_str
 
 
 class Job(object):
@@ -58,7 +59,7 @@
     def __repr__(self):
         return 'Job(%s)' % self.arn
 
-    def get_output(self, byte_range=None):
+    def get_output(self, byte_range=None, validate_checksum=False):
         """
         This operation downloads the output of the job.  Depending on
         the job type you specified when you initiated the job, the
@@ -76,10 +77,25 @@
         :type byte_range: tuple
         :param range: A tuple of integer specifying the slice (in bytes)
             of the archive you want to receive
+
+        :type validate_checksum: bool
+        :param validate_checksum: Specify whether or not to validate
+            the associated tree hash.  If the response does not contain
+            a TreeHash, then no checksum will be verified.
+
         """
-        return self.vault.layer1.get_job_output(self.vault.name,
-                                                self.id,
-                                                byte_range)
+        response = self.vault.layer1.get_job_output(self.vault.name,
+                                                    self.id,
+                                                    byte_range)
+        if validate_checksum and 'TreeHash' in response:
+            data = response.read()
+            actual_tree_hash = tree_hash_from_str(data)
+            if response['TreeHash'] != actual_tree_hash:
+                raise TreeHashDoesNotMatchError(
+                    "The calculated tree hash %s does not match the "
+                    "expected tree hash %s for the byte range %s" % (
+                        actual_tree_hash, response['TreeHash'], byte_range))
+        return response
 
     def download_to_file(self, filename, chunk_size=DefaultPartSize,
                          verify_hashes=True, retry_exceptions=(socket.error,)):
@@ -110,7 +126,7 @@
             data, expected_tree_hash = self._download_byte_range(
                 byte_range, retry_exceptions)
             if verify_hashes:
-                actual_tree_hash = bytes_to_hex(tree_hash(chunk_hashes(data)))
+                actual_tree_hash = tree_hash_from_str(data)
                 if expected_tree_hash != actual_tree_hash:
                     raise TreeHashDoesNotMatchError(
                         "The calculated tree hash %s does not match the "
diff --git a/boto/glacier/layer1.py b/boto/glacier/layer1.py
index 1888a8e..e5a3963 100644
--- a/boto/glacier/layer1.py
+++ b/boto/glacier/layer1.py
@@ -23,13 +23,13 @@
 #
 
 import os
-import json
-import urllib
 
 import boto.glacier
+from boto.compat import json
 from boto.connection import AWSAuthConnection
 from .exceptions import UnexpectedHTTPResponseError
 from .response import GlacierResponse
+from .utils import ResettingFileSender
 
 
 class Layer1(AWSAuthConnection):
@@ -56,7 +56,7 @@
         self.account_id = account_id
         AWSAuthConnection.__init__(self, region.endpoint,
                                    aws_access_key_id, aws_secret_access_key,
-                                   True, port, proxy, proxy_port,
+                                   is_secure, port, proxy, proxy_port,
                                    proxy_user, proxy_pass, debug,
                                    https_connection_factory,
                                    path, provider, security_token,
@@ -67,7 +67,7 @@
 
     def make_request(self, verb, resource, headers=None,
                      data='', ok_responses=(200,), params=None,
-                     response_headers=None):
+                     sender=None, response_headers=None):
         if headers is None:
             headers = {}
         headers['x-amz-glacier-version'] = self.Version
@@ -75,6 +75,7 @@
         response = AWSAuthConnection.make_request(self, verb, uri,
                                                   params=params,
                                                   headers=headers,
+                                                  sender=sender,
                                                   data=data)
         if response.status in ok_responses:
             return GlacierResponse(response, response_headers)
@@ -337,6 +338,9 @@
               output is ready for you to download.
             * Type - The job type.  Valid values are:
               archive-retrieval|inventory-retrieval
+            * RetrievalByteRange - Optionally specify the range of
+              bytes to retrieve.
+
         """
         uri = 'vaults/%s/jobs' % vault_name
         response_headers = [('x-amz-job-id', u'JobId'),
@@ -418,7 +422,7 @@
         uri = 'vaults/%s/archives' % vault_name
         try:
             content_length = str(len(archive))
-        except TypeError:
+        except (TypeError, AttributeError):
             # If a file like object is provided, try to retrieve
             # the file size via fstat.
             content_length = str(os.fstat(archive.fileno()).st_size)
@@ -427,9 +431,17 @@
                    'Content-Length': content_length}
         if description:
             headers['x-amz-archive-description'] = description
+        if self._is_file_like(archive):
+            sender = ResettingFileSender(archive)
+        else:
+            sender = None
         return self.make_request('POST', uri, headers=headers,
-                                 data=archive, ok_responses=(201,),
-                                 response_headers=response_headers)
+                                sender=sender,
+                                data=archive, ok_responses=(201,),
+                                response_headers=response_headers)
+
+    def _is_file_like(self, archive):
+        return hasattr(archive, 'seek') and hasattr(archive, 'tell')
 
     def delete_archive(self, vault_name, archive_id):
         """
diff --git a/boto/glacier/response.py b/boto/glacier/response.py
index 57bd4e4..78d9f5f 100644
--- a/boto/glacier/response.py
+++ b/boto/glacier/response.py
@@ -20,7 +20,8 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-import json
+from boto.compat import json
+
 
 class GlacierResponse(dict):
     """
@@ -41,7 +42,7 @@
         size = http_response.getheader('Content-Length', None)
         if size is not None:
             self.size = size
-        
+
     def read(self, amt=None):
         "Reads and returns the response body, or up to the next amt bytes."
         return self.http_response.read(amt)
diff --git a/boto/glacier/utils.py b/boto/glacier/utils.py
new file mode 100644
index 0000000..af779f5
--- /dev/null
+++ b/boto/glacier/utils.py
@@ -0,0 +1,163 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import hashlib
+import math
+
+
+_MEGABYTE = 1024 * 1024
+DEFAULT_PART_SIZE = 4 * _MEGABYTE
+MAXIMUM_NUMBER_OF_PARTS = 10000
+
+
+def minimum_part_size(size_in_bytes, default_part_size=DEFAULT_PART_SIZE):
+    """Calculate the minimum part size needed for a multipart upload.
+
+    Glacier allows a maximum of 10,000 parts per upload.  It also
+    states that the maximum archive size is 10,000 * 4 GB, which means
+    the part size can range from 1MB to 4GB (provided it is 1MB
+    multiplied by a power of 2).
+
+    This function will compute what the minimum part size must be in
+    order to upload a file of size ``size_in_bytes``.
+
+    It will first check if ``default_part_size`` is sufficient for
+    a part size given the ``size_in_bytes``.  If this is not the case,
+    then the smallest part size that can accommodate a file of size
+    ``size_in_bytes`` will be returned.
+
+    If the file size is greater than the maximum allowed archive
+    size of 10,000 * 4GB, a ``ValueError`` will be raised.
+
+    """
+    # The default part size (4 MB) will be too small for a very large
+    # archive, as there is a limit of 10,000 parts in a multipart upload.
+    # This puts the maximum allowed archive size with the default part size
+    # at 40,000 MB. We need to do a sanity check on the part size, and find
+    # one that works if the default is too small.
+    part_size = _MEGABYTE
+    if (default_part_size * MAXIMUM_NUMBER_OF_PARTS) < size_in_bytes:
+        if size_in_bytes > (4096 * _MEGABYTE * 10000):
+            raise ValueError("File size too large: %s" % size_in_bytes)
+        min_part_size = size_in_bytes / 10000
+        power = 3
+        while part_size < min_part_size:
+            part_size = math.ldexp(_MEGABYTE, power)
+            power += 1
+        part_size = int(part_size)
+    else:
+        part_size = default_part_size
+    return part_size
+
+
+def chunk_hashes(bytestring, chunk_size=_MEGABYTE):
+    chunk_count = int(math.ceil(len(bytestring) / float(chunk_size)))
+    hashes = []
+    for i in xrange(chunk_count):
+        start = i * chunk_size
+        end = (i + 1) * chunk_size
+        hashes.append(hashlib.sha256(bytestring[start:end]).digest())
+    if not hashes:
+        return [hashlib.sha256('').digest()]
+    return hashes
+
+
+def tree_hash(fo):
+    """
+    Given a hash of each 1MB chunk (from chunk_hashes) this will hash
+    together adjacent hashes until it ends up with one big one. So a
+    tree of hashes.
+    """
+    hashes = []
+    hashes.extend(fo)
+    while len(hashes) > 1:
+        new_hashes = []
+        while True:
+            if len(hashes) > 1:
+                first = hashes.pop(0)
+                second = hashes.pop(0)
+                new_hashes.append(hashlib.sha256(first + second).digest())
+            elif len(hashes) == 1:
+                only = hashes.pop(0)
+                new_hashes.append(only)
+            else:
+                break
+        hashes.extend(new_hashes)
+    return hashes[0]
+
+
+def compute_hashes_from_fileobj(fileobj, chunk_size=1024 * 1024):
+    """Compute the linear and tree hash from a fileobj.
+
+    This function will compute the linear/tree hash of a fileobj
+    in a single pass through the fileobj.
+
+    :param fileobj: A file like object.
+
+    :param chunk_size: The size of the chunks to use for the tree
+        hash.  This is also the buffer size used to read from
+        `fileobj`.
+
+    :rtype: tuple
+    :return: A tuple of (linear_hash, tree_hash).  Both hashes
+        are returned in hex.
+
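+    For input smaller than one chunk the two hashes coincide, which gives a
+    quick sanity check::
+
+        >>> from StringIO import StringIO
+        >>> linear, tree = compute_hashes_from_fileobj(StringIO('data'))
+        >>> linear == tree == hashlib.sha256('data').hexdigest()
+        True
+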
+    """
+    linear_hash = hashlib.sha256()
+    chunks = []
+    chunk = fileobj.read(chunk_size)
+    while chunk:
+        linear_hash.update(chunk)
+        chunks.append(hashlib.sha256(chunk).digest())
+        chunk = fileobj.read(chunk_size)
+    if not chunks:
+        chunks = [hashlib.sha256('').digest()]
+    return linear_hash.hexdigest(), bytes_to_hex(tree_hash(chunks))
+
+
+def bytes_to_hex(str_as_bytes):
+    return ''.join(["%02x" % ord(x) for x in str_as_bytes]).strip()
+
+
+def tree_hash_from_str(str_as_bytes):
+    """
+
+    :type str_as_bytes: str
+    :param str_as_bytes: The string for which to compute the tree hash.
+
+    :rtype: str
+    :return: The computed tree hash, returned as hex.
+
+    """
+    return bytes_to_hex(tree_hash(chunk_hashes(str_as_bytes)))
+
+
+class ResettingFileSender(object):
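+    """Sends a file-like archive as an HTTP request body, then rewinds it.
+
+    Because the archive's position is restored after every call, the same
+    instance can be called again if the request needs to be re-sent.
+    """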
+    def __init__(self, archive):
+        self._archive = archive
+        self._starting_offset = archive.tell()
+
+    def __call__(self, connection, method, path, body, headers):
+        try:
+            connection.request(method, path, self._archive, headers)
+            return connection.getresponse()
+        finally:
+            self._archive.seek(self._starting_offset)
diff --git a/boto/glacier/vault.py b/boto/glacier/vault.py
index 4d0e072..0186dbd 100644
--- a/boto/glacier/vault.py
+++ b/boto/glacier/vault.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright (c) 2012 Thomas Parslow http://almostobsolete.net/
+# Copyright (c) 2012 Robie Basak <robie@justgohome.co.uk>
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -20,18 +21,25 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-
+from __future__ import with_statement
+from .exceptions import UploadArchiveError
 from .job import Job
-from .writer import Writer, compute_hashes_from_fileobj
+from .writer import compute_hashes_from_fileobj, resume_file_upload, Writer
 from .concurrent import ConcurrentUploader
+from .utils import minimum_part_size, DEFAULT_PART_SIZE
 import os.path
 
+
 _MEGABYTE = 1024 * 1024
+_GIGABYTE = 1024 * _MEGABYTE
+
+MAXIMUM_ARCHIVE_SIZE = 10000 * 4 * _GIGABYTE
+MAXIMUM_NUMBER_OF_PARTS = 10000
 
 
 class Vault(object):
 
-    DefaultPartSize = 4 * _MEGABYTE
+    DefaultPartSize = DEFAULT_PART_SIZE
     SingleOperationThreshold = 100 * _MEGABYTE
 
     ResponseDataElements = (('VaultName', 'name', None),
@@ -62,7 +70,7 @@
         """
         self.layer1.delete_vault(self.name)
 
-    def upload_archive(self, filename):
+    def upload_archive(self, filename, description=None):
         """
         Adds an archive to a vault. For archives greater than 100MB the
         multipart upload will be used.
@@ -70,20 +78,27 @@
         :type file: str
         :param file: A filename to upload
 
+        :type description: str
+        :param description: An optional description for the archive.
+
         :rtype: str
         :return: The archive id of the newly created archive
         """
         if os.path.getsize(filename) > self.SingleOperationThreshold:
-            return self.create_archive_from_file(filename)
-        return self._upload_archive_single_operation(filename)
+            return self.create_archive_from_file(filename, description=description)
+        return self._upload_archive_single_operation(filename, description)
 
-    def _upload_archive_single_operation(self, filename):
+    def _upload_archive_single_operation(self, filename, description):
         """
         Adds an archive to a vault in a single operation. It's recommended for
         archives less than 100MB
+
         :type file: str
         :param file: A filename to upload
 
+        :type description: str
+        :param description: A description for the archive.
+
         :rtype: str
         :return: The archive id of the newly created archive
         """
@@ -91,7 +106,8 @@
             linear_hash, tree_hash = compute_hashes_from_fileobj(fileobj)
             fileobj.seek(0)
             response = self.layer1.upload_archive(self.name, fileobj,
-                                                  linear_hash, tree_hash)
+                                                  linear_hash, tree_hash,
+                                                  description)
         return response['ArchiveId']
 
     def create_archive_writer(self, part_size=DefaultPartSize,
@@ -106,7 +122,10 @@
         :type part_size: int
         :param part_size: The part size for the multipart upload.
 
-        :rtype: :class:`boto.glaicer.writer.Writer`
+        :type description: str
+        :param description: An optional description for the archive.
+
+        :rtype: :class:`boto.glacier.writer.Writer`
         :return: A Writer object that to which the archive data
             should be written.
         """
@@ -115,7 +134,8 @@
                                                          description)
         return Writer(self, response['UploadId'], part_size=part_size)
 
-    def create_archive_from_file(self, filename=None, file_obj=None):
+    def create_archive_from_file(self, filename=None, file_obj=None,
+                                 description=None, upload_id_callback=None):
         """
         Create a new archive and upload the data from the given file
         or file-like object.
@@ -126,22 +146,98 @@
         :type file_obj: file
         :param file_obj: A file-like object to upload
 
+        :type description: str
+        :param description: An optional description for the archive.
+
+        :type upload_id_callback: function
+        :param upload_id_callback: if set, call with the upload_id as the
+            only parameter when it becomes known, to enable future calls
+            to resume_archive_from_file in case resume is needed.
+
         :rtype: str
         :return: The archive id of the newly created archive
         """
+        part_size = self.DefaultPartSize
         if not file_obj:
+            file_size = os.path.getsize(filename)
+            try:
+                part_size = minimum_part_size(file_size, part_size)
+            except ValueError:
+                raise UploadArchiveError("File size of %s bytes exceeds "
+                                         "40,000 GB archive limit of Glacier."
+                                         % file_size)
             file_obj = open(filename, "rb")
-
-        writer = self.create_archive_writer()
+        writer = self.create_archive_writer(
+            description=description,
+            part_size=part_size)
+        if upload_id_callback:
+            upload_id_callback(writer.upload_id)
         while True:
-            data = file_obj.read(self.DefaultPartSize)
+            data = file_obj.read(part_size)
             if not data:
                 break
             writer.write(data)
         writer.close()
         return writer.get_archive_id()
 
-    def concurrent_create_archive_from_file(self, filename):
+    @staticmethod
+    def _range_string_to_part_index(range_string, part_size):
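+        # Convert an inclusive byte range reported by list_parts into a
+        # zero-based part index, e.g. '4194304-8388607' with a 4MB part
+        # size is part index 1.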
+        start, inside_end = [int(value) for value in range_string.split('-')]
+        end = inside_end + 1
+        length = end - start
+        if length == part_size + 1:
+            # Off-by-one bug in Amazon's Glacier implementation,
+            # see: https://forums.aws.amazon.com/thread.jspa?threadID=106866
+            # Workaround: since part_size is too big by one byte, adjust it
+            end -= 1
+            inside_end -= 1
+            length -= 1
+        assert not (start % part_size), (
+            "upload part start byte is not on a part boundary")
+        assert (length <= part_size), "upload part is bigger than part size"
+        return start // part_size
+
+    def resume_archive_from_file(self, upload_id, filename=None,
+                                 file_obj=None):
+        """Resume upload of a file already part-uploaded to Glacier.
+
+        The resumption of an upload where the part-uploaded section is empty
+        is a valid degenerate case that this function can handle.
+
+        One and only one of filename or file_obj must be specified.
+
+        :type upload_id: str
+        :param upload_id: existing Glacier upload id of upload being resumed.
+
+        :type filename: str
+        :param filename: file to open for resume
+
+        :type file_obj: file
+        :param file_obj: file-like object containing local data to resume. This
+            must read from the start of the entire upload, not just from the
+            point being resumed. Use file_obj.seek(0) to achieve this if
+            necessary.
+
+        :rtype: str
+        :return: The archive id of the newly created archive
+
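+        Illustrative use, assuming credentials are configured and the upload
+        id of the interrupted upload was recorded earlier (for example via
+        the upload_id_callback of create_archive_from_file)::
+
+            import boto.glacier.layer2
+            vault = boto.glacier.layer2.Layer2().get_vault('my-vault')
+            archive_id = vault.resume_archive_from_file(
+                saved_upload_id, filename='backup.tar')
+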
+        """
+        part_list_response = self.list_all_parts(upload_id)
+        part_size = part_list_response['PartSizeInBytes']
+
+        part_hash_map = {}
+        for part_desc in part_list_response['Parts']:
+            part_index = self._range_string_to_part_index(
+                part_desc['RangeInBytes'], part_size)
+            part_tree_hash = part_desc['SHA256TreeHash'].decode('hex')
+            part_hash_map[part_index] = part_tree_hash
+
+        if not file_obj:
+            file_obj = open(filename, "rb")
+
+        return resume_file_upload(
+            self, upload_id, part_size, file_obj, part_hash_map)
+
+    def concurrent_create_archive_from_file(self, filename, description,
+                                            **kwargs):
         """
         Create a new archive from a file and upload the given
         file.
@@ -154,6 +250,12 @@
         :type filename: str
         :param filename: A filename to upload
 
+        :param kwargs: Additional kwargs to pass through to
+            :py:class:`boto.glacier.concurrent.ConcurrentUploader`.
+            You can pass any argument besides the ``api`` and
+            ``vault_name`` param (these arguments are already
+            passed to the ``ConcurrentUploader`` for you).
+
         :raises: `boto.glacier.exception.UploadArchiveError` is an error
             occurs during the upload process.
 
@@ -161,8 +263,8 @@
         :return: The archive id of the newly created archive
 
         """
-        uploader = ConcurrentUploader(self.layer1, self.name)
-        archive_id = uploader.upload(filename)
+        uploader = ConcurrentUploader(self.layer1, self.name, **kwargs)
+        archive_id = uploader.upload(filename, description)
         return archive_id
 
     def retrieve_archive(self, archive_id, sns_topic=None,
@@ -213,8 +315,8 @@
             sends notification when the job is completed and the output
             is ready for you to download.
 
-        :rtype: :class:`boto.glacier.job.Job`
-        :return: A Job object representing the retrieval job.
+        :rtype: str
+        :return: The ID of the job
         """
         job_data = {'Type': 'inventory-retrieval'}
         if sns_topic is not None:
@@ -225,6 +327,25 @@
         response = self.layer1.initiate_job(self.name, job_data)
         return response['JobId']
 
+    def retrieve_inventory_job(self, **kwargs):
+        """
+        Identical to ``retrieve_inventory``, but returns a ``Job`` instance
+        instead of just the job ID.
+
+        :type description: str
+        :param description: An optional description for the job.
+
+        :type sns_topic: str
+        :param sns_topic: The Amazon SNS topic ARN where Amazon Glacier
+            sends notification when the job is completed and the output
+            is ready for you to download.
+
+        :rtype: :class:`boto.glacier.job.Job`
+        :return: A Job object representing the retrieval job.
+        """
+        job_id = self.retrieve_inventory(**kwargs)
+        return self.get_job(job_id)
+
     def delete_archive(self, archive_id):
         """
         This operation deletes an archive from the vault.
@@ -241,7 +362,7 @@
         :type job_id: str
         :param job_id: The ID of the job
 
-        :rtype: :class:`boto.glaicer.job.Job`
+        :rtype: :class:`boto.glacier.job.Job`
         :return: A Job object representing the job.
         """
         response_data = self.layer1.describe_job(self.name, job_id)
@@ -263,9 +384,29 @@
             Valid values are: InProgress|Succeeded|Failed.  If not
             specified, jobs with all status codes are returned.
 
-        :rtype: list of :class:`boto.glaicer.job.Job`
+        :rtype: list of :class:`boto.glacier.job.Job`
         :return: A list of Job objects related to this vault.
         """
         response_data = self.layer1.list_jobs(self.name, completed,
                                               status_code)
         return [Job(self, jd) for jd in response_data['JobList']]
+
+    def list_all_parts(self, upload_id):
+        """Automatically make and combine multiple calls to list_parts.
+
+        Call list_parts as necessary, combining the results in case multiple
+        calls were required to get data on all available parts.
+
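+        The combined result has the same shape as a single list_parts
+        response, for example::
+
+            parts = vault.list_all_parts(upload_id)['Parts']
+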
+        """
+        result = self.layer1.list_parts(self.name, upload_id)
+        marker = result['Marker']
+        while marker:
+            additional_result = self.layer1.list_parts(
+                self.name, upload_id, marker=marker)
+            result['Parts'].extend(additional_result['Parts'])
+            marker = additional_result['Marker']
+        # The marker makes no sense in an unpaginated result, and clearing it
+        # makes testing easier. This also has the nice property that the result
+        # is a normal (but expanded) response.
+        result['Marker'] = None
+        return result
diff --git a/boto/glacier/writer.py b/boto/glacier/writer.py
index 42db994..ad0ab26 100644
--- a/boto/glacier/writer.py
+++ b/boto/glacier/writer.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright (c) 2012 Thomas Parslow http://almostobsolete.net/
+# Copyright (c) 2012 Robie Basak <robie@justgohome.co.uk>
 # Tree hash implementation from Aaron Brady bradya@gmail.com
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
@@ -21,79 +22,184 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-
-import urllib
 import hashlib
-import math
-import json
+
+from boto.glacier.utils import chunk_hashes, tree_hash, bytes_to_hex
+# This import is provided for backwards compatibility.  This function is
+# now in boto.glacier.utils, but any existing code can still import
+# this directly from this module.
+from boto.glacier.utils import compute_hashes_from_fileobj
 
 
 _ONE_MEGABYTE = 1024 * 1024
 
 
-def chunk_hashes(bytestring, chunk_size=_ONE_MEGABYTE):
-    chunk_count = int(math.ceil(len(bytestring) / float(chunk_size)))
-    hashes = []
-    for i in xrange(chunk_count):
-        start = i * chunk_size
-        end = (i + 1) * chunk_size
-        hashes.append(hashlib.sha256(bytestring[start:end]).digest())
-    return hashes
+class _Partitioner(object):
+    """Convert variable-size writes into part-sized writes
 
+    Call write(data) with variable-sized data as needed to write all data. Call
+    flush() after all data is written.
 
-def tree_hash(fo):
-    """
-    Given a hash of each 1MB chunk (from chunk_hashes) this will hash
-    together adjacent hashes until it ends up with one big one. So a
-    tree of hashes.
-    """
-    hashes = []
-    hashes.extend(fo)
-    while len(hashes) > 1:
-        new_hashes = []
-        while True:
-            if len(hashes) > 1:
-                first = hashes.pop(0)
-                second = hashes.pop(0)
-                new_hashes.append(hashlib.sha256(first + second).digest())
-            elif len(hashes) == 1:
-                only = hashes.pop(0)
-                new_hashes.append(only)
-            else:
-                break
-        hashes.extend(new_hashes)
-    return hashes[0]
-
-
-def compute_hashes_from_fileobj(fileobj, chunk_size=1024 * 1024):
-    """Compute the linear and tree hash from a fileobj.
-
-    This function will compute the linear/tree hash of a fileobj
-    in a single pass through the fileobj.
-
-    :param fileobj: A file like object.
-
-    :param chunk_size: The size of the chunks to use for the tree
-        hash.  This is also the buffer size used to read from
-        `fileobj`.
-
-    :rtype: tuple
-    :return: A tuple of (linear_hash, tree_hash).  Both hashes
-        are returned in hex.
+    This instance will call send_fn(part_data) as needed in part_size pieces,
+    except for the final part which may be shorter than part_size. Make sure to
+    call flush() to ensure that a short final part results in a final send_fn
+    call.
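+
+    A minimal illustration with a tiny part size::
+
+        >>> parts = []
+        >>> p = _Partitioner(part_size=4, send_fn=parts.append)
+        >>> p.write('abcdef')
+        >>> p.flush()
+        >>> parts
+        ['abcd', 'ef']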
 
     """
-    linear_hash = hashlib.sha256()
-    chunks = []
-    chunk = fileobj.read(chunk_size)
-    while chunk:
-        linear_hash.update(chunk)
-        chunks.append(hashlib.sha256(chunk).digest())
-        chunk = fileobj.read(chunk_size)
-    return linear_hash.hexdigest(), bytes_to_hex(tree_hash(chunks))
+    def __init__(self, part_size, send_fn):
+        self.part_size = part_size
+        self.send_fn = send_fn
+        self._buffer = []
+        self._buffer_size = 0
+
+    def write(self, data):
+        if data == '':
+            return
+        self._buffer.append(data)
+        self._buffer_size += len(data)
+        while self._buffer_size > self.part_size:
+            self._send_part()
+
+    def _send_part(self):
+        data = ''.join(self._buffer)
+        # Put back any data remaining over the part size into the
+        # buffer
+        if len(data) > self.part_size:
+            self._buffer = [data[self.part_size:]]
+            self._buffer_size = len(self._buffer[0])
+        else:
+            self._buffer = []
+            self._buffer_size = 0
+        # The part we will send
+        part = data[:self.part_size]
+        self.send_fn(part)
+
+    def flush(self):
+        if self._buffer_size > 0:
+            self._send_part()
 
 
-def bytes_to_hex(str):
-    return ''.join(["%02x" % ord(x) for x in str]).strip()
+class _Uploader(object):
+    """Upload to a Glacier upload_id.
+
+    Call upload_part for each part (in any order) and then close to complete
+    the upload.
+
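+    A sketch of the expected call sequence (``vault`` and ``upload_id`` are
+    assumed to come from an already-initiated multipart upload; the part
+    data variables are placeholders)::
+
+        uploader = _Uploader(vault, upload_id, part_size=4 * 1024 * 1024)
+        uploader.upload_part(0, first_part_data)
+        uploader.upload_part(1, last_part_data)
+        uploader.close()
+        archive_id = uploader.archive_id
+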
+    """
+    def __init__(self, vault, upload_id, part_size, chunk_size=_ONE_MEGABYTE):
+        self.vault = vault
+        self.upload_id = upload_id
+        self.part_size = part_size
+        self.chunk_size = chunk_size
+        self.archive_id = None
+
+        self._uploaded_size = 0
+        self._tree_hashes = []
+
+        self.closed = False
+
+    def _insert_tree_hash(self, index, raw_tree_hash):
+        list_length = len(self._tree_hashes)
+        if index >= list_length:
+            self._tree_hashes.extend([None] * (index + 1 - list_length))
+        self._tree_hashes[index] = raw_tree_hash
+
+    def upload_part(self, part_index, part_data):
+        """Upload a part to Glacier.
+
+        :param part_index: part number where 0 is the first part
+        :param part_data: data to upload corresponding to this part
+
+        """
+        if self.closed:
+            raise ValueError("I/O operation on closed file")
+        # Create a request and sign it
+        part_tree_hash = tree_hash(chunk_hashes(part_data, self.chunk_size))
+        self._insert_tree_hash(part_index, part_tree_hash)
+
+        hex_tree_hash = bytes_to_hex(part_tree_hash)
+        linear_hash = hashlib.sha256(part_data).hexdigest()
+        start = self.part_size * part_index
+        content_range = (start,
+                         (start + len(part_data)) - 1)
+        response = self.vault.layer1.upload_part(self.vault.name,
+                                                 self.upload_id,
+                                                 linear_hash,
+                                                 hex_tree_hash,
+                                                 content_range, part_data)
+        response.read()
+        self._uploaded_size += len(part_data)
+
+    def skip_part(self, part_index, part_tree_hash, part_length):
+        """Skip uploading of a part.
+
+        The final close call needs to calculate the tree hash and total size
+        of all uploaded data, so this is the mechanism for resume
+        functionality to provide it without actually uploading the data again.
+
+        :param part_index: part number where 0 is the first part
+        :param part_tree_hash: binary tree_hash of part being skipped
+        :param part_length: length of part being skipped
+
+        """
+        if self.closed:
+            raise ValueError("I/O operation on closed file")
+        self._insert_tree_hash(part_index, part_tree_hash)
+        self._uploaded_size += part_length
+
+    def close(self):
+        if self.closed:
+            return
+        if None in self._tree_hashes:
+            raise RuntimeError("Some parts were not uploaded.")
+        # Complete the multipart Glacier upload
+        hex_tree_hash = bytes_to_hex(tree_hash(self._tree_hashes))
+        response = self.vault.layer1.complete_multipart_upload(
+            self.vault.name, self.upload_id, hex_tree_hash,
+            self._uploaded_size)
+        self.archive_id = response['ArchiveId']
+        self.closed = True
+
+
+def generate_parts_from_fobj(fobj, part_size):
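+    """Yield successive part_size reads from fobj until it is exhausted."""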
+    data = fobj.read(part_size)
+    while data:
+        yield data
+        data = fobj.read(part_size)
+
+
+def resume_file_upload(vault, upload_id, part_size, fobj, part_hash_map,
+                       chunk_size=_ONE_MEGABYTE):
+    """Resume upload of a file already part-uploaded to Glacier.
+
+    The resumption of an upload where the part-uploaded section is empty is a
+    valid degenerate case that this function can handle. In this case,
+    part_hash_map should be an empty dict.
+
+    :param vault: boto.glacier.vault.Vault object.
+    :param upload_id: existing Glacier upload id of upload being resumed.
+    :param part_size: part size of existing upload.
+    :param fobj: file object containing local data to resume. This must read
+        from the start of the entire upload, not just from the point being
+        resumed. Use fobj.seek(0) to achieve this if necessary.
+    :param part_hash_map: {part_index: part_tree_hash, ...} of data already
+        uploaded. Each supplied part_tree_hash will be verified and the part
+        re-uploaded if there is a mismatch.
+    :param chunk_size: chunk size of tree hash calculation. This must be
+        1 MiB for Amazon.
+
+    """
+    uploader = _Uploader(vault, upload_id, part_size, chunk_size)
+    for part_index, part_data in enumerate(
+            generate_parts_from_fobj(fobj, part_size)):
+        part_tree_hash = tree_hash(chunk_hashes(part_data, chunk_size))
+        if (part_index not in part_hash_map or
+                part_hash_map[part_index] != part_tree_hash):
+            uploader.upload_part(part_index, part_data)
+        else:
+            uploader.skip_part(part_index, part_tree_hash, len(part_data))
+    uploader.close()
+    return uploader.archive_id
 
 
 class Writer(object):
@@ -101,70 +207,56 @@
     Presents a file-like object for writing to a Amazon Glacier
     Archive. The data is written using the multi-part upload API.
     """
-    def __init__(self, vault, upload_id, part_size):
-        self.vault = vault
-        self.upload_id = upload_id
-        self.part_size = part_size
-
-        self._buffer_size = 0
-        self._uploaded_size = 0
-        self._buffer = []
-        self._tree_hashes = []
-
-        self.archive_location = None
+    def __init__(self, vault, upload_id, part_size, chunk_size=_ONE_MEGABYTE):
+        self.uploader = _Uploader(vault, upload_id, part_size, chunk_size)
+        self.partitioner = _Partitioner(part_size, self._upload_part)
         self.closed = False
+        self.next_part_index = 0
 
-    def send_part(self):
-        buf = "".join(self._buffer)
-        # Put back any data remaining over the part size into the
-        # buffer
-        if len(buf) > self.part_size:
-            self._buffer = [buf[self.part_size:]]
-            self._buffer_size = len(self._buffer[0])
-        else:
-            self._buffer = []
-            self._buffer_size = 0
-        # The part we will send
-        part = buf[:self.part_size]
-        # Create a request and sign it
-        part_tree_hash = tree_hash(chunk_hashes(part))
-        self._tree_hashes.append(part_tree_hash)
-
-        hex_tree_hash = bytes_to_hex(part_tree_hash)
-        linear_hash = hashlib.sha256(part).hexdigest()
-        content_range = (self._uploaded_size,
-                         (self._uploaded_size + len(part)) - 1)
-        response = self.vault.layer1.upload_part(self.vault.name,
-                                                 self.upload_id,
-                                                 linear_hash,
-                                                 hex_tree_hash,
-                                                 content_range, part)
-        self._uploaded_size += len(part)
-
-    def write(self, str):
+    def write(self, data):
         if self.closed:
             raise ValueError("I/O operation on closed file")
-        if str == "":
-            return
-        self._buffer.append(str)
-        self._buffer_size += len(str)
-        while self._buffer_size > self.part_size:
-            self.send_part()
+        self.partitioner.write(data)
+
+    def _upload_part(self, part_data):
+        self.uploader.upload_part(self.next_part_index, part_data)
+        self.next_part_index += 1
 
     def close(self):
         if self.closed:
             return
-        if self._buffer_size > 0:
-            self.send_part()
-        # Complete the multiplart glacier upload
-        hex_tree_hash = bytes_to_hex(tree_hash(self._tree_hashes))
-        response = self.vault.layer1.complete_multipart_upload(self.vault.name,
-                                                               self.upload_id,
-                                                               hex_tree_hash,
-                                                               self._uploaded_size)
-        self.archive_id = response['ArchiveId']
+        self.partitioner.flush()
+        self.uploader.close()
         self.closed = True
 
     def get_archive_id(self):
         self.close()
-        return self.archive_id
+        return self.uploader.archive_id
+
+    @property
+    def current_tree_hash(self):
+        """
+        Returns the tree hash of the data that has been written **so far**.
+
+        The final tree hash is only available once writing is complete,
+        i.e. after close() has been called.
+        """
+        return tree_hash(self.uploader._tree_hashes)
+
+    @property
+    def current_uploaded_size(self):
+        """
+        Returns the number of bytes uploaded **so far**.
+
+        The final uploaded size is only available once writing is complete,
+        i.e. after close() has been called.
+        """
+        return self.uploader._uploaded_size
+
+    @property
+    def upload_id(self):
+        return self.uploader.upload_id
+
+    @property
+    def vault(self):
+        return self.uploader.vault
diff --git a/boto/gs/acl.py b/boto/gs/acl.py
index 7df3e58..57bdce1 100755
--- a/boto/gs/acl.py
+++ b/boto/gs/acl.py
@@ -46,14 +46,17 @@
 CannedACLStrings = ['private', 'public-read', 'project-private',
                     'public-read-write', 'authenticated-read',
                     'bucket-owner-read', 'bucket-owner-full-control']
+"""A list of Google Cloud Storage predefined (canned) ACL strings."""
 
 SupportedPermissions = ['READ', 'WRITE', 'FULL_CONTROL']
+"""A list of supported ACL permissions."""
 
-class ACL:
+
+class ACL(object):
 
     def __init__(self, parent=None):
         self.parent = parent
-        self.entries = []
+        self.entries = Entries(self)
 
     @property
     def acl(self):
@@ -123,7 +126,7 @@
         return s
 
 
-class Entries:
+class Entries(object):
 
     def __init__(self, parent=None):
         self.parent = parent
@@ -152,15 +155,17 @@
             setattr(self, name, value)
 
     def to_xml(self):
+        if not self.entry_list:
+            return ''
         s = '<%s>' % ENTRIES
         for entry in self.entry_list:
             s += entry.to_xml()
         s += '</%s>' % ENTRIES
         return s
-        
+
 
 # Class that represents a single (Scope, Permission) entry in an ACL.
-class Entry:
+class Entry(object):
 
     def __init__(self, scope=None, type=None, id=None, name=None,
                  email_address=None, domain=None, permission=None):
@@ -217,7 +222,8 @@
         s += '</%s>' % ENTRY
         return s
 
-class Scope:
+
+class Scope(object):
 
     # Map from Scope type.lower() to lower-cased list of allowed sub-elems.
     ALLOWED_SCOPE_TYPE_SUB_ELEMS = {
diff --git a/boto/gs/bucket.py b/boto/gs/bucket.py
index d86b89d..96c2bdc 100644
--- a/boto/gs/bucket.py
+++ b/boto/gs/bucket.py
@@ -19,16 +19,21 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+import urllib
+import xml.sax
+
 import boto
 from boto import handler
+from boto.resultset import ResultSet
 from boto.exception import InvalidAclError
 from boto.gs.acl import ACL, CannedACLStrings
 from boto.gs.acl import SupportedPermissions as GSPermissions
+from boto.gs.bucketlistresultset import VersionedBucketListResultSet
 from boto.gs.cors import Cors
 from boto.gs.key import Key as GSKey
 from boto.s3.acl import Policy
 from boto.s3.bucket import Bucket as S3Bucket
-import xml.sax
+from boto.utils import get_utf8_value
 
 # constants for http query args
 DEF_OBJ_ACL = 'defaultObjectAcl'
@@ -36,6 +41,11 @@
 CORS_ARG = 'cors'
 
 class Bucket(S3Bucket):
+    """Represents a Google Cloud Storage bucket."""
+
+    VersioningBody = ('<?xml version="1.0" encoding="UTF-8"?>\n'
+                      '<VersioningConfiguration><Status>%s</Status>'
+                      '</VersioningConfiguration>')
     WebsiteBody = ('<?xml version="1.0" encoding="UTF-8"?>\n'
                    '<WebsiteConfiguration>%s%s</WebsiteConfiguration>')
     WebsiteMainPageFragment = '<MainPageSuffix>%s</MainPageSuffix>'
@@ -44,97 +54,486 @@
     def __init__(self, connection=None, name=None, key_class=GSKey):
         super(Bucket, self).__init__(connection, name, key_class)
 
-    def set_acl(self, acl_or_str, key_name='', headers=None, version_id=None):
-        """sets or changes a bucket's or key's acl (depending on whether a
-        key_name was passed). We include a version_id argument to support a
-        polymorphic interface for callers, however, version_id is not relevant
-        for Google Cloud Storage buckets and is therefore ignored here."""
-        key_name = key_name or ''
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'Name':
+            self.name = value
+        elif name == 'CreationDate':
+            self.creation_date = value
+        else:
+            setattr(self, name, value)
+
+    def get_key(self, key_name, headers=None, version_id=None,
+                response_headers=None, generation=None):
+        """Returns a Key instance for an object in this bucket.
+
+        Note that this method uses a HEAD request to check for the existence
+        of the key.
+
+        :type key_name: string
+        :param key_name: The name of the key to retrieve
+
+        :type response_headers: dict
+        :param response_headers: A dictionary containing HTTP
+            headers/values that will override any headers associated
+            with the stored object in the response.  See
+            http://goo.gl/06N3b for details.
+
+        :type version_id: string
+        :param version_id: Unused in this subclass.
+
+        :type generation: int
+        :param generation: A specific generation number to fetch the key at. If
+            not specified, the latest generation is fetched.
+
+        :rtype: :class:`boto.gs.key.Key`
+        :returns: A Key object from this bucket.
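+
+        Illustrative use (the generation number is a placeholder)::
+
+            key = bucket.get_key('photo.jpg', generation=1360887759327000)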
+        """
+        query_args_l = []
+        if generation:
+            query_args_l.append('generation=%s' % generation)
+        if response_headers:
+            for rk, rv in response_headers.iteritems():
+                query_args_l.append('%s=%s' % (rk, urllib.quote(rv)))
+
+        key, resp = self._get_key_internal(key_name, headers,
+                                           query_args_l=query_args_l)
+        return key
+
+    def copy_key(self, new_key_name, src_bucket_name, src_key_name,
+                 metadata=None, src_version_id=None, storage_class='STANDARD',
+                 preserve_acl=False, encrypt_key=False, headers=None,
+                 query_args=None, src_generation=None):
+        """Create a new key in the bucket by copying an existing key.
+
+        :type new_key_name: string
+        :param new_key_name: The name of the new key
+
+        :type src_bucket_name: string
+        :param src_bucket_name: The name of the source bucket
+
+        :type src_key_name: string
+        :param src_key_name: The name of the source key
+
+        :type src_generation: int
+        :param src_generation: The generation number of the source key to copy.
+            If not specified, the latest generation is copied.
+
+        :type metadata: dict
+        :param metadata: Metadata to be associated with new key.  If
+            metadata is supplied, it will replace the metadata of the
+            source key being copied.  If no metadata is supplied, the
+            source key's metadata will be copied to the new key.
+
+        :type version_id: string
+        :param version_id: Unused in this subclass.
+
+        :type storage_class: string
+        :param storage_class: The storage class of the new key.  By
+            default, the new key will use the standard storage class.
+            Possible values are: STANDARD | DURABLE_REDUCED_AVAILABILITY
+
+        :type preserve_acl: bool
+        :param preserve_acl: If True, the ACL from the source key will
+            be copied to the destination key.  If False, the
+            destination key will have the default ACL.  Note that
+            preserving the ACL in the new key object will require two
+            additional API calls to GCS, one to retrieve the current
+            ACL and one to set that ACL on the new object.  If you
+            don't care about the ACL (or if you have a default ACL set
+            on the bucket), a value of False will be significantly more
+            efficient.
+
+        :type encrypt_key: bool
+        :param encrypt_key: Included for compatibility with S3. This argument is
+            ignored.
+
+        :type headers: dict
+        :param headers: A dictionary of header name/value pairs.
+
+        :type query_args: string
+        :param query_args: A string of additional querystring arguments
+            to append to the request
+
+        :rtype: :class:`boto.gs.key.Key`
+        :returns: An instance of the newly created key object
+        """
+        if src_generation:
+            headers = headers or {}
+            headers['x-goog-copy-source-generation'] = str(src_generation)
+        return super(Bucket, self).copy_key(
+            new_key_name, src_bucket_name, src_key_name, metadata=metadata,
+            storage_class=storage_class, preserve_acl=preserve_acl,
+            encrypt_key=encrypt_key, headers=headers, query_args=query_args)
+
+    def list_versions(self, prefix='', delimiter='', marker='',
+                      generation_marker='', headers=None):
+        """
+        List versioned objects within a bucket.  This returns an
+        instance of a VersionedBucketListResultSet that automatically
+        handles all of the result paging, etc. from GCS.  You just need
+        to keep iterating until there are no more results.  Called
+        with no arguments, this will return an iterator object across
+        all keys within the bucket.
+
+        :type prefix: string
+        :param prefix: allows you to limit the listing to a particular
+            prefix.  For example, if you call the method with
+            prefix='/foo/' then the iterator will only cycle through
+            the keys that begin with the string '/foo/'.
+
+        :type delimiter: string
+        :param delimiter: can be used in conjunction with the prefix
+            to allow you to organize and browse your keys
+            hierarchically. See:
+            https://developers.google.com/storage/docs/reference-headers#delimiter
+            for more details.
+
+        :type marker: string
+        :param marker: The "marker" of where you are in the result set
+
+        :type generation_marker: string
+        :param generation_marker: The "generation marker" of where you are in
+            the result set.
+
+        :type headers: dict
+        :param headers: A dictionary of header name/value pairs.
+
+        :rtype:
+            :class:`boto.gs.bucketlistresultset.VersionedBucketListResultSet`
+        :return: an instance of a BucketListResultSet that handles paging, etc.
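+
+        Illustrative use::
+
+            for key in bucket.list_versions(prefix='logs/'):
+                print key.name, key.generation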
+        """
+        return VersionedBucketListResultSet(self, prefix, delimiter,
+                                            marker, generation_marker,
+                                            headers)
+
+    def delete_key(self, key_name, headers=None, version_id=None,
+                   mfa_token=None, generation=None):
+        """
+        Deletes a key from the bucket.
+
+        :type key_name: string
+        :param key_name: The key name to delete
+
+        :type headers: dict
+        :param headers: A dictionary of header name/value pairs.
+
+        :type version_id: string
+        :param version_id: Unused in this subclass.
+
+        :type mfa_token: tuple or list of strings
+        :param mfa_token: Unused in this subclass.
+
+        :type generation: int
+        :param generation: The generation number of the key to delete. If not
+            specified, the latest generation number will be deleted.
+
+        :rtype: :class:`boto.gs.key.Key`
+        :returns: A key object holding information on what was
+            deleted.
+        """
+        query_args_l = []
+        if generation:
+            query_args_l.append('generation=%s' % generation)
+        self._delete_key_internal(key_name, headers=headers,
+                                  version_id=version_id, mfa_token=mfa_token,
+                                  query_args_l=query_args_l)
+
+    def set_acl(self, acl_or_str, key_name='', headers=None, version_id=None,
+                generation=None, if_generation=None, if_metageneration=None):
+        """Sets or changes a bucket's or key's ACL.
+
+        :type acl_or_str: string or :class:`boto.gs.acl.ACL`
+        :param acl_or_str: A canned ACL string (see
+            :data:`~.gs.acl.CannedACLStrings`) or an ACL object.
+
+        :type key_name: string
+        :param key_name: A key name within the bucket to set the ACL for. If not
+            specified, the ACL for the bucket will be set.
+
+        :type headers: dict
+        :param headers: Additional headers to set during the request.
+
+        :type version_id: string
+        :param version_id: Unused in this subclass.
+
+        :type generation: int
+        :param generation: If specified, sets the ACL for a specific generation
+            of a versioned object. If not specified, the current version is
+            modified.
+
+        :type if_generation: int
+        :param if_generation: (optional) If set to a generation number, the acl
+            will only be updated if its current generation number is this value.
+
+        :type if_metageneration: int
+        :param if_metageneration: (optional) If set to a metageneration number,
+            the acl will only be updated if its current metageneration number is
+            this value.
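+
+        Illustrative use (the generation and metageneration values are
+        placeholders)::
+
+            bucket.set_acl('public-read', key_name='photo.jpg',
+                           generation=1360887759327000,
+                           if_generation=1360887759327000,
+                           if_metageneration=1)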
+        """
         if isinstance(acl_or_str, Policy):
             raise InvalidAclError('Attempt to set S3 Policy on GS ACL')
         elif isinstance(acl_or_str, ACL):
-            self.set_xml_acl(acl_or_str.to_xml(), key_name, headers=headers)
+            self.set_xml_acl(acl_or_str.to_xml(), key_name, headers=headers,
+                             generation=generation,
+                             if_generation=if_generation,
+                             if_metageneration=if_metageneration)
         else:
-            self.set_canned_acl(acl_or_str, key_name, headers=headers)
+            self.set_canned_acl(acl_or_str, key_name, headers=headers,
+                                generation=generation,
+                                if_generation=if_generation,
+                                if_metageneration=if_metageneration)
 
-    def set_def_acl(self, acl_or_str, key_name='', headers=None):
-        """sets or changes a bucket's default object acl. The key_name argument
-        is ignored since keys have no default ACL property."""
+    def set_def_acl(self, acl_or_str, headers=None):
+        """Sets or changes a bucket's default ACL.
+
+        :type acl_or_str: string or :class:`boto.gs.acl.ACL`
+        :param acl_or_str: A canned ACL string (see
+            :data:`~.gs.acl.CannedACLStrings`) or an ACL object.
+
+        :type headers: dict
+        :param headers: Additional headers to set during the request.
+        """
         if isinstance(acl_or_str, Policy):
             raise InvalidAclError('Attempt to set S3 Policy on GS ACL')
         elif isinstance(acl_or_str, ACL):
-            self.set_def_xml_acl(acl_or_str.to_xml(), '', headers=headers)
+            self.set_def_xml_acl(acl_or_str.to_xml(), headers=headers)
         else:
-            self.set_def_canned_acl(acl_or_str, '', headers=headers)
+            self.set_def_canned_acl(acl_or_str, headers=headers)
 
-    def get_acl_helper(self, key_name, headers, query_args):
-        """provides common functionality for get_acl() and get_def_acl()"""
+    def _get_xml_acl_helper(self, key_name, headers, query_args):
+        """Provides common functionality for get_xml_acl and _get_acl_helper."""
         response = self.connection.make_request('GET', self.name, key_name,
                                                 query_args=query_args,
                                                 headers=headers)
         body = response.read()
-        if response.status == 200:
-            acl = ACL(self)
-            h = handler.XmlHandler(acl, self)
-            xml.sax.parseString(body, h)
-            return acl
-        else:
+        if response.status != 200:
             raise self.connection.provider.storage_response_error(
                 response.status, response.reason, body)
+        return body
 
-    def get_acl(self, key_name='', headers=None, version_id=None):
-        """returns a bucket's acl. We include a version_id argument
-           to support a polymorphic interface for callers, however,
-           version_id is not relevant for Google Cloud Storage buckets
-           and is therefore ignored here."""
-        return self.get_acl_helper(key_name, headers, STANDARD_ACL)
+    def _get_acl_helper(self, key_name, headers, query_args):
+        """Provides common functionality for get_acl and get_def_acl."""
+        body = self._get_xml_acl_helper(key_name, headers, query_args)
+        acl = ACL(self)
+        h = handler.XmlHandler(acl, self)
+        xml.sax.parseString(body, h)
+        return acl
 
-    def get_def_acl(self, key_name='', headers=None):
-        """returns a bucket's default object acl. The key_name argument is
-        ignored since keys have no default ACL property."""
-        return self.get_acl_helper('', headers, DEF_OBJ_ACL)
+    def get_acl(self, key_name='', headers=None, version_id=None,
+                generation=None):
+        """Returns the ACL of the bucket or an object in the bucket.
 
-    def set_canned_acl_helper(self, acl_str, key_name, headers, query_args):
-        """provides common functionality for set_canned_acl() and
-           set_def_canned_acl()"""
-        assert acl_str in CannedACLStrings
+        :param str key_name: The name of the object to get the ACL for. If not
+            specified, the ACL for the bucket will be returned.
 
-        if headers:
-            headers[self.connection.provider.acl_header] = acl_str
+        :param dict headers: Additional headers to set during the request.
+
+        :type version_id: string
+        :param version_id: Unused in this subclass.
+
+        :param int generation: If specified, gets the ACL for a specific
+            generation of a versioned object. If not specified, the current
+            version is returned. This parameter is only valid when retrieving
+            the ACL of an object, not a bucket.
+
+        :rtype: :class:`.gs.acl.ACL`
+        """
+        query_args = STANDARD_ACL
+        if generation:
+            query_args += '&generation=%s' % generation
+        return self._get_acl_helper(key_name, headers, query_args)
+
+    def get_xml_acl(self, key_name='', headers=None, version_id=None,
+                    generation=None):
+        """Returns the ACL string of the bucket or an object in the bucket.
+
+        :param str key_name: The name of the object to get the ACL for. If not
+            specified, the ACL for the bucket will be returned.
+
+        :param dict headers: Additional headers to set during the request.
+
+        :type version_id: string
+        :param version_id: Unused in this subclass.
+
+        :param int generation: If specified, gets the ACL for a specific
+            generation of a versioned object. If not specified, the current
+            version is returned. This parameter is only valid when retrieving
+            the ACL of an object, not a bucket.
+
+        :rtype: str
+        """
+        query_args = STANDARD_ACL
+        if generation:
+            query_args += '&generation=%s' % generation
+        return self._get_xml_acl_helper(key_name, headers, query_args)
+
+    def get_def_acl(self, headers=None):
+        """Returns the bucket's default ACL.
+
+        :param dict headers: Additional headers to set during the request.
+
+        :rtype: :class:`.gs.acl.ACL`
+        """
+        return self._get_acl_helper('', headers, DEF_OBJ_ACL)
+
+    def _set_acl_helper(self, acl_or_str, key_name, headers, query_args,
+                          generation, if_generation, if_metageneration,
+                          canned=False):
+        """Provides common functionality for set_acl, set_xml_acl,
+        set_canned_acl, set_def_acl, set_def_xml_acl, and
+        set_def_canned_acl()."""
+
+        headers = headers or {}
+        data = ''
+        if canned:
+            headers[self.connection.provider.acl_header] = acl_or_str
         else:
-            headers={self.connection.provider.acl_header: acl_str}
+            data = acl_or_str
 
-        response = self.connection.make_request('PUT', self.name, key_name,
-                headers=headers, query_args=query_args)
+        if generation:
+            query_args += '&generation=%s' % generation
+
+        if if_metageneration is not None and if_generation is None:
+            raise ValueError("Received if_metageneration argument with no "
+                             "if_generation argument. A metageneration has no "
+                             "meaning without a content generation.")
+        if not key_name and (if_generation or if_metageneration):
+            raise ValueError("Received if_generation or if_metageneration "
+                             "parameter while setting the ACL of a bucket.")
+        if if_generation is not None:
+            headers['x-goog-if-generation-match'] = str(if_generation)
+        if if_metageneration is not None:
+            headers['x-goog-if-metageneration-match'] = str(if_metageneration)
+
+        response = self.connection.make_request(
+            'PUT', get_utf8_value(self.name), get_utf8_value(key_name),
+            data=get_utf8_value(data), headers=headers, query_args=query_args)
         body = response.read()
         if response.status != 200:
             raise self.connection.provider.storage_response_error(
                 response.status, response.reason, body)
 
+    def set_xml_acl(self, acl_str, key_name='', headers=None, version_id=None,
+                    query_args='acl', generation=None, if_generation=None,
+                    if_metageneration=None):
+        """Sets a bucket's or objects's ACL to an XML string.
+
+        :type acl_str: string
+        :param acl_str: A string containing the ACL XML.
+
+        :type key_name: string
+        :param key_name: A key name within the bucket to set the ACL for. If not
+            specified, the ACL for the bucket will be set.
+
+        :type headers: dict
+        :param headers: Additional headers to set during the request.
+
+        :type version_id: string
+        :param version_id: Unused in this subclass.
+
+        :type query_args: str
+        :param query_args: The query parameters to pass with the request.
+
+        :type generation: int
+        :param generation: If specified, sets the ACL for a specific generation
+            of a versioned object. If not specified, the current version is
+            modified.
+
+        :type if_generation: int
+        :param if_generation: (optional) If set to a generation number, the acl
+            will only be updated if its current generation number is this value.
+
+        :type if_metageneration: int
+        :param if_metageneration: (optional) If set to a metageneration number,
+            the acl will only be updated if its current metageneration number is
+            this value.
+        """
+        return self._set_acl_helper(acl_str, key_name=key_name, headers=headers,
+                                    query_args=query_args,
+                                    generation=generation,
+                                    if_generation=if_generation,
+                                    if_metageneration=if_metageneration)
+
     def set_canned_acl(self, acl_str, key_name='', headers=None,
-                       version_id=None):
-        """sets or changes a bucket's acl to a predefined (canned) value.
-           We include a version_id argument to support a polymorphic
-           interface for callers, however, version_id is not relevant for
-           Google Cloud Storage buckets and is therefore ignored here."""
-        return self.set_canned_acl_helper(acl_str, key_name, headers,
-                                          STANDARD_ACL)
+                       version_id=None, generation=None, if_generation=None,
+                       if_metageneration=None):
+        """Sets a bucket's or objects's ACL using a predefined (canned) value.
 
-    def set_def_canned_acl(self, acl_str, key_name='', headers=None):
-        """sets or changes a bucket's default object acl to a predefined
-           (canned) value. The key_name argument is ignored since keys have no
-           default ACL property."""
-        return self.set_canned_acl_helper(acl_str, '', headers,
-                                          query_args=DEF_OBJ_ACL)
+        :type acl_str: string
+        :param acl_str: A canned ACL string. See
+            :data:`~.gs.acl.CannedACLStrings`.
 
-    def set_def_xml_acl(self, acl_str, key_name='', headers=None):
-        """sets or changes a bucket's default object ACL. The key_name argument
-        is ignored since keys have no default ACL property."""
+        :type key_name: string
+        :param key_name: A key name within the bucket to set the ACL for. If not
+            specified, the ACL for the bucket will be set.
+
+        :type headers: dict
+        :param headers: Additional headers to set during the request.
+
+        :type version_id: string
+        :param version_id: Unused in this subclass.
+
+        :type generation: int
+        :param generation: If specified, sets the ACL for a specific generation
+            of a versioned object. If not specified, the current version is
+            modified.
+
+        :type if_generation: int
+        :param if_generation: (optional) If set to a generation number, the acl
+            will only be updated if its current generation number is this value.
+
+        :type if_metageneration: int
+        :param if_metageneration: (optional) If set to a metageneration number,
+            the acl will only be updated if its current metageneration number is
+            this value.
+        """
+        if acl_str not in CannedACLStrings:
+            raise ValueError("Provided canned ACL string (%s) is not valid."
+                             % acl_str)
+        query_args = STANDARD_ACL
+        return self._set_acl_helper(acl_str, key_name, headers, query_args,
+                                    generation, if_generation,
+                                    if_metageneration, canned=True)
+
+    def set_def_canned_acl(self, acl_str, headers=None):
+        """Sets a bucket's default ACL using a predefined (canned) value.
+
+        :type acl_str: string
+        :param acl_str: A canned ACL string. See
+            :data:`~.gs.acl.CannedACLStrings`.
+
+        :type headers: dict
+        :param headers: Additional headers to set during the request.
+        """
+        if acl_str not in CannedACLStrings:
+            raise ValueError("Provided canned ACL string (%s) is not valid."
+                             % acl_str)
+        query_args = DEF_OBJ_ACL
+        return self._set_acl_helper(acl_str, '', headers, query_args,
+                                    generation=None, if_generation=None,
+                                    if_metageneration=None, canned=True)
+
+    def set_def_xml_acl(self, acl_str, headers=None):
+        """Sets a bucket's default ACL to an XML string.
+
+        :type acl_str: string
+        :param acl_str: A string containing the ACL XML.
+
+        :type headers: dict
+        :param headers: Additional headers to set during the request.
+        """
         return self.set_xml_acl(acl_str, '', headers,
                                 query_args=DEF_OBJ_ACL)
 
     def get_cors(self, headers=None):
-        """returns a bucket's CORS XML"""
+        """Returns a bucket's CORS XML document.
+
+        :param dict headers: Additional headers to send with the request.
+        :rtype: :class:`~.cors.Cors`
+        """
         response = self.connection.make_request('GET', self.name,
                                                 query_args=CORS_ARG,
                                                 headers=headers)
@@ -150,17 +549,39 @@
                 response.status, response.reason, body)
 
     def set_cors(self, cors, headers=None):
-        """sets or changes a bucket's CORS XML."""
-        cors_xml = cors.encode('UTF-8')
-        response = self.connection.make_request('PUT', self.name,
-                                                data=cors_xml,
-                                                query_args=CORS_ARG,
-                                                headers=headers)
+        """Sets a bucket's CORS XML document.
+
+        :param str cors: A string containing the CORS XML.
+        :param dict headers: Additional headers to send with the request.
+        """
+        response = self.connection.make_request(
+            'PUT', get_utf8_value(self.name), data=get_utf8_value(cors),
+            query_args=CORS_ARG, headers=headers)
         body = response.read()
         if response.status != 200:
             raise self.connection.provider.storage_response_error(
                 response.status, response.reason, body)
 
+    def get_storage_class(self):
+        """
+        Returns the StorageClass for the bucket.
+
+        :rtype: str
+        :return: The StorageClass for the bucket.
+        """
+        response = self.connection.make_request('GET', self.name,
+                                                query_args='storageClass')
+        body = response.read()
+        if response.status == 200:
+            rs = ResultSet(self)
+            h = handler.XmlHandler(rs, self)
+            xml.sax.parseString(body, h)
+            return rs.StorageClass
+        else:
+            raise self.connection.provider.storage_response_error(
+                response.status, response.reason, body)
+
+
     # Method with same signature as boto.s3.bucket.Bucket.add_email_grant(),
     # to allow polymorphic treatment at application layer.
     def add_email_grant(self, permission, email_address,
@@ -179,7 +600,7 @@
         :param email_address: The email address associated with the GS
                               account you are granting the permission to.
 
-        :type recursive: boolean
+        :type recursive: bool
         :param recursive: A boolean value that controls whether the call
                           will apply the grant to all keys within the bucket
                           or not.  The default value is False.  By passing a
@@ -200,7 +621,8 @@
 
     # Method with same signature as boto.s3.bucket.Bucket.add_user_grant(),
     # to allow polymorphic treatment at application layer.
-    def add_user_grant(self, permission, user_id, recursive=False, headers=None):
+    def add_user_grant(self, permission, user_id, recursive=False,
+                       headers=None):
         """
         Convenience method that provides a quick way to add a canonical user
         grant to a bucket. This method retrieves the current ACL, creates a new
@@ -276,14 +698,34 @@
     # (but returning different object type), to allow polymorphic treatment
     # at application layer.
     def list_grants(self, headers=None):
+        """Returns the ACL entries applied to this bucket.
+
+        :param dict headers: Additional headers to send with the request.
+        :rtype: list containing :class:`~.gs.acl.Entry` objects.
+        """
         acl = self.get_acl(headers=headers)
         return acl.entries
 
     def disable_logging(self, headers=None):
+        """Disable logging on this bucket.
+
+        :param dict headers: Additional headers to send with the request.
+        """
         xml_str = '<?xml version="1.0" encoding="UTF-8"?><Logging/>'
         self.set_subresource('logging', xml_str, headers=headers)
 
     def enable_logging(self, target_bucket, target_prefix=None, headers=None):
+        """Enable logging on a bucket.
+
+        :type target_bucket: bucket or string
+        :param target_bucket: The bucket to log to.
+
+        :type target_prefix: string
+        :param target_prefix: The prefix which should be prepended to the
+            generated log files written to the target_bucket.
+
+        :param dict headers: Additional headers to send with the request.
+        """
         if isinstance(target_bucket, Bucket):
             target_bucket = target_bucket.name
         xml_str = '<?xml version="1.0" encoding="UTF-8"?><Logging>'
@@ -295,27 +737,74 @@
 
         self.set_subresource('logging', xml_str, headers=headers)
 
+    def get_logging_config_with_xml(self, headers=None):
+        """Returns the current status of logging configuration on the bucket as
+        unparsed XML.
+
+        :param dict headers: Additional headers to send with the request.
+
+        :rtype: 2-Tuple
+        :returns: 2-tuple containing:
+
+            1) A dictionary containing the parsed XML response from GCS. The
+              overall structure is:
+
+              * Logging
+
+                * LogObjectPrefix: Prefix that is prepended to log objects.
+                * LogBucket: Target bucket for log objects.
+
+            2) Unparsed XML describing the bucket's logging configuration.
+        """
+        response = self.connection.make_request('GET', self.name,
+                                                query_args='logging',
+                                                headers=headers)
+        body = response.read()
+        boto.log.debug(body)
+
+        if response.status != 200:
+            raise self.connection.provider.storage_response_error(
+                response.status, response.reason, body)
+
+        e = boto.jsonresponse.Element()
+        h = boto.jsonresponse.XmlHandler(e, None)
+        h.parse(body)
+        return e, body
+
+    def get_logging_config(self, headers=None):
+        """Returns the current status of logging configuration on the bucket.
+
+        :param dict headers: Additional headers to send with the request.
+
+        :rtype: dict
+        :returns: A dictionary containing the parsed XML response from GCS. The
+            overall structure is:
+
+            * Logging
+
+              * LogObjectPrefix: Prefix that is prepended to log objects.
+              * LogBucket: Target bucket for log objects.
+        """
+        return self.get_logging_config_with_xml(headers)[0]
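A short sketch pairing enable_logging()/disable_logging() with the accessors above (assumes `bucket` is a boto.gs.bucket.Bucket and that the illustrative log bucket exists and is writable by GCS):

    bucket.enable_logging('example-log-bucket', target_prefix='access-log/')
    config = bucket.get_logging_config()
    log_bucket = config['Logging'].get('LogBucket')        # 'example-log-bucket'
    log_prefix = config['Logging'].get('LogObjectPrefix')  # 'access-log/'
    bucket.disable_logging()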
+
     def configure_website(self, main_page_suffix=None, error_key=None,
                           headers=None):
-        """
-        Configure this bucket to act as a website
+        """Configure this bucket to act as a website
 
-        :type suffix: str
-        :param suffix: Suffix that is appended to a request that is for a
-                       "directory" on the website endpoint (e.g. if the suffix
-                       is index.html and you make a request to
-                       samplebucket/images/ the data that is returned will
-                       be for the object with the key name images/index.html).
-                       The suffix must not be empty and must not include a
-                       slash character. This parameter is optional and the
-                       property is disabled if excluded.
-
+        :type main_page_suffix: str
+        :param main_page_suffix: Suffix that is appended to a request that is
+            for a "directory" on the website endpoint (e.g. if the suffix is
+            index.html and you make a request to samplebucket/images/ the data
+            that is returned will be for the object with the key name
+            images/index.html). The suffix must not be empty and must not
+            include a slash character. This parameter is optional and the
+            property is disabled if excluded.
 
         :type error_key: str
-        :param error_key: The object key name to use when a 400
-                          error occurs. This parameter is optional and the
-                          property is disabled if excluded.
+        :param error_key: The object key name to use when a 404 error occurs.
+            This parameter is optional and the property is disabled if excluded.
 
+        :param dict headers: Additional headers to send with the request.
         """
         if main_page_suffix:
             main_page_frag = self.WebsiteMainPageFragment % main_page_suffix
@@ -328,9 +817,9 @@
             error_frag = ''
 
         body = self.WebsiteBody % (main_page_frag, error_frag)
-        response = self.connection.make_request('PUT', self.name, data=body,
-                                                query_args='websiteConfig',
-                                                headers=headers)
+        response = self.connection.make_request(
+            'PUT', get_utf8_value(self.name), data=get_utf8_value(body),
+            query_args='websiteConfig', headers=headers)
         body = response.read()
         if response.status == 200:
             return True
@@ -339,36 +828,43 @@
                 response.status, response.reason, body)
 
     def get_website_configuration(self, headers=None):
-        """
-        Returns the current status of website configuration on the bucket.
+        """Returns the current status of website configuration on the bucket.
+
+        :param dict headers: Additional headers to send with the request.
 
         :rtype: dict
-        :returns: A dictionary containing a Python representation
-                  of the XML response from GCS. The overall structure is:
+        :returns: A dictionary containing the parsed XML response from GCS. The
+            overall structure is:
 
-        * WebsiteConfiguration
-          * MainPageSuffix: suffix that is appended to request that
-              is for a "directory" on the website endpoint
-          * NotFoundPage: name of an object to serve when site visitors
-              encounter a 404
+            * WebsiteConfiguration
+
+              * MainPageSuffix: suffix that is appended to a request that
+                is for a "directory" on the website endpoint.
+              * NotFoundPage: name of an object to serve when site visitors
+                encounter a 404.
         """
-        return self.get_website_configuration_xml(self, headers)[0]
+        return self.get_website_configuration_with_xml(headers)[0]
 
     def get_website_configuration_with_xml(self, headers=None):
-        """
-        Returns the current status of website configuration on the bucket as
+        """Returns the current status of website configuration on the bucket as
         unparsed XML.
 
+        :param dict headers: Additional headers to send with the request.
+
         :rtype: 2-Tuple
         :returns: 2-tuple containing:
-        1) A dictionary containing a Python representation
-                  of the XML response from GCS. The overall structure is:
-          * WebsiteConfiguration
-            * MainPageSuffix: suffix that is appended to request that
-                is for a "directory" on the website endpoint
-            * NotFoundPage: name of an object to serve when site visitors
-                encounter a 404
-        2) unparsed XML describing the bucket's website configuration.
+
+            1) A dictionary containing the parsed XML response from GCS. The
+              overall structure is:
+
+              * WebsiteConfiguration
+
+                * MainPageSuffix: suffix that is appended to a request
+                  that is for a "directory" on the website endpoint.
+                * NotFoundPage: name of an object to serve when site visitors
+                  encounter a 404.
+
+            2) Unparsed XML describing the bucket's website configuration.
         """
         response = self.connection.make_request('GET', self.name,
                 query_args='websiteConfig', headers=headers)
@@ -385,4 +881,40 @@
         return e, body
 
     def delete_website_configuration(self, headers=None):
+        """Remove the website configuration from this bucket.
+
+        :param dict headers: Additional headers to send with the request.
+        """
         self.configure_website(headers=headers)
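Putting the website calls together (object names are illustrative; `bucket` is assumed to be a boto.gs.bucket.Bucket):

    bucket.configure_website(main_page_suffix='index.html', error_key='404.html')
    cfg = bucket.get_website_configuration()
    # cfg['WebsiteConfiguration'] carries MainPageSuffix / NotFoundPage when set.
    bucket.delete_website_configuration()  # issues an empty configure_website()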
+
+    def get_versioning_status(self, headers=None):
+        """Returns the current status of versioning configuration on the bucket.
+
+        :param dict headers: Additional headers to send with the request.
+
+        :rtype: bool
+        """
+        response = self.connection.make_request('GET', self.name,
+                                                query_args='versioning',
+                                                headers=headers)
+        body = response.read()
+        boto.log.debug(body)
+        if response.status != 200:
+            raise self.connection.provider.storage_response_error(
+                    response.status, response.reason, body)
+        resp_json = boto.jsonresponse.Element()
+        boto.jsonresponse.XmlHandler(resp_json, None).parse(body)
+        resp_json = resp_json['VersioningConfiguration']
+        return ('Status' in resp_json) and (resp_json['Status'] == 'Enabled')
+
+    def configure_versioning(self, enabled, headers=None):
+        """Configure versioning for this bucket.
+
+        :param bool enabled: If set to True, enables versioning on this bucket.
+            If set to False, disables versioning.
+
+        :param dict headers: Additional headers to send with the request.
+        """
+        if enabled == True:
+            req_body = self.VersioningBody % ('Enabled')
+        else:
+            req_body = self.VersioningBody % ('Suspended')
+        self.set_subresource('versioning', req_body, headers=headers)
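A minimal sketch of the versioning API above (assumes `bucket` is a boto.gs.bucket.Bucket):

    bucket.configure_versioning(True)    # PUTs an 'Enabled' VersioningConfiguration
    assert bucket.get_versioning_status()
    bucket.configure_versioning(False)   # requests 'Suspended'; status reads False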
diff --git a/boto/gs/bucketlistresultset.py b/boto/gs/bucketlistresultset.py
new file mode 100644
index 0000000..5e717a5
--- /dev/null
+++ b/boto/gs/bucketlistresultset.py
@@ -0,0 +1,64 @@
+# Copyright 2012 Google Inc.
+# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+def versioned_bucket_lister(bucket, prefix='', delimiter='',
+                            marker='', generation_marker='', headers=None):
+    """
+    A generator function for listing versioned objects.
+    """
+    more_results = True
+    k = None
+    while more_results:
+        rs = bucket.get_all_versions(prefix=prefix, marker=marker,
+                                     generation_marker=generation_marker,
+                                     delimiter=delimiter, headers=headers,
+                                     max_keys=999)
+        for k in rs:
+            yield k
+        marker = rs.next_marker
+        generation_marker = rs.next_generation_marker
+        more_results = rs.is_truncated
+
+class VersionedBucketListResultSet:
+    """
+    A resultset for listing versions within a bucket.  Uses the bucket_lister
+    generator function and implements the iterator interface.  This
+    transparently handles the results paging from GCS so even if you have
+    many thousands of keys within the bucket you can iterate over all
+    keys in a reasonably efficient manner.
+    """
+
+    def __init__(self, bucket=None, prefix='', delimiter='', marker='',
+                 generation_marker='', headers=None):
+        self.bucket = bucket
+        self.prefix = prefix
+        self.delimiter = delimiter
+        self.marker = marker
+        self.generation_marker = generation_marker
+        self.headers = headers
+
+    def __iter__(self):
+        return versioned_bucket_lister(self.bucket, prefix=self.prefix,
+                                       delimiter=self.delimiter,
+                                       marker=self.marker,
+                                       generation_marker=self.generation_marker,
+                                       headers=self.headers)
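In practice this result set is normally obtained through the bucket's version-listing helper rather than constructed directly; a hedged sketch (Bucket.list_versions() is assumed from boto/gs/bucket.py, and the prefix is illustrative):

    for key in bucket.list_versions(prefix='logs/'):
        # Pages are fetched lazily, 999 versions per get_all_versions() call.
        print key.name, key.generation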
diff --git a/boto/gs/connection.py b/boto/gs/connection.py
index 20b0220..e7f2aeb 100755
--- a/boto/gs/connection.py
+++ b/boto/gs/connection.py
@@ -23,14 +23,15 @@
 from boto.s3.connection import S3Connection
 from boto.s3.connection import SubdomainCallingFormat
 from boto.s3.connection import check_lowercase_bucketname
+from boto.utils import get_utf8_value
 
 class Location:
-    DEFAULT = '' # US
+    DEFAULT = 'US'
     EU = 'EU'
 
 class GSConnection(S3Connection):
 
-    DefaultHost = 'commondatastorage.googleapis.com'
+    DefaultHost = 'storage.googleapis.com'
     QueryString = 'Signature=%s&Expires=%d&AWSAccessKeyId=%s'
 
     def __init__(self, gs_access_key_id=None, gs_secret_access_key=None,
@@ -46,25 +47,29 @@
                  suppress_consec_slashes=suppress_consec_slashes)
 
     def create_bucket(self, bucket_name, headers=None,
-                      location=Location.DEFAULT, policy=None):
+                      location=Location.DEFAULT, policy=None,
+                      storage_class='STANDARD'):
         """
         Creates a new bucket. By default it's located in the USA. You can
-        pass Location.EU to create an European bucket. You can also pass
-        a LocationConstraint, which (in addition to locating the bucket
-        in the specified location) informs Google that Google services
-        must not copy data out of that location.
+        pass Location.EU to create a bucket in the EU. You can also pass
+        a LocationConstraint for where the bucket should be located, and
+        a StorageClass describing how the data should be stored.
 
         :type bucket_name: string
-        :param bucket_name: The name of the new bucket
+        :param bucket_name: The name of the new bucket.
         
         :type headers: dict
-        :param headers: Additional headers to pass along with the request to AWS.
+        :param headers: Additional headers to pass along with the request to GCS.
 
         :type location: :class:`boto.gs.connection.Location`
-        :param location: The location of the new bucket
+        :param location: The location of the new bucket.
 
-        :type policy: :class:`boto.s3.acl.CannedACLStrings`
-        :param policy: A canned ACL policy that will be applied to the new key in S3.
+        :type policy: :class:`boto.gs.acl.CannedACLStrings`
+        :param policy: A canned ACL policy that will be applied to the new key
+                       in GCS.
+
+        :type storage_class: string
+        :param storage_class: Either 'STANDARD' or 'DURABLE_REDUCED_AVAILABILITY'.
              
         """
         check_lowercase_bucketname(bucket_name)
@@ -75,13 +80,19 @@
             else:
                 headers = {self.provider.acl_header : policy}
         if not location:
-            data = ''
+            location = Location.DEFAULT
+        location_elem = ('<LocationConstraint>%s</LocationConstraint>'
+                         % location)
+        if storage_class:
+            storage_class_elem = ('<StorageClass>%s</StorageClass>'
+                                  % storage_class)
         else:
-            data = ('<CreateBucketConfiguration>'
-                        '<LocationConstraint>%s</LocationConstraint>'
-                    '</CreateBucketConfiguration>' % location)
-        response = self.make_request('PUT', bucket_name, headers=headers,
-                data=data)
+            storage_class_elem = ''
+        data = ('<CreateBucketConfiguration>%s%s</CreateBucketConfiguration>'
+                 % (location_elem, storage_class_elem))
+        response = self.make_request(
+            'PUT', get_utf8_value(bucket_name), headers=headers,
+            data=get_utf8_value(data))
         body = response.read()
         if response.status == 409:
             raise self.provider.storage_create_error(
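A hedged example of the extended create_bucket() signature (the bucket name is illustrative):

    import boto
    from boto.gs.connection import Location

    conn = boto.connect_gs()
    bucket = conn.create_bucket('example-dra-bucket',
                                location=Location.DEFAULT,
                                storage_class='DURABLE_REDUCED_AVAILABILITY')
    # The storageClass accessor added in boto/gs/bucket.py reflects the choice.
    assert bucket.get_storage_class() == 'DURABLE_REDUCED_AVAILABILITY'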
diff --git a/boto/gs/key.py b/boto/gs/key.py
index 3c76cc5..1ced4ce 100644
--- a/boto/gs/key.py
+++ b/boto/gs/key.py
@@ -19,12 +19,288 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+import base64
+import binascii
 import os
+import re
 import StringIO
 from boto.exception import BotoClientError
 from boto.s3.key import Key as S3Key
+from boto.s3.keyfile import KeyFile
+from boto.utils import compute_hash
+from boto.utils import get_utf8_value
 
 class Key(S3Key):
+    """
+    Represents a key (object) in a GS bucket.
+
+    :ivar bucket: The parent :class:`boto.gs.bucket.Bucket`.
+    :ivar name: The name of this Key object.
+    :ivar metadata: A dictionary containing user metadata that you
+        wish to store with the object or that has been retrieved from
+        an existing object.
+    :ivar cache_control: The value of the `Cache-Control` HTTP header.
+    :ivar content_type: The value of the `Content-Type` HTTP header.
+    :ivar content_encoding: The value of the `Content-Encoding` HTTP header.
+    :ivar content_disposition: The value of the `Content-Disposition` HTTP
+        header.
+    :ivar content_language: The value of the `Content-Language` HTTP header.
+    :ivar etag: The `etag` associated with this object.
+    :ivar last_modified: The string timestamp representing the last
+        time this object was modified in GS.
+    :ivar owner: The ID of the owner of this object.
+    :ivar storage_class: The storage class of the object. Currently, one of:
+        STANDARD | DURABLE_REDUCED_AVAILABILITY.
+    :ivar md5: The MD5 hash of the contents of the object.
+    :ivar size: The size, in bytes, of the object.
+    :ivar generation: The generation number of the object.
+    :ivar metageneration: The generation number of the object metadata.
+    :ivar encrypted: Whether the object is encrypted while at rest on
+        the server.
+    :ivar cloud_hashes: Dictionary of checksums as supplied by the storage
+        provider.
+    """
+
+    def __init__(self, bucket=None, name=None, generation=None):
+        super(Key, self).__init__(bucket=bucket, name=name)
+        self.generation = generation
+        self.metageneration = None
+        self.cloud_hashes = {}
+        self.component_count = None
+
+    def __repr__(self):
+        if self.generation and self.metageneration:
+            ver_str = '#%s.%s' % (self.generation, self.metageneration)
+        else:
+            ver_str = ''
+        if self.bucket:
+            return '<Key: %s,%s%s>' % (self.bucket.name, self.name, ver_str)
+        else:
+            return '<Key: None,%s%s>' % (self.name, ver_str)
+
+    def endElement(self, name, value, connection):
+        if name == 'Key':
+            self.name = value
+        elif name == 'ETag':
+            self.etag = value
+        elif name == 'IsLatest':
+            if value == 'true':
+                self.is_latest = True
+            else:
+                self.is_latest = False
+        elif name == 'LastModified':
+            self.last_modified = value
+        elif name == 'Size':
+            self.size = int(value)
+        elif name == 'StorageClass':
+            self.storage_class = value
+        elif name == 'Owner':
+            pass
+        elif name == 'VersionId':
+            self.version_id = value
+        elif name == 'Generation':
+            self.generation = value
+        elif name == 'MetaGeneration':
+            self.metageneration = value
+        else:
+            setattr(self, name, value)
+
+    def handle_version_headers(self, resp, force=False):
+        self.metageneration = resp.getheader('x-goog-metageneration', None)
+        self.generation = resp.getheader('x-goog-generation', None)
+
+    def handle_addl_headers(self, headers):
+        for key, value in headers:
+            if key == 'x-goog-hash':
+                for hash_pair in value.split(','):
+                    alg, b64_digest = hash_pair.strip().split('=', 1)
+                    self.cloud_hashes[alg] = binascii.a2b_base64(b64_digest)
+            elif key == 'x-goog-component-count':
+                self.component_count = int(value)
+
+    def open_read(self, headers=None, query_args='',
+                  override_num_retries=None, response_headers=None):
+        """
+        Open this key for reading
+
+        :type headers: dict
+        :param headers: Headers to pass in the web request
+
+        :type query_args: string
+        :param query_args: Arguments to pass in the query string
+            (ie, 'torrent')
+
+        :type override_num_retries: int
+        :param override_num_retries: If not None will override configured
+            num_retries parameter for underlying GET.
+
+        :type response_headers: dict
+        :param response_headers: A dictionary containing HTTP
+            headers/values that will override any headers associated
+            with the stored object in the response.  See
+            http://goo.gl/EWOPb for details.
+        """
+        # For GCS we need to include the object generation in the query args.
+        # The rest of the processing is handled in the parent class.
+        if self.generation:
+            if query_args:
+                query_args += '&'
+            query_args += 'generation=%s' % self.generation
+        super(Key, self).open_read(headers=headers, query_args=query_args,
+                                   override_num_retries=override_num_retries,
+                                   response_headers=response_headers)
+
+    def get_file(self, fp, headers=None, cb=None, num_cb=10,
+                 torrent=False, version_id=None, override_num_retries=None,
+                 response_headers=None, hash_algs=None):
+        query_args = None
+        if self.generation:
+            query_args = ['generation=%s' % self.generation]
+        self._get_file_internal(fp, headers=headers, cb=cb, num_cb=num_cb,
+                                override_num_retries=override_num_retries,
+                                response_headers=response_headers,
+                                hash_algs=hash_algs,
+                                query_args=query_args)
+
+    def get_contents_to_file(self, fp, headers=None,
+                             cb=None, num_cb=10,
+                             torrent=False,
+                             version_id=None,
+                             res_download_handler=None,
+                             response_headers=None,
+                             hash_algs=None):
+        """
+        Retrieve an object from GCS using the name of the Key object as the
+        key in GCS. Write the contents of the object to the file pointed
+        to by 'fp'.
+
+        :type fp: File-like object
+        :param fp: The file to which the object's contents will be written.
+
+        :type headers: dict
+        :param headers: additional HTTP headers that will be sent with
+            the GET request.
+
+        :type cb: function
+        :param cb: a callback function that will be called to report
+            progress on the download. The callback should accept two
+            integer parameters, the first representing the number of
+            bytes that have been successfully transmitted from GCS and
+            the second representing the total size of the object being
+            retrieved.
+
+        :type num_cb: int
+        :param num_cb: (optional) If a callback is specified with the
+            cb parameter this parameter determines the granularity of
+            the callback by defining the maximum number of times the
+            callback will be called during the file transfer.
+
+        :type torrent: bool
+        :param torrent: If True, returns the contents of a torrent
+            file as a string.
+
+        :type res_download_handler: ResumableDownloadHandler
+        :param res_download_handler: If provided, this handler will
+            perform the download.
+
+        :type response_headers: dict
+        :param response_headers: A dictionary containing HTTP
+            headers/values that will override any headers associated
+            with the stored object in the response. See
+            http://goo.gl/sMkcC for details.
+        """
+        if self.bucket != None:
+            if res_download_handler:
+                res_download_handler.get_file(self, fp, headers, cb, num_cb,
+                                              torrent=torrent,
+                                              version_id=version_id,
+                                              hash_algs=hash_algs)
+            else:
+                self.get_file(fp, headers, cb, num_cb, torrent=torrent,
+                              version_id=version_id,
+                              response_headers=response_headers,
+                              hash_algs=hash_algs)
+
+    def compute_hash(self, fp, algorithm, size=None):
+        """
+        :type fp: file
+        :param fp: File pointer to the file to hash. The file
+            pointer will be reset to the same position before the
+            method returns.
+
+        :type algorithm: zero-argument constructor for hash objects that
+            implements update() and digest() (e.g. hashlib.md5)
+
+        :type size: int
+        :param size: (optional) The maximum number of bytes to read
+            from the file pointer (fp). This is useful when uploading
+            a file in multiple parts where the file is being split
+            in place into different parts. Fewer bytes may be available.
+        """
+        hex_digest, b64_digest, data_size = compute_hash(
+            fp, size=size, hash_algorithm=algorithm)
+        # The internal implementation of compute_hash() needs to return the
+        # data size, but we don't want to return that value to the external
+        # caller because it changes the class interface (i.e. it might
+        # break some code), so we consume the third tuple value here and
+        # return the remainder of the tuple to the caller, thereby preserving
+        # the existing interface.
+        self.size = data_size
+        return (hex_digest, b64_digest)
+
+    def send_file(self, fp, headers=None, cb=None, num_cb=10,
+                  query_args=None, chunked_transfer=False, size=None,
+                  hash_algs=None):
+        """
+        Upload a file to GCS.
+
+        :type fp: file
+        :param fp: The file pointer to upload. The file pointer must
+            point at the offset from which you wish to upload, i.e. if
+            uploading the full file, it should point at the
+            start of the file. Normally when a file is opened for
+            reading, the fp will point at the first byte. See the
+            bytes parameter below for more info.
+
+        :type headers: dict
+        :param headers: The headers to pass along with the PUT request
+
+        :type num_cb: int
+        :param num_cb: (optional) If a callback is specified with the
+            cb parameter this parameter determines the granularity of
+            the callback by defining the maximum number of times the
+            callback will be called during the file
+            transfer. Providing a negative integer will cause your
+            callback to be called with each buffer read.
+
+        :type query_args: string
+        :param query_args: Arguments to pass in the query string.
+
+        :type chunked_transfer: boolean
+        :param chunked_transfer: (optional) If true, we use chunked
+            Transfer-Encoding.
+
+        :type size: int
+        :param size: (optional) The maximum number of bytes to read
+            from the file pointer (fp). This is useful when uploading
+            a file in multiple parts where you are splitting the file
+            up into different ranges to be uploaded. If not specified,
+            the default behaviour is to read all bytes from the file
+            pointer. Fewer bytes may be available.
+
+        :type hash_algs: dictionary
+        :param hash_algs: (optional) Dictionary of hash algorithms and
+            corresponding hashing class that implements update() and digest().
+            Defaults to {'md5': hashlib.md5}.
+        """
+        self._send_file_internal(fp, headers=headers, cb=cb, num_cb=num_cb,
+                                 query_args=query_args,
+                                 chunked_transfer=chunked_transfer, size=size,
+                                 hash_algs=hash_algs)
+
+    def delete(self):
+        return self.bucket.delete_key(self.name, version_id=self.version_id,
+                                      generation=self.generation)
 
     def add_email_grant(self, permission, email_address):
         """
@@ -112,7 +388,8 @@
 
     def set_contents_from_file(self, fp, headers=None, replace=True,
                                cb=None, num_cb=10, policy=None, md5=None,
-                               res_upload_handler=None, size=None, rewind=False):
+                               res_upload_handler=None, size=None, rewind=False,
+                               if_generation=None):
         """
         Store an object in GS using the name of the Key object as the
         key in GS and the contents of the file pointed to by 'fp' as the
@@ -182,6 +459,12 @@
                        it. The default behaviour is False which reads from
                        the current position of the file pointer (fp).
 
+        :type if_generation: int
+        :param if_generation: (optional) If set to a generation number, the
+            object will only be written to if its current generation number is
+            this value. If set to the value 0, the object will only be written
+            if it doesn't already exist.
+
         :rtype: int
         :return: The number of bytes written to the key.
 
@@ -193,7 +476,8 @@
         provider = self.bucket.connection.provider
         if res_upload_handler and size:
             # could use size instead of file_length if provided but...
-            raise BotoClientError('"size" param not supported for resumable uploads.')
+            raise BotoClientError(
+                '"size" param not supported for resumable uploads.')
         headers = headers or {}
         if policy:
             headers[provider.acl_header] = policy
@@ -202,22 +486,50 @@
             # caller requests reading from beginning of fp.
             fp.seek(0, os.SEEK_SET)
         else:
-            spos = fp.tell()
-            fp.seek(0, os.SEEK_END)
-            if fp.tell() == spos:
-                fp.seek(0, os.SEEK_SET)
-                if fp.tell() != spos:
-                    # Raise an exception as this is likely a programming error
-                    # whereby there is data before the fp but nothing after it.
-                    fp.seek(spos)
-                    raise AttributeError(
-                     'fp is at EOF. Use rewind option or seek() to data start.')
-            # seek back to the correct position.
-            fp.seek(spos)
+            # The following seek/tell/seek logic is intended
+            # to detect applications using the older interface to
+            # set_contents_from_file(), which automatically rewound the
+            # file each time the Key was reused. This changed with commit
+            # 14ee2d03f4665fe20d19a85286f78d39d924237e, to support uploads
+            # split into multiple parts and uploaded in parallel, and at
+            # the time of that commit this check was added because otherwise
+            # older programs would get a success status and upload an empty
+            # object. Unfortunately, it's very inefficient for fp's implemented
+            # by KeyFile (used, for example, by gsutil when copying between
+            # providers). So, we skip the check for the KeyFile case.
+            # TODO: At some point consider removing this seek/tell/seek
+            # logic, after enough time has passed that it's unlikely any
+            # programs remain that assume the older auto-rewind interface.
+            if not isinstance(fp, KeyFile):
+                spos = fp.tell()
+                fp.seek(0, os.SEEK_END)
+                if fp.tell() == spos:
+                    fp.seek(0, os.SEEK_SET)
+                    if fp.tell() != spos:
+                        # Raise an exception as this is likely a programming
+                        # error whereby there is data before the fp but nothing
+                        # after it.
+                        fp.seek(spos)
+                        raise AttributeError('fp is at EOF. Use rewind option '
+                                             'or seek() to data start.')
+                # seek back to the correct position.
+                fp.seek(spos)
 
         if hasattr(fp, 'name'):
             self.path = fp.name
         if self.bucket != None:
+            if isinstance(fp, KeyFile):
+                # Avoid EOF seek for KeyFile case as it's very inefficient.
+                key = fp.getkey()
+                size = key.size - fp.tell()
+                self.size = size
+                # At present both GCS and S3 use MD5 for the etag for
+                # non-multipart-uploaded objects. If the etag is 32 hex
+                # chars use it as an MD5, to avoid having to read the file
+                # twice while transferring.
+                if (re.match('^"[a-fA-F0-9]{32}"$', key.etag)):
+                    etag = key.etag.strip('"')
+                    md5 = (etag, base64.b64encode(binascii.unhexlify(etag)))
             if size:
                 self.size = size
             else:
@@ -229,16 +541,21 @@
                 fp.seek(spos)
                 size = self.size
 
-            if self.name == None:
-                if md5 == None:
-                  md5 = self.compute_md5(fp, size)
-                  self.md5 = md5[0]
-                  self.base64md5 = md5[1]
+            if md5 == None:
+                md5 = self.compute_md5(fp, size)
+            self.md5 = md5[0]
+            self.base64md5 = md5[1]
 
+            if self.name == None:
                 self.name = self.md5
+
             if not replace:
                 if self.bucket.lookup(self.name):
                     return
+
+            if if_generation is not None:
+                headers['x-goog-if-generation-match'] = str(if_generation)
+
             if res_upload_handler:
                 res_upload_handler.send_file(self, fp, headers, cb, num_cb)
             else:
@@ -248,7 +565,8 @@
     def set_contents_from_filename(self, filename, headers=None, replace=True,
                                    cb=None, num_cb=10, policy=None, md5=None,
                                    reduced_redundancy=None,
-                                   res_upload_handler=None):
+                                   res_upload_handler=None,
+                                   if_generation=None):
         """
         Store an object in GS using the name of the Key object as the
         key in GS and the contents of the file named by 'filename'.
@@ -294,21 +612,28 @@
         :type res_upload_handler: ResumableUploadHandler
         :param res_upload_handler: If provided, this handler will perform the
             upload.
-        """
-        # Clear out any previously computed md5 hashes, since we are setting the content.
-        self.md5 = None
-        self.base64md5 = None
 
-        fp = open(filename, 'rb')
-        self.set_contents_from_file(fp, headers, replace, cb, num_cb,
-                                    policy, md5, res_upload_handler)
-        fp.close()
+        :type if_generation: int
+        :param if_generation: (optional) If set to a generation number, the
+            object will only be written to if its current generation number is
+            this value. If set to the value 0, the object will only be written
+            if it doesn't already exist.
+        """
+        # Clear out any previously computed hashes, since we are setting the
+        # content.
+        self.local_hashes = {}
+
+        with open(filename, 'rb') as fp:
+            self.set_contents_from_file(fp, headers, replace, cb, num_cb,
+                                        policy, md5, res_upload_handler,
+                                        if_generation=if_generation)
 
     def set_contents_from_string(self, s, headers=None, replace=True,
-                                 cb=None, num_cb=10, policy=None, md5=None):
+                                 cb=None, num_cb=10, policy=None, md5=None,
+                                 if_generation=None):
         """
-        Store an object in S3 using the name of the Key object as the
-        key in S3 and the string 's' as the contents.
+        Store an object in GCS using the name of the Key object as the
+        key in GCS and the string 's' as the contents.
         See set_contents_from_file method for details about the
         parameters.
 
@@ -322,10 +647,10 @@
 
         :type cb: function
         :param cb: a callback function that will be called to report
-                   progress on the upload.  The callback should accept
+                   progress on the upload. The callback should accept
                    two integer parameters, the first representing the
                    number of bytes that have been successfully
-                   transmitted to S3 and the second representing the
+                   transmitted to GCS and the second representing the
                    size of the to be transmitted object.
 
         :type num_cb: int
@@ -335,29 +660,263 @@
                        the maximum number of times the callback will
                        be called during the file transfer.
 
-        :type policy: :class:`boto.s3.acl.CannedACLStrings`
+        :type policy: :class:`boto.gs.acl.CannedACLStrings`
         :param policy: A canned ACL policy that will be applied to the
-                       new key in S3.
+                       new key in GCS.
 
         :type md5: A tuple containing the hexdigest version of the MD5
                    checksum of the file as the first element and the
                    Base64-encoded version of the plain checksum as the
-                   second element.  This is the same format returned by
+                   second element. This is the same format returned by
                    the compute_md5 method.
         :param md5: If you need to compute the MD5 for any reason prior
                     to upload, it's silly to have to do it twice so this
                     param, if present, will be used as the MD5 values
-                    of the file.  Otherwise, the checksum will be computed.
+                    of the file. Otherwise, the checksum will be computed.
+
+        :type if_generation: int
+        :param if_generation: (optional) If set to a generation number, the
+            object will only be written to if its current generation number is
+            this value. If set to the value 0, the object will only be written
+            if it doesn't already exist.
         """
 
         # Clear out any previously computed md5 hashes, since we are setting the content.
         self.md5 = None
         self.base64md5 = None
 
-        if isinstance(s, unicode):
-            s = s.encode("utf-8")
-        fp = StringIO.StringIO(s)
+        fp = StringIO.StringIO(get_utf8_value(s))
         r = self.set_contents_from_file(fp, headers, replace, cb, num_cb,
-                                        policy, md5)
+                                        policy, md5,
+                                        if_generation=if_generation)
         fp.close()
         return r
+
+    def set_contents_from_stream(self, *args, **kwargs):
+        """
+        Store an object using the name of the Key object as the key in
+        cloud and the contents of the data stream pointed to by 'fp' as
+        the contents.
+
+        The stream object is not seekable and total size is not known.
+        This has the implication that we can't specify the
+        Content-Size and Content-MD5 in the header. So for huge
+        uploads, the delay in calculating MD5 is avoided but with a
+        penalty of inability to verify the integrity of the uploaded
+        data.
+
+        :type fp: file
+        :param fp: the file whose contents are to be uploaded
+
+        :type headers: dict
+        :param headers: additional HTTP headers to be sent with the
+            PUT request.
+
+        :type replace: bool
+        :param replace: If this parameter is False, the method will first check
+            to see if an object exists in the bucket with the same key. If it
+            does, it won't overwrite it. The default value is True which will
+            overwrite the object.
+
+        :type cb: function
+        :param cb: a callback function that will be called to report
+            progress on the upload. The callback should accept two integer
+            parameters, the first representing the number of bytes that have
+            been successfully transmitted to GS and the second representing the
+            total number of bytes that need to be transmitted.
+
+        :type num_cb: int
+        :param num_cb: (optional) If a callback is specified with the
+            cb parameter, this parameter determines the granularity of
+            the callback by defining the maximum number of times the
+            callback will be called during the file transfer.
+
+        :type policy: :class:`boto.gs.acl.CannedACLStrings`
+        :param policy: A canned ACL policy that will be applied to the new key
+            in GS.
+
+        :type size: int
+        :param size: (optional) The maximum number of bytes to read from
+            the file pointer (fp). This is useful when uploading a
+            file in multiple parts where you are splitting the file up
+            into different ranges to be uploaded. If not specified,
+            the default behaviour is to read all bytes from the file
+            pointer. Fewer bytes may be available.
+
+        :type if_generation: int
+        :param if_generation: (optional) If set to a generation number, the
+            object will only be written to if its current generation number is
+            this value. If set to the value 0, the object will only be written
+            if it doesn't already exist.
+        """
+        if_generation = kwargs.pop('if_generation', None)
+        if if_generation is not None:
+            headers = kwargs.get('headers', {})
+            headers['x-goog-if-generation-match'] = str(if_generation)
+            kwargs['headers'] = headers
+        super(Key, self).set_contents_from_stream(*args, **kwargs)
+
+    def set_acl(self, acl_or_str, headers=None, generation=None,
+                 if_generation=None, if_metageneration=None):
+        """Sets the ACL for this object.
+
+        :type acl_or_str: string or :class:`boto.gs.acl.ACL`
+        :param acl_or_str: A canned ACL string (see
+            :data:`~.gs.acl.CannedACLStrings`) or an ACL object.
+
+        :type headers: dict
+        :param headers: Additional headers to set during the request.
+
+        :type generation: int
+        :param generation: If specified, sets the ACL for a specific generation
+            of a versioned object. If not specified, the current version is
+            modified.
+
+        :type if_generation: int
+        :param if_generation: (optional) If set to a generation number, the acl
+            will only be updated if its current generation number is this value.
+
+        :type if_metageneration: int
+        :param if_metageneration: (optional) If set to a metageneration number,
+            the acl will only be updated if its current metageneration number is
+            this value.
+        """
+        if self.bucket != None:
+            self.bucket.set_acl(acl_or_str, self.name, headers=headers,
+                                generation=generation,
+                                if_generation=if_generation,
+                                if_metageneration=if_metageneration)
+
+    def get_acl(self, headers=None, generation=None):
+        """Returns the ACL of this object.
+
+        :param dict headers: Additional headers to set during the request.
+
+        :param int generation: If specified, gets the ACL for a specific
+            generation of a versioned object. If not specified, the current
+            version is returned.
+
+        :rtype: :class:`.gs.acl.ACL`
+        """
+        if self.bucket != None:
+            return self.bucket.get_acl(self.name, headers=headers,
+                                       generation=generation)
+
+    def get_xml_acl(self, headers=None, generation=None):
+        """Returns the ACL string of this object.
+
+        :param dict headers: Additional headers to set during the request.
+
+        :param int generation: If specified, gets the ACL for a specific
+            generation of a versioned object. If not specified, the current
+            version is returned.
+
+        :rtype: str
+        """
+        if self.bucket != None:
+            return self.bucket.get_xml_acl(self.name, headers=headers,
+                                           generation=generation)
+
+    def set_xml_acl(self, acl_str, headers=None, generation=None,
+                     if_generation=None, if_metageneration=None):
+        """Sets this objects's ACL to an XML string.
+
+        :type acl_str: string
+        :param acl_str: A string containing the ACL XML.
+
+        :type headers: dict
+        :param headers: Additional headers to set during the request.
+
+        :type generation: int
+        :param generation: If specified, sets the ACL for a specific generation
+            of a versioned object. If not specified, the current version is
+            modified.
+
+        :type if_generation: int
+        :param if_generation: (optional) If set to a generation number, the acl
+            will only be updated if its current generation number is this value.
+
+        :type if_metageneration: int
+        :param if_metageneration: (optional) If set to a metageneration number,
+            the acl will only be updated if its current metageneration number is
+            this value.
+        """
+        if self.bucket != None:
+            return self.bucket.set_xml_acl(acl_str, self.name, headers=headers,
+                                           generation=generation,
+                                           if_generation=if_generation,
+                                           if_metageneration=if_metageneration)
+
+    def set_canned_acl(self, acl_str, headers=None, generation=None,
+                       if_generation=None, if_metageneration=None):
+        """Sets this objects's ACL using a predefined (canned) value.
+
+        :type acl_str: string
+        :param acl_str: A canned ACL string. See
+            :data:`~.gs.acl.CannedACLStrings`.
+
+        :type headers: dict
+        :param headers: Additional headers to set during the request.
+
+        :type generation: int
+        :param generation: If specified, sets the ACL for a specific generation
+            of a versioned object. If not specified, the current version is
+            modified.
+
+        :type if_generation: int
+        :param if_generation: (optional) If set to a generation number, the acl
+            will only be updated if its current generation number is this value.
+
+        :type if_metageneration: int
+        :param if_metageneration: (optional) If set to a metageneration number,
+            the acl will only be updated if its current metageneration number is
+            this value.
+        """
+        if self.bucket != None:
+            return self.bucket.set_canned_acl(
+                acl_str,
+                self.name,
+                headers=headers,
+                generation=generation,
+                if_generation=if_generation,
+                if_metageneration=if_metageneration
+            )
+
+    def compose(self, components, content_type=None, headers=None):
+        """Create a new object from a sequence of existing objects.
+
+        The content of the object representing this Key will be the
+        concatenation of the given object sequence. For more detail, visit
+
+            https://developers.google.com/storage/docs/composite-objects
+
+        :type components: list of Keys
+        :param components: List of gs.Keys representing the component objects.
+
+        :type content_type: string
+        :param content_type: (optional) Content type for the new composite
+            object.
+        """
+        compose_req = []
+        for key in components:
+            if key.bucket.name != self.bucket.name:
+                raise BotoClientError(
+                    'GCS does not support inter-bucket composing')
+
+            generation_tag = ''
+            if key.generation:
+                generation_tag = ('<Generation>%s</Generation>'
+                                  % str(key.generation))
+            compose_req.append('<Component><Name>%s</Name>%s</Component>' %
+                               (key.name, generation_tag))
+        compose_req_xml = ('<ComposeRequest>%s</ComposeRequest>' %
+                         ''.join(compose_req))
+        headers = headers or {}
+        if content_type:
+            headers['Content-Type'] = content_type
+        resp = self.bucket.connection.make_request(
+            'PUT', get_utf8_value(self.bucket.name), get_utf8_value(self.name),
+            headers=headers, query_args='compose',
+            data=get_utf8_value(compose_req_xml))
+        if resp.status < 200 or resp.status > 299:
+            raise self.bucket.connection.provider.storage_response_error(
+                resp.status, resp.reason, resp.read())
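A sketch tying together generation-conditional writes and compose() (object and bucket names are illustrative; all components must live in the same bucket, per the check above):

    import boto

    bucket = boto.connect_gs().get_bucket('example-bucket')

    # if_generation=0 means "write only if the object does not exist yet".
    bucket.new_key('part-0').set_contents_from_string('hello ', if_generation=0)
    bucket.new_key('part-1').set_contents_from_string('world', if_generation=0)

    combined = bucket.new_key('combined')
    combined.compose([bucket.get_key('part-0'), bucket.get_key('part-1')],
                     content_type='text/plain')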
diff --git a/boto/gs/resumable_upload_handler.py b/boto/gs/resumable_upload_handler.py
index decdb5c..57ae754 100644
--- a/boto/gs/resumable_upload_handler.py
+++ b/boto/gs/resumable_upload_handler.py
@@ -19,7 +19,6 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
-import cgi
 import errno
 import httplib
 import os
@@ -28,12 +27,12 @@
 import socket
 import time
 import urlparse
-import boto
-from boto import config
+from boto import config, UserAgent
 from boto.connection import AWSAuthConnection
 from boto.exception import InvalidUriError
 from boto.exception import ResumableTransferDisposition
 from boto.exception import ResumableUploadException
+from boto.s3.keyfile import KeyFile
 try:
     from hashlib import md5
 except ImportError:
@@ -162,6 +161,22 @@
         """
         return self.tracker_uri
 
+    def get_upload_id(self):
+        """
+        Returns the upload ID for the resumable upload, or None if the upload
+        has not yet started.
+        """
+        # We extract the upload_id from the tracker uri. We could retrieve the
+        # upload_id from the headers in the response but this only works for
+        # the case where we get the tracker uri from the service. In the case
+        # where we get the tracker from the tracking file we need to do this
+        # logic anyway.
+        delim = '?upload_id='
+        if self.tracker_uri and delim in self.tracker_uri:
+            return self.tracker_uri[
+                self.tracker_uri.index(delim) + len(delim):]
+        else:
+            return None
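For example, a caller persisting resumable-upload state can recover the server-side upload ID like this (the tracker path, file name, and `key` object are illustrative):

    handler = ResumableUploadHandler(tracker_file_name='/tmp/big.iso.tracker')
    key.set_contents_from_filename('big.iso', res_upload_handler=handler)
    upload_id = handler.get_upload_id()   # parsed from '...?upload_id=<id>'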
+
     def _remove_tracker_file(self):
         if (self.tracker_file_name and
             os.path.exists(self.tracker_file_name)):
@@ -305,12 +320,12 @@
         self._save_tracker_uri_to_file()
 
     def _upload_file_bytes(self, conn, http_conn, fp, file_length,
-                           total_bytes_uploaded, cb, num_cb, md5sum):
+                           total_bytes_uploaded, cb, num_cb, headers):
         """
         Makes one attempt to upload file bytes, using an existing resumable
         upload connection.
 
-        Returns etag from server upon success.
+        Returns (etag, generation, metageneration) from server upon success.
 
         Raises ResumableUploadException if any problems occur.
         """
@@ -331,7 +346,10 @@
         # Content-Range header if the file is 0 bytes long, because the
         # resumable upload protocol uses an *inclusive* end-range (so, sending
         # 'bytes 0-0/1' would actually mean you're sending a 1-byte file).
-        put_headers = {}
+        if not headers:
+            put_headers = {}
+        else:
+            put_headers = headers.copy()
         if file_length:
             if total_bytes_uploaded == file_length:
                 range_header = self._build_content_range_header(
@@ -356,7 +374,8 @@
         http_conn.set_debuglevel(0)
         while buf:
             http_conn.send(buf)
-            md5sum.update(buf)
+            for alg in self.digesters:
+                self.digesters[alg].update(buf)
             total_bytes_uploaded += len(buf)
             if cb:
                 i += 1
@@ -364,6 +383,7 @@
                     cb(total_bytes_uploaded, file_length)
                     i = 0
             buf = fp.read(self.BUFFER_SIZE)
+        http_conn.set_debuglevel(conn.debug)
         if cb:
             cb(total_bytes_uploaded, file_length)
         if total_bytes_uploaded != file_length:
@@ -375,12 +395,14 @@
                 (total_bytes_uploaded, file_length),
                 ResumableTransferDisposition.ABORT)
         resp = http_conn.getresponse()
-        body = resp.read()
         # Restore http connection debug level.
         http_conn.set_debuglevel(conn.debug)
 
         if resp.status == 200:
-            return resp.getheader('etag')  # Success
+            # Success.
+            return (resp.getheader('etag'),
+                    resp.getheader('x-goog-generation'),
+                    resp.getheader('x-goog-metageneration'))
         # Retry timeout (408) and status 500 and 503 errors after a delay.
         elif resp.status in [408, 500, 503]:
             disposition = ResumableTransferDisposition.WAIT_BEFORE_RETRY
@@ -392,11 +414,11 @@
                                        (resp.status, resp.reason), disposition)
 
     def _attempt_resumable_upload(self, key, fp, file_length, headers, cb,
-                                  num_cb, md5sum):
+                                  num_cb):
         """
         Attempts a resumable upload.
 
-        Returns etag from server upon success.
+        Returns (etag, generation, metageneration) from server upon success.
 
         Raises ResumableUploadException if any problems occur.
         """
@@ -411,9 +433,9 @@
 
                 if server_end:
                   # If the server already has some of the content, we need to
-                  # update the md5 with the bytes that have already been
+                  # update the digesters with the bytes that have already been
                   # uploaded to ensure we get a complete hash in the end.
-                  print 'Catching up md5 for resumed upload'
+                  print 'Catching up hash digest(s) for resumed upload'
                   fp.seek(0)
                   # Read local file's bytes through position server has. For
                   # example, if server has (0, 3) we want to read 3-0+1=4 bytes.
@@ -422,13 +444,14 @@
                       chunk = fp.read(min(key.BufferSize, bytes_to_go))
                       if not chunk:
                           raise ResumableUploadException(
-                              'Hit end of file during resumable upload md5 '
+                              'Hit end of file during resumable upload hash '
                               'catchup. This should not happen under\n'
                               'normal circumstances, as it indicates the '
                               'server has more bytes of this transfer\nthan'
                               ' the current file size. Restarting upload.',
                               ResumableTransferDisposition.START_OVER)
-                      md5sum.update(chunk)
+                      for alg in self.digesters:
+                          self.digesters[alg].update(chunk)
                       bytes_to_go -= len(chunk)
 
                 if conn.debug >= 1:
@@ -447,7 +470,12 @@
             self.upload_start_point = server_end
 
         total_bytes_uploaded = server_end + 1
-        fp.seek(total_bytes_uploaded)
+        # Corner case: Don't attempt to seek if we've already uploaded the
+        # entire file, because if the file is a stream (e.g., the KeyFile
+        # wrapper around input key when copying between providers), attempting
+        # to seek to the end of file would result in an InvalidRange error.
+        if file_length < total_bytes_uploaded:
+          fp.seek(total_bytes_uploaded)
         conn = key.bucket.connection
 
         # Get a new HTTP connection (vs conn.get_http_connection(), which reuses
@@ -463,7 +491,8 @@
         # and can report that progress on next attempt.
         try:
             return self._upload_file_bytes(conn, http_conn, fp, file_length,
-                                           total_bytes_uploaded, cb, num_cb, md5sum)
+                                           total_bytes_uploaded, cb, num_cb,
+                                           headers)
         except (ResumableUploadException, socket.error):
             resp = self._query_server_state(conn, file_length)
             if resp.status == 400:
@@ -526,9 +555,9 @@
         else:
             self.progress_less_iterations += 1
             if roll_back_md5:
-                # Rollback any potential md5sum updates, as we did not
+                # Roll back any potential hash updates, as we did not
                 # make any progress in this iteration.
-                self.md5sum = self.md5sum_before_attempt
+                self.digesters = self.digesters_before_attempt
 
         if self.progress_less_iterations > self.num_retries:
             # Don't retry any longer in the current process.
@@ -537,7 +566,7 @@
                     'progress. You might try this upload again later',
                     ResumableTransferDisposition.ABORT_CUR_PROCESS)
 
-        # Use binary exponential backoff to desynchronize client requests
+        # Use binary exponential backoff to desynchronize client requests.
         sleep_time_secs = random.random() * (2**self.progress_less_iterations)
         if debug >= 1:
             print ('Got retryable failure (%d progress-less in a row).\n'
@@ -545,7 +574,7 @@
                    (self.progress_less_iterations, sleep_time_secs))
         time.sleep(sleep_time_secs)
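For reference, the backoff used above draws the sleep before the k-th progress-less retry uniformly from [0, 2**k) seconds, so concurrent clients drift apart instead of retrying in lockstep; a standalone sketch:

    import random

    # Retry k sleeps somewhere in [0, 2**k) seconds.
    for k in range(1, 6):
        print('retry %d: up to %ds (e.g. %.2fs)'
              % (k, 2 ** k, random.random() * (2 ** k)))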
 
-    def send_file(self, key, fp, headers, cb=None, num_cb=10):
+    def send_file(self, key, fp, headers, cb=None, num_cb=10, hash_algs=None):
         """
         Upload a file to a key into a bucket on GS, using GS resumable upload
         protocol.
@@ -573,6 +602,12 @@
             during the file transfer. Providing a negative integer will cause
             your callback to be called with each buffer read.
 
+        :type hash_algs: dictionary
+        :param hash_algs: (optional) Dictionary mapping hash algorithm
+            descriptions to constructors for stateful hashing objects that
+            implement update(), digest(), and copy() (e.g. hashlib.md5).
+            Defaults to {'md5': md5}.
+
         Raises ResumableUploadException if a problem occurs during the transfer.
         """
 
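A hedged usage sketch for the new parameter (the key, file object, and tracker path are assumed to exist and are not created here); note that the dictionary values are hash constructors, which send_file() calls once per algorithm:

    import hashlib
    from boto.gs.resumable_upload_handler import ResumableUploadHandler

    # 'gs_key' is a boto GS Key bound to a bucket; 'fp' is an open file.
    handler = ResumableUploadHandler(tracker_file_name='/tmp/obj.tracker')
    handler.send_file(gs_key, fp, headers={},
                      hash_algs={'md5': hashlib.md5, 'sha1': hashlib.sha1})
    # On success the digests land in gs_key.local_hashes['md5'] / ['sha1'].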
@@ -583,33 +618,48 @@
         # that header.
         CT = 'Content-Type'
         if CT in headers and headers[CT] is None:
-          del headers[CT]
+            del headers[CT]
 
-        fp.seek(0, os.SEEK_END)
-        file_length = fp.tell()
-        fp.seek(0)
+        headers['User-Agent'] = UserAgent
+
+        # Determine the file size differently depending on whether fp is a
+        # KeyFile wrapper around a Key or an actual file.
+        if isinstance(fp, KeyFile):
+            file_length = fp.getkey().size
+        else:
+            fp.seek(0, os.SEEK_END)
+            file_length = fp.tell()
+            fp.seek(0)
         debug = key.bucket.connection.debug
 
         # Compute the MD5 checksum on the fly.
-        self.md5sum = md5()
+        if hash_algs is None:
+            hash_algs = {'md5': md5}
+        self.digesters = dict(
+            (alg, hash_algs[alg]()) for alg in hash_algs or {})
 
         # Use num-retries from constructor if one was provided; else check
         # for a value specified in the boto config file; else default to 5.
         if self.num_retries is None:
-            self.num_retries = config.getint('Boto', 'num_retries', 5)
+            self.num_retries = config.getint('Boto', 'num_retries', 6)
         self.progress_less_iterations = 0
 
         while True:  # Retry as long as we're making progress.
             server_had_bytes_before_attempt = self.server_has_bytes
-            self.md5sum_before_attempt = self.md5sum.copy()
+            self.digesters_before_attempt = dict(
+                (alg, self.digesters[alg].copy())
+                for alg in self.digesters)
             try:
-                etag = self._attempt_resumable_upload(key, fp, file_length,
-                                                      headers, cb, num_cb,
-                                                      self.md5sum)
+                # Save generation and metageneration in class state so caller
+                # can find these values, for use in preconditions of future
+                # operations on the uploaded object.
+                (etag, self.generation, self.metageneration) = (
+                    self._attempt_resumable_upload(key, fp, file_length,
+                                                   headers, cb, num_cb))
 
-                # Get the final md5 for the uploaded content.
-                hd = self.md5sum.hexdigest()
-                key.md5, key.base64md5 = key.get_md5_from_hexdigest(hd)
+                # Get the final digests for the uploaded content.
+                for alg in self.digesters:
+                    key.local_hashes[alg] = self.digesters[alg].digest()
 
                 # Upload succceded, so remove the tracker file (if have one).
                 self._remove_tracker_file()
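The generation and metageneration stashed on the handler make it possible to precondition follow-up writes; a hedged sketch (the key, file object, and header usage are illustrative, using the GS x-goog-if-generation-match request header):

    from boto.gs.resumable_upload_handler import ResumableUploadHandler

    # 'gs_key' is a boto GS Key and 'fp' an open file, assumed to exist.
    handler = ResumableUploadHandler()
    gs_key.set_contents_from_file(fp, res_upload_handler=handler)

    # Only overwrite if nobody has replaced the object since our upload.
    precondition = {'x-goog-if-generation-match': str(handler.generation)}
    gs_key.set_contents_from_string('updated contents', headers=precondition)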
diff --git a/boto/handler.py b/boto/handler.py
index 525f9c9..df065cc 100644
--- a/boto/handler.py
+++ b/boto/handler.py
@@ -19,6 +19,7 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+import StringIO
 import xml.sax
 
 class XmlHandler(xml.sax.ContentHandler):
@@ -42,5 +43,14 @@
 
     def characters(self, content):
         self.current_text += content
-            
 
+
+class XmlHandlerWrapper(object):
+    def __init__(self, root_node, connection):
+        self.handler = XmlHandler(root_node, connection)
+        self.parser = xml.sax.make_parser()
+        self.parser.setContentHandler(self.handler)
+        self.parser.setFeature(xml.sax.handler.feature_external_ges, 0)
+
+    def parseString(self, content):
+        return self.parser.parse(StringIO.StringIO(content))
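For context, a minimal usage sketch of the new wrapper: it parses a response body the same way existing xml.sax.parseString() call sites do, but with external general entities disabled, so the parser will not fetch entities referenced by a crafted response. The ResultSet target is just an illustrative choice.

    from boto.handler import XmlHandlerWrapper
    from boto.resultset import ResultSet

    rs = ResultSet()
    wrapper = XmlHandlerWrapper(rs, connection=None)
    wrapper.parseString('<?xml version="1.0"?>'
                        '<ListAllMyBucketsResult></ListAllMyBucketsResult>')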
diff --git a/boto/https_connection.py b/boto/https_connection.py
index d7a3f3a..4cbf518 100644
--- a/boto/https_connection.py
+++ b/boto/https_connection.py
@@ -84,7 +84,7 @@
 
   default_port = httplib.HTTPS_PORT
 
-  def __init__(self, host, port=None, key_file=None, cert_file=None,
+  def __init__(self, host, port=default_port, key_file=None, cert_file=None,
                ca_certs=None, strict=None, **kwargs):
     """Constructor.
 
@@ -106,6 +106,8 @@
   def connect(self):
     "Connect to a host on a given (SSL) port."
     sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+    if hasattr(self, "timeout") and self.timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
+        sock.settimeout(self.timeout)
     sock.connect((self.host, self.port))
     boto.log.debug("wrapping ssl socket; CA certificate file=%s",
                    self.ca_certs)
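A hedged sketch of what the new check enables (the host and CA bundle path are placeholders): a per-connection timeout is now applied to the socket before connecting rather than being ignored.

    from boto.https_connection import CertValidatingHTTPSConnection

    conn = CertValidatingHTTPSConnection(
        'www.googleapis.com', 443, ca_certs='/etc/ssl/certs/ca-bundle.crt')
    conn.timeout = 30   # picked up by connect() via the hasattr() check above
    conn.connect()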
diff --git a/boto/iam/connection.py b/boto/iam/connection.py
index 9827602..adacc8f 100644
--- a/boto/iam/connection.py
+++ b/boto/iam/connection.py
@@ -19,13 +19,9 @@
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-try:
-    import json
-except ImportError:
-    import simplejson as json
-
 import boto
 import boto.jsonresponse
+from boto.compat import json
 from boto.resultset import ResultSet
 from boto.iam.summarymap import SummaryMap
 from boto.connection import AWSQueryConnection
@@ -356,7 +352,7 @@
         implicitly based on the AWS Access Key ID used to sign the request.
 
         :type user_name: string
-        :param user_name: The name of the user to delete.
+        :param user_name: The name of the user to retrieve.
             If not specified, defaults to user making request.
         """
         params = {}
diff --git a/boto/manage/cmdshell.py b/boto/manage/cmdshell.py
index 60f281d..9ee0133 100644
--- a/boto/manage/cmdshell.py
+++ b/boto/manage/cmdshell.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -46,16 +46,16 @@
         self._ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
         self.connect()
 
-    def connect(self):
+    def connect(self, num_retries=5):
         retry = 0
-        while retry < 5:
+        while retry < num_retries:
             try:
                 self._ssh_client.connect(self.server.hostname,
                                          username=self.uname,
                                          pkey=self._pkey)
                 return
             except socket.error, (value, message):
-                if value == 61 or value == 111:
+                if value in (51, 61, 111):
                     print 'SSH Connection refused, will retry in 5 seconds'
                     time.sleep(5)
                     retry += 1
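Socket error numbers are platform-specific, which is presumably why 51 joined the retry list; for orientation (the values shown are the common ones, not guaranteed on every platform):

    import errno

    # ECONNREFUSED is 111 on Linux and 61 on macOS/BSD; ENETUNREACH is 51 on
    # BSD-derived systems.
    print('ECONNREFUSED=%d ENETUNREACH=%d'
          % (errno.ECONNREFUSED, errno.ENETUNREACH))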
@@ -196,14 +196,14 @@
     """
     A little class to fake out SSHClient (which is expecting a
     :class`boto.manage.server.Server` instance.  This allows us
-    to 
+    to
     """
     def __init__(self, instance, ssh_key_file):
         self.instance = instance
         self.ssh_key_file = ssh_key_file
         self.hostname = instance.dns_name
         self.instance_id = self.instance.id
-        
+
 def start(server):
     instance_id = boto.config.get('Instance', 'instance-id', None)
     if instance_id == server.instance_id:
diff --git a/boto/mturk/connection.py b/boto/mturk/connection.py
index 7de938b..ad66784 100644
--- a/boto/mturk/connection.py
+++ b/boto/mturk/connection.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -30,16 +30,18 @@
 from boto.connection import AWSQueryConnection
 from boto.exception import EC2ResponseError
 from boto.resultset import ResultSet
-from boto.mturk.question import QuestionForm, ExternalQuestion
+from boto.mturk.question import QuestionForm, ExternalQuestion, HTMLQuestion
+
 
 class MTurkRequestError(EC2ResponseError):
     "Error for MTurk Requests"
     # todo: subclass from an abstract parent of EC2ResponseError
 
+
 class MTurkConnection(AWSQueryConnection):
-    
+
     APIVersion = '2012-03-25'
-    
+
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None,
@@ -50,13 +52,14 @@
                 host = 'mechanicalturk.sandbox.amazonaws.com'
             else:
                 host = 'mechanicalturk.amazonaws.com'
+        self.debug = debug
 
         AWSQueryConnection.__init__(self, aws_access_key_id,
                                     aws_secret_access_key,
                                     is_secure, port, proxy, proxy_port,
                                     proxy_user, proxy_pass, host, debug,
                                     https_connection_factory)
-    
+
     def _required_auth_capability(self):
         return ['mturk']
 
@@ -67,7 +70,7 @@
         return self._process_request('GetAccountBalance', params,
                                      [('AvailableBalance', Price),
                                       ('OnHoldBalance', Price)])
-    
+
     def register_hit_type(self, title, description, reward, duration,
                           keywords=None, approval_delay=None, qual_req=None):
         """
@@ -79,8 +82,7 @@
         params = dict(
             Title=title,
             Description=description,
-            AssignmentDurationInSeconds=
-                self.duration_as_seconds(duration),
+            AssignmentDurationInSeconds=self.duration_as_seconds(duration),
             )
         params.update(MTurkConnection.get_price_as_price(reward).get_as_params('Reward'))
 
@@ -94,32 +96,55 @@
         if qual_req is not None:
             params.update(qual_req.get_as_params())
 
-        return self._process_request('RegisterHITType', params, [('HITTypeId', HITTypeId)])
-
+        return self._process_request('RegisterHITType', params,
+                                     [('HITTypeId', HITTypeId)])
 
     def set_email_notification(self, hit_type, email, event_types=None):
         """
         Performs a SetHITTypeNotification operation to set email
         notification for a specified HIT type
         """
-        return self._set_notification(hit_type, 'Email', email, event_types)
-    
+        return self._set_notification(hit_type, 'Email', email,
+                                      'SetHITTypeNotification', event_types)
+
     def set_rest_notification(self, hit_type, url, event_types=None):
         """
         Performs a SetHITTypeNotification operation to set REST notification
         for a specified HIT type
         """
-        return self._set_notification(hit_type, 'REST', url, event_types)
-        
-    def _set_notification(self, hit_type, transport, destination, event_types=None):
+        return self._set_notification(hit_type, 'REST', url,
+                                      'SetHITTypeNotification', event_types)
+
+    def set_sqs_notification(self, hit_type, queue_url, event_types=None):
         """
-        Common SetHITTypeNotification operation to set notification for a
-        specified HIT type
+        Performs a SetHITTypeNotification operation to set SQS notification
+        for a specified HIT type. The queue URL is of the form:
+        https://queue.amazonaws.com/<CUSTOMER_ID>/<QUEUE_NAME> and can be
+        found when looking at the details for a Queue in the AWS Console
         """
-        assert isinstance(hit_type, str), "hit_type argument should be a string."
-        
+        return self._set_notification(hit_type, "SQS", queue_url,
+                                      'SetHITTypeNotification', event_types)
+
+    def send_test_event_notification(self, hit_type, url,
+                                     event_types=None,
+                                     test_event_type='Ping'):
+        """
+        Performs a SendTestEventNotification operation with REST notification
+        for a specified HIT type
+        """
+        return self._set_notification(hit_type, 'REST', url,
+                                      'SendTestEventNotification',
+                                      event_types, test_event_type)
+
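A hedged usage sketch for the new notification helpers (the HIT type ID, queue URL, and listener URL are placeholders; credentials are taken from the boto config):

    from boto.mturk.connection import MTurkConnection

    conn = MTurkConnection(host='mechanicalturk.sandbox.amazonaws.com')
    hit_type_id = 'EXAMPLE_HIT_TYPE_ID'

    # Route assignment events for this HIT type to an SQS queue, then fire a
    # test Ping at a REST listener to verify the hookup.
    conn.set_sqs_notification(hit_type_id,
                              'https://queue.amazonaws.com/123456789012/mturk',
                              event_types=['AssignmentSubmitted'])
    conn.send_test_event_notification(hit_type_id,
                                      'http://example.com/mturk-listener',
                                      test_event_type='Ping')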
+    def _set_notification(self, hit_type, transport,
+                          destination, request_type,
+                          event_types=None, test_event_type=None):
+        """
+        Common operation to set notification or send a test event
+        notification for a specified HIT type
+        """
         params = {'HITTypeId': hit_type}
-        
+
         # from the Developer Guide:
         # The 'Active' parameter is optional. If omitted, the active status of
         # the HIT type's notification specification is unchanged. All HIT types
@@ -132,54 +157,67 @@
 
         # add specific event types if required
         if event_types:
-            self.build_list_params(notification_params, event_types, 'EventType')
-        
+            self.build_list_params(notification_params, event_types,
+                                   'EventType')
+
         # Set up dict of 'Notification.1.Transport' etc. values
         notification_rest_params = {}
         num = 1
         for key in notification_params:
             notification_rest_params['Notification.%d.%s' % (num, key)] = notification_params[key]
-        
+
         # Update main params dict
         params.update(notification_rest_params)
-        
+
+        # If test notification, specify the notification type to be tested
+        if test_event_type:
+            params.update({'TestEventType': test_event_type})
+
         # Execute operation
-        return self._process_request('SetHITTypeNotification', params)
-    
-    def create_hit(self, hit_type=None, question=None,
+        return self._process_request(request_type, params)
+
+    def create_hit(self, hit_type=None, question=None, hit_layout=None,
                    lifetime=datetime.timedelta(days=7),
-                   max_assignments=1, 
+                   max_assignments=1,
                    title=None, description=None, keywords=None,
                    reward=None, duration=datetime.timedelta(days=7),
                    approval_delay=None, annotation=None,
                    questions=None, qualifications=None,
-                   response_groups=None):
+                   layout_params=None, response_groups=None):
         """
         Creates a new HIT.
         Returns a ResultSet
-        See: http://docs.amazonwebservices.com/AWSMechanicalTurkRequester/2006-10-31/ApiReference_CreateHITOperation.html
+        See: http://docs.amazonwebservices.com/AWSMechTurk/2012-03-25/AWSMturkAPI/ApiReference_CreateHITOperation.html
         """
-        
-        # handle single or multiple questions
-        neither = question is None and questions is None
-        both = question is not None and questions is not None
-        if neither or both:
-            raise ValueError("Must specify either question (single Question instance) or questions (list or QuestionForm instance), but not both")
 
-        if question:
-            questions = [question]
-        question_param = QuestionForm(questions)
-        if isinstance(question, QuestionForm):
-            question_param = question
-        elif isinstance(question, ExternalQuestion):
-            question_param = question
-        
         # Handle basic required arguments and set up params dict
-        params = {'Question': question_param.get_as_xml(),
-                  'LifetimeInSeconds':
+        params = {'LifetimeInSeconds':
                       self.duration_as_seconds(lifetime),
                   'MaxAssignments': max_assignments,
-                  }
+                 }
+
+        # handle single or multiple questions or layouts
+        neither = question is None and questions is None
+        if hit_layout is None:
+            both = question is not None and questions is not None
+            if neither or both:
+                raise ValueError("Must specify question (single Question instance) or questions (list or QuestionForm instance), but not both")
+            if question:
+                questions = [question]
+            question_param = QuestionForm(questions)
+            if isinstance(question, QuestionForm):
+                question_param = question
+            elif isinstance(question, ExternalQuestion):
+                question_param = question
+            elif isinstance(question, HTMLQuestion):
+                question_param = question
+            params['Question'] = question_param.get_as_xml()
+        else:
+            if not neither:
+                raise ValueError("Must not specify question (single Question instance) or questions (list or QuestionForm instance) when specifying hit_layout")
+            params['HITLayoutId'] = hit_layout
+            if layout_params:
+                params.update(layout_params.get_as_params())
 
         # if hit type specified then add it
         # else add the additional required parameters
@@ -188,10 +226,10 @@
         else:
             # Handle keywords
             final_keywords = MTurkConnection.get_keywords_as_string(keywords)
-            
+
             # Handle price argument
             final_price = MTurkConnection.get_price_as_price(reward)
-            
+
             final_duration = self.duration_as_seconds(duration)
 
             additional_params = dict(
@@ -212,7 +250,7 @@
         # add the annotation if specified
         if annotation is not None:
             params['RequesterAnnotation'] = annotation
-               
+
         # Add the Qualifications if specified
         if qualifications is not None:
             params.update(qualifications.get_as_params())
@@ -220,43 +258,44 @@
         # Handle optional response groups argument
         if response_groups:
             self.build_list_params(params, response_groups, 'ResponseGroup')
-                
+
         # Submit
-        return self._process_request('CreateHIT', params, [('HIT', HIT),])
+        return self._process_request('CreateHIT', params, [('HIT', HIT)])
 
     def change_hit_type_of_hit(self, hit_id, hit_type):
         """
         Change the HIT type of an existing HIT. Note that the reward associated
         with the new HIT type must match the reward of the current HIT type in
         order for the operation to be valid.
-        
+
         :type hit_id: str
         :type hit_type: str
         """
-        params = {'HITId' : hit_id,
+        params = {'HITId': hit_id,
                   'HITTypeId': hit_type}
 
         return self._process_request('ChangeHITTypeOfHIT', params)
-    
+
     def get_reviewable_hits(self, hit_type=None, status='Reviewable',
-                            sort_by='Expiration', sort_direction='Ascending', 
+                            sort_by='Expiration', sort_direction='Ascending',
                             page_size=10, page_number=1):
         """
         Retrieve the HITs that have a status of Reviewable, or HITs that
         have a status of Reviewing, and that belong to the Requester
         calling the operation.
         """
-        params = {'Status' : status,
-                  'SortProperty' : sort_by,
-                  'SortDirection' : sort_direction,
-                  'PageSize' : page_size,
-                  'PageNumber' : page_number}
+        params = {'Status': status,
+                  'SortProperty': sort_by,
+                  'SortDirection': sort_direction,
+                  'PageSize': page_size,
+                  'PageNumber': page_number}
 
         # Handle optional hit_type argument
         if hit_type is not None:
             params.update({'HITTypeId': hit_type})
 
-        return self._process_request('GetReviewableHITs', params, [('HIT', HIT),])
+        return self._process_request('GetReviewableHITs', params,
+                                     [('HIT', HIT)])
 
     @staticmethod
     def _get_pages(page_size, total_records):
@@ -264,14 +303,13 @@
         Given a page size (records per page) and a total number of
         records, return the page numbers to be retrieved.
         """
-        pages = total_records/page_size+bool(total_records%page_size)
-        return range(1, pages+1)
-
+        pages = total_records / page_size + bool(total_records % page_size)
+        return range(1, pages + 1)
 
     def get_all_hits(self):
         """
         Return all of a Requester's HITs
-        
+
         Despite what search_hits says, it does not return all hits, but
         instead returns a page of hits. This method will pull the hits
         from the server 100 at a time, but will yield the results
@@ -285,7 +323,7 @@
         hit_sets = itertools.imap(get_page_hits, page_nums)
         return itertools.chain.from_iterable(hit_sets)
 
-    def search_hits(self, sort_by='CreationTime', sort_direction='Ascending', 
+    def search_hits(self, sort_by='CreationTime', sort_direction='Ascending',
                     page_size=10, page_number=1, response_groups=None):
         """
         Return a page of a Requester's HITs, on behalf of the Requester.
@@ -295,22 +333,50 @@
         The SearchHITs operation does not accept any search parameters
         that filter the results.
         """
-        params = {'SortProperty' : sort_by,
-                  'SortDirection' : sort_direction,
-                  'PageSize' : page_size,
-                  'PageNumber' : page_number}
+        params = {'SortProperty': sort_by,
+                  'SortDirection': sort_direction,
+                  'PageSize': page_size,
+                  'PageNumber': page_number}
         # Handle optional response groups argument
         if response_groups:
             self.build_list_params(params, response_groups, 'ResponseGroup')
-                
 
-        return self._process_request('SearchHITs', params, [('HIT', HIT),])
+        return self._process_request('SearchHITs', params, [('HIT', HIT)])
+
+    def get_assignment(self, assignment_id, response_groups=None):
+        """
+        Retrieves an assignment using the assignment's ID. Requesters can only
+        retrieve their own assignments, and only assignments whose related HIT
+        has not been disposed.
+
+        The returned ResultSet will have the following attributes:
+
+        Request
+                This element is present only if the Request ResponseGroup
+                is specified.
+        Assignment
+                The assignment. The response includes one Assignment object.
+        HIT
+                The HIT associated with this assignment. The response
+                includes one HIT object.
+
+        """
+
+        params = {'AssignmentId': assignment_id}
+
+        # Handle optional response groups argument
+        if response_groups:
+            self.build_list_params(params, response_groups, 'ResponseGroup')
+
+        return self._process_request('GetAssignment', params,
+                                     [('Assignment', Assignment),
+                                      ('HIT', HIT)])
 
     def get_assignments(self, hit_id, status=None,
-                            sort_by='SubmitTime', sort_direction='Ascending', 
+                            sort_by='SubmitTime', sort_direction='Ascending',
                             page_size=10, page_number=1, response_groups=None):
         """
-        Retrieves completed assignments for a HIT. 
+        Retrieves completed assignments for a HIT.
         Use this operation to retrieve the results for a HIT.
 
         The returned ResultSet will have the following attributes:
@@ -329,14 +395,14 @@
                 on this call.
                 A non-negative integer
 
-        The ResultSet will contain zero or more Assignment objects 
+        The ResultSet will contain zero or more Assignment objects
 
         """
-        params = {'HITId' : hit_id,
-                  'SortProperty' : sort_by,
-                  'SortDirection' : sort_direction,
-                  'PageSize' : page_size,
-                  'PageNumber' : page_number}
+        params = {'HITId': hit_id,
+                  'SortProperty': sort_by,
+                  'SortDirection': sort_direction,
+                  'PageSize': page_size,
+                  'PageNumber': page_number}
 
         if status is not None:
             params['AssignmentStatus'] = status
@@ -344,14 +410,14 @@
         # Handle optional response groups argument
         if response_groups:
             self.build_list_params(params, response_groups, 'ResponseGroup')
-                
+
         return self._process_request('GetAssignmentsForHIT', params,
-                                     [('Assignment', Assignment),])
+                                     [('Assignment', Assignment)])
 
     def approve_assignment(self, assignment_id, feedback=None):
         """
         """
-        params = {'AssignmentId': assignment_id,}
+        params = {'AssignmentId': assignment_id}
         if feedback:
             params['RequesterFeedback'] = feedback
         return self._process_request('ApproveAssignment', params)
@@ -359,7 +425,7 @@
     def reject_assignment(self, assignment_id, feedback=None):
         """
         """
-        params = {'AssignmentId': assignment_id,}
+        params = {'AssignmentId': assignment_id}
         if feedback:
             params['RequesterFeedback'] = feedback
         return self._process_request('RejectAssignment', params)
@@ -367,7 +433,7 @@
     def approve_rejected_assignment(self, assignment_id, feedback=None):
         """
         """
-        params = {'AssignmentId' : assignment_id, }
+        params = {'AssignmentId': assignment_id}
         if feedback:
             params['RequesterFeedback'] = feedback
         return self._process_request('ApproveRejectedAssignment', params)
@@ -375,23 +441,23 @@
     def get_hit(self, hit_id, response_groups=None):
         """
         """
-        params = {'HITId': hit_id,}
+        params = {'HITId': hit_id}
         # Handle optional response groups argument
         if response_groups:
             self.build_list_params(params, response_groups, 'ResponseGroup')
-                
-        return self._process_request('GetHIT', params, [('HIT', HIT),])
+
+        return self._process_request('GetHIT', params, [('HIT', HIT)])
 
     def set_reviewing(self, hit_id, revert=None):
         """
-        Update a HIT with a status of Reviewable to have a status of Reviewing, 
+        Update a HIT with a status of Reviewable to have a status of Reviewing,
         or reverts a Reviewing HIT back to the Reviewable status.
 
         Only HITs with a status of Reviewable can be updated with a status of
         Reviewing.  Similarly, only Reviewing HITs can be reverted back to a
         status of Reviewable.
         """
-        params = {'HITId': hit_id,}
+        params = {'HITId': hit_id}
         if revert:
             params['Revert'] = revert
         return self._process_request('SetHITAsReviewing', params)
@@ -413,11 +479,11 @@
         It is not possible to re-enable a HIT once it has been disabled.
         To make the work from a disabled HIT available again, create a new HIT.
         """
-        params = {'HITId': hit_id,}
+        params = {'HITId': hit_id}
         # Handle optional response groups argument
         if response_groups:
             self.build_list_params(params, response_groups, 'ResponseGroup')
-                
+
         return self._process_request('DisableHIT', params)
 
     def dispose_hit(self, hit_id):
@@ -430,7 +496,7 @@
         reviewable, then call GetAssignmentsForHIT to retrieve the
         assignments.  Disposing of a HIT removes the HIT from the
         results of a call to GetReviewableHITs.  """
-        params = {'HITId': hit_id,}
+        params = {'HITId': hit_id}
         return self._process_request('DisposeHIT', params)
 
     def expire_hit(self, hit_id):
@@ -447,14 +513,15 @@
         submitted, the expired HIT becomes"reviewable", and will be
         returned by a call to GetReviewableHITs.
         """
-        params = {'HITId': hit_id,}
+        params = {'HITId': hit_id}
         return self._process_request('ForceExpireHIT', params)
 
-    def extend_hit(self, hit_id, assignments_increment=None, expiration_increment=None):
+    def extend_hit(self, hit_id, assignments_increment=None,
+                   expiration_increment=None):
         """
         Increase the maximum number of assignments, or extend the
         expiration date, of an existing HIT.
-        
+
         NOTE: If a HIT has a status of Reviewable and the HIT is
         extended to make it Available, the HIT will not be returned by
         GetReviewableHITs, and its submitted assignments will not be
@@ -469,7 +536,7 @@
            (assignments_increment is not None and expiration_increment is not None):
             raise ValueError("Must specify either assignments_increment or expiration_increment, but not both")
 
-        params = {'HITId': hit_id,}
+        params = {'HITId': hit_id}
         if assignments_increment:
             params['MaxAssignmentsIncrement'] = assignments_increment
         if expiration_increment:
@@ -485,7 +552,7 @@
 
         help_type: either 'Operation' or 'ResponseGroup'
         """
-        params = {'About': about, 'HelpType': help_type,}
+        params = {'About': about, 'HelpType': help_type}
         return self._process_request('Help', params)
 
     def grant_bonus(self, worker_id, assignment_id, bonus_price, reason):
@@ -520,12 +587,12 @@
         params = {'WorkerId': worker_id, 'Reason': reason}
 
         return self._process_request('UnblockWorker', params)
-    
+
     def notify_workers(self, worker_ids, subject, message_text):
         """
         Send a text message to workers.
         """
-        params = {'Subject' : subject,
+        params = {'Subject': subject,
                   'MessageText': message_text}
         self.build_list_params(params, worker_ids, 'WorkerId')
 
@@ -593,7 +660,7 @@
 
         if answer_key is not None:
             if isinstance(answer_key, basestring):
-                params['AnswerKey'] = answer_key # xml
+                params['AnswerKey'] = answer_key  # xml
             else:
                 raise TypeError
                 # Eventually someone will write an AnswerKey class.
@@ -607,17 +674,29 @@
             params['Keywords'] = self.get_keywords_as_string(keywords)
 
         return self._process_request('CreateQualificationType', params,
-                                     [('QualificationType', QualificationType),])
+                                     [('QualificationType',
+                                       QualificationType)])
 
     def get_qualification_type(self, qualification_type_id):
-        params = {'QualificationTypeId' : qualification_type_id }
+        params = {'QualificationTypeId': qualification_type_id }
         return self._process_request('GetQualificationType', params,
-                                     [('QualificationType', QualificationType),])
+                                     [('QualificationType', QualificationType)])
 
-    def get_qualifications_for_qualification_type(self, qualification_type_id):
-        params = {'QualificationTypeId' : qualification_type_id }
+    def get_all_qualifications_for_qual_type(self, qualification_type_id):
+        page_size = 100
+        search_qual = self.get_qualifications_for_qualification_type(qualification_type_id)
+        total_records = int(search_qual.TotalNumResults)
+        get_page_quals = lambda page: self.get_qualifications_for_qualification_type(qualification_type_id = qualification_type_id, page_size=page_size, page_number = page)
+        page_nums = self._get_pages(page_size, total_records)
+        qual_sets = itertools.imap(get_page_quals, page_nums)
+        return itertools.chain.from_iterable(qual_sets)
+
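Both get_all_hits and the new get_all_qualifications_for_qual_type lean on _get_pages(), which is just ceiling division over the page size; the arithmetic in isolation (the handler itself relies on Python 2 integer division):

    # 250 records at 100 per page -> pages 1, 2 and 3.
    page_size, total_records = 100, 250
    pages = total_records // page_size + bool(total_records % page_size)
    assert list(range(1, pages + 1)) == [1, 2, 3]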
+    def get_qualifications_for_qualification_type(self, qualification_type_id, page_size=100, page_number = 1):
+        params = {'QualificationTypeId': qualification_type_id,
+                  'PageSize': page_size,
+                  'PageNumber': page_number}
         return self._process_request('GetQualificationsForQualificationType', params,
-                                     [('QualificationType', QualificationType),])
+                                     [('Qualification', Qualification)])
 
     def update_qualification_type(self, qualification_type_id,
                                   description=None,
@@ -629,7 +708,7 @@
                                   auto_granted=None,
                                   auto_granted_value=None):
 
-        params = {'QualificationTypeId' : qualification_type_id }
+        params = {'QualificationTypeId': qualification_type_id}
 
         if description is not None:
             params['Description'] = description
@@ -649,7 +728,7 @@
 
         if answer_key is not None:
             if isinstance(answer_key, basestring):
-                params['AnswerKey'] = answer_key # xml
+                params['AnswerKey'] = answer_key  # xml
             else:
                 raise TypeError
                 # Eventually someone will write an AnswerKey class.
@@ -661,11 +740,11 @@
             params['AutoGrantedValue'] = auto_granted_value
 
         return self._process_request('UpdateQualificationType', params,
-                                     [('QualificationType', QualificationType),])
+                                     [('QualificationType', QualificationType)])
 
     def dispose_qualification_type(self, qualification_type_id):
         """TODO: Document."""
-        params = {'QualificationTypeId' : qualification_type_id}
+        params = {'QualificationTypeId': qualification_type_id}
         return self._process_request('DisposeQualificationType', params)
 
     def search_qualification_types(self, query=None, sort_by='Name',
@@ -673,46 +752,46 @@
                                    page_number=1, must_be_requestable=True,
                                    must_be_owned_by_caller=True):
         """TODO: Document."""
-        params = {'Query' : query,
-                  'SortProperty' : sort_by,
-                  'SortDirection' : sort_direction,
-                  'PageSize' : page_size,
-                  'PageNumber' : page_number,
-                  'MustBeRequestable' : must_be_requestable,
-                  'MustBeOwnedByCaller' : must_be_owned_by_caller}
+        params = {'Query': query,
+                  'SortProperty': sort_by,
+                  'SortDirection': sort_direction,
+                  'PageSize': page_size,
+                  'PageNumber': page_number,
+                  'MustBeRequestable': must_be_requestable,
+                  'MustBeOwnedByCaller': must_be_owned_by_caller}
         return self._process_request('SearchQualificationTypes', params,
-                    [('QualificationType', QualificationType),])
+                    [('QualificationType', QualificationType)])
 
     def get_qualification_requests(self, qualification_type_id,
                                    sort_by='Expiration',
                                    sort_direction='Ascending', page_size=10,
                                    page_number=1):
         """TODO: Document."""
-        params = {'QualificationTypeId' : qualification_type_id,
-                  'SortProperty' : sort_by,
-                  'SortDirection' : sort_direction,
-                  'PageSize' : page_size,
-                  'PageNumber' : page_number}
+        params = {'QualificationTypeId': qualification_type_id,
+                  'SortProperty': sort_by,
+                  'SortDirection': sort_direction,
+                  'PageSize': page_size,
+                  'PageNumber': page_number}
         return self._process_request('GetQualificationRequests', params,
-                    [('QualificationRequest', QualificationRequest),])
+                    [('QualificationRequest', QualificationRequest)])
 
     def grant_qualification(self, qualification_request_id, integer_value=1):
         """TODO: Document."""
-        params = {'QualificationRequestId' : qualification_request_id,
-                  'IntegerValue' : integer_value}
+        params = {'QualificationRequestId': qualification_request_id,
+                  'IntegerValue': integer_value}
         return self._process_request('GrantQualification', params)
 
     def revoke_qualification(self, subject_id, qualification_type_id,
                              reason=None):
         """TODO: Document."""
-        params = {'SubjectId' : subject_id,
-                  'QualificationTypeId' : qualification_type_id,
-                  'Reason' : reason}
+        params = {'SubjectId': subject_id,
+                  'QualificationTypeId': qualification_type_id,
+                  'Reason': reason}
         return self._process_request('RevokeQualification', params)
 
     def assign_qualification(self, qualification_type_id, worker_id,
                              value=1, send_notification=True):
-        params = {'QualificationTypeId' : qualification_type_id,
+        params = {'QualificationTypeId': qualification_type_id,
                   'WorkerId' : worker_id,
                   'IntegerValue' : value,
                   'SendNotification' : send_notification}
@@ -723,7 +802,7 @@
         params = {'QualificationTypeId' : qualification_type_id,
                   'SubjectId' : worker_id}
         return self._process_request('GetQualificationScore', params,
-                    [('Qualification', Qualification),])
+                    [('Qualification', Qualification)])
 
     def update_qualification_score(self, qualification_type_id, worker_id,
                                    value):
@@ -737,7 +816,8 @@
         """
         Helper to process the xml response from AWS
         """
-        response = self.make_request(request_type, params, verb='POST')
+        params['Operation'] = request_type
+        response = self.make_request(None, params, verb='POST')
         return self._process_response(response, marker_elems)
 
     def _process_response(self, response, marker_elems=None):
@@ -745,7 +825,8 @@
         Helper to process the xml response from AWS
         """
         body = response.read()
-        #print body
+        if self.debug == 2:
+            print body
         if '<Errors>' not in body:
             rs = ResultSet(marker_elems)
             h = handler.XmlHandler(rs, self)
@@ -771,7 +852,7 @@
         else:
             raise TypeError("keywords argument must be a string or a list of strings; got a %s" % type(keywords))
         return final_keywords
-    
+
     @staticmethod
     def get_price_as_price(reward):
         """
@@ -786,13 +867,14 @@
     @staticmethod
     def duration_as_seconds(duration):
         if isinstance(duration, datetime.timedelta):
-            duration = duration.days*86400 + duration.seconds
+            duration = duration.days * 86400 + duration.seconds
         try:
             duration = int(duration)
         except TypeError:
             raise TypeError("Duration must be a timedelta or int-castable, got %s" % type(duration))
         return duration
 
+
 class BaseAutoResultElement:
     """
     Base class to automatically add attributes when parsing XML
@@ -806,11 +888,12 @@
     def endElement(self, name, value, connection):
         setattr(self, name, value)
 
+
 class HIT(BaseAutoResultElement):
     """
     Class to extract a HIT structure from a response (used in ResultSet)
-    
-    Will have attributes named as per the Developer Guide, 
+
+    Will have attributes named as per the Developer Guide,
     e.g. HITId, HITTypeId, CreationTime
     """
 
@@ -829,55 +912,70 @@
     # are we there yet?
     expired = property(_has_expired)
 
+
 class HITTypeId(BaseAutoResultElement):
     """
-    Class to extract an HITTypeId structure from a response 
+    Class to extract an HITTypeId structure from a response
     """
 
     pass
 
+
 class Qualification(BaseAutoResultElement):
     """
     Class to extract an Qualification structure from a response (used in
     ResultSet)
-    
+
     Will have attributes named as per the Developer Guide such as
     QualificationTypeId, IntegerValue. Does not seem to contain GrantTime.
     """
-    
+
     pass
 
+
 class QualificationType(BaseAutoResultElement):
     """
     Class to extract an QualificationType structure from a response (used in
     ResultSet)
-    
-    Will have attributes named as per the Developer Guide, 
+
+    Will have attributes named as per the Developer Guide,
     e.g. QualificationTypeId, CreationTime, Name, etc
     """
-    
+
     pass
 
+
 class QualificationRequest(BaseAutoResultElement):
     """
     Class to extract an QualificationRequest structure from a response (used in
     ResultSet)
-    
-    Will have attributes named as per the Developer Guide, 
-    e.g. QualificationRequestId, QualificationTypeId, SubjectId, etc
 
-    TODO: Ensure that Test and Answer attribute are treated properly if the
-          qualification requires a test. These attributes are XML-encoded.
+    Will have attributes named as per the Developer Guide,
+    e.g. QualificationRequestId, QualificationTypeId, SubjectId, etc
     """
-    
-    pass
+
+    def __init__(self, connection):
+        BaseAutoResultElement.__init__(self, connection)
+        self.answers = []
+
+    def endElement(self, name, value, connection):
+        # the answer consists of embedded XML, so it needs to be parsed independently
+        if name == 'Answer':
+            answer_rs = ResultSet([('Answer', QuestionFormAnswer)])
+            h = handler.XmlHandler(answer_rs, connection)
+            value = connection.get_utf8_value(value)
+            xml.sax.parseString(value, h)
+            self.answers.append(answer_rs)
+        else:
+            BaseAutoResultElement.endElement(self, name, value, connection)
+
 
 class Assignment(BaseAutoResultElement):
     """
     Class to extract an Assignment structure from a response (used in
     ResultSet)
-    
-    Will have attributes named as per the Developer Guide, 
+
+    Will have attributes named as per the Developer Guide,
     e.g. AssignmentId, WorkerId, HITId, Answer, etc
     """
 
@@ -888,7 +986,7 @@
     def endElement(self, name, value, connection):
         # the answer consists of embedded XML, so it needs to be parsed independantly
         if name == 'Answer':
-            answer_rs = ResultSet([('Answer', QuestionFormAnswer),])
+            answer_rs = ResultSet([('Answer', QuestionFormAnswer)])
             h = handler.XmlHandler(answer_rs, connection)
             value = connection.get_utf8_value(value)
             xml.sax.parseString(value, h)
@@ -896,11 +994,12 @@
         else:
             BaseAutoResultElement.endElement(self, name, value, connection)
 
+
 class QuestionFormAnswer(BaseAutoResultElement):
     """
     Class to extract Answers from inside the embedded XML
     QuestionFormAnswers element inside the Answer element which is
-    part of the Assignment structure
+    part of the Assignment and QualificationRequest structures
 
     A QuestionFormAnswers element contains an Answer element for each
     question in the HIT or Qualification test for which the Worker
@@ -925,4 +1024,4 @@
         if name == 'QuestionIdentifier':
             self.qid = value
         elif name in ['FreeText', 'SelectionIdentifier', 'OtherSelectionText'] and self.qid:
-            self.fields.append( value )
+            self.fields.append(value)
diff --git a/boto/mturk/layoutparam.py b/boto/mturk/layoutparam.py
new file mode 100644
index 0000000..16e5932
--- /dev/null
+++ b/boto/mturk/layoutparam.py
@@ -0,0 +1,55 @@
+# Copyright (c) 2008 Chris Moyer http://coredumped.org/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+class LayoutParameters:
+
+    def __init__(self, layoutParameters=None):
+        if layoutParameters == None:
+            layoutParameters = []
+        self.layoutParameters = layoutParameters
+
+    def add(self, req):
+        self.layoutParameters.append(req)
+
+    def get_as_params(self):
+        params = {}
+        assert(len(self.layoutParameters) <= 25)
+        for n, layoutParameter in enumerate(self.layoutParameters):
+            kv = layoutParameter.get_as_params()
+            for key in kv:
+                params['HITLayoutParameter.%s.%s' % ((n+1), key) ] = kv[key]
+        return params
+
+class LayoutParameter(object):
+    """
+    Representation of a single HIT layout parameter
+    """
+
+    def __init__(self, name, value):
+        self.name = name
+        self.value = value
+    
+    def get_as_params(self):
+        params =  {
+            "Name": self.name,
+            "Value": self.value,
+        }
+        return params
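A hedged sketch of how the new layout support is meant to be combined with MTurkConnection.create_hit() (the layout ID and parameter names are placeholders tied to whatever HITLayout exists in the Requester account):

    from boto.mturk.connection import MTurkConnection
    from boto.mturk.layoutparam import LayoutParameter, LayoutParameters

    conn = MTurkConnection(host='mechanicalturk.sandbox.amazonaws.com')
    layout_params = LayoutParameters([
        LayoutParameter('image_url', 'http://example.com/cat.jpg'),
        LayoutParameter('question', 'Is there a cat in this image?'),
    ])
    conn.create_hit(hit_layout='EXAMPLE_LAYOUT_ID',
                    layout_params=layout_params,
                    title='Image check', description='Answer one question',
                    reward=0.05, max_assignments=3)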
diff --git a/boto/mturk/question.py b/boto/mturk/question.py
index ab4f970..90ab00d 100644
--- a/boto/mturk/question.py
+++ b/boto/mturk/question.py
@@ -14,14 +14,16 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+import xml.sax.saxutils
+
 class Question(object):
     template = "<Question>%(items)s</Question>"
-    
+
     def __init__(self, identifier, content, answer_spec,
                  is_required=False, display_name=None):
         # copy all of the parameters into object attributes
@@ -29,7 +31,7 @@
         del self.self
 
     def get_as_params(self, label='Question'):
-        return { label : self.get_as_xml() }
+        return {label: self.get_as_xml()}
 
     def get_as_xml(self):
         items = [
@@ -44,18 +46,22 @@
         return self.template % vars()
 
 try:
-	from lxml import etree
-	class ValidatingXML(object):
-		def validate(self):
-			import urllib2
-			schema_src_file = urllib2.urlopen(self.schema_url)
-			schema_doc = etree.parse(schema_src_file)
-			schema = etree.XMLSchema(schema_doc)
-			doc = etree.fromstring(self.get_as_xml())
-			schema.assertValid(doc)
+    from lxml import etree
+
+    class ValidatingXML(object):
+
+        def validate(self):
+            import urllib2
+            schema_src_file = urllib2.urlopen(self.schema_url)
+            schema_doc = etree.parse(schema_src_file)
+            schema = etree.XMLSchema(schema_doc)
+            doc = etree.fromstring(self.get_as_xml())
+            schema.assertValid(doc)
 except ImportError:
-	class ValidatingXML(object):
-		def validate(self): pass
+    class ValidatingXML(object):
+
+        def validate(self):
+            pass
 
 
 class ExternalQuestion(ValidatingXML):
@@ -64,46 +70,52 @@
     """
     schema_url = "http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd"
     template = '<ExternalQuestion xmlns="%(schema_url)s"><ExternalURL>%%(external_url)s</ExternalURL><FrameHeight>%%(frame_height)s</FrameHeight></ExternalQuestion>' % vars()
-    
+
     def __init__(self, external_url, frame_height):
-        self.external_url = external_url
+        self.external_url = xml.sax.saxutils.escape( external_url )
         self.frame_height = frame_height
-    
+
     def get_as_params(self, label='ExternalQuestion'):
-        return { label : self.get_as_xml() }
-    
+        return {label: self.get_as_xml()}
+
     def get_as_xml(self):
         return self.template % vars(self)
 
+
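A small illustration of the newly added escaping: URLs containing XML-special characters such as '&' now serialize into well-formed ExternalQuestion XML.

    from boto.mturk.question import ExternalQuestion

    q = ExternalQuestion('https://example.com/task?worker=1&lang=en',
                         frame_height=400)
    assert '&amp;' in q.get_as_xml()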
 class XMLTemplate:
     def get_as_xml(self):
         return self.template % vars(self)
 
+
 class SimpleField(object, XMLTemplate):
     """
     A Simple name/value pair that can be easily rendered as XML.
-    
+
     >>> SimpleField('Text', 'A text string').get_as_xml()
     '<Text>A text string</Text>'
     """
     template = '<%(field)s>%(value)s</%(field)s>'
-    
+
     def __init__(self, field, value):
         self.field = field
         self.value = value
 
+
 class Binary(object, XMLTemplate):
     template = """<Binary><MimeType><Type>%(type)s</Type><SubType>%(subtype)s</SubType></MimeType><DataURL>%(url)s</DataURL><AltText>%(alt_text)s</AltText></Binary>"""
+
     def __init__(self, type, subtype, url, alt_text):
         self.__dict__.update(vars())
         del self.self
 
+
 class List(list):
     """A bulleted list suitable for OrderedContent or Overview content"""
     def get_as_xml(self):
         items = ''.join('<ListItem>%s</ListItem>' % item for item in self)
         return '<List>%s</List>' % items
 
+
 class Application(object):
     template = "<Application><%(class_)s>%(content)s</%(class_)s></Application>"
     parameter_template = "<Name>%(name)s</Name><Value>%(value)s</Value>"
@@ -127,6 +139,22 @@
         class_ = self.__class__.__name__
         return self.template % vars()
 
+
+class HTMLQuestion(ValidatingXML):
+    schema_url = 'http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd'
+    template = '<HTMLQuestion xmlns=\"%(schema_url)s\"><HTMLContent><![CDATA[<!DOCTYPE html>%%(html_form)s]]></HTMLContent><FrameHeight>%%(frame_height)s</FrameHeight></HTMLQuestion>' % vars()
+
+    def __init__(self, html_form, frame_height):
+        self.html_form = html_form
+        self.frame_height = frame_height
+
+    def get_as_params(self, label="HTMLQuestion"):
+        return {label: self.get_as_xml()}
+
+    def get_as_xml(self):
+        return self.template % vars(self)
+
+
 class JavaApplet(Application):
     def __init__(self, path, filename, *args, **kwargs):
         self.path = path
@@ -139,6 +167,7 @@
         content.append_field('AppletFilename', self.filename)
         super(JavaApplet, self).get_inner_content(content)
 
+
 class Flash(Application):
     def __init__(self, url, *args, **kwargs):
         self.url = url
@@ -149,12 +178,15 @@
         content.append_field('FlashMovieURL', self.url)
         super(Flash, self).get_inner_content(content)
 
+
 class FormattedContent(object, XMLTemplate):
     schema_url = 'http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/FormattedContentXHTMLSubset.xsd'
     template = '<FormattedContent><![CDATA[%(content)s]]></FormattedContent>'
+
     def __init__(self, content):
         self.content = content
 
+
 class OrderedContent(list):
 
     def append_field(self, field, value):
@@ -163,20 +195,22 @@
     def get_as_xml(self):
         return ''.join(item.get_as_xml() for item in self)
 
+
 class Overview(OrderedContent):
     template = '<Overview>%(content)s</Overview>'
 
     def get_as_params(self, label='Overview'):
-        return { label : self.get_as_xml() }
-    
+        return {label: self.get_as_xml()}
+
     def get_as_xml(self):
         content = super(Overview, self).get_as_xml()
         return self.template % vars()
 
+
 class QuestionForm(ValidatingXML, list):
     """
     From the AMT API docs:
-    
+
     The top-most element of the QuestionForm data structure is a
     QuestionForm element. This element contains optional Overview
     elements and one or more Question elements. There can be any
@@ -184,9 +218,9 @@
     following example structure has an Overview element and a
     Question element followed by a second Overview element and
     Question element--all within the same QuestionForm.
-    
+
     ::
-    
+
         <QuestionForm xmlns="[the QuestionForm schema URL]">
             <Overview>
                 [...]
@@ -202,7 +236,7 @@
             </Question>
             [...]
         </QuestionForm>
-    
+
     QuestionForm is implemented as a list, so to construct a
     QuestionForm, simply append Questions and Overviews (with at least
     one Question).
@@ -222,6 +256,7 @@
         items = ''.join(item.get_as_xml() for item in self)
         return self.xml_template % vars()
 
+
 class QuestionContent(OrderedContent):
     template = '<QuestionContent>%(content)s</QuestionContent>'
 
@@ -229,6 +264,7 @@
         content = super(QuestionContent, self).get_as_xml()
         return self.template % vars()
 
+
 class AnswerSpecification(object):
     template = '<AnswerSpecification>%(spec)s</AnswerSpecification>'
 
@@ -239,6 +275,7 @@
         spec = self.spec.get_as_xml()
         return self.template % vars()
 
+
 class Constraints(OrderedContent):
     template = '<Constraints>%(content)s</Constraints>'
 
@@ -246,6 +283,7 @@
         content = super(Constraints, self).get_as_xml()
         return self.template % vars()
 
+
 class Constraint(object):
     def get_attributes(self):
         pairs = zip(self.attribute_names, self.attribute_values)
@@ -260,6 +298,7 @@
         attrs = self.get_attributes()
         return self.template % vars()
 
+
 class NumericConstraint(Constraint):
     attribute_names = 'minValue', 'maxValue'
     template = '<IsNumeric %(attrs)s />'
@@ -267,6 +306,7 @@
     def __init__(self, min_value=None, max_value=None):
         self.attribute_values = min_value, max_value
 
+
 class LengthConstraint(Constraint):
     attribute_names = 'minLength', 'maxLength'
     template = '<Length %(attrs)s />'
@@ -274,6 +314,7 @@
     def __init__(self, min_length=None, max_length=None):
         self.attribute_values = min_length, max_length
 
+
 class RegExConstraint(Constraint):
     attribute_names = 'regex', 'errorText', 'flags'
     template = '<AnswerFormatRegex %(attrs)s />'
@@ -290,16 +331,18 @@
             )
         return attrs
 
+
 class NumberOfLinesSuggestion(object):
     template = '<NumberOfLinesSuggestion>%(num_lines)s</NumberOfLinesSuggestion>'
 
     def __init__(self, num_lines=1):
         self.num_lines = num_lines
-    
+
     def get_as_xml(self):
         num_lines = self.num_lines
         return self.template % vars()
-    
+
+
 class FreeTextAnswer(object):
     template = '<FreeTextAnswer>%(items)s</FreeTextAnswer>'
 
@@ -310,7 +353,7 @@
         else:
             self.constraints = Constraints(constraints)
         self.num_lines = num_lines
-    
+
     def get_as_xml(self):
         items = [self.constraints]
         if self.default:
@@ -320,17 +363,19 @@
         items = ''.join(item.get_as_xml() for item in items)
         return self.template % vars()
 
+
 class FileUploadAnswer(object):
     template = """<FileUploadAnswer><MaxFileSizeInBytes>%(max_bytes)d</MaxFileSizeInBytes><MinFileSizeInBytes>%(min_bytes)d</MinFileSizeInBytes></FileUploadAnswer>"""
-    
+
     def __init__(self, min_bytes, max_bytes):
-        assert 0 <= min_bytes <= max_bytes <= 2*10**9
+        assert 0 <= min_bytes <= max_bytes <= 2 * 10 ** 9
         self.min_bytes = min_bytes
         self.max_bytes = max_bytes
-    
+
     def get_as_xml(self):
         return self.template % vars(self)
 
+
 class SelectionAnswer(object):
     """
     A class to generate SelectionAnswer XML data structures.
@@ -344,9 +389,9 @@
     MAX_SELECTION_COUNT_XML_TEMPLATE = """<MaxSelectionCount>%s</MaxSelectionCount>""" # count
     ACCEPTED_STYLES = ['radiobutton', 'dropdown', 'checkbox', 'list', 'combobox', 'multichooser']
     OTHER_SELECTION_ELEMENT_NAME = 'OtherSelection'
-    
+
     def __init__(self, min=1, max=1, style=None, selections=None, type='text', other=False):
-        
+
         if style is not None:
             if style in SelectionAnswer.ACCEPTED_STYLES:
                 self.style_suggestion = style
@@ -354,22 +399,22 @@
                 raise ValueError("style '%s' not recognized; should be one of %s" % (style, ', '.join(SelectionAnswer.ACCEPTED_STYLES)))
         else:
             self.style_suggestion = None
-        
+
         if selections is None:
             raise ValueError("SelectionAnswer.__init__(): selections must be a non-empty list of (content, identifier) tuples")
         else:
             self.selections = selections
-        
+
         self.min_selections = min
         self.max_selections = max
-        
+
         assert len(selections) >= self.min_selections, "# of selections is less than minimum of %d" % self.min_selections
         #assert len(selections) <= self.max_selections, "# of selections exceeds maximum of %d" % self.max_selections
-        
+
         self.type = type
-        
+
         self.other = other
-    
+
     def get_as_xml(self):
         if self.type == 'text':
             TYPE_TAG = "Text"
@@ -377,14 +422,14 @@
             TYPE_TAG = "Binary"
         else:
             raise ValueError("illegal type: %s; must be either 'text' or 'binary'" % str(self.type))
-        
+
         # build list of <Selection> elements
         selections_xml = ""
         for tpl in self.selections:
             value_xml = SelectionAnswer.SELECTION_VALUE_XML_TEMPLATE % (TYPE_TAG, tpl[0], TYPE_TAG)
             selection_xml = SelectionAnswer.SELECTION_XML_TEMPLATE % (tpl[1], value_xml)
             selections_xml += selection_xml
-        
+
         if self.other:
             # add OtherSelection element as xml if available
             if hasattr(self.other, 'get_as_xml'):
@@ -392,7 +437,7 @@
                 selections_xml += self.other.get_as_xml().replace('FreeTextAnswer', 'OtherSelection')
             else:
                 selections_xml += "<OtherSelection />"
-        
+
         if self.style_suggestion is not None:
             style_xml = SelectionAnswer.STYLE_XML_TEMPLATE % self.style_suggestion
         else:
@@ -403,9 +448,8 @@
             count_xml += SelectionAnswer.MAX_SELECTION_COUNT_XML_TEMPLATE %self.max_selections
         else:
             count_xml = ""
-        
+
         ret = SelectionAnswer.SELECTIONANSWER_XML_TEMPLATE % (count_xml, style_xml, selections_xml)
 
         # return XML
         return ret
-        
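
As an illustration of the new HTMLQuestion class introduced in this hunk, the sketch below builds a question from a hypothetical HTML form and renders it both as raw XML and as request parameters (assuming these classes live in boto.mturk.question; the form markup is made up for the example):

    from boto.mturk.question import HTMLQuestion

    # Hypothetical form body; HTMLQuestion wraps it in a CDATA section
    # together with an HTML5 doctype, per the template above.
    html_form = (
        '<form action="mturk_form" method="post">'
        '<input type="text" name="answer"/>'
        '<input type="submit"/>'
        '</form>'
    )

    question = HTMLQuestion(html_form, frame_height=500)
    print(question.get_as_xml())     # the full <HTMLQuestion> document
    print(question.get_as_params())  # {'HTMLQuestion': '<HTMLQuestion ...>'}
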
diff --git a/boto/mws/connection.py b/boto/mws/connection.py
index 8141b3c..db58e6d 100644
--- a/boto/mws/connection.py
+++ b/boto/mws/connection.py
@@ -21,6 +21,7 @@
 import xml.sax
 import hashlib
 import base64
+import string
 from boto.connection import AWSQueryConnection
 from boto.mws.exception import ResponseErrorFactory
 from boto.mws.response import ResponseFactory, ResponseElement
@@ -219,6 +220,7 @@
             response = getattr(boto.mws.response, action + 'Response')
         else:
             response = ResponseFactory(action)
+        response._action = action
 
         def wrapper(self, *args, **kw):
             kw.setdefault(accesskey, getattr(self, accesskey, None))
@@ -278,6 +280,39 @@
         xml.sax.parseString(body, h)
         return obj
 
+    def method_for(self, name):
+        """Return the MWS API method referred to in the argument.
+           The named method can be in CamelCase or underscored_lower_case.
+           This is the complement to MWSConnection.any_call.action
+        """
+        # this looks ridiculous but it should be better than regex
+        action = '_' in name and string.capwords(name, '_') or name
+        attribs = [getattr(self, m) for m in dir(self)]
+        ismethod = lambda m: type(m) is type(self.method_for)
+        ismatch = lambda m: getattr(m, 'action', None) == action
+        method = filter(ismatch, filter(ismethod, attribs))
+        return method and method[0] or None
+
+    def iter_call(self, call, *args, **kw):
+        """Pass a call name as the first argument and a generator
+           is returned for the initial response and any continuation
+           call responses made using the NextToken.
+        """
+        method = self.method_for(call)
+        assert method, 'No call named "{0}"'.format(call)
+        return self.iter_response(method(*args, **kw))
+
+    def iter_response(self, response):
+        """Pass a call's response as the initial argument and a
+           generator is returned for the initial response and any
+           continuation call responses made using the NextToken.
+        """
+        yield response
+        more = self.method_for(response._action + 'ByNextToken')
+        while more and response._result.HasNext == 'true':
+            response = more(NextToken=response._result.NextToken)
+            yield response
+
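
The three helpers above (method_for, iter_call, iter_response) make continuation calls transparent: iter_response keeps issuing the matching *ByNextToken call while the result reports HasNext == 'true'. A minimal sketch, assuming the module's MWSConnection class and hypothetical credentials and seller ID:

    from boto.mws.connection import MWSConnection

    conn = MWSConnection(aws_access_key_id='...', aws_secret_access_key='...')
    conn.Merchant = 'HYPOTHETICAL_SELLER_ID'  # picked up via kw.setdefault above

    # 'GetReportList' (or 'get_report_list') is resolved through method_for();
    # each yielded page's _result carries the report listing.
    for page in conn.iter_call('GetReportList'):
        for report in page._result.ReportInfo:
            print(report.ReportId)
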
     @boolean_arguments('PurgeAndReplace')
     @http_body('FeedContent')
     @structured_lists('MarketplaceIdList.Id')
@@ -408,14 +443,14 @@
         """
         return self.post_request(path, kw, response)
 
-    @requires('ReportId')
+    @requires(['ReportId'])
     @api_action('Reports', 15, 60)
     def get_report(self, path, response, **kw):
         """Returns the contents of a report.
         """
         return self.post_request(path, kw, response, isXML=False)
 
-    @requires('ReportType', 'Schedule')
+    @requires(['ReportType', 'Schedule'])
     @api_action('Reports', 10, 45)
     def manage_report_schedule(self, path, response, **kw):
         """Creates, updates, or deletes a report request schedule for
@@ -450,7 +485,7 @@
         return self.post_request(path, kw, response)
 
     @boolean_arguments('Acknowledged')
-    @requires('ReportIdList')
+    @requires(['ReportIdList'])
     @structured_lists('ReportIdList.Id')
     @api_action('Reports', 10, 45)
     def update_report_acknowledgements(self, path, response, **kw):
@@ -458,7 +493,7 @@
         """
         return self.post_request(path, kw, response)
 
-    @requires('ShipFromAddress', 'InboundShipmentPlanRequestItems')
+    @requires(['ShipFromAddress', 'InboundShipmentPlanRequestItems'])
     @structured_objects('ShipFromAddress', 'InboundShipmentPlanRequestItems')
     @api_action('Inbound', 30, 0.5)
     def create_inbound_shipment_plan(self, path, response, **kw):
@@ -466,7 +501,7 @@
         """
         return self.post_request(path, kw, response)
 
-    @requires('ShipmentId', 'InboundShipmentHeader', 'InboundShipmentItems')
+    @requires(['ShipmentId', 'InboundShipmentHeader', 'InboundShipmentItems'])
     @structured_objects('InboundShipmentHeader', 'InboundShipmentItems')
     @api_action('Inbound', 30, 0.5)
     def create_inbound_shipment(self, path, response, **kw):
@@ -474,7 +509,7 @@
         """
         return self.post_request(path, kw, response)
 
-    @requires('ShipmentId')
+    @requires(['ShipmentId'])
     @structured_objects('InboundShipmentHeader', 'InboundShipmentItems')
     @api_action('Inbound', 30, 0.5)
     def update_inbound_shipment(self, path, response, **kw):
@@ -549,7 +584,7 @@
         return self.post_request(path, kw, response)
 
     @structured_objects('Address', 'Items')
-    @requires('Address', 'Items')
+    @requires(['Address', 'Items'])
     @api_action('Outbound', 30, 0.5)
     def get_fulfillment_preview(self, path, response, **kw):
         """Returns a list of fulfillment order previews based on items
@@ -557,10 +592,11 @@
         """
         return self.post_request(path, kw, response)
 
-    @structured_objects('Address', 'Items')
-    @requires('SellerFulfillmentOrderId', 'DisplayableOrderId',
-              'ShippingSpeedCategory',    'DisplayableOrderDateTime',
-              'DestinationAddress',       'DisplayableOrderComment')
+    @structured_objects('DestinationAddress', 'Items')
+    @requires(['SellerFulfillmentOrderId', 'DisplayableOrderId',
+               'ShippingSpeedCategory',    'DisplayableOrderDateTime',
+               'DestinationAddress',       'DisplayableOrderComment',
+               'Items'])
     @api_action('Outbound', 30, 0.5)
     def create_fulfillment_order(self, path, response, **kw):
         """Requests that Amazon ship items from the seller's inventory
@@ -568,7 +604,7 @@
         """
         return self.post_request(path, kw, response)
 
-    @requires('SellerFulfillmentOrderId')
+    @requires(['SellerFulfillmentOrderId'])
     @api_action('Outbound', 30, 0.5)
     def get_fulfillment_order(self, path, response, **kw):
         """Returns a fulfillment order based on a specified
diff --git a/boto/mws/response.py b/boto/mws/response.py
index c95aadb..06740b5 100644
--- a/boto/mws/response.py
+++ b/boto/mws/response.py
@@ -179,6 +179,8 @@
         name = self.__class__.__name__
         if name == 'JITResponse':
             name = '^{0}^'.format(self._name or '')
+        elif name == 'MWSResponse':
+            name = '^{0}^'.format(self._name or name)
         return '{0}{1!r}({2})'.format(
             name, self.copy(), ', '.join(map(render, attrs)))
 
@@ -212,6 +214,13 @@
 class Response(ResponseElement):
     ResponseMetadata = Element()
 
+    @strip_namespace
+    def startElement(self, name, attrs, connection):
+        if name == self._name:
+            self.update(attrs)
+        else:
+            return ResponseElement.startElement(self, name, attrs, connection)
+
     @property
     def _result(self):
         return getattr(self, self._action + 'Result', None)
@@ -266,7 +275,7 @@
 
 
 class GetReportRequestListResult(RequestReportResult):
-    ReportRequestInfo = Element()
+    ReportRequestInfo = ElementList()
 
 
 class GetReportRequestListByNextTokenResult(GetReportRequestListResult):
@@ -278,7 +287,7 @@
 
 
 class GetReportListResult(ResponseElement):
-    ReportInfo = Element()
+    ReportInfo = ElementList()
 
 
 class GetReportListByNextTokenResult(GetReportListResult):
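
With ReportInfo and ReportRequestInfo switched from Element() to ElementList(), a listing that contains a single entry now comes back as a one-element list rather than a bare element, so callers can always iterate (sketch; `response` is a hypothetical GetReportList response object):

    # Works the same whether the listing holds zero, one, or many reports.
    for info in response._result.ReportInfo:
        print('%s  %s' % (info.ReportId, info.ReportType))
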
diff --git a/boto/opsworks/__init__.py b/boto/opsworks/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/boto/opsworks/__init__.py
diff --git a/boto/opsworks/exceptions.py b/boto/opsworks/exceptions.py
new file mode 100644
index 0000000..da23e48
--- /dev/null
+++ b/boto/opsworks/exceptions.py
@@ -0,0 +1,30 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.exception import JSONResponseError
+
+
+class ResourceNotFoundException(JSONResponseError):
+    pass
+
+
+class ValidationException(JSONResponseError):
+    pass
diff --git a/boto/opsworks/layer1.py b/boto/opsworks/layer1.py
new file mode 100644
index 0000000..2e8ae43
--- /dev/null
+++ b/boto/opsworks/layer1.py
@@ -0,0 +1,1549 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+import json
+import boto
+from boto.connection import AWSQueryConnection
+from boto.regioninfo import RegionInfo
+from boto.exception import JSONResponseError
+from boto.opsworks import exceptions
+
+
+class OpsWorksConnection(AWSQueryConnection):
+    """
+    AWS OpsWorks
+    """
+    APIVersion = "2013-02-18"
+    DefaultRegionName = "us-east-1"
+    DefaultRegionEndpoint = "opsworks.us-east-1.amazonaws.com"
+    ServiceName = "OpsWorks"
+    TargetPrefix = "OpsWorks_20130218"
+    ResponseError = JSONResponseError
+
+    _faults = {
+        "ResourceNotFoundException": exceptions.ResourceNotFoundException,
+        "ValidationException": exceptions.ValidationException,
+    }
+
+
+    def __init__(self, **kwargs):
+        region = kwargs.get('region')
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint)
+        kwargs['host'] = region.endpoint
+        AWSQueryConnection.__init__(self, **kwargs)
+        self.region = region
+
+    def _required_auth_capability(self):
+        return ['hmac-v4']
+
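
The constructor defaults to the us-east-1 endpoint and signs requests with SigV4 ('hmac-v4'). A minimal construction sketch with placeholder credentials; an explicit RegionInfo can be passed via the region keyword (the alternate endpoint below is hypothetical):

    from boto.regioninfo import RegionInfo
    from boto.opsworks.layer1 import OpsWorksConnection

    # Default endpoint (us-east-1); credentials are placeholders.
    conn = OpsWorksConnection(aws_access_key_id='...',
                              aws_secret_access_key='...')

    # Or target another endpoint explicitly (hypothetical host name).
    region = RegionInfo(name='us-west-2',
                        endpoint='opsworks.us-west-2.amazonaws.com')
    conn = OpsWorksConnection(region=region,
                              aws_access_key_id='...',
                              aws_secret_access_key='...')
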
+    def attach_elastic_load_balancer(self, elastic_load_balancer_name,
+                                     layer_id):
+        """
+
+
+        :type elastic_load_balancer_name: string
+        :param elastic_load_balancer_name:
+
+        :type layer_id: string
+        :param layer_id:
+
+        """
+        params = {
+            'ElasticLoadBalancerName': elastic_load_balancer_name,
+            'LayerId': layer_id,
+        }
+        return self.make_request(action='AttachElasticLoadBalancer',
+                                 body=json.dumps(params))
+
+    def clone_stack(self, source_stack_id, service_role_arn, name=None,
+                    region=None, attributes=None,
+                    default_instance_profile_arn=None, default_os=None,
+                    hostname_theme=None, default_availability_zone=None,
+                    custom_json=None, use_custom_cookbooks=None,
+                    custom_cookbooks_source=None, default_ssh_key_name=None,
+                    clone_permissions=None, clone_app_ids=None,
+                    default_root_device_type=None):
+        """
+        Creates a clone of a specified stack.
+
+        :type source_stack_id: string
+        :param source_stack_id: The source stack ID.
+
+        :type name: string
+        :param name: The cloned stack name.
+
+        :type region: string
+        :param region: The cloned stack AWS region, such as "us-east-1". For
+            more information about AWS regions, see `Regions and Endpoints`_.
+
+        :type attributes: map
+        :param attributes: A list of stack attributes and values as key/value
+            pairs to be added to the cloned stack.
+
+        :type service_role_arn: string
+        :param service_role_arn: The stack AWS Identity and Access Management
+            (IAM) role, which allows OpsWorks to work with AWS resources on
+            your behalf. You must set this parameter to the Amazon Resource
+            Name (ARN) for an existing IAM role. If you create a stack by using
+            the OpsWorks console, it creates the role for you. You can obtain
+            an existing stack's IAM ARN programmatically by calling
+            DescribePermissions. For more information about IAM ARNs, see
+            `Using Identifiers`_.
+
+        :type default_instance_profile_arn: string
+        :param default_instance_profile_arn: The ARN of an IAM profile that is
+            the default profile for all of the stack's EC2 instances. For more
+            information about IAM ARNs, see `Using Identifiers`_.
+
+        :type default_os: string
+        :param default_os: The cloned stack default operating system, which
+            must be either "Amazon Linux" or "Ubuntu 12.04 LTS".
+
+        :type hostname_theme: string
+        :param hostname_theme: The stack's host name theme, with spaces
+            replaced by underscores. The theme is used to generate hostnames
+            for the stack's instances. By default, `HostnameTheme` is set to
+            Layer_Dependent, which creates hostnames by appending integers to
+            the layer's shortname. The other themes are:
+
+        + Baked_Goods
+        + Clouds
+        + European_Cities
+        + Fruits
+        + Greek_Deities
+        + Legendary_Creatures_from_Japan
+        + Planets_and_Moons
+        + Roman_Deities
+        + Scottish_Islands
+        + US_Cities
+        + Wild_Cats
+
+
+        To obtain a generated hostname, call `GetHostNameSuggestion`, which
+            returns a hostname based on the current theme.
+
+        :type default_availability_zone: string
+        :param default_availability_zone: The cloned stack's Availability Zone.
+            For more information, see `Regions and Endpoints`_.
+
+        :type custom_json: string
+        :param custom_json:
+        A string that contains user-defined, custom JSON. It is used to
+            override the corresponding default stack configuration JSON values.
+            The string should be in the following format and must escape
+            characters such as '"':
+        `"{\"key1\": \"value1\", \"key2\": \"value2\",...}"`
+
+        :type use_custom_cookbooks: boolean
+        :param use_custom_cookbooks: Whether to use custom cookbooks.
+
+        :type custom_cookbooks_source: dict
+        :param custom_cookbooks_source:
+
+        :type default_ssh_key_name: string
+        :param default_ssh_key_name: A default SSH key for the stack instances.
+            You can override this value when you create or update an instance.
+
+        :type clone_permissions: boolean
+        :param clone_permissions: Whether to clone the source stack's
+            permissions.
+
+        :type clone_app_ids: list
+        :param clone_app_ids: A list of source stack app IDs to be included in
+            the cloned stack.
+
+        :type default_root_device_type: string
+        :param default_root_device_type:
+
+        """
+        params = {
+            'SourceStackId': source_stack_id,
+            'ServiceRoleArn': service_role_arn,
+        }
+        if name is not None:
+            params['Name'] = name
+        if region is not None:
+            params['Region'] = region
+        if attributes is not None:
+            params['Attributes'] = attributes
+        if default_instance_profile_arn is not None:
+            params['DefaultInstanceProfileArn'] = default_instance_profile_arn
+        if default_os is not None:
+            params['DefaultOs'] = default_os
+        if hostname_theme is not None:
+            params['HostnameTheme'] = hostname_theme
+        if default_availability_zone is not None:
+            params['DefaultAvailabilityZone'] = default_availability_zone
+        if custom_json is not None:
+            params['CustomJson'] = custom_json
+        if use_custom_cookbooks is not None:
+            params['UseCustomCookbooks'] = use_custom_cookbooks
+        if custom_cookbooks_source is not None:
+            params['CustomCookbooksSource'] = custom_cookbooks_source
+        if default_ssh_key_name is not None:
+            params['DefaultSshKeyName'] = default_ssh_key_name
+        if clone_permissions is not None:
+            params['ClonePermissions'] = clone_permissions
+        if clone_app_ids is not None:
+            params['CloneAppIds'] = clone_app_ids
+        if default_root_device_type is not None:
+            params['DefaultRootDeviceType'] = default_root_device_type
+        return self.make_request(action='CloneStack',
+                                 body=json.dumps(params))
+
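
Continuing with the `conn` from the construction sketch above, a short usage sketch for clone_stack (the stack ID and role ARN are hypothetical placeholders):

    # Clone an existing stack, keeping its permissions and giving the
    # copy a new name.
    result = conn.clone_stack(
        source_stack_id='SOURCE-STACK-ID',
        service_role_arn='arn:aws:iam::111122223333:role/aws-opsworks-service-role',
        name='my-cloned-stack',
        clone_permissions=True)
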
+    def create_app(self, stack_id, name, type, shortname=None,
+                   description=None, app_source=None, domains=None,
+                   enable_ssl=None, ssl_configuration=None, attributes=None):
+        """
+        Creates an app for a specified stack.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        :type shortname: string
+        :param shortname:
+
+        :type name: string
+        :param name: The app name.
+
+        :type description: string
+        :param description: A description of the app.
+
+        :type type: string
+        :param type: The app type. Each supported type is associated with a
+            particular layer. For example, PHP applications are associated with
+            a PHP layer. OpsWorks deploys an application to those instances
+            that are members of the corresponding layer.
+
+        :type app_source: dict
+        :param app_source: A `Source` object that specifies the app repository.
+
+        :type domains: list
+        :param domains: The app virtual host settings, with multiple domains
+            separated by commas. For example: `'www.mysite.com, mysite.com'`
+
+        :type enable_ssl: boolean
+        :param enable_ssl: Whether to enable SSL for the app.
+
+        :type ssl_configuration: dict
+        :param ssl_configuration: An `SslConfiguration` object with the SSL
+            configuration.
+
+        :type attributes: map
+        :param attributes: One or more user-defined key/value pairs to be added
+            to the stack attributes bag.
+
+        """
+        params = {'StackId': stack_id, 'Name': name, 'Type': type, }
+        if shortname is not None:
+            params['Shortname'] = shortname
+        if description is not None:
+            params['Description'] = description
+        if app_source is not None:
+            params['AppSource'] = app_source
+        if domains is not None:
+            params['Domains'] = domains
+        if enable_ssl is not None:
+            params['EnableSsl'] = enable_ssl
+        if ssl_configuration is not None:
+            params['SslConfiguration'] = ssl_configuration
+        if attributes is not None:
+            params['Attributes'] = attributes
+        return self.make_request(action='CreateApp',
+                                 body=json.dumps(params))
+
+    def create_deployment(self, stack_id, command, app_id=None,
+                          instance_ids=None, comment=None, custom_json=None):
+        """
+        Deploys a stack or app.
+
+
+        + App deployment generates a `deploy` event, which runs the
+          associated recipes and passes them a JSON stack configuration
+          object that includes information about the app.
+        + Stack deployment runs the `deploy` recipes but does not
+          raise an event.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        :type app_id: string
+        :param app_id: The app ID, for app deployments.
+
+        :type instance_ids: list
+        :param instance_ids: The instance IDs for the deployment targets.
+
+        :type command: dict
+        :param command: A `DeploymentCommand` object that describes details of
+            the operation.
+
+        :type comment: string
+        :param comment: A user-defined comment.
+
+        :type custom_json: string
+        :param custom_json:
+        A string that contains user-defined, custom JSON. It is used to
+            override the corresponding default stack configuration JSON values.
+            The string should be in the following format and must escape
+            characters such as '"':
+        `"{\"key1\": \"value1\", \"key2\": \"value2\",...}"`
+
+        """
+        params = {'StackId': stack_id, 'Command': command, }
+        if app_id is not None:
+            params['AppId'] = app_id
+        if instance_ids is not None:
+            params['InstanceIds'] = instance_ids
+        if comment is not None:
+            params['Comment'] = comment
+        if custom_json is not None:
+            params['CustomJson'] = custom_json
+        return self.make_request(action='CreateDeployment',
+                                 body=json.dumps(params))
+
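
For example, deploying an app to a stack comes down to passing a DeploymentCommand dict (IDs below are placeholders; 'deploy' is the standard OpsWorks deployment command name):

    result = conn.create_deployment(
        stack_id='STACK-ID',
        command={'Name': 'deploy'},   # a DeploymentCommand object
        app_id='APP-ID',
        comment='roll out the latest revision')
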
+    def create_instance(self, stack_id, layer_ids, instance_type,
+                        auto_scaling_type=None, hostname=None, os=None,
+                        ssh_key_name=None, availability_zone=None,
+                        architecture=None, root_device_type=None):
+        """
+        Creates an instance in a specified stack.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        :type layer_ids: list
+        :param layer_ids: An array that contains the instance layer IDs.
+
+        :type instance_type: string
+        :param instance_type:
+        The instance type, which can be one of the following:
+
+
+        + m1.small
+        + m1.medium
+        + m1.large
+        + m1.xlarge
+        + c1.medium
+        + c1.xlarge
+        + m2.xlarge
+        + m2.2xlarge
+        + m2.4xlarge
+
+        :type auto_scaling_type: string
+        :param auto_scaling_type:
+        The instance auto scaling type, which has three possible values:
+
+
+        + **AlwaysRunning**: A 24x7 instance, which is not affected by auto
+              scaling.
+        + **TimeBasedAutoScaling**: A time-based auto scaling instance, which
+              is started and stopped based on a specified schedule. To specify
+              the schedule, call SetTimeBasedAutoScaling.
+        + **LoadBasedAutoScaling**: A load-based auto scaling instance, which
+              is started and stopped based on load metrics. To use load-based
+              auto scaling, you must enable it for the instance layer and
+              configure the thresholds by calling SetLoadBasedAutoScaling.
+
+        :type hostname: string
+        :param hostname: The instance host name.
+
+        :type os: string
+        :param os: The instance operating system.
+
+        :type ssh_key_name: string
+        :param ssh_key_name: The instance SSH key name.
+
+        :type availability_zone: string
+        :param availability_zone: The instance Availability Zone. For more
+            information, see `Regions and Endpoints`_.
+
+        :type architecture: string
+        :param architecture:
+
+        :type root_device_type: string
+        :param root_device_type:
+
+        """
+        params = {
+            'StackId': stack_id,
+            'LayerIds': layer_ids,
+            'InstanceType': instance_type,
+        }
+        if auto_scaling_type is not None:
+            params['AutoScalingType'] = auto_scaling_type
+        if hostname is not None:
+            params['Hostname'] = hostname
+        if os is not None:
+            params['Os'] = os
+        if ssh_key_name is not None:
+            params['SshKeyName'] = ssh_key_name
+        if availability_zone is not None:
+            params['AvailabilityZone'] = availability_zone
+        if architecture is not None:
+            params['Architecture'] = architecture
+        if root_device_type is not None:
+            params['RootDeviceType'] = root_device_type
+        return self.make_request(action='CreateInstance',
+                                 body=json.dumps(params))
+
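
A sketch of creating a 24x7 instance in one layer; the IDs and Availability Zone are placeholders, and the type values follow the docstring above:

    result = conn.create_instance(
        stack_id='STACK-ID',
        layer_ids=['LAYER-ID'],
        instance_type='m1.small',
        auto_scaling_type='AlwaysRunning',
        availability_zone='us-east-1a')
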
+    def create_layer(self, stack_id, type, name, shortname, attributes=None,
+                     custom_instance_profile_arn=None,
+                     custom_security_group_ids=None, packages=None,
+                     volume_configurations=None, enable_auto_healing=None,
+                     auto_assign_elastic_ips=None, custom_recipes=None):
+        """
+        Creates a layer.
+
+        :type stack_id: string
+        :param stack_id: The layer stack ID.
+
+        :type type: string
+        :param type: The layer type. A stack cannot have more than one layer of
+            the same type.
+
+        :type name: string
+        :param name: The layer name, which is used by the console.
+
+        :type shortname: string
+        :param shortname: The layer short name, which is used internally by
+            OpsWorks and by Chef recipes. The shortname is also used as the
+            name for the directory where your app files are installed. It can
+            have a maximum of 200 characters, which are limited to the
+            alphanumeric characters, '-', '_', and '.'.
+
+        :type attributes: map
+        :param attributes: One or more user-defined key/value pairs to be added
+            to the stack attributes bag.
+
+        :type custom_instance_profile_arn: string
+        :param custom_instance_profile_arn: The ARN of an IAM profile to be
+            used for the layer's EC2 instances. For more information about
+            IAM ARNs, see `Using Identifiers`_.
+
+        :type custom_security_group_ids: list
+        :param custom_security_group_ids: An array containing the layer custom
+            security group IDs.
+
+        :type packages: list
+        :param packages: An array of `Package` objects that describe the layer
+            packages.
+
+        :type volume_configurations: list
+        :param volume_configurations: A `VolumeConfigurations` object that
+            describes the layer Amazon EBS volumes.
+
+        :type enable_auto_healing: boolean
+        :param enable_auto_healing: Whether to enable auto healing for the
+            layer.
+
+        :type auto_assign_elastic_ips: boolean
+        :param auto_assign_elastic_ips: Whether to automatically assign an
+            `Elastic IP address`_ to the layer.
+
+        :type custom_recipes: dict
+        :param custom_recipes: A `LayerCustomRecipes` object that specifies the
+            layer custom recipes.
+
+        """
+        params = {
+            'StackId': stack_id,
+            'Type': type,
+            'Name': name,
+            'Shortname': shortname,
+        }
+        if attributes is not None:
+            params['Attributes'] = attributes
+        if custom_instance_profile_arn is not None:
+            params['CustomInstanceProfileArn'] = custom_instance_profile_arn
+        if custom_security_group_ids is not None:
+            params['CustomSecurityGroupIds'] = custom_security_group_ids
+        if packages is not None:
+            params['Packages'] = packages
+        if volume_configurations is not None:
+            params['VolumeConfigurations'] = volume_configurations
+        if enable_auto_healing is not None:
+            params['EnableAutoHealing'] = enable_auto_healing
+        if auto_assign_elastic_ips is not None:
+            params['AutoAssignElasticIps'] = auto_assign_elastic_ips
+        if custom_recipes is not None:
+            params['CustomRecipes'] = custom_recipes
+        return self.make_request(action='CreateLayer',
+                                 body=json.dumps(params))
+
+    def create_stack(self, name, region, service_role_arn,
+                     default_instance_profile_arn, attributes=None,
+                     default_os=None, hostname_theme=None,
+                     default_availability_zone=None, custom_json=None,
+                     use_custom_cookbooks=None, custom_cookbooks_source=None,
+                     default_ssh_key_name=None,
+                     default_root_device_type=None):
+        """
+        Creates a new stack.
+
+        :type name: string
+        :param name: The stack name.
+
+        :type region: string
+        :param region: The stack AWS region, such as "us-east-1". For more
+            information about Amazon regions, see `Regions and Endpoints`_.
+
+        :type attributes: map
+        :param attributes: One or more user-defined key/value pairs to be added
+            to the stack attributes bag.
+
+        :type service_role_arn: string
+        :param service_role_arn: The stack AWS Identity and Access Management
+            (IAM) role, which allows OpsWorks to work with AWS resources on
+            your behalf. You must set this parameter to the Amazon Resource
+            Name (ARN) for an existing IAM role. For more information about IAM
+            ARNs, see `Using Identifiers`_.
+
+        :type default_instance_profile_arn: string
+        :param default_instance_profile_arn: The ARN of an IAM profile that is
+            the default profile for all of the stack's EC2 instances. For more
+            information about IAM ARNs, see `Using Identifiers`_.
+
+        :type default_os: string
+        :param default_os: The stack's default operating system, which
+            must be either "Amazon Linux" or "Ubuntu 12.04 LTS".
+
+        :type hostname_theme: string
+        :param hostname_theme: The stack's host name theme, with spaces
+            replaced by underscores. The theme is used to generate hostnames
+            for the stack's instances. By default, `HostnameTheme` is set to
+            Layer_Dependent, which creates hostnames by appending integers to
+            the layer's shortname. The other themes are:
+
+        + Baked_Goods
+        + Clouds
+        + European_Cities
+        + Fruits
+        + Greek_Deities
+        + Legendary_Creatures_from_Japan
+        + Planets_and_Moons
+        + Roman_Deities
+        + Scottish_Islands
+        + US_Cities
+        + Wild_Cats
+
+
+        To obtain a generated hostname, call `GetHostNameSuggestion`, which
+            returns a hostname based on the current theme.
+
+        :type default_availability_zone: string
+        :param default_availability_zone: The stack default Availability Zone.
+            For more information, see `Regions and Endpoints`_.
+
+        :type custom_json: string
+        :param custom_json:
+        A string that contains user-defined, custom JSON. It is used to
+            override the corresponding default stack configuration JSON values.
+            The string should be in the following format and must escape
+            characters such as '"':
+        `"{\"key1\": \"value1\", \"key2\": \"value2\",...}"`
+
+        :type use_custom_cookbooks: boolean
+        :param use_custom_cookbooks: Whether the stack uses custom cookbooks.
+
+        :type custom_cookbooks_source: dict
+        :param custom_cookbooks_source:
+
+        :type default_ssh_key_name: string
+        :param default_ssh_key_name: A default SSH key for the stack instances.
+            You can override this value when you create or update an instance.
+
+        :type default_root_device_type: string
+        :param default_root_device_type:
+
+        """
+        params = {
+            'Name': name,
+            'Region': region,
+            'ServiceRoleArn': service_role_arn,
+            'DefaultInstanceProfileArn': default_instance_profile_arn,
+        }
+        if attributes is not None:
+            params['Attributes'] = attributes
+        if default_os is not None:
+            params['DefaultOs'] = default_os
+        if hostname_theme is not None:
+            params['HostnameTheme'] = hostname_theme
+        if default_availability_zone is not None:
+            params['DefaultAvailabilityZone'] = default_availability_zone
+        if custom_json is not None:
+            params['CustomJson'] = custom_json
+        if use_custom_cookbooks is not None:
+            params['UseCustomCookbooks'] = use_custom_cookbooks
+        if custom_cookbooks_source is not None:
+            params['CustomCookbooksSource'] = custom_cookbooks_source
+        if default_ssh_key_name is not None:
+            params['DefaultSshKeyName'] = default_ssh_key_name
+        if default_root_device_type is not None:
+            params['DefaultRootDeviceType'] = default_root_device_type
+        return self.make_request(action='CreateStack',
+                                 body=json.dumps(params))
+
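
Putting the required arguments together, creating a stack might look like the following (ARNs are placeholder values; the HostnameTheme value is taken from the list above):

    result = conn.create_stack(
        name='my-stack',
        region='us-east-1',
        service_role_arn='arn:aws:iam::111122223333:role/aws-opsworks-service-role',
        default_instance_profile_arn='arn:aws:iam::111122223333:'
                                     'instance-profile/aws-opsworks-ec2-role',
        default_os='Amazon Linux',
        hostname_theme='Clouds')
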
+    def create_user_profile(self, iam_user_arn, ssh_username=None,
+                            ssh_public_key=None):
+        """
+        Creates a new user.
+
+        :type iam_user_arn: string
+        :param iam_user_arn: The user's IAM ARN.
+
+        :type ssh_username: string
+        :param ssh_username: The user's SSH user name.
+
+        :type ssh_public_key: string
+        :param ssh_public_key: The user's public SSH key.
+
+        """
+        params = {'IamUserArn': iam_user_arn, }
+        if ssh_username is not None:
+            params['SshUsername'] = ssh_username
+        if ssh_public_key is not None:
+            params['SshPublicKey'] = ssh_public_key
+        return self.make_request(action='CreateUserProfile',
+                                 body=json.dumps(params))
+
+    def delete_app(self, app_id):
+        """
+        Deletes a specified app.
+
+        :type app_id: string
+        :param app_id: The app ID.
+
+        """
+        params = {'AppId': app_id, }
+        return self.make_request(action='DeleteApp',
+                                 body=json.dumps(params))
+
+    def delete_instance(self, instance_id, delete_elastic_ip=None,
+                        delete_volumes=None):
+        """
+        Deletes a specified instance.
+
+        :type instance_id: string
+        :param instance_id: The instance ID.
+
+        :type delete_elastic_ip: boolean
+        :param delete_elastic_ip: Whether to delete the instance Elastic IP
+            address.
+
+        :type delete_volumes: boolean
+        :param delete_volumes: Whether to delete the instance Amazon EBS
+            volumes.
+
+        """
+        params = {'InstanceId': instance_id, }
+        if delete_elastic_ip is not None:
+            params['DeleteElasticIp'] = delete_elastic_ip
+        if delete_volumes is not None:
+            params['DeleteVolumes'] = delete_volumes
+        return self.make_request(action='DeleteInstance',
+                                 body=json.dumps(params))
+
+    def delete_layer(self, layer_id):
+        """
+        Deletes a specified layer. You must first remove all
+        associated instances.
+
+        :type layer_id: string
+        :param layer_id: The layer ID.
+
+        """
+        params = {'LayerId': layer_id, }
+        return self.make_request(action='DeleteLayer',
+                                 body=json.dumps(params))
+
+    def delete_stack(self, stack_id):
+        """
+        Deletes a specified stack. You must first delete all instances
+        and layers.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        """
+        params = {'StackId': stack_id, }
+        return self.make_request(action='DeleteStack',
+                                 body=json.dumps(params))
+
+    def delete_user_profile(self, iam_user_arn):
+        """
+        Deletes a user.
+
+        :type iam_user_arn: string
+        :param iam_user_arn: The user's IAM ARN.
+
+        """
+        params = {'IamUserArn': iam_user_arn, }
+        return self.make_request(action='DeleteUserProfile',
+                                 body=json.dumps(params))
+
+    def describe_apps(self, stack_id=None, app_ids=None):
+        """
+        Requests a description of a specified set of apps.
+
+        :type stack_id: string
+        :param stack_id:
+        The app stack ID.
+
+        :type app_ids: list
+        :param app_ids: An array of app IDs for the apps to be described.
+
+        """
+        params = {}
+        if stack_id is not None:
+            params['StackId'] = stack_id
+        if app_ids is not None:
+            params['AppIds'] = app_ids
+        return self.make_request(action='DescribeApps',
+                                 body=json.dumps(params))
+
+    def describe_commands(self, deployment_id=None, instance_id=None,
+                          command_ids=None):
+        """
+        Describes the results of specified commands.
+
+        :type deployment_id: string
+        :param deployment_id: The deployment ID.
+
+        :type instance_id: string
+        :param instance_id: The instance ID.
+
+        :type command_ids: list
+        :param command_ids: An array of IDs for the commands to be described.
+
+        """
+        params = {}
+        if deployment_id is not None:
+            params['DeploymentId'] = deployment_id
+        if instance_id is not None:
+            params['InstanceId'] = instance_id
+        if command_ids is not None:
+            params['CommandIds'] = command_ids
+        return self.make_request(action='DescribeCommands',
+                                 body=json.dumps(params))
+
+    def describe_deployments(self, stack_id=None, app_id=None,
+                             deployment_ids=None):
+        """
+        Requests a description of a specified set of deployments.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        :type app_id: string
+        :param app_id: The app ID.
+
+        :type deployment_ids: list
+        :param deployment_ids: An array of deployment IDs to be described.
+
+        """
+        params = {}
+        if stack_id is not None:
+            params['StackId'] = stack_id
+        if app_id is not None:
+            params['AppId'] = app_id
+        if deployment_ids is not None:
+            params['DeploymentIds'] = deployment_ids
+        return self.make_request(action='DescribeDeployments',
+                                 body=json.dumps(params))
+
+    def describe_elastic_ips(self, instance_id=None, ips=None):
+        """
+        Describes an instance's `Elastic IP addresses`_.
+
+        :type instance_id: string
+        :param instance_id: The instance ID.
+
+        :type ips: list
+        :param ips: An array of Elastic IP addresses to be described.
+
+        """
+        params = {}
+        if instance_id is not None:
+            params['InstanceId'] = instance_id
+        if ips is not None:
+            params['Ips'] = ips
+        return self.make_request(action='DescribeElasticIps',
+                                 body=json.dumps(params))
+
+    def describe_elastic_load_balancers(self, stack_id=None, layer_ids=None):
+        """
+
+
+        :type stack_id: string
+        :param stack_id:
+
+        :type layer_ids: list
+        :param layer_ids:
+
+        """
+        params = {}
+        if stack_id is not None:
+            params['StackId'] = stack_id
+        if layer_ids is not None:
+            params['LayerIds'] = layer_ids
+        return self.make_request(action='DescribeElasticLoadBalancers',
+                                 body=json.dumps(params))
+
+    def describe_instances(self, stack_id=None, layer_id=None,
+                           instance_ids=None):
+        """
+        Requests a description of a set of instances associated with a
+        specified ID or IDs.
+
+        :type stack_id: string
+        :param stack_id: A stack ID.
+
+        :type layer_id: string
+        :param layer_id: A layer ID.
+
+        :type instance_ids: list
+        :param instance_ids: An array of instance IDs to be described.
+
+        """
+        params = {}
+        if stack_id is not None:
+            params['StackId'] = stack_id
+        if layer_id is not None:
+            params['LayerId'] = layer_id
+        if instance_ids is not None:
+            params['InstanceIds'] = instance_ids
+        return self.make_request(action='DescribeInstances',
+                                 body=json.dumps(params))
+
+    def describe_layers(self, stack_id, layer_ids=None):
+        """
+        Requests a description of one or more layers in a specified
+        stack.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        :type layer_ids: list
+        :param layer_ids: An array of layer IDs that specify the layers to be
+            described.
+
+        """
+        params = {'StackId': stack_id, }
+        if layer_ids is not None:
+            params['LayerIds'] = layer_ids
+        return self.make_request(action='DescribeLayers',
+                                 body=json.dumps(params))
+
+    def describe_load_based_auto_scaling(self, layer_ids):
+        """
+        Describes load-based auto scaling configurations for specified
+        layers.
+
+        :type layer_ids: list
+        :param layer_ids: An array of layer IDs.
+
+        """
+        params = {'LayerIds': layer_ids, }
+        return self.make_request(action='DescribeLoadBasedAutoScaling',
+                                 body=json.dumps(params))
+
+    def describe_permissions(self, iam_user_arn, stack_id):
+        """
+        Describes the permissions for a specified stack. You must
+        specify at least one of the two request values.
+
+        :type iam_user_arn: string
+        :param iam_user_arn: The user's IAM ARN. For more information about IAM
+            ARNs, see `Using Identifiers`_.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        """
+        params = {'IamUserArn': iam_user_arn, 'StackId': stack_id, }
+        return self.make_request(action='DescribePermissions',
+                                 body=json.dumps(params))
+
+    def describe_raid_arrays(self, instance_id=None, raid_array_ids=None):
+        """
+        Describe an instance's RAID arrays.
+
+        :type instance_id: string
+        :param instance_id: The instance ID.
+
+        :type raid_array_ids: list
+        :param raid_array_ids: An array of RAID array IDs to be described.
+
+        """
+        params = {}
+        if instance_id is not None:
+            params['InstanceId'] = instance_id
+        if raid_array_ids is not None:
+            params['RaidArrayIds'] = raid_array_ids
+        return self.make_request(action='DescribeRaidArrays',
+                                 body=json.dumps(params))
+
+    def describe_service_errors(self, stack_id=None, instance_id=None,
+                                service_error_ids=None):
+        """
+        Describes OpsWorks service errors.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        :type instance_id: string
+        :param instance_id: The instance ID.
+
+        :type service_error_ids: list
+        :param service_error_ids: An array of service error IDs to be
+            described.
+
+        """
+        params = {}
+        if stack_id is not None:
+            params['StackId'] = stack_id
+        if instance_id is not None:
+            params['InstanceId'] = instance_id
+        if service_error_ids is not None:
+            params['ServiceErrorIds'] = service_error_ids
+        return self.make_request(action='DescribeServiceErrors',
+                                 body=json.dumps(params))
+
+    def describe_stacks(self, stack_ids=None):
+        """
+        Requests a description of one or more stacks.
+
+        :type stack_ids: list
+        :param stack_ids: An array of stack IDs that specify the stacks to be
+            described.
+
+        """
+        params = {}
+        if stack_ids is not None:
+            params['StackIds'] = stack_ids
+        return self.make_request(action='DescribeStacks',
+                                 body=json.dumps(params))
+
+    def describe_time_based_auto_scaling(self, instance_ids):
+        """
+        Describes time-based auto scaling configurations for specified
+        instances.
+
+        :type instance_ids: list
+        :param instance_ids: An array of instance IDs.
+
+        """
+        params = {'InstanceIds': instance_ids, }
+        return self.make_request(action='DescribeTimeBasedAutoScaling',
+                                 body=json.dumps(params))
+
+    def describe_user_profiles(self, iam_user_arns):
+        """
+        Describe specified users.
+
+        :type iam_user_arns: list
+        :param iam_user_arns: An array of IAM user ARNs that identify the users
+            to be described.
+
+        """
+        params = {'IamUserArns': iam_user_arns, }
+        return self.make_request(action='DescribeUserProfiles',
+                                 body=json.dumps(params))
+
+    def describe_volumes(self, instance_id=None, raid_array_id=None,
+                         volume_ids=None):
+        """
+        Describes an instance's Amazon EBS volumes.
+
+        :type instance_id: string
+        :param instance_id: The instance ID.
+
+        :type raid_array_id: string
+        :param raid_array_id: The RAID array ID.
+
+        :type volume_ids: list
+        :param volume_ids: An array of volume IDs to be described.
+
+        """
+        params = {}
+        if instance_id is not None:
+            params['InstanceId'] = instance_id
+        if raid_array_id is not None:
+            params['RaidArrayId'] = raid_array_id
+        if volume_ids is not None:
+            params['VolumeIds'] = volume_ids
+        return self.make_request(action='DescribeVolumes',
+                                 body=json.dumps(params))
+
+    def detach_elastic_load_balancer(self, elastic_load_balancer_name,
+                                     layer_id):
+        """
+
+
+        :type elastic_load_balancer_name: string
+        :param elastic_load_balancer_name:
+
+        :type layer_id: string
+        :param layer_id:
+
+        """
+        params = {
+            'ElasticLoadBalancerName': elastic_load_balancer_name,
+            'LayerId': layer_id,
+        }
+        return self.make_request(action='DetachElasticLoadBalancer',
+                                 body=json.dumps(params))
+
+    def get_hostname_suggestion(self, layer_id):
+        """
+        Gets a generated hostname for the specified layer, based on
+        the current hostname theme.
+
+        :type layer_id: string
+        :param layer_id: The layer ID.
+
+        """
+        params = {'LayerId': layer_id, }
+        return self.make_request(action='GetHostnameSuggestion',
+                                 body=json.dumps(params))
+
+    def reboot_instance(self, instance_id):
+        """
+        Reboots a specified instance.
+
+        :type instance_id: string
+        :param instance_id: The instance ID.
+
+        """
+        params = {'InstanceId': instance_id, }
+        return self.make_request(action='RebootInstance',
+                                 body=json.dumps(params))
+
+    def set_load_based_auto_scaling(self, layer_id, enable=None,
+                                    up_scaling=None, down_scaling=None):
+        """
+        Specify the load-based auto scaling configuration for a
+        specified layer.
+
+        To use load-based auto scaling, you must create a set of load-
+        based auto scaling instances. Load-based auto scaling operates
+        only on the instances from that set, so you must ensure that
+        you have created enough instances to handle the maximum
+        anticipated load.
+
+        :type layer_id: string
+        :param layer_id: The layer ID.
+
+        :type enable: boolean
+        :param enable: Enables load-based auto scaling for the layer.
+
+        :type up_scaling: dict
+        :param up_scaling: An `AutoScalingThresholds` object with the upscaling
+            threshold configuration. If the load exceeds these thresholds for a
+            specified amount of time, OpsWorks starts a specified number of
+            instances.
+
+        :type down_scaling: dict
+        :param down_scaling: An `AutoScalingThresholds` object with the
+            downscaling threshold configuration. If the load falls below these
+            thresholds for a specified amount of time, OpsWorks stops a
+            specified number of instances.
+
+        """
+        params = {'LayerId': layer_id, }
+        if enable is not None:
+            params['Enable'] = enable
+        if up_scaling is not None:
+            params['UpScaling'] = up_scaling
+        if down_scaling is not None:
+            params['DownScaling'] = down_scaling
+        return self.make_request(action='SetLoadBasedAutoScaling',
+                                 body=json.dumps(params))
+
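A hedged illustration of the threshold dicts that set_load_based_auto_scaling() expects (the connection class path and the AutoScalingThresholds key names below are assumptions, not taken from this patch)::

    import boto.opsworks.layer1

    conn = boto.opsworks.layer1.OpsWorksConnection()
    # Hypothetical policy: add 2 instances when CPU stays above 80% for 5 minutes,
    # remove 1 instance when it stays below 30% for 5 minutes.
    up = {'InstanceCount': 2, 'ThresholdsWaitTime': 5, 'CpuThreshold': 80.0}
    down = {'InstanceCount': 1, 'ThresholdsWaitTime': 5, 'CpuThreshold': 30.0}
    conn.set_load_based_auto_scaling('LAYER-ID-HERE', enable=True,
                                     up_scaling=up, down_scaling=down)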
+    def set_permission(self, stack_id, iam_user_arn, allow_ssh=None,
+                       allow_sudo=None):
+        """
+        Specifies a stack's permissions.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        :type iam_user_arn: string
+        :param iam_user_arn: The user's IAM ARN.
+
+        :type allow_ssh: boolean
+        :param allow_ssh: The user is allowed to use SSH to communicate with
+            the instance.
+
+        :type allow_sudo: boolean
+        :param allow_sudo: The user is allowed to use **sudo** to elevate
+            privileges.
+
+        """
+        params = {'StackId': stack_id, 'IamUserArn': iam_user_arn, }
+        if allow_ssh is not None:
+            params['AllowSsh'] = allow_ssh
+        if allow_sudo is not None:
+            params['AllowSudo'] = allow_sudo
+        return self.make_request(action='SetPermission',
+                                 body=json.dumps(params))
+
+    def set_time_based_auto_scaling(self, instance_id,
+                                    auto_scaling_schedule=None):
+        """
+        Specify the time-based auto scaling configuration for a
+        specified instance.
+
+        :type instance_id: string
+        :param instance_id: The instance ID.
+
+        :type auto_scaling_schedule: dict
+        :param auto_scaling_schedule: An `AutoScalingSchedule` with the
+            instance schedule.
+
+        """
+        params = {'InstanceId': instance_id, }
+        if auto_scaling_schedule is not None:
+            params['AutoScalingSchedule'] = auto_scaling_schedule
+        return self.make_request(action='SetTimeBasedAutoScaling',
+                                 body=json.dumps(params))
+
+    def start_instance(self, instance_id):
+        """
+        Starts a specified instance.
+
+        :type instance_id: string
+        :param instance_id: The instance ID.
+
+        """
+        params = {'InstanceId': instance_id, }
+        return self.make_request(action='StartInstance',
+                                 body=json.dumps(params))
+
+    def start_stack(self, stack_id):
+        """
+        Starts a stack's instances.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        """
+        params = {'StackId': stack_id, }
+        return self.make_request(action='StartStack',
+                                 body=json.dumps(params))
+
+    def stop_instance(self, instance_id):
+        """
+        Stops a specified instance. When you stop a standard
+        (instance store-backed) instance, its data is lost and must be
+        reinstalled when you restart the instance. You can stop an
+        Amazon EBS-backed instance without losing data.
+
+        :type instance_id: string
+        :param instance_id: The instance ID.
+
+        """
+        params = {'InstanceId': instance_id, }
+        return self.make_request(action='StopInstance',
+                                 body=json.dumps(params))
+
+    def stop_stack(self, stack_id):
+        """
+        Stops a specified stack.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        """
+        params = {'StackId': stack_id, }
+        return self.make_request(action='StopStack',
+                                 body=json.dumps(params))
+
+    def update_app(self, app_id, name=None, description=None, type=None,
+                   app_source=None, domains=None, enable_ssl=None,
+                   ssl_configuration=None, attributes=None):
+        """
+        Updates a specified app.
+
+        :type app_id: string
+        :param app_id: The app ID.
+
+        :type name: string
+        :param name: The app name.
+
+        :type description: string
+        :param description: A description of the app.
+
+        :type type: string
+        :param type: The app type.
+
+        :type app_source: dict
+        :param app_source: A `Source` object that specifies the app repository.
+
+        :type domains: list
+        :param domains: The app's virtual host settings, with multiple domains
+            separated by commas. For example: `'www.mysite.com, mysite.com'`
+
+        :type enable_ssl: boolean
+        :param enable_ssl: Whether SSL is enabled for the app.
+
+        :type ssl_configuration: dict
+        :param ssl_configuration: An `SslConfiguration` object with the SSL
+            configuration.
+
+        :type attributes: map
+        :param attributes: One or more user-defined key/value pairs to be added
+            to the stack attributes bag.
+
+        """
+        params = {'AppId': app_id, }
+        if name is not None:
+            params['Name'] = name
+        if description is not None:
+            params['Description'] = description
+        if type is not None:
+            params['Type'] = type
+        if app_source is not None:
+            params['AppSource'] = app_source
+        if domains is not None:
+            params['Domains'] = domains
+        if enable_ssl is not None:
+            params['EnableSsl'] = enable_ssl
+        if ssl_configuration is not None:
+            params['SslConfiguration'] = ssl_configuration
+        if attributes is not None:
+            params['Attributes'] = attributes
+        return self.make_request(action='UpdateApp',
+                                 body=json.dumps(params))
+
+    def update_instance(self, instance_id, layer_ids=None,
+                        instance_type=None, auto_scaling_type=None,
+                        hostname=None, os=None, ssh_key_name=None,
+                        architecture=None):
+        """
+        Updates a specified instance.
+
+        :type instance_id: string
+        :param instance_id: The instance ID.
+
+        :type layer_ids: list
+        :param layer_ids: The instance's layer IDs.
+
+        :type instance_type: string
+        :param instance_type:
+        The instance type, which can be one of the following:
+
+
+        + m1.small
+        + m1.medium
+        + m1.large
+        + m1.xlarge
+        + c1.medium
+        + c1.xlarge
+        + m2.xlarge
+        + m2.2xlarge
+        + m2.4xlarge
+
+        :type auto_scaling_type: string
+        :param auto_scaling_type:
+        The instance's auto scaling type, which has three possible values:
+
+
+        + **AlwaysRunning**: A 24x7 instance, which is not affected by auto
+              scaling.
+        + **TimeBasedAutoScaling**: A time-based auto scaling instance, which
+              is started and stopped based on a specified schedule.
+        + **LoadBasedAutoScaling**: A load-based auto scaling instance, which
+              is started and stopped based on load metrics.
+
+        :type hostname: string
+        :param hostname: The instance host name.
+
+        :type os: string
+        :param os: The instance operating system.
+
+        :type ssh_key_name: string
+        :param ssh_key_name: The instance SSH key name.
+
+        :type architecture: string
+        :param architecture:
+
+        """
+        params = {'InstanceId': instance_id, }
+        if layer_ids is not None:
+            params['LayerIds'] = layer_ids
+        if instance_type is not None:
+            params['InstanceType'] = instance_type
+        if auto_scaling_type is not None:
+            params['AutoScalingType'] = auto_scaling_type
+        if hostname is not None:
+            params['Hostname'] = hostname
+        if os is not None:
+            params['Os'] = os
+        if ssh_key_name is not None:
+            params['SshKeyName'] = ssh_key_name
+        if architecture is not None:
+            params['Architecture'] = architecture
+        return self.make_request(action='UpdateInstance',
+                                 body=json.dumps(params))
+
+    def update_layer(self, layer_id, name=None, shortname=None,
+                     attributes=None, custom_instance_profile_arn=None,
+                     custom_security_group_ids=None, packages=None,
+                     volume_configurations=None, enable_auto_healing=None,
+                     auto_assign_elastic_ips=None, custom_recipes=None):
+        """
+        Updates a specified layer.
+
+        :type layer_id: string
+        :param layer_id: The layer ID.
+
+        :type name: string
+        :param name: The layer name, which is used by the console.
+
+        :type shortname: string
+        :param shortname: The layer short name, which is used internally by
+            OpsWorks and by Chef recipes. The shortname is also used as the name
+            for the directory where your app files are installed. It can have a
+            maximum
+            of 200 characters and must be in the following format:
+            /\A[a-z0-9\-\_\.]+\Z/.
+
+        :type attributes: map
+        :param attributes: One or more user-defined key/value pairs to be added
+            to the stack attributes bag.
+
+        :type custom_instance_profile_arn: string
+        :param custom_instance_profile_arn: The ARN of an IAM profile to be
+            used for all of the layer's EC2 instances. For more information
+            about IAM ARNs, see `Using Identifiers`_.
+
+        :type custom_security_group_ids: list
+        :param custom_security_group_ids: An array containing the layer's
+            custom security group IDs.
+
+        :type packages: list
+        :param packages: An array of `Package` objects that describe the
+            layer's packages.
+
+        :type volume_configurations: list
+        :param volume_configurations: A `VolumeConfigurations` object that
+            describes the layer's Amazon EBS volumes.
+
+        :type enable_auto_healing: boolean
+        :param enable_auto_healing: Whether to enable auto healing for the
+            layer.
+
+        :type auto_assign_elastic_ips: boolean
+        :param auto_assign_elastic_ips: Whether to automatically assign an
+            `Elastic IP address`_ to the layer.
+
+        :type custom_recipes: dict
+        :param custom_recipes: A `LayerCustomRecipes` object that specifies the
+            layer's custom recipes.
+
+        """
+        params = {'LayerId': layer_id, }
+        if name is not None:
+            params['Name'] = name
+        if shortname is not None:
+            params['Shortname'] = shortname
+        if attributes is not None:
+            params['Attributes'] = attributes
+        if custom_instance_profile_arn is not None:
+            params['CustomInstanceProfileArn'] = custom_instance_profile_arn
+        if custom_security_group_ids is not None:
+            params['CustomSecurityGroupIds'] = custom_security_group_ids
+        if packages is not None:
+            params['Packages'] = packages
+        if volume_configurations is not None:
+            params['VolumeConfigurations'] = volume_configurations
+        if enable_auto_healing is not None:
+            params['EnableAutoHealing'] = enable_auto_healing
+        if auto_assign_elastic_ips is not None:
+            params['AutoAssignElasticIps'] = auto_assign_elastic_ips
+        if custom_recipes is not None:
+            params['CustomRecipes'] = custom_recipes
+        return self.make_request(action='UpdateLayer',
+                                 body=json.dumps(params))
+
+    def update_stack(self, stack_id, name=None, attributes=None,
+                     service_role_arn=None,
+                     default_instance_profile_arn=None, default_os=None,
+                     hostname_theme=None, default_availability_zone=None,
+                     custom_json=None, use_custom_cookbooks=None,
+                     custom_cookbooks_source=None, default_ssh_key_name=None,
+                     default_root_device_type=None):
+        """
+        Updates a specified stack.
+
+        :type stack_id: string
+        :param stack_id: The stack ID.
+
+        :type name: string
+        :param name: The stack's new name.
+
+        :type attributes: map
+        :param attributes: One or more user-defined key/value pairs to be added
+            to the stack attributes bag.
+
+        :type service_role_arn: string
+        :param service_role_arn: The stack AWS Identity and Access Management
+            (IAM) role, which allows OpsWorks to work with AWS resources on
+            your behalf. You must set this parameter to the Amazon Resource
+            Name (ARN) for an existing IAM role. For more information about IAM
+            ARNs, see `Using Identifiers`_.
+
+        :type default_instance_profile_arn: string
+        :param default_instance_profile_arn: The ARN of an IAM profile that is
+            the default profile for all of the stack's EC2 instances. For more
+            information about IAM ARNs, see `Using Identifiers`_.
+
+        :type default_os: string
+        :param default_os: The stack's default operating system, which
+            must be either "Amazon Linux" or "Ubuntu 12.04 LTS".
+
+        :type hostname_theme: string
+        :param hostname_theme: The stack's new host name theme, with spaces
+            replaced by underscores. The theme is used to generate hostnames
+            for the stack's instances. By default, `HostnameTheme` is set to
+            Layer_Dependent, which creates hostnames by appending integers to
+            the layer's shortname. The other themes are:
+
+        + Baked_Goods
+        + Clouds
+        + European_Cities
+        + Fruits
+        + Greek_Deities
+        + Legendary_Creatures_from_Japan
+        + Planets_and_Moons
+        + Roman_Deities
+        + Scottish_Islands
+        + US_Cities
+        + Wild_Cats
+
+
+        To obtain a generated hostname, call `GetHostNameSuggestion`, which
+            returns a hostname based on the current theme.
+
+        :type default_availability_zone: string
+        :param default_availability_zone: The stack's new default Availability
+            Zone. For more information, see `Regions and Endpoints`_.
+
+        :type custom_json: string
+        :param custom_json:
+        A string that contains user-defined, custom JSON. It is used to
+            override the corresponding default stack configuration JSON values.
+            The string should be in the following format and must escape
+            characters such as '"':
+        `"{\"key1\": \"value1\", \"key2\": \"value2\",...}"`
+
+        :type use_custom_cookbooks: boolean
+        :param use_custom_cookbooks: Whether the stack uses custom cookbooks.
+
+        :type custom_cookbooks_source: dict
+        :param custom_cookbooks_source:
+
+        :type default_ssh_key_name: string
+        :param default_ssh_key_name: A default SSH key for the stack instances.
+            You can override this value when you create or update an instance.
+
+        :type default_root_device_type: string
+        :param default_root_device_type:
+
+        """
+        params = {'StackId': stack_id, }
+        if name is not None:
+            params['Name'] = name
+        if attributes is not None:
+            params['Attributes'] = attributes
+        if service_role_arn is not None:
+            params['ServiceRoleArn'] = service_role_arn
+        if default_instance_profile_arn is not None:
+            params['DefaultInstanceProfileArn'] = default_instance_profile_arn
+        if default_os is not None:
+            params['DefaultOs'] = default_os
+        if hostname_theme is not None:
+            params['HostnameTheme'] = hostname_theme
+        if default_availability_zone is not None:
+            params['DefaultAvailabilityZone'] = default_availability_zone
+        if custom_json is not None:
+            params['CustomJson'] = custom_json
+        if use_custom_cookbooks is not None:
+            params['UseCustomCookbooks'] = use_custom_cookbooks
+        if custom_cookbooks_source is not None:
+            params['CustomCookbooksSource'] = custom_cookbooks_source
+        if default_ssh_key_name is not None:
+            params['DefaultSshKeyName'] = default_ssh_key_name
+        if default_root_device_type is not None:
+            params['DefaultRootDeviceType'] = default_root_device_type
+        return self.make_request(action='UpdateStack',
+                                 body=json.dumps(params))
+
+    def update_user_profile(self, iam_user_arn, ssh_username=None,
+                            ssh_public_key=None):
+        """
+        Updates a specified user's SSH name and public key.
+
+        :type iam_user_arn: string
+        :param iam_user_arn: The user IAM ARN.
+
+        :type ssh_username: string
+        :param ssh_username: The user's new SSH user name.
+
+        :type ssh_public_key: string
+        :param ssh_public_key: The user's new SSH public key.
+
+        """
+        params = {'IamUserArn': iam_user_arn, }
+        if ssh_username is not None:
+            params['SshUsername'] = ssh_username
+        if ssh_public_key is not None:
+            params['SshPublicKey'] = ssh_public_key
+        return self.make_request(action='UpdateUserProfile',
+                                 body=json.dumps(params))
+
+    def make_request(self, action, body):
+        headers = {
+            'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action),
+            'Host': self.region.endpoint,
+            'Content-Type': 'application/x-amz-json-1.1',
+            'Content-Length': str(len(body)),
+        }
+        http_request = self.build_base_http_request(
+            method='POST', path='/', auth_path='/', params={},
+            headers=headers, data=body)
+        response = self._mexe(http_request, sender=None,
+                              override_num_retries=10)
+        response_body = response.read()
+        boto.log.debug(response_body)
+        if response.status == 200:
+            if response_body:
+                return json.loads(response_body)
+        else:
+            json_body = json.loads(response_body)
+            fault_name = json_body.get('__type', None)
+            exception_class = self._faults.get(fault_name, self.ResponseError)
+            raise exception_class(response.status, response.reason,
+                                  body=json_body)
+
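A minimal usage sketch of the JSON layer defined above (it assumes the class is exposed as boto.opsworks.layer1.OpsWorksConnection and that credentials and the default region resolve as for other boto services)::

    import boto.opsworks.layer1

    # Every method above is an X-Amz-Target action plus a JSON body built from
    # its parameters; responses come back as parsed JSON dictionaries.
    conn = boto.opsworks.layer1.OpsWorksConnection()
    stacks = conn.describe_stacks()              # e.g. {'Stacks': [...]}
    for stack in stacks.get('Stacks', []):
        print(stack['StackId'], stack.get('Name'))
    conn.reboot_instance(instance_id='INSTANCE-ID-HERE')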
diff --git a/boto/provider.py b/boto/provider.py
index dc1172c..457a87e 100644
--- a/boto/provider.py
+++ b/boto/provider.py
@@ -117,7 +117,8 @@
                                             'metadata-directive',
             RESUMABLE_UPLOAD_HEADER_KEY: None,
             SECURITY_TOKEN_HEADER_KEY: AWS_HEADER_PREFIX + 'security-token',
-            SERVER_SIDE_ENCRYPTION_KEY: AWS_HEADER_PREFIX + 'server-side-encryption',
+            SERVER_SIDE_ENCRYPTION_KEY: AWS_HEADER_PREFIX +
+                                         'server-side-encryption',
             VERSION_ID_HEADER_KEY: AWS_HEADER_PREFIX + 'version-id',
             STORAGE_CLASS_HEADER_KEY: AWS_HEADER_PREFIX + 'storage-class',
             MFA_HEADER_KEY: AWS_HEADER_PREFIX + 'mfa',
@@ -166,6 +167,7 @@
     def __init__(self, name, access_key=None, secret_key=None,
                  security_token=None):
         self.host = None
+        self.port = None
         self.access_key = access_key
         self.secret_key = secret_key
         self.security_token = security_token
@@ -176,10 +178,13 @@
         self.get_credentials(access_key, secret_key)
         self.configure_headers()
         self.configure_errors()
-        # allow config file to override default host
+        # Allow config file to override default host and port.
         host_opt_name = '%s_host' % self.HostKeyMap[self.name]
         if config.has_option('Credentials', host_opt_name):
             self.host = config.get('Credentials', host_opt_name)
+        port_opt_name = '%s_port' % self.HostKeyMap[self.name]
+        if config.has_option('Credentials', port_opt_name):
+            self.port = config.getint('Credentials', port_opt_name)
 
     def get_access_key(self):
         if self._credentials_need_refresh():
@@ -230,22 +235,39 @@
             else:
                 return False
 
-
     def get_credentials(self, access_key=None, secret_key=None):
         access_key_name, secret_key_name = self.CredentialMap[self.name]
         if access_key is not None:
             self.access_key = access_key
+            boto.log.debug("Using access key provided by client.")
         elif access_key_name.upper() in os.environ:
             self.access_key = os.environ[access_key_name.upper()]
+            boto.log.debug("Using access key found in environment variable.")
         elif config.has_option('Credentials', access_key_name):
             self.access_key = config.get('Credentials', access_key_name)
+            boto.log.debug("Using access key found in config file.")
 
         if secret_key is not None:
             self.secret_key = secret_key
+            boto.log.debug("Using secret key provided by client.")
         elif secret_key_name.upper() in os.environ:
             self.secret_key = os.environ[secret_key_name.upper()]
+            boto.log.debug("Using secret key found in environment variable.")
         elif config.has_option('Credentials', secret_key_name):
             self.secret_key = config.get('Credentials', secret_key_name)
+            boto.log.debug("Using secret key found in config file.")
+        elif config.has_option('Credentials', 'keyring'):
+            keyring_name = config.get('Credentials', 'keyring')
+            try:
+                import keyring
+            except ImportError:
+                boto.log.error("The keyring module could not be imported. "
+                               "For keyring support, install the keyring "
+                               "module.")
+                raise
+            self.secret_key = keyring.get_password(
+                keyring_name, self.access_key)
+            boto.log.debug("Using secret key found in keyring.")
 
         if ((self._access_key is None or self._secret_key is None) and
                 self.MetadataServiceSupport[self.name]):
@@ -258,10 +280,16 @@
         boto.log.debug("Retrieving credentials from metadata server.")
         from boto.utils import get_instance_metadata
         timeout = config.getfloat('Boto', 'metadata_service_timeout', 1.0)
-        metadata = get_instance_metadata(timeout=timeout, num_retries=1)
-        # I'm assuming there's only one role on the instance profile.
-        if metadata and 'iam' in metadata:
-            security = metadata['iam']['security-credentials'].values()[0]
+        attempts = config.getint('Boto', 'metadata_service_num_attempts', 1)
+        # The num_retries arg is actually the total number of attempts made,
+        # so the config option is named *_num_attempts to make this more
+        # clear to users.
+        metadata = get_instance_metadata(
+            timeout=timeout, num_retries=attempts,
+            data='meta-data/iam/security-credentials')
+        if metadata:
+            # I'm assuming there's only one role on the instance profile.
+            security = metadata.values()[0]
             self._access_key = security['AccessKeyId']
             self._secret_key = self._convert_key_to_str(security['SecretAccessKey'])
             self._security_token = security['Token']
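A short sketch of how the new per-provider port option and keyring lookup are meant to be driven from the boto config file (the option names follow the %s_host / %s_port pattern above; host, port and keyring values are placeholders)::

    # Hypothetical ~/.boto contents:
    #
    #   [Credentials]
    #   gs_access_key_id = YOUR-ACCESS-KEY
    #   keyring = gs-secrets        # secret key fetched via the keyring module
    #   gs_host = storage.example.internal
    #   gs_port = 8443
    #
    from boto.provider import Provider

    provider = Provider('google')           # 'google' maps to the gs_* options
    print(provider.host, provider.port)     # populated from gs_host / gs_port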
diff --git a/boto/pyami/config.py b/boto/pyami/config.py
index d75e791..08da658 100644
--- a/boto/pyami/config.py
+++ b/boto/pyami/config.py
@@ -15,7 +15,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -196,11 +196,7 @@
                     fp.write('%s = %s\n' % (option, self.get(section, option)))
     
     def dump_to_sdb(self, domain_name, item_name):
-        try:
-            import simplejson as json
-        except ImportError:
-            import json
-
+        from boto.compat import json
         sdb = boto.connect_sdb()
         domain = sdb.lookup(domain_name)
         if not domain:
@@ -215,11 +211,7 @@
         item.save()
 
     def load_from_sdb(self, domain_name, item_name):
-        try:
-            import json
-        except ImportError:
-            import simplejson as json
-
+        from boto.compat import json
         sdb = boto.connect_sdb()
         domain = sdb.lookup(domain_name)
         item = domain.get_item(item_name)
diff --git a/boto/rds/__init__.py b/boto/rds/__init__.py
index 8190eef..d81a6bb 100644
--- a/boto/rds/__init__.py
+++ b/boto/rds/__init__.py
@@ -20,7 +20,6 @@
 # IN THE SOFTWARE.
 #
 
-import boto.utils
 import urllib
 from boto.connection import AWSQueryConnection
 from boto.rds.dbinstance import DBInstance
@@ -51,7 +50,9 @@
             RDSRegionInfo(name='ap-northeast-1',
                           endpoint='rds.ap-northeast-1.amazonaws.com'),
             RDSRegionInfo(name='ap-southeast-1',
-                          endpoint='rds.ap-southeast-1.amazonaws.com')
+                          endpoint='rds.ap-southeast-1.amazonaws.com'),
+            RDSRegionInfo(name='ap-southeast-2',
+                          endpoint='rds.ap-southeast-2.amazonaws.com'),
             ]
 
 
@@ -80,8 +81,8 @@
 class RDSConnection(AWSQueryConnection):
 
     DefaultRegionName = 'us-east-1'
-    DefaultRegionEndpoint = 'rds.us-east-1.amazonaws.com'
-    APIVersion = '2011-04-01'
+    DefaultRegionEndpoint = 'rds.amazonaws.com'
+    APIVersion = '2012-09-17'
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
@@ -161,8 +162,9 @@
                           db_subnet_group_name = None,
                           license_model = None,
                           option_group_name = None,
+                          iops=None,
                           ):
-        # API version: 2012-04-23
+        # API version: 2012-09-17
         # Parameter notes:
         # =================
         # id should be db_instance_identifier according to API docs but has been left
@@ -348,6 +350,17 @@
         :param option_group_name: Indicates that the DB Instance should be associated
                                   with the specified option group.
 
+        :type iops: int
+        :param iops:  The amount of IOPS (input/output operations per second) to
+                      provision for the DB Instance. Can be modified at a later date.
+
+                      Must scale linearly. For every 1000 IOPS provisioned, you must allocate
+                      100 GB of storage space. This scales up to 1 TB / 10 000 IOPS for MySQL
+                      and Oracle. MSSQL is limited to 700 GB / 7 000 IOPS.
+
+                      If you specify a value, it must be at least 1000 IOPS and you must
+                      allocate 100 GB of storage.
+
         :rtype: :class:`boto.rds.dbinstance.DBInstance`
         :return: The new db instance.
         """
@@ -388,6 +401,7 @@
                   'DBSubnetGroupName': db_subnet_group_name,
                   'Engine': engine,
                   'EngineVersion': engine_version,
+                  'Iops': iops,
                   'LicenseModel': license_model,
                   'MasterUsername': master_username,
                   'MasterUserPassword': master_password,
@@ -488,13 +502,19 @@
                           backup_retention_period=None,
                           preferred_backup_window=None,
                           multi_az=False,
-                          apply_immediately=False):
+                          apply_immediately=False,
+                          iops=None):
         """
         Modify an existing DBInstance.
 
         :type id: str
         :param id: Unique identifier for the new instance.
 
+        :type param_group: str
+        :param param_group: Name of DBParameterGroup to associate with
+                            this DBInstance.  If no group is specified,
+                            no parameter group will be used.
+
         :type security_groups: list of str or list of DBSecurityGroup objects
         :param security_groups: List of names of DBSecurityGroup to authorize on
                                 this DBInstance.
@@ -548,6 +568,17 @@
         :param multi_az: If True, specifies the DB Instance will be
                          deployed in multiple availability zones.
 
+        :type iops: int
+        :param iops:  The amount of IOPS (input/output operations per second) to
+                      provision for the DB Instance. Can be modified at a later date.
+
+                      Must scale linearly. For every 1000 IOPS provisioned, you must allocate
+                      100 GB of storage space. This scales up to 1 TB / 10 000 IOPS for MySQL
+                      and Oracle. MSSQL is limited to 700 GB / 7 000 IOPS.
+
+                      If you specify a value, it must be at least 1000 IOPS and you must
+                      allocate 100 GB of storage.
+
         :rtype: :class:`boto.rds.dbinstance.DBInstance`
         :return: The modified db instance.
         """
@@ -578,6 +609,8 @@
             params['MultiAZ'] = 'true'
         if apply_immediately:
             params['ApplyImmediately'] = 'true'
+        if iops:
+            params['Iops'] = iops
 
         return self.get_object('ModifyDBInstance', params, DBInstance)
 
@@ -987,7 +1020,8 @@
                                            instance_class, port=None,
                                            availability_zone=None,
                                            multi_az=None,
-                                           auto_minor_version_upgrade=None):
+                                           auto_minor_version_upgrade=None,
+                                           db_subnet_group_name=None):
         """
         Create a new DBInstance from a DB snapshot.
 
@@ -1024,6 +1058,11 @@
                                            during the maintenance window.
                                            Default is the API default.
 
+        :type db_subnet_group_name: str
+        :param db_subnet_group_name: A DB Subnet Group to associate with this DB Instance.
+                                     If there is no DB Subnet Group, then it is a non-VPC DB
+                                     instance.
+
         :rtype: :class:`boto.rds.dbinstance.DBInstance`
         :return: The newly created DBInstance
         """
@@ -1038,6 +1077,8 @@
             params['MultiAZ'] = str(multi_az).lower()
         if auto_minor_version_upgrade is not None:
             params['AutoMinorVersionUpgrade'] = str(auto_minor_version_upgrade).lower()
+        if db_subnet_group_name is not None:
+            params['DBSubnetGroupName'] = db_subnet_group_name
         return self.get_object('RestoreDBInstanceFromDBSnapshot',
                                params, DBInstance)
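A hedged sketch of the new iops and db_subnet_group_name parameters in use (method names follow boto.rds conventions; identifiers, sizes and class names are placeholders)::

    import boto.rds

    conn = boto.rds.connect_to_region('us-east-1')
    # Provisioned IOPS must be paired with storage: 1000 IOPS per 100 GB here.
    db = conn.create_dbinstance('mydb', 100, 'db.m1.large', 'admin', 'MY-PASSWORD',
                                iops=1000)
    # Scale the IOPS up later, applying the change immediately.
    conn.modify_dbinstance('mydb', iops=2000, apply_immediately=True)
    # Restore a snapshot into a VPC by naming a DB subnet group (new kwarg above).
    conn.restore_dbinstance_from_dbsnapshot('my-snapshot', 'mydb-restored',
                                            'db.m1.large',
                                            db_subnet_group_name='my-subnet-group')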
 
diff --git a/boto/rds/dbinstance.py b/boto/rds/dbinstance.py
index f6c2787..7002791 100644
--- a/boto/rds/dbinstance.py
+++ b/boto/rds/dbinstance.py
@@ -21,6 +21,7 @@
 
 from boto.rds.dbsecuritygroup import DBSecurityGroup
 from boto.rds.parametergroup import ParameterGroup
+from boto.resultset import ResultSet
 
 
 class DBInstance(object):
@@ -43,9 +44,9 @@
         capacity class of the DB Instance.
     :ivar master_username: The username that is set as master username
         at creation time.
-    :ivar parameter_group: Provides the list of DB Parameter Groups
+    :ivar parameter_groups: Provides the list of DB Parameter Groups
         applied to this DB Instance.
-    :ivar security_group: Provides List of DB Security Group elements
+    :ivar security_groups: Provides List of DB Security Group elements
         containing only DBSecurityGroup.Name and DBSecurityGroup.Status
         subelements.
     :ivar availability_zone: Specifies the name of the Availability Zone
@@ -58,12 +59,16 @@
     :ivar preferred_maintenance_window: Specifies the weekly time
         range (in UTC) during which system maintenance can occur. (string)
     :ivar latest_restorable_time: Specifies the latest time to which
-        a database can be restored with point-in-time restore. TODO: type?
+        a database can be restored with point-in-time restore. (string)
     :ivar multi_az: Boolean that specifies if the DB Instance is a
         Multi-AZ deployment.
+    :ivar iops: The current number of provisioned IOPS for the DB Instance.
+        Can be None if this is a standard instance.
     :ivar pending_modified_values: Specifies that changes to the
         DB Instance are pending. This element is only included when changes
         are pending. Specific changes are identified by subelements.
+    :ivar read_replica_dbinstance_identifiers: List of read replicas
+        associated with this DB instance.
     """
 
     def __init__(self, connection=None, id=None):
@@ -76,14 +81,16 @@
         self.endpoint = None
         self.instance_class = None
         self.master_username = None
-        self.parameter_group = None
-        self.security_group = None
+        self.parameter_groups = []
+        self.security_groups = []
+        self.read_replica_dbinstance_identifiers = []
         self.availability_zone = None
         self.backup_retention_period = None
         self.preferred_backup_window = None
         self.preferred_maintenance_window = None
         self.latest_restorable_time = None
         self.multi_az = False
+        self.iops = None
         self.pending_modified_values = None
         self._in_endpoint = False
         self._port = None
@@ -95,15 +102,21 @@
     def startElement(self, name, attrs, connection):
         if name == 'Endpoint':
             self._in_endpoint = True
-        elif name == 'DBParameterGroup':
-            self.parameter_group = ParameterGroup(self.connection)
-            return self.parameter_group
-        elif name == 'DBSecurityGroup':
-            self.security_group = DBSecurityGroup(self.connection)
-            return self.security_group
+        elif name == 'DBParameterGroups':
+            self.parameter_groups = ResultSet([('DBParameterGroup',
+                                                ParameterGroup)])
+            return self.parameter_groups
+        elif name == 'DBSecurityGroups':
+            self.security_groups = ResultSet([('DBSecurityGroup',
+                                               DBSecurityGroup)])
+            return self.security_groups
         elif name == 'PendingModifiedValues':
             self.pending_modified_values = PendingModifiedValues()
             return self.pending_modified_values
+        elif name == 'ReadReplicaDBInstanceIdentifiers':
+            self.read_replica_dbinstance_identifiers = \
+                    ReadReplicaDBInstanceIdentifiers()
+            return self.read_replica_dbinstance_identifiers
         return None
 
     def endElement(self, name, value, connection):
@@ -145,9 +158,33 @@
         elif name == 'MultiAZ':
             if value.lower() == 'true':
                 self.multi_az = True
+        elif name == 'Iops':
+            self.iops = int(value)
         else:
             setattr(self, name, value)
 
+    @property
+    def security_group(self):
+        """
+        Provide backward compatibility for previous security_group
+        attribute.
+        """
+        if len(self.security_groups) > 0:
+            return self.security_groups[-1]
+        else:
+            return None
+
+    @property
+    def parameter_group(self):
+        """
+        Provide backward compatibility for previous parameter_group
+        attribute.
+        """
+        if len(self.parameter_groups) > 0:
+            return self.parameter_groups[-1]
+        else:
+            return None
+
     def snapshot(self, snapshot_id):
         """
         Create a new DB snapshot of this DBInstance.
@@ -217,10 +254,15 @@
                backup_retention_period=None,
                preferred_backup_window=None,
                multi_az=False,
+               iops=None,
                apply_immediately=False):
         """
         Modify this DBInstance.
 
+        :type param_group: str
+        :param param_group: Name of DBParameterGroup to associate with
+                            this DBInstance.
+
         :type security_groups: list of str or list of DBSecurityGroup objects
         :param security_groups: List of names of DBSecurityGroup to
             authorize on this DBInstance.
@@ -271,6 +313,19 @@
         :param multi_az: If True, specifies the DB Instance will be
             deployed in multiple availability zones.
 
+        :type iops: int
+        :param iops: The amount of IOPS (input/output operations per
+            second) to provision for the DB Instance. Can be
+            modified at a later date.
+
+            Must scale linearly. For every 1000 IOPS provisioned, you
+            must allocate 100 GB of storage space. This scales up to
+            1 TB / 10 000 IOPS for MySQL and Oracle. MSSQL is limited
+            to 700 GB / 7 000 IOPS.
+
+            If you specify a value, it must be at least 1000 IOPS and
+            you must allocate 100 GB of storage.
+
         :rtype: :class:`boto.rds.dbinstance.DBInstance`
         :return: The modified db instance.
         """
@@ -284,14 +339,23 @@
                                                  backup_retention_period,
                                                  preferred_backup_window,
                                                  multi_az,
-                                                 apply_immediately)
+                                                 apply_immediately,
+                                                 iops)
 
 
 class PendingModifiedValues(dict):
-
     def startElement(self, name, attrs, connection):
         return None
 
     def endElement(self, name, value, connection):
         if name != 'PendingModifiedValues':
             self[name] = value
+
+
+class ReadReplicaDBInstanceIdentifiers(list):
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'ReadReplicaDBInstanceIdentifier':
+            self.append(value)
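A short sketch of reading the widened DBInstance attributes above (instance lookup via get_all_dbinstances, as elsewhere in boto.rds; the identifier is a placeholder)::

    import boto.rds

    conn = boto.rds.connect_to_region('us-east-1')
    db = conn.get_all_dbinstances('mydb')[0]
    # The plural attributes hold every group parsed from the response...
    for group in db.security_groups:
        print(group.name)
    # ...while the old singular properties keep working for existing callers.
    print(db.security_group, db.parameter_group)
    print(db.iops, db.read_replica_dbinstance_identifiers)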
diff --git a/boto/rds/dbsecuritygroup.py b/boto/rds/dbsecuritygroup.py
index 6a69ddb..3783606 100644
--- a/boto/rds/dbsecuritygroup.py
+++ b/boto/rds/dbsecuritygroup.py
@@ -28,13 +28,18 @@
     """
     Represents an RDS database security group
 
-    Properties reference available from the AWS documentation at http://docs.amazonwebservices.com/AmazonRDS/latest/APIReference/API_DeleteDBSecurityGroup.html
+    Properties reference available from the AWS documentation at
+    http://docs.amazonwebservices.com/AmazonRDS/latest/APIReference/API_DeleteDBSecurityGroup.html
 
-    :ivar Status: The current status of the security group. Possibile values are [ active, ? ]. Reference documentation lacks specifics of possibilities
-    :ivar connection: boto.rds.RDSConnection associated with the current object
+    :ivar Status: The current status of the security group. Possible values are
+        [ active, ? ]. The reference documentation does not enumerate all values
+    :ivar connection: :py:class:`boto.rds.RDSConnection` associated with the current object
     :ivar description: The description of the security group
-    :ivar ec2_groups: List of EC2SecurityGroup objects that this security group PERMITS
-    :ivar ip_ranges: List of IPRange objects (containing CIDR addresses) that this security group PERMITS
+    :ivar ec2_groups: List of :py:class:`EC2 Security Group
+        <boto.ec2.securitygroup.SecurityGroup>` objects that this security
+        group PERMITS
+    :ivar ip_ranges: List of :py:class:`boto.rds.dbsecuritygroup.IPRange`
+        objects (containing CIDR addresses) that this security group PERMITS
     :ivar name: Name of the security group
     :ivar owner_id: ID of the owner of the security group. Can be 'None'
     """
@@ -83,13 +88,14 @@
         You need to pass in either a CIDR block to authorize or
         and EC2 SecurityGroup.
 
-        @type cidr_ip: string
-        @param cidr_ip: A valid CIDR IP range to authorize
+        :type cidr_ip: string
+        :param cidr_ip: A valid CIDR IP range to authorize
 
-        @type ec2_group: :class:`boto.ec2.securitygroup.SecurityGroup>`
+        :type ec2_group: :class:`boto.ec2.securitygroup.SecurityGroup`
+        :param ec2_group: An EC2 security group to authorize
 
-        @rtype: bool
-        @return: True if successful.
+        :rtype: bool
+        :return: True if successful.
         """
         if isinstance(ec2_group, SecurityGroup):
             group_name = ec2_group.name
@@ -108,13 +114,14 @@
         You need to pass in either a CIDR block or
         an EC2 SecurityGroup from which to revoke access.
 
-        @type cidr_ip: string
-        @param cidr_ip: A valid CIDR IP range to revoke
+        :type cidr_ip: string
+        :param cidr_ip: A valid CIDR IP range to revoke
 
-        @type ec2_group: :class:`boto.ec2.securitygroup.SecurityGroup>`
+        :type ec2_group: :class:`boto.ec2.securitygroup.SecurityGroup`
+        :param ec2_group: An EC2 security group to revoke
 
-        @rtype: bool
-        @return: True if successful.
+        :rtype: bool
+        :return: True if successful.
         """
         if isinstance(ec2_group, SecurityGroup):
             group_name = ec2_group.name
@@ -131,6 +138,8 @@
 class IPRange(object):
     """
     Describes a CIDR address range for use in a DBSecurityGroup
+
+    :ivar cidr_ip: IP Address range
     """
 
     def __init__(self, parent=None):
@@ -174,4 +183,4 @@
         elif name == 'EC2SecurityGroupOwnerId':
             self.owner_id = value
         else:
-            setattr(self, name, value)
\ No newline at end of file
+            setattr(self, name, value)
diff --git a/boto/rds/dbsnapshot.py b/boto/rds/dbsnapshot.py
index 0ea7f94..acacd73 100644
--- a/boto/rds/dbsnapshot.py
+++ b/boto/rds/dbsnapshot.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -39,7 +39,7 @@
     :ivar snapshot_create_time: Provides the time (UTC) when the snapshot was taken
     :ivar status: Specifies the status of this DB Snapshot. Possible values are [ available, backing-up, creating, deleted, deleting, failed, modifying, rebooting, resetting-master-credentials ]
     """
-    
+
     def __init__(self, connection=None, id=None):
         self.connection = connection
         self.id = id
@@ -84,4 +84,25 @@
         elif name == 'SnapshotTime':
             self.time = value
         else:
-            setattr(self, name, value)
\ No newline at end of file
+            setattr(self, name, value)
+
+    def update(self, validate=False):
+        """
+        Update the DB snapshot's status information by making a call to fetch
+        the current snapshot attributes from the service.
+
+        :type validate: bool
+        :param validate: By default, if RDS returns no data about the
+                         snapshot the update method returns quietly.  If
+                         the validate param is True, however, it will
+                         raise a ValueError exception if no data is
+                         returned from RDS.
+        """
+        rs = self.connection.get_all_dbsnapshots(self.id)
+        if len(rs) > 0:
+            for i in rs:
+                if i.id == self.id:
+                    self.__dict__.update(i.__dict__)
+        elif validate:
+            raise ValueError('%s is not a valid Snapshot ID' % self.id)
+        return self.status
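A hedged sketch of the new update() helper: polling a snapshot until RDS reports it available (get_all_dbsnapshots is the same lookup used inside update() above)::

    import time

    import boto.rds

    conn = boto.rds.connect_to_region('us-east-1')
    snapshot = conn.get_all_dbsnapshots('my-snapshot')[0]
    # update() refreshes the cached attributes and returns the current status.
    while snapshot.update(validate=True) != 'available':
        time.sleep(30)
    print(snapshot.id, snapshot.status)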
diff --git a/boto/redshift/__init__.py b/boto/redshift/__init__.py
new file mode 100644
index 0000000..fca2a79
--- /dev/null
+++ b/boto/redshift/__init__.py
@@ -0,0 +1,55 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.regioninfo import RegionInfo
+
+
+def regions():
+    """
+    Get all available regions for the AWS Redshift service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo`
+    """
+    from boto.redshift.layer1 import RedshiftConnection
+    cls = RedshiftConnection
+    return [
+        RegionInfo(name='us-east-1',
+                   endpoint='redshift.us-east-1.amazonaws.com',
+                   connection_cls=cls),
+        RegionInfo(name='us-west-2',
+                   endpoint='redshift.us-west-2.amazonaws.com',
+                   connection_cls=cls),
+        RegionInfo(name='eu-west-1',
+                   endpoint='redshift.eu-west-1.amazonaws.com',
+                   connection_cls=cls),
+        RegionInfo(name='ap-northeast-1',
+                   endpoint='redshift.ap-northeast-1.amazonaws.com',
+                   connection_cls=cls),
+    ]
+
+
+def connect_to_region(region_name, **kw_params):
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
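A minimal sketch of the region helpers above (the connection is built through connect_to_region, as for other boto services)::

    import boto.redshift

    # connect_to_region() returns None for names not listed in regions().
    conn = boto.redshift.connect_to_region('us-west-2')
    if conn is None:
        raise ValueError('Redshift is not available in that region')
    print(conn.region.name, conn.region.endpoint)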
diff --git a/boto/redshift/exceptions.py b/boto/redshift/exceptions.py
new file mode 100644
index 0000000..92779d0
--- /dev/null
+++ b/boto/redshift/exceptions.py
@@ -0,0 +1,182 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.exception import JSONResponseError
+
+
+class ClusterNotFoundFault(JSONResponseError):
+    pass
+
+
+class InvalidClusterSnapshotStateFault(JSONResponseError):
+    pass
+
+
+class ClusterSnapshotNotFoundFault(JSONResponseError):
+    pass
+
+
+class ClusterSecurityGroupQuotaExceededFault(JSONResponseError):
+    pass
+
+
+class ReservedNodeOfferingNotFoundFault(JSONResponseError):
+    pass
+
+
+class InvalidSubnet(JSONResponseError):
+    pass
+
+
+class ClusterSubnetGroupQuotaExceededFault(JSONResponseError):
+    pass
+
+
+class InvalidClusterStateFault(JSONResponseError):
+    pass
+
+
+class InvalidClusterParameterGroupStateFault(JSONResponseError):
+    pass
+
+
+class ClusterParameterGroupAlreadyExistsFault(JSONResponseError):
+    pass
+
+
+class InvalidClusterSecurityGroupStateFault(JSONResponseError):
+    pass
+
+
+class InvalidRestoreFault(JSONResponseError):
+    pass
+
+
+class AuthorizationNotFoundFault(JSONResponseError):
+    pass
+
+
+class ResizeNotFoundFault(JSONResponseError):
+    pass
+
+
+class NumberOfNodesQuotaExceededFault(JSONResponseError):
+    pass
+
+
+class ClusterSnapshotAlreadyExistsFault(JSONResponseError):
+    pass
+
+
+class AuthorizationQuotaExceededFault(JSONResponseError):
+    pass
+
+
+class AuthorizationAlreadyExistsFault(JSONResponseError):
+    pass
+
+
+class ClusterSnapshotQuotaExceededFault(JSONResponseError):
+    pass
+
+
+class ReservedNodeNotFoundFault(JSONResponseError):
+    pass
+
+
+class ReservedNodeAlreadyExistsFault(JSONResponseError):
+    pass
+
+
+class ClusterSecurityGroupAlreadyExistsFault(JSONResponseError):
+    pass
+
+
+class ClusterParameterGroupNotFoundFault(JSONResponseError):
+    pass
+
+
+class ReservedNodeQuotaExceededFault(JSONResponseError):
+    pass
+
+
+class ClusterQuotaExceededFault(JSONResponseError):
+    pass
+
+
+class ClusterSubnetQuotaExceededFault(JSONResponseError):
+    pass
+
+
+class UnsupportedOptionFault(JSONResponseError):
+    pass
+
+
+class InvalidVPCNetworkStateFault(JSONResponseError):
+    pass
+
+
+class ClusterSecurityGroupNotFoundFault(JSONResponseError):
+    pass
+
+
+class InvalidClusterSubnetGroupStateFault(JSONResponseError):
+    pass
+
+
+class ClusterSubnetGroupAlreadyExistsFault(JSONResponseError):
+    pass
+
+
+class NumberOfNodesPerClusterLimitExceededFault(JSONResponseError):
+    pass
+
+
+class ClusterSubnetGroupNotFoundFault(JSONResponseError):
+    pass
+
+
+class ClusterParameterGroupQuotaExceededFault(JSONResponseError):
+    pass
+
+
+class ClusterAlreadyExistsFault(JSONResponseError):
+    pass
+
+
+class InsufficientClusterCapacityFault(JSONResponseError):
+    pass
+
+
+class InvalidClusterSubnetStateFault(JSONResponseError):
+    pass
+
+
+class SubnetAlreadyInUse(JSONResponseError):
+    pass
+
+
+class InvalidParameterCombinationFault(JSONResponseError):
+    pass
diff --git a/boto/redshift/layer1.py b/boto/redshift/layer1.py
new file mode 100644
index 0000000..f57ec0a
--- /dev/null
+++ b/boto/redshift/layer1.py
@@ -0,0 +1,2076 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+import json
+import boto
+from boto.connection import AWSQueryConnection
+from boto.regioninfo import RegionInfo
+from boto.exception import JSONResponseError
+from boto.redshift import exceptions
+
+
+class RedshiftConnection(AWSQueryConnection):
+    """
+    Amazon Redshift **Overview**
+    This is the Amazon Redshift API Reference. This guide provides
+    descriptions and samples of the Amazon Redshift API.
+
+    Amazon Redshift manages all the work of setting up, operating, and
+    scaling a data warehouse: provisioning capacity, monitoring and
+    backing up the cluster, and applying patches and upgrades to the
+    Amazon Redshift engine. You can focus on using your data to
+    acquire new insights for your business and customers.
+    **Are You a First-Time Amazon Redshift User?**
+    If you are a first-time user of Amazon Redshift, we recommend that
+    you begin by reading the following sections:
+
+
+
+    + Service Highlights and Pricing - The `product detail page`_
+      provides the Amazon Redshift value proposition, service highlights
+      and pricing.
+    + Getting Started - The `Getting Started Guide`_ includes an
+      example that walks you through the process of creating a cluster,
+      creating database tables, uploading data, and testing queries.
+
+
+
+    After you complete the Getting Started Guide, we recommend that
+    you explore one of the following guides:
+
+
+    + Cluster Management - If you are responsible for managing Amazon
+      Redshift clusters, the `Cluster Management Guide`_ shows you how
+      to create and manage Amazon Redshift clusters. If you are an
+      application developer, you can use the Amazon Redshift Query API
+      to manage clusters programmatically. Additionally, the AWS SDK
+      libraries that wrap the underlying Amazon Redshift API simplify
+      your programming tasks. If you prefer a more interactive way of
+      managing clusters, you can use the Amazon Redshift console and the
+      AWS command line interface (AWS CLI). For information about the
+      API and CLI, go to the following manuals:
+
+        + API Reference ( this document )
+        + `CLI Reference`_
+
+    + Amazon Redshift Database Developer Guide - If you are a
+      database developer, the Amazon Redshift `Database Developer
+      Guide`_ explains how to design, build, query, and maintain the
+      databases that make up your data warehouse.
+
+
+    For a list of supported AWS regions where you can provision a
+    cluster, go to the `Regions and Endpoints`_ section in the Amazon
+    Web Services Glossary.
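+
+    A minimal connection sketch (the credential values shown are
+    illustrative placeholders, not working keys)::
+
+        from boto.redshift.layer1 import RedshiftConnection
+
+        conn = RedshiftConnection(
+            aws_access_key_id='<access key>',
+            aws_secret_access_key='<secret key>')
+        clusters = conn.describe_clusters()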
+    """
+    APIVersion = "2012-12-01"
+    DefaultRegionName = "us-east-1"
+    DefaultRegionEndpoint = "redshift.us-east-1.amazonaws.com"
+    ResponseError = JSONResponseError
+
+    _faults = {
+        "ClusterNotFound": exceptions.ClusterNotFoundFault,
+        "InvalidClusterSnapshotState": exceptions.InvalidClusterSnapshotStateFault,
+        "ClusterSnapshotNotFound": exceptions.ClusterSnapshotNotFoundFault,
+        "ClusterSecurityGroupQuotaExceeded": exceptions.ClusterSecurityGroupQuotaExceededFault,
+        "ReservedNodeOfferingNotFound": exceptions.ReservedNodeOfferingNotFoundFault,
+        "InvalidSubnet": exceptions.InvalidSubnet,
+        "ClusterSubnetGroupQuotaExceeded": exceptions.ClusterSubnetGroupQuotaExceededFault,
+        "InvalidClusterState": exceptions.InvalidClusterStateFault,
+        "InvalidClusterParameterGroupState": exceptions.InvalidClusterParameterGroupStateFault,
+        "ClusterParameterGroupAlreadyExists": exceptions.ClusterParameterGroupAlreadyExistsFault,
+        "InvalidClusterSecurityGroupState": exceptions.InvalidClusterSecurityGroupStateFault,
+        "InvalidRestore": exceptions.InvalidRestoreFault,
+        "AuthorizationNotFound": exceptions.AuthorizationNotFoundFault,
+        "ResizeNotFound": exceptions.ResizeNotFoundFault,
+        "NumberOfNodesQuotaExceeded": exceptions.NumberOfNodesQuotaExceededFault,
+        "ClusterSnapshotAlreadyExists": exceptions.ClusterSnapshotAlreadyExistsFault,
+        "AuthorizationQuotaExceeded": exceptions.AuthorizationQuotaExceededFault,
+        "AuthorizationAlreadyExists": exceptions.AuthorizationAlreadyExistsFault,
+        "ClusterSnapshotQuotaExceeded": exceptions.ClusterSnapshotQuotaExceededFault,
+        "ReservedNodeNotFound": exceptions.ReservedNodeNotFoundFault,
+        "ReservedNodeAlreadyExists": exceptions.ReservedNodeAlreadyExistsFault,
+        "ClusterSecurityGroupAlreadyExists": exceptions.ClusterSecurityGroupAlreadyExistsFault,
+        "ClusterParameterGroupNotFound": exceptions.ClusterParameterGroupNotFoundFault,
+        "ReservedNodeQuotaExceeded": exceptions.ReservedNodeQuotaExceededFault,
+        "ClusterQuotaExceeded": exceptions.ClusterQuotaExceededFault,
+        "ClusterSubnetQuotaExceeded": exceptions.ClusterSubnetQuotaExceededFault,
+        "UnsupportedOption": exceptions.UnsupportedOptionFault,
+        "InvalidVPCNetworkState": exceptions.InvalidVPCNetworkStateFault,
+        "ClusterSecurityGroupNotFound": exceptions.ClusterSecurityGroupNotFoundFault,
+        "InvalidClusterSubnetGroupState": exceptions.InvalidClusterSubnetGroupStateFault,
+        "ClusterSubnetGroupAlreadyExists": exceptions.ClusterSubnetGroupAlreadyExistsFault,
+        "NumberOfNodesPerClusterLimitExceeded": exceptions.NumberOfNodesPerClusterLimitExceededFault,
+        "ClusterSubnetGroupNotFound": exceptions.ClusterSubnetGroupNotFoundFault,
+        "ClusterParameterGroupQuotaExceeded": exceptions.ClusterParameterGroupQuotaExceededFault,
+        "ClusterAlreadyExists": exceptions.ClusterAlreadyExistsFault,
+        "InsufficientClusterCapacity": exceptions.InsufficientClusterCapacityFault,
+        "InvalidClusterSubnetState": exceptions.InvalidClusterSubnetStateFault,
+        "SubnetAlreadyInUse": exceptions.SubnetAlreadyInUse,
+        "InvalidParameterCombination": exceptions.InvalidParameterCombinationFault,
+    }
+
+
+    def __init__(self, **kwargs):
+        region = kwargs.pop('region', None)
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint)
+        kwargs['host'] = region.endpoint
+        AWSQueryConnection.__init__(self, **kwargs)
+        self.region = region
+
+    def _required_auth_capability(self):
+        return ['hmac-v4']
+
+    def authorize_cluster_security_group_ingress(self,
+                                                 cluster_security_group_name,
+                                                 cidrip=None,
+                                                 ec2_security_group_name=None,
+                                                 ec2_security_group_owner_id=None):
+        """
+        Adds an inbound (ingress) rule to an Amazon Redshift security
+        group. Depending on whether the application accessing your
+        cluster is running on the Internet or an EC2 instance, you can
+        authorize inbound access to either a Classless Interdomain
+        Routing (CIDR) IP address range or an EC2 security group. You
+        can add as many as 20 ingress rules to an Amazon Redshift
+        security group.
+        The EC2 security group must be defined in the AWS region where
+        the cluster resides.
+        For an overview of CIDR blocks, see the Wikipedia article on
+        `Classless Inter-Domain Routing`_.
+
+        You must also associate the security group with a cluster so
+        that clients running on these IP addresses or the EC2 instance
+        are authorized to connect to the cluster. For information
+        about managing security groups, go to `Working with Security
+        Groups`_ in the Amazon Redshift Management Guide.
+
+        :type cluster_security_group_name: string
+        :param cluster_security_group_name: The name of the security group to
+            which the ingress rule is added.
+
+        :type cidrip: string
+        :param cidrip: The IP range to be added to the Amazon Redshift
+            security group.
+
+        :type ec2_security_group_name: string
+        :param ec2_security_group_name: The EC2 security group to be added to
+            the Amazon Redshift security group.
+
+        :type ec2_security_group_owner_id: string
+        :param ec2_security_group_owner_id: The AWS account number of the owner
+            of the security group specified by the EC2SecurityGroupName
+            parameter. The AWS Access Key ID is not an acceptable value.
+        Example: `111122223333`
+
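+        A usage sketch (assuming `conn` is a RedshiftConnection; the group
+        name and CIDR range are illustrative)::
+
+            conn.authorize_cluster_security_group_ingress(
+                'examplesecuritygroup', cidrip='192.168.100.0/24')
+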
+        """
+        params = {
+            'ClusterSecurityGroupName': cluster_security_group_name,
+        }
+        if cidrip is not None:
+            params['CIDRIP'] = cidrip
+        if ec2_security_group_name is not None:
+            params['EC2SecurityGroupName'] = ec2_security_group_name
+        if ec2_security_group_owner_id is not None:
+            params['EC2SecurityGroupOwnerId'] = ec2_security_group_owner_id
+        return self._make_request(
+            action='AuthorizeClusterSecurityGroupIngress',
+            verb='POST',
+            path='/', params=params)
+
+    def copy_cluster_snapshot(self, source_snapshot_identifier,
+                              target_snapshot_identifier):
+        """
+        Copies the specified automated cluster snapshot to a new
+        manual cluster snapshot. The source must be an automated
+        snapshot and it must be in the available state.
+
+        When you delete a cluster, Amazon Redshift deletes any
+        automated snapshots of the cluster. Also, when the retention
+        period of the snapshot expires, Amazon Redshift automatically
+        deletes it. If you want to keep an automated snapshot for a
+        longer period, you can make a manual copy of the snapshot.
+        Manual snapshots are retained until you delete them.
+
+        For more information about working with snapshots, go to
+        `Amazon Redshift Snapshots`_ in the Amazon Redshift Management
+        Guide.
+
+        :type source_snapshot_identifier: string
+        :param source_snapshot_identifier:
+        The identifier for the source snapshot.
+
+        Constraints:
+
+
+        + Must be the identifier for a valid automated snapshot whose state is
+              "available".
+
+        :type target_snapshot_identifier: string
+        :param target_snapshot_identifier:
+        The identifier given to the new manual snapshot.
+
+        Constraints:
+
+
+        + Cannot be null, empty, or blank.
+        + Must contain from 1 to 255 alphanumeric characters or hyphens.
+        + First character must be a letter.
+        + Cannot end with a hyphen or contain two consecutive hyphens.
+        + Must be unique for the AWS account that is making the request.
+
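+        A usage sketch (assuming `conn` is a RedshiftConnection; the
+        snapshot identifiers are illustrative)::
+
+            conn.copy_cluster_snapshot(
+                source_snapshot_identifier='examplecluster-2013-01-22-19-27-58',
+                target_snapshot_identifier='my-manual-copy')
+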
+        """
+        params = {
+            'SourceSnapshotIdentifier': source_snapshot_identifier,
+            'TargetSnapshotIdentifier': target_snapshot_identifier,
+        }
+        return self._make_request(
+            action='CopyClusterSnapshot',
+            verb='POST',
+            path='/', params=params)
+
+    def create_cluster(self, cluster_identifier, node_type, master_username,
+                       master_user_password, db_name=None, cluster_type=None,
+                       cluster_security_groups=None,
+                       vpc_security_group_ids=None,
+                       cluster_subnet_group_name=None,
+                       availability_zone=None,
+                       preferred_maintenance_window=None,
+                       cluster_parameter_group_name=None,
+                       automated_snapshot_retention_period=None, port=None,
+                       cluster_version=None, allow_version_upgrade=None,
+                       number_of_nodes=None, publicly_accessible=None,
+                       encrypted=None):
+        """
+        Creates a new cluster. To create the cluster in a virtual
+        private cloud (VPC), you must provide a cluster subnet group
+        name. If you don't provide a cluster subnet group name or a
+        cluster security group parameter, Amazon Redshift creates a
+        non-VPC cluster and associates the default cluster security
+        group with the cluster. For more information about managing
+        clusters, go to `Amazon Redshift Clusters`_ in the Amazon
+        Redshift Management Guide.
+
+        :type db_name: string
+        :param db_name:
+        The name of the first database to be created when the cluster is
+            created.
+
+        To create additional databases after the cluster is created, connect to
+            the cluster with a SQL client and use SQL commands to create a
+            database. For more information, go to `Create a Database`_ in the
+            Amazon Redshift Developer Guide.
+
+        Default: `dev`
+
+        Constraints:
+
+
+        + Must contain 1 to 64 alphanumeric characters.
+        + Must contain only lowercase letters.
+        + Cannot be a word that is reserved by the service. A list of reserved
+              words can be found in `Reserved Words`_ in the Amazon Redshift
+              Developer Guide.
+
+        :type cluster_identifier: string
+        :param cluster_identifier: A unique identifier for the cluster. You use
+            this identifier to refer to the cluster for any subsequent cluster
+            operations such as deleting or modifying. The identifier also
+            appears in the Amazon Redshift console.
+        Constraints:
+
+
+        + Must contain from 1 to 63 alphanumeric characters or hyphens.
+        + Alphabetic characters must be lowercase.
+        + First character must be a letter.
+        + Cannot end with a hyphen or contain two consecutive hyphens.
+        + Must be unique for all clusters within an AWS account.
+
+
+        Example: `myexamplecluster`
+
+        :type cluster_type: string
+        :param cluster_type: The type of the cluster. When cluster type is
+            specified as
+
+        + `single-node`, the **NumberOfNodes** parameter is not required.
+        + `multi-node`, the **NumberOfNodes** parameter is required.
+
+
+        Valid Values: `multi-node` | `single-node`
+
+        Default: `multi-node`
+
+        :type node_type: string
+        :param node_type: The node type to be provisioned for the cluster. For
+            information about node types, go to `Working with Clusters`_ in
+            the Amazon Redshift Management Guide.
+        Valid Values: `dw.hs1.xlarge` | `dw.hs1.8xlarge`.
+
+        :type master_username: string
+        :param master_username:
+        The user name associated with the master user account for the cluster
+            that is being created.
+
+        Constraints:
+
+
+        + Must be 1 - 128 alphanumeric characters.
+        + First character must be a letter.
+        + Cannot be a reserved word. A list of reserved words can be found in
+              `Reserved Words`_ in the Amazon Redshift Developer Guide.
+
+        :type master_user_password: string
+        :param master_user_password:
+        The password associated with the master user account for the cluster
+            that is being created.
+
+        Constraints:
+
+
+        + Must be between 8 and 64 characters in length.
+        + Must contain at least one uppercase letter.
+        + Must contain at least one lowercase letter.
+        + Must contain one number.
+
+        :type cluster_security_groups: list
+        :param cluster_security_groups: A list of security groups to be
+            associated with this cluster.
+        Default: The default cluster security group for Amazon Redshift.
+
+        :type vpc_security_group_ids: list
+        :param vpc_security_group_ids: A list of Virtual Private Cloud (VPC)
+            security groups to be associated with the cluster.
+        Default: The default VPC security group is associated with the cluster.
+
+        :type cluster_subnet_group_name: string
+        :param cluster_subnet_group_name: The name of a cluster subnet group to
+            be associated with this cluster.
+        If this parameter is not provided, the resulting cluster will be
+            deployed outside a virtual private cloud (VPC).
+
+        :type availability_zone: string
+        :param availability_zone: The EC2 Availability Zone (AZ) in which you
+            want Amazon Redshift to provision the cluster. For example, if you
+            have several EC2 instances running in a specific Availability Zone,
+            then you might want the cluster to be provisioned in the same zone
+            in order to decrease network latency.
+        Default: A random, system-chosen Availability Zone in the region that
+            is specified by the endpoint.
+
+        Example: `us-east-1d`
+
+        Constraint: The specified Availability Zone must be in the same region
+            as the current endpoint.
+
+        :type preferred_maintenance_window: string
+        :param preferred_maintenance_window: The weekly time range (in UTC)
+            during which automated cluster maintenance can occur.
+        Format: `ddd:hh24:mi-ddd:hh24:mi`
+
+        Default: A 30-minute window selected at random from an 8-hour block of
+            time per region, occurring on a random day of the week. The
+            following list shows the time blocks for each region from which the
+            default maintenance windows are assigned.
+
+
+        + **US-East (Northern Virginia) Region:** 03:00-11:00 UTC
+        + **US-West (Northern California) Region:** 06:00-14:00 UTC
+        + **EU (Ireland) Region:** 22:00-06:00 UTC
+        + **Asia Pacific (Singapore) Region:** 14:00-22:00 UTC
+        + **Asia Pacific (Tokyo) Region:** 17:00-03:00 UTC
+
+
+        Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun
+
+        Constraints: Minimum 30-minute window.
+
+        :type cluster_parameter_group_name: string
+        :param cluster_parameter_group_name:
+        The name of the parameter group to be associated with this cluster.
+
+        Default: The default Amazon Redshift cluster parameter group. For
+            information about the default parameter group, go to `Working with
+            Amazon Redshift Parameter Groups`_
+
+        Constraints:
+
+
+        + Must be 1 to 255 alphanumeric characters or hyphens.
+        + First character must be a letter.
+        + Cannot end with a hyphen or contain two consecutive hyphens.
+
+        :type automated_snapshot_retention_period: integer
+        :param automated_snapshot_retention_period: The number of days that
+            automated snapshots are retained. If the value is 0, automated
+            snapshots are disabled. Even if automated snapshots are disabled,
+            you can still create manual snapshots when you want with
+            CreateClusterSnapshot.
+        Default: `1`
+
+        Constraints: Must be a value from 0 to 35.
+
+        :type port: integer
+        :param port: The port number on which the cluster accepts incoming
+            connections.
+        The cluster is accessible only via the JDBC and ODBC connection
+            strings. Part of the connection string requires the port on which
+            the cluster will listen for incoming connections.
+
+        Default: `5439`
+
+        Valid Values: `1150-65535`
+
+        :type cluster_version: string
+        :param cluster_version: The version of the Amazon Redshift engine
+            software that you want to deploy on the cluster.
+        The version selected runs on all the nodes in the cluster.
+
+        Constraints: Only version 1.0 is currently available.
+
+        Example: `1.0`
+
+        :type allow_version_upgrade: boolean
+        :param allow_version_upgrade: If `True`, upgrades can be applied during
+            the maintenance window to the Amazon Redshift engine that is
+            running on the cluster.
+        When a new version of the Amazon Redshift engine is released, you can
+            request that the service automatically apply upgrades during the
+            maintenance window to the Amazon Redshift engine that is running on
+            your cluster.
+
+        Default: `True`
+
+        :type number_of_nodes: integer
+        :param number_of_nodes: The number of compute nodes in the cluster.
+            This parameter is required when the **ClusterType** parameter is
+            specified as `multi-node`.
+        For information about determining how many nodes you need, go to
+            `Working with Clusters`_ in the Amazon Redshift Management Guide.
+
+        If you don't specify this parameter, you get a single-node cluster.
+            When requesting a multi-node cluster, you must specify the number
+            of nodes that you want in the cluster.
+
+        Default: `1`
+
+        Constraints: Value must be at least 1 and no more than 100.
+
+        :type publicly_accessible: boolean
+        :param publicly_accessible: If `True`, the cluster can be accessed from
+            a public network.
+
+        :type encrypted: boolean
+        :param encrypted: If `True`, the data in the cluster is encrypted at
+            rest.
+        Default: `False`
+
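+        A minimal usage sketch for a single-node cluster (assuming `conn` is
+        a RedshiftConnection; all values are illustrative)::
+
+            conn.create_cluster(
+                'myexamplecluster', 'dw.hs1.xlarge',
+                'masteruser', 'TopSecret1',
+                cluster_type='single-node')
+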
+        """
+        params = {
+            'ClusterIdentifier': cluster_identifier,
+            'NodeType': node_type,
+            'MasterUsername': master_username,
+            'MasterUserPassword': master_user_password,
+        }
+        if db_name is not None:
+            params['DBName'] = db_name
+        if cluster_type is not None:
+            params['ClusterType'] = cluster_type
+        if cluster_security_groups is not None:
+            self.build_list_params(params,
+                                   cluster_security_groups,
+                                   'ClusterSecurityGroups.member')
+        if vpc_security_group_ids is not None:
+            self.build_list_params(params,
+                                   vpc_security_group_ids,
+                                   'VpcSecurityGroupIds.member')
+        if cluster_subnet_group_name is not None:
+            params['ClusterSubnetGroupName'] = cluster_subnet_group_name
+        if availability_zone is not None:
+            params['AvailabilityZone'] = availability_zone
+        if preferred_maintenance_window is not None:
+            params['PreferredMaintenanceWindow'] = preferred_maintenance_window
+        if cluster_parameter_group_name is not None:
+            params['ClusterParameterGroupName'] = cluster_parameter_group_name
+        if automated_snapshot_retention_period is not None:
+            params['AutomatedSnapshotRetentionPeriod'] = automated_snapshot_retention_period
+        if port is not None:
+            params['Port'] = port
+        if cluster_version is not None:
+            params['ClusterVersion'] = cluster_version
+        if allow_version_upgrade is not None:
+            params['AllowVersionUpgrade'] = str(
+                allow_version_upgrade).lower()
+        if number_of_nodes is not None:
+            params['NumberOfNodes'] = number_of_nodes
+        if publicly_accessible is not None:
+            params['PubliclyAccessible'] = str(
+                publicly_accessible).lower()
+        if encrypted is not None:
+            params['Encrypted'] = str(
+                encrypted).lower()
+        return self._make_request(
+            action='CreateCluster',
+            verb='POST',
+            path='/', params=params)
+
+    def create_cluster_parameter_group(self, parameter_group_name,
+                                       parameter_group_family, description):
+        """
+        Creates an Amazon Redshift parameter group.
+
+        Creating parameter groups is independent of creating clusters.
+        You can associate a cluster with a parameter group when you
+        create the cluster. You can also associate an existing cluster
+        with a parameter group after the cluster is created by using
+        ModifyCluster.
+
+        Parameters in the parameter group define specific behavior
+        that applies to the databases you create on the cluster. For
+        more information about managing parameter groups, go to
+        `Amazon Redshift Parameter Groups`_ in the Amazon Redshift
+        Management Guide.
+
+        :type parameter_group_name: string
+        :param parameter_group_name:
+        The name of the cluster parameter group.
+
+        Constraints:
+
+
+        + Must be 1 to 255 alphanumeric characters or hyphens
+        + First character must be a letter.
+        + Cannot end with a hyphen or contain two consecutive hyphens.
+        + Must be unique within your AWS account.
+
+
+        This value is stored as a lower-case string.
+
+        :type parameter_group_family: string
+        :param parameter_group_family: The Amazon Redshift engine version to
+            which the cluster parameter group applies. The cluster engine
+            version determines the set of parameters.
+        To get a list of valid parameter group family names, you can call
+            DescribeClusterParameterGroups. By default, Amazon Redshift returns
+            a list of all the parameter groups that are owned by your AWS
+            account, including the default parameter groups for each Amazon
+            Redshift engine version. The parameter group family names
+            associated with the default parameter groups provide you the valid
+            values. For example, a valid family name is "redshift-1.0".
+
+        :type description: string
+        :param description: A description of the parameter group.
+
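+        A usage sketch (assuming `conn` is a RedshiftConnection; values are
+        illustrative)::
+
+            conn.create_cluster_parameter_group(
+                'myclusterparametergroup', 'redshift-1.0',
+                'My first parameter group')
+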
+        """
+        params = {
+            'ParameterGroupName': parameter_group_name,
+            'ParameterGroupFamily': parameter_group_family,
+            'Description': description,
+        }
+        return self._make_request(
+            action='CreateClusterParameterGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def create_cluster_security_group(self, cluster_security_group_name,
+                                      description):
+        """
+        Creates a new Amazon Redshift security group. You use security
+        groups to control access to non-VPC clusters.
+
+        For information about managing security groups, go to `Amazon
+        Redshift Cluster Security Groups`_ in the Amazon Redshift
+        Management Guide.
+
+        :type cluster_security_group_name: string
+        :param cluster_security_group_name: The name for the security group.
+            Amazon Redshift stores the value as a lowercase string.
+        Constraints:
+
+
+        + Must contain no more than 255 alphanumeric characters or hyphens.
+        + Must not be "Default".
+        + Must be unique for all security groups that are created by your AWS
+              account.
+
+
+        Example: `examplesecuritygroup`
+
+        :type description: string
+        :param description: A description for the security group.
+
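+        A usage sketch (assuming `conn` is a RedshiftConnection; values are
+        illustrative)::
+
+            conn.create_cluster_security_group(
+                'examplesecuritygroup', 'Allows access from my application')
+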
+        """
+        params = {
+            'ClusterSecurityGroupName': cluster_security_group_name,
+            'Description': description,
+        }
+        return self._make_request(
+            action='CreateClusterSecurityGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def create_cluster_snapshot(self, snapshot_identifier,
+                                cluster_identifier):
+        """
+        Creates a manual snapshot of the specified cluster. The
+        cluster must be in the "available" state.
+
+        For more information about working with snapshots, go to
+        `Amazon Redshift Snapshots`_ in the Amazon Redshift Management
+        Guide.
+
+        :type snapshot_identifier: string
+        :param snapshot_identifier: A unique identifier for the snapshot that
+            you are requesting. This identifier must be unique for all
+            snapshots within the AWS account.
+        Constraints:
+
+
+        + Cannot be null, empty, or blank
+        + Must contain from 1 to 255 alphanumeric characters or hyphens
+        + First character must be a letter
+        + Cannot end with a hyphen or contain two consecutive hyphens
+
+
+        Example: `my-snapshot-id`
+
+        :type cluster_identifier: string
+        :param cluster_identifier: The cluster identifier for which you want a
+            snapshot.
+
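+        A usage sketch (assuming `conn` is a RedshiftConnection; the
+        identifiers are illustrative)::
+
+            conn.create_cluster_snapshot('my-snapshot-id', 'myexamplecluster')
+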
+        """
+        params = {
+            'SnapshotIdentifier': snapshot_identifier,
+            'ClusterIdentifier': cluster_identifier,
+        }
+        return self._make_request(
+            action='CreateClusterSnapshot',
+            verb='POST',
+            path='/', params=params)
+
+    def create_cluster_subnet_group(self, cluster_subnet_group_name,
+                                    description, subnet_ids):
+        """
+        Creates a new Amazon Redshift subnet group. You must provide a
+        list of one or more subnets in your existing Amazon Virtual
+        Private Cloud (Amazon VPC) when creating an Amazon Redshift
+        subnet group.
+
+        For information about subnet groups, go to `Amazon Redshift
+        Cluster Subnet Groups`_ in the Amazon Redshift Management
+        Guide.
+
+        :type cluster_subnet_group_name: string
+        :param cluster_subnet_group_name: The name for the subnet group. Amazon
+            Redshift stores the value as a lowercase string.
+        Constraints:
+
+
+        + Must contain no more than 255 alphanumeric characters or hyphens.
+        + Must not be "Default".
+        + Must be unique for all subnet groups that are created by your AWS
+              account.
+
+
+        Example: `examplesubnetgroup`
+
+        :type description: string
+        :param description: A description for the subnet group.
+
+        :type subnet_ids: list
+        :param subnet_ids: An array of VPC subnet IDs. A maximum of 20 subnets
+            can be modified in a single request.
+
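+        A usage sketch (assuming `conn` is a RedshiftConnection; the subnet
+        IDs are illustrative)::
+
+            conn.create_cluster_subnet_group(
+                'examplesubnetgroup', 'My VPC subnets',
+                ['subnet-11111111', 'subnet-22222222'])
+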
+        """
+        params = {
+            'ClusterSubnetGroupName': cluster_subnet_group_name,
+            'Description': description,
+        }
+        self.build_list_params(params,
+                               subnet_ids,
+                               'SubnetIds.member')
+        return self._make_request(
+            action='CreateClusterSubnetGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def delete_cluster(self, cluster_identifier,
+                       skip_final_cluster_snapshot=None,
+                       final_cluster_snapshot_identifier=None):
+        """
+        Deletes a previously provisioned cluster. A successful
+        response from the web service indicates that the request was
+        received correctly. If a final cluster snapshot is requested
+        the status of the cluster will be "final-snapshot" while the
+        snapshot is being taken, then it's "deleting" once Amazon
+        Redshift begins deleting the cluster. Use DescribeClusters to
+        monitor the status of the deletion. The delete operation
+        cannot be canceled or reverted once submitted. For more
+        information about managing clusters, go to `Amazon Redshift
+        Clusters`_ in the Amazon Redshift Management Guide.
+
+        :type cluster_identifier: string
+        :param cluster_identifier:
+        The identifier of the cluster to be deleted.
+
+        Constraints:
+
+
+        + Must contain lowercase characters.
+        + Must contain from 1 to 63 alphanumeric characters or hyphens.
+        + First character must be a letter.
+        + Cannot end with a hyphen or contain two consecutive hyphens.
+
+        :type skip_final_cluster_snapshot: boolean
+        :param skip_final_cluster_snapshot: Determines whether a final snapshot
+            of the cluster is created before Amazon Redshift deletes the
+            cluster. If `True`, a final cluster snapshot is not created. If
+            `False`, a final cluster snapshot is created before the cluster is
+            deleted.
+        The FinalClusterSnapshotIdentifier parameter must be specified if
+            SkipFinalClusterSnapshot is `False`.
+
+        Default: `False`
+
+        :type final_cluster_snapshot_identifier: string
+        :param final_cluster_snapshot_identifier:
+        The identifier of the final snapshot that is to be created immediately
+            before deleting the cluster. If this parameter is provided,
+            SkipFinalClusterSnapshot must be `False`.
+
+        Constraints:
+
+
+        + Must be 1 to 255 alphanumeric characters.
+        + First character must be a letter.
+        + Cannot end with a hyphen or contain two consecutive hyphens.
+
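+        A usage sketch that requests a final snapshot before deletion
+        (assuming `conn` is a RedshiftConnection; identifiers are
+        illustrative)::
+
+            conn.delete_cluster(
+                'myexamplecluster',
+                skip_final_cluster_snapshot=False,
+                final_cluster_snapshot_identifier='myexamplecluster-final')
+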
+        """
+        params = {'ClusterIdentifier': cluster_identifier, }
+        if skip_final_cluster_snapshot is not None:
+            params['SkipFinalClusterSnapshot'] = str(
+                skip_final_cluster_snapshot).lower()
+        if final_cluster_snapshot_identifier is not None:
+            params['FinalClusterSnapshotIdentifier'] = final_cluster_snapshot_identifier
+        return self._make_request(
+            action='DeleteCluster',
+            verb='POST',
+            path='/', params=params)
+
+    def delete_cluster_parameter_group(self, parameter_group_name):
+        """
+        Deletes a specified Amazon Redshift parameter group. You
+        cannot delete a parameter group if it is associated with a
+        cluster.
+
+        :type parameter_group_name: string
+        :param parameter_group_name:
+        The name of the parameter group to be deleted.
+
+        Constraints:
+
+
+        + Must be the name of an existing cluster parameter group.
+        + Cannot delete a default cluster parameter group.
+
+        """
+        params = {'ParameterGroupName': parameter_group_name, }
+        return self._make_request(
+            action='DeleteClusterParameterGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def delete_cluster_security_group(self, cluster_security_group_name):
+        """
+        Deletes an Amazon Redshift security group.
+        You cannot delete a security group that is associated with any
+        clusters. You cannot delete the default security group.
+        For information about managing security groups, go to `Amazon
+        Redshift Cluster Security Groups`_ in the Amazon Redshift
+        Management Guide.
+
+        :type cluster_security_group_name: string
+        :param cluster_security_group_name: The name of the cluster security
+            group to be deleted.
+
+        """
+        params = {
+            'ClusterSecurityGroupName': cluster_security_group_name,
+        }
+        return self._make_request(
+            action='DeleteClusterSecurityGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def delete_cluster_snapshot(self, snapshot_identifier):
+        """
+        Deletes the specified manual snapshot. The snapshot must be in
+        the "available" state.
+
+        Unlike automated snapshots, manual snapshots are retained even
+        after you delete your cluster. Amazon Redshift does not delete
+        your manual snapshots. You must delete manual snapshots
+        explicitly to avoid getting charged.
+
+        :type snapshot_identifier: string
+        :param snapshot_identifier: The unique identifier of the manual
+            snapshot to be deleted.
+        Constraints: Must be the name of an existing snapshot that is in the
+            `available` state.
+
+        """
+        params = {'SnapshotIdentifier': snapshot_identifier, }
+        return self._make_request(
+            action='DeleteClusterSnapshot',
+            verb='POST',
+            path='/', params=params)
+
+    def delete_cluster_subnet_group(self, cluster_subnet_group_name):
+        """
+        Deletes the specified cluster subnet group.
+
+        :type cluster_subnet_group_name: string
+        :param cluster_subnet_group_name: The name of the cluster subnet group
+            to be deleted.
+
+        """
+        params = {
+            'ClusterSubnetGroupName': cluster_subnet_group_name,
+        }
+        return self._make_request(
+            action='DeleteClusterSubnetGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_cluster_parameter_groups(self, parameter_group_name=None,
+                                          max_records=None, marker=None):
+        """
+        Returns a list of Amazon Redshift parameter groups, including
+        parameter groups you created and the default parameter group.
+        For each parameter group, the response includes the parameter
+        group name, description, and parameter group family name. You
+        can optionally specify a name to retrieve the description of a
+        specific parameter group.
+
+        For more information about managing parameter groups, go to
+        `Amazon Redshift Parameter Groups`_ in the Amazon Redshift
+        Management Guide.
+
+        :type parameter_group_name: string
+        :param parameter_group_name: The name of a specific parameter group for
+            which to return details. By default, details about all parameter
+            groups and the default parameter group are returned.
+
+        :type max_records: integer
+        :param max_records: The maximum number of parameter group records to
+            include in the response. If more records exist than the specified
+            `MaxRecords` value, the response includes a marker that you can use
+            in a subsequent DescribeClusterParameterGroups request to retrieve
+            the next set of records.
+        Default: `100`
+
+        Constraints: Value must be at least 20 and no more than 100.
+
+        :type marker: string
+        :param marker: An optional marker returned by a previous
+            DescribeClusterParameterGroups request to indicate the first
+            parameter group that the current request will return.
+
+        """
+        params = {}
+        if parameter_group_name is not None:
+            params['ParameterGroupName'] = parameter_group_name
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeClusterParameterGroups',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_cluster_parameters(self, parameter_group_name, source=None,
+                                    max_records=None, marker=None):
+        """
+        Returns a detailed list of parameters contained within the
+        specified Amazon Redshift parameter group. For each parameter
+        the response includes information such as parameter name,
+        description, data type, value, whether the parameter value is
+        modifiable, and so on.
+
+        You can specify a source filter to retrieve parameters of only a
+        specific type. For example, to retrieve parameters that were
+        modified by a user action such as
+        ModifyClusterParameterGroup, you can specify source equal to
+        `user`.
+
+        For more information about managing parameter groups, go to
+        `Amazon Redshift Parameter Groups`_ in the Amazon Redshift
+        Management Guide.
+
+        :type parameter_group_name: string
+        :param parameter_group_name: The name of a cluster parameter group for
+            which to return details.
+
+        :type source: string
+        :param source: The parameter types to return. Specify `user` to show
+            parameters that are different from the default. Similarly, specify
+            `engine-default` to show parameters that are the same as the
+            default parameter group.
+        Default: All parameter types returned.
+
+        Valid Values: `user` | `engine-default`
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified `MaxRecords`
+            value, the response includes a marker that you can specify in your
+            subsequent request to retrieve the remaining results.
+        Default: `100`
+
+        Constraints: Value must be at least 20 and no more than 100.
+
+        :type marker: string
+        :param marker: An optional marker returned from a previous
+            **DescribeClusterParameters** request. If this parameter is
+            specified, the response includes only records beyond the specified
+            marker, up to the value specified by `MaxRecords`.
+
+        """
+        params = {'ParameterGroupName': parameter_group_name, }
+        if source is not None:
+            params['Source'] = source
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeClusterParameters',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_cluster_security_groups(self,
+                                         cluster_security_group_name=None,
+                                         max_records=None, marker=None):
+        """
+        Returns information about Amazon Redshift security groups. If
+        the name of a security group is specified, the response will
+        contain information about only that security group.
+
+        For information about managing security groups, go to `Amazon
+        Redshift Cluster Security Groups`_ in the Amazon Redshift
+        Management Guide.
+
+        :type cluster_security_group_name: string
+        :param cluster_security_group_name: The name of a cluster security
+            group for which you are requesting details. You can specify either
+            the **Marker** parameter or a **ClusterSecurityGroupName**
+            parameter, but not both.
+        Example: `securitygroup1`
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to be included in the
+            response. If more records exist than the specified `MaxRecords`
+            value, a marker is included in the response, which you can use in a
+            subsequent DescribeClusterSecurityGroups request.
+        Default: `100`
+
+        Constraints: Value must be at least 20 and no more than 100.
+
+        :type marker: string
+        :param marker: An optional marker returned by a previous
+            DescribeClusterSecurityGroups request to indicate the first
+            security group that the current request will return. You can
+            specify either the **Marker** parameter or a
+            **ClusterSecurityGroupName** parameter, but not both.
+
+        """
+        params = {}
+        if cluster_security_group_name is not None:
+            params['ClusterSecurityGroupName'] = cluster_security_group_name
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeClusterSecurityGroups',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_cluster_snapshots(self, cluster_identifier=None,
+                                   snapshot_identifier=None,
+                                   snapshot_type=None, start_time=None,
+                                   end_time=None, max_records=None,
+                                   marker=None):
+        """
+        Returns one or more snapshot objects, which contain metadata
+        about your cluster snapshots. By default, this operation
+        returns information about all snapshots of all clusters that
+        are owned by the AWS account.
+
+        :type cluster_identifier: string
+        :param cluster_identifier: The identifier of the cluster for which
+            information about snapshots is requested.
+
+        :type snapshot_identifier: string
+        :param snapshot_identifier: The snapshot identifier of the snapshot
+            about which to return information.
+
+        :type snapshot_type: string
+        :param snapshot_type: The type of snapshots for which you are
+            requesting information. By default, snapshots of all types are
+            returned.
+        Valid Values: `automated` | `manual`
+
+        :type start_time: timestamp
+        :param start_time: A value that requests only snapshots created at or
+            after the specified time. The time value is specified in ISO 8601
+            format. For more information about ISO 8601, go to the `ISO8601
+            Wikipedia page.`_
+        Example: `2012-07-16T18:00:00Z`
+
+        :type end_time: timestamp
+        :param end_time: A time value that requests only snapshots created at
+            or before the specified time. The time value is specified in ISO
+            8601 format. For more information about ISO 8601, go to the
+            `ISO8601 Wikipedia page.`_
+        Example: `2012-07-16T18:00:00Z`
+
+        :type max_records: integer
+        :param max_records: The maximum number of snapshot records to include
+            in the response. If more records exist than the specified
+            `MaxRecords` value, the response returns a marker that you can use
+            in a subsequent DescribeClusterSnapshots request in order to
+            retrieve the next set of snapshot records.
+        Default: `100`
+
+        Constraints: Must be at least 20 and no more than 100.
+
+        :type marker: string
+        :param marker: An optional marker returned by a previous
+            DescribeClusterSnapshots request to indicate the first snapshot
+            that the request will return.
+
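+        A usage sketch that lists the manual snapshots of one cluster
+        (assuming `conn` is a RedshiftConnection; the identifier is
+        illustrative)::
+
+            conn.describe_cluster_snapshots(
+                cluster_identifier='myexamplecluster',
+                snapshot_type='manual')
+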
+        """
+        params = {}
+        if cluster_identifier is not None:
+            params['ClusterIdentifier'] = cluster_identifier
+        if snapshot_identifier is not None:
+            params['SnapshotIdentifier'] = snapshot_identifier
+        if snapshot_type is not None:
+            params['SnapshotType'] = snapshot_type
+        if start_time is not None:
+            params['StartTime'] = start_time
+        if end_time is not None:
+            params['EndTime'] = end_time
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeClusterSnapshots',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_cluster_subnet_groups(self, cluster_subnet_group_name=None,
+                                       max_records=None, marker=None):
+        """
+        Returns one or more cluster subnet group objects, which
+        contain metadata about your cluster subnet groups. By default,
+        this operation returns information about all cluster subnet
+        groups that are defined in your AWS account.
+
+        :type cluster_subnet_group_name: string
+        :param cluster_subnet_group_name: The name of the cluster subnet group
+            for which information is requested.
+
+        :type max_records: integer
+        :param max_records: The maximum number of cluster subnet group records
+            to include in the response. If more records exist than the
+            specified `MaxRecords` value, the response returns a marker that
+            you can use in a subsequent DescribeClusterSubnetGroups request in
+            order to retrieve the next set of cluster subnet group records.
+        Default: 100
+
+        Constraints: Must be at least 20 and no more than 100.
+
+        :type marker: string
+        :param marker: An optional marker returned by a previous
+            DescribeClusterSubnetGroups request to indicate the first cluster
+            subnet group that the current request will return.
+
+        """
+        params = {}
+        if cluster_subnet_group_name is not None:
+            params['ClusterSubnetGroupName'] = cluster_subnet_group_name
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeClusterSubnetGroups',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_cluster_versions(self, cluster_version=None,
+                                  cluster_parameter_group_family=None,
+                                  max_records=None, marker=None):
+        """
+        Returns descriptions of the available Amazon Redshift cluster
+        versions. You can call this operation even before creating any
+        clusters to learn more about the Amazon Redshift versions. For
+        more information about managing clusters, go to `Amazon
+        Redshift Clusters`_ in the Amazon Redshift Management Guide.
+
+        :type cluster_version: string
+        :param cluster_version: The specific cluster version to return.
+        Example: `1.0`
+
+        :type cluster_parameter_group_family: string
+        :param cluster_parameter_group_family:
+        The name of a specific cluster parameter group family to return details
+            for.
+
+        Constraints:
+
+
+        + Must be 1 to 255 alphanumeric characters
+        + First character must be a letter
+        + Cannot end with a hyphen or contain two consecutive hyphens
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more than the `MaxRecords` value is available, a
+            marker is included in the response so that the following results
+            can be retrieved.
+        Default: `100`
+
+        Constraints: Value must be at least 20 and no more than 100.
+
+        :type marker: string
+        :param marker: The marker returned from a previous request. If this
+            parameter is specified, the response includes records beyond the
+            marker only, up to `MaxRecords`.
+
+        """
+        params = {}
+        if cluster_version is not None:
+            params['ClusterVersion'] = cluster_version
+        if cluster_parameter_group_family is not None:
+            params['ClusterParameterGroupFamily'] = cluster_parameter_group_family
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeClusterVersions',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_clusters(self, cluster_identifier=None, max_records=None,
+                          marker=None):
+        """
+        Returns properties of provisioned clusters including general
+        cluster properties, cluster database properties, maintenance
+        and backup properties, and security and access properties.
+        This operation supports pagination. For more information about
+        managing clusters, go to `Amazon Redshift Clusters`_ in the
+        Amazon Redshift Management Guide.
+
+        :type cluster_identifier: string
+        :param cluster_identifier: The unique identifier of a cluster whose
+            properties you are requesting. This parameter isn't case sensitive.
+        The default is that all clusters defined for an account are returned.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records that the response can
+            include. If more records exist than the specified `MaxRecords`
+            value, a `marker` is included in the response that can be used in a
+            new **DescribeClusters** request to continue listing results.
+        Default: `100`
+
+        Constraints: Value must be at least 20 and no more than 100.
+
+        :type marker: string
+        :param marker: An optional marker returned by a previous
+            **DescribeClusters** request to indicate the first cluster that the
+            current **DescribeClusters** request will return.
+        You can specify either a **Marker** parameter or a
+            **ClusterIdentifier** parameter in a **DescribeClusters** request,
+            but not both.
+
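+        A pagination sketch using the returned marker (assuming `conn` is a
+        RedshiftConnection; the nested response keys reflect the usual boto
+        JSON layout and are an assumption here)::
+
+            marker = None
+            while True:
+                response = conn.describe_clusters(max_records=20,
+                                                  marker=marker)
+                result = response['DescribeClustersResponse']
+                result = result['DescribeClustersResult']
+                for cluster in result['Clusters']:
+                    print cluster['ClusterIdentifier']
+                marker = result.get('Marker')
+                if not marker:
+                    break
+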
+        """
+        params = {}
+        if cluster_identifier is not None:
+            params['ClusterIdentifier'] = cluster_identifier
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeClusters',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_default_cluster_parameters(self, parameter_group_family,
+                                            max_records=None, marker=None):
+        """
+        Returns a list of parameter settings for the specified
+        parameter group family.
+
+        For more information about managing parameter groups, go to
+        `Amazon Redshift Parameter Groups`_ in the Amazon Redshift
+        Management Guide.
+
+        :type parameter_group_family: string
+        :param parameter_group_family: The name of the cluster parameter group
+            family.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified `MaxRecords`
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+        Default: `100`
+
+        Constraints: Value must be at least 20 and no more than 100.
+
+        :type marker: string
+        :param marker: An optional marker returned from a previous
+            **DescribeDefaultClusterParameters** request. If this parameter is
+            specified, the response includes only records beyond the marker, up
+            to the value specified by `MaxRecords`.
+
+        """
+        params = {'ParameterGroupFamily': parameter_group_family, }
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeDefaultClusterParameters',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_events(self, source_identifier=None, source_type=None,
+                        start_time=None, end_time=None, duration=None,
+                        max_records=None, marker=None):
+        """
+        Returns events related to clusters, security groups,
+        snapshots, and parameter groups for the past 14 days. Events
+        specific to a particular cluster, security group, snapshot or
+        parameter group can be obtained by providing the name as a
+        parameter. By default, events from the past hour are returned.
+
+        :type source_identifier: string
+        :param source_identifier:
+        The identifier of the event source for which events will be returned.
+            If this parameter is not specified, then all sources are included
+            in the response.
+
+        Constraints:
+
+        If SourceIdentifier is supplied, SourceType must also be provided.
+
+
+        + Specify a cluster identifier when SourceType is `cluster`.
+        + Specify a cluster security group name when SourceType is `cluster-
+              security-group`.
+        + Specify a cluster parameter group name when SourceType is `cluster-
+              parameter-group`.
+        + Specify a cluster snapshot identifier when SourceType is `cluster-
+              snapshot`.
+
+        :type source_type: string
+        :param source_type:
+        The event source to retrieve events for. If no value is specified, all
+            events are returned.
+
+        Constraints:
+
+        If SourceType is supplied, SourceIdentifier must also be provided.
+
+
+        + Specify `cluster` when SourceIdentifier is a cluster identifier.
+        + Specify `cluster-security-group` when SourceIdentifier is a cluster
+              security group name.
+        + Specify `cluster-parameter-group` when SourceIdentifier is a cluster
+              parameter group name.
+        + Specify `cluster-snapshot` when SourceIdentifier is a cluster
+              snapshot identifier.
+
+        :type start_time: timestamp
+        :param start_time: The beginning of the time interval to retrieve
+            events for, specified in ISO 8601 format. For more information
+            about ISO 8601, go to the `ISO8601 Wikipedia page.`_
+        Example: `2009-07-08T18:00Z`
+
+        :type end_time: timestamp
+        :param end_time: The end of the time interval for which to retrieve
+            events, specified in ISO 8601 format. For more information about
+            ISO 8601, go to the `ISO8601 Wikipedia page.`_
+        Example: `2009-07-08T18:00Z`
+
+        :type duration: integer
+        :param duration: The number of minutes prior to the time of the request
+            for which to retrieve events. For example, if the request is sent
+            at 18:00 and you specify a duration of 60, then only events which
+            have occurred after 17:00 will be returned.
+        Default: `60`
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified `MaxRecords`
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+        Default: `100`
+
+        Constraints: Value must be at least 20 and no more than 100.
+
+        :type marker: string
+        :param marker: An optional marker returned from a previous
+            **DescribeEvents** request. If this parameter is specified, the
+            response includes only records beyond the marker, up to the value
+            specified by `MaxRecords`.
+
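+        A usage sketch that retrieves the last six hours of events for one
+        cluster (assuming `conn` is a RedshiftConnection; the identifier is
+        illustrative)::
+
+            conn.describe_events(
+                source_identifier='myexamplecluster',
+                source_type='cluster',
+                duration=360)
+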
+        """
+        params = {}
+        if source_identifier is not None:
+            params['SourceIdentifier'] = source_identifier
+        if source_type is not None:
+            params['SourceType'] = source_type
+        if start_time is not None:
+            params['StartTime'] = start_time
+        if end_time is not None:
+            params['EndTime'] = end_time
+        if duration is not None:
+            params['Duration'] = duration
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeEvents',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_orderable_cluster_options(self, cluster_version=None,
+                                           node_type=None, max_records=None,
+                                           marker=None):
+        """
+        Returns a list of orderable cluster options. Before you create
+        a new cluster you can use this operation to find what options
+        are available, such as the EC2 Availability Zones (AZ) in the
+        specific AWS region that you can specify, and the node types
+        you can request. The node types differ by available storage,
+        memory, CPU and price. Given the costs involved, you might
+        want to obtain a list of cluster options in the specific
+        region and specify values when creating a cluster. For more
+        information about managing clusters, go to `Amazon Redshift
+        Clusters`_ in the Amazon Redshift Management Guide.
+
+        :type cluster_version: string
+        :param cluster_version: The version filter value. Specify this
+            parameter to show only the available offerings matching the
+            specified version.
+        Default: All versions.
+
+        Constraints: Must be one of the versions returned from
+            DescribeClusterVersions.
+
+        :type node_type: string
+        :param node_type: The node type filter value. Specify this parameter to
+            show only the available offerings matching the specified node type.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified `MaxRecords`
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+        Default: `100`
+
+        Constraints: minimum 20, maximum 100.
+
+        :type marker: string
+        :param marker: An optional marker returned from a previous
+            **DescribeOrderableClusterOptions** request. If this parameter is
+            specified, the response includes only records beyond the marker, up
+            to the value specified by `MaxRecords`.
+
+        """
+        params = {}
+        if cluster_version is not None:
+            params['ClusterVersion'] = cluster_version
+        if node_type is not None:
+            params['NodeType'] = node_type
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeOrderableClusterOptions',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_reserved_node_offerings(self,
+                                         reserved_node_offering_id=None,
+                                         max_records=None, marker=None):
+        """
+        Returns a list of the available reserved node offerings by
+        Amazon Redshift with their descriptions, including the node
+        type, the fixed and recurring costs of reserving the node and
+        the duration the node will be reserved for you. These
+        descriptions help you determine which reserved node offering
+        you want to purchase. You then use the unique offering ID in
+        your call to PurchaseReservedNodeOffering to reserve one or
+        more nodes for your Amazon Redshift cluster.
+
+        For more information about reserved nodes, go to `Purchasing
+        Reserved Nodes`_ in the Amazon Redshift Management Guide.
+
+        :type reserved_node_offering_id: string
+        :param reserved_node_offering_id: The unique identifier for the
+            offering.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified `MaxRecords`
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+        Default: `100`
+
+        Constraints: minimum 20, maximum 100.
+
+        :type marker: string
+        :param marker: An optional marker returned by a previous
+            DescribeReservedNodeOfferings request to indicate the first
+            offering that the request will return.
+        You can specify either a **Marker** parameter or a
+            **ClusterIdentifier** parameter in a DescribeClusters request, but
+            not both.
+
+        """
+        params = {}
+        if reserved_node_offering_id is not None:
+            params['ReservedNodeOfferingId'] = reserved_node_offering_id
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeReservedNodeOfferings',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_reserved_nodes(self, reserved_node_id=None,
+                                max_records=None, marker=None):
+        """
+        Returns the descriptions of the reserved nodes.
+
+        :type reserved_node_id: string
+        :param reserved_node_id: Identifier for the node reservation.
+
+        :type max_records: integer
+        :param max_records: The maximum number of records to include in the
+            response. If more records exist than the specified `MaxRecords`
+            value, a marker is included in the response so that the remaining
+            results may be retrieved.
+        Default: `100`
+
+        Constraints: minimum 20, maximum 100.
+
+        :type marker: string
+        :param marker: An optional marker returned by a previous
+            DescribeReservedNodes request to indicate the first reserved node
+            that the current request will return.
+
+        """
+        params = {}
+        if reserved_node_id is not None:
+            params['ReservedNodeId'] = reserved_node_id
+        if max_records is not None:
+            params['MaxRecords'] = max_records
+        if marker is not None:
+            params['Marker'] = marker
+        return self._make_request(
+            action='DescribeReservedNodes',
+            verb='POST',
+            path='/', params=params)
+
+    def describe_resize(self, cluster_identifier):
+        """
+        Returns information about the last resize operation for the
+        specified cluster. If no resize operation has ever been
+        initiated for the specified cluster, an `HTTP 404` error is
+        returned. If a resize operation was initiated and completed,
+        the status of the resize remains as `SUCCEEDED` until the next
+        resize.
+
+        A resize operation can be requested using ModifyCluster and
+        specifying a different number or type of nodes for the
+        cluster.
+
+        :type cluster_identifier: string
+        :param cluster_identifier: The unique identifier of a cluster whose
+            resize progress you are requesting. This parameter isn't case-
+            sensitive.
+        By default, resize operations for all clusters defined for an AWS
+            account are returned.
+
+        """
+        params = {'ClusterIdentifier': cluster_identifier, }
+        return self._make_request(
+            action='DescribeResize',
+            verb='POST',
+            path='/', params=params)
+
+    def modify_cluster(self, cluster_identifier, cluster_type=None,
+                       node_type=None, number_of_nodes=None,
+                       cluster_security_groups=None,
+                       vpc_security_group_ids=None,
+                       master_user_password=None,
+                       cluster_parameter_group_name=None,
+                       automated_snapshot_retention_period=None,
+                       preferred_maintenance_window=None,
+                       cluster_version=None, allow_version_upgrade=None):
+        """
+        Modifies the settings for a cluster. For example, you can add
+        another security or parameter group, update the preferred
+        maintenance window, or change the master user password.
+        Resetting a cluster password or modifying the security groups
+        associated with a cluster does not need a reboot. However,
+        modifying a parameter group requires a reboot for the
+        parameters to take effect. For more information about managing
+        clusters, go to `Amazon Redshift Clusters`_ in the Amazon
+        Redshift Management Guide.
+
+        You can also change node type and the number of nodes to scale
+        up or down the cluster. When resizing a cluster, you must
+        specify both the number of nodes and the node type even if one
+        of the parameters does not change. If you specify the same
+        number of nodes and node type that are already configured for
+        the cluster, an error is returned.
+
+        :type cluster_identifier: string
+        :param cluster_identifier: The unique identifier of the cluster to be
+            modified.
+        Example: `examplecluster`
+
+        :type cluster_type: string
+        :param cluster_type: The new cluster type.
+        When you submit your cluster resize request, your existing cluster goes
+            into a read-only mode. After Amazon Redshift provisions a new
+            cluster based on your resize requirements, there will be an outage
+            for a period while the old cluster is deleted and your connection
+            is switched to the new cluster. You can use DescribeResize to
+            track the progress of the resize request.
+
+        Valid Values: `multi-node` | `single-node`
+
+        :type node_type: string
+        :param node_type: The new node type of the cluster. If you specify a
+            new node type, you must also specify the number of nodes
+            parameter.
+        When you submit your request to resize a cluster, Amazon Redshift sets
+            access permissions for the cluster to read-only. After Amazon
+            Redshift provisions a new cluster according to your resize
+            requirements, there will be a temporary outage while the old
+            cluster is deleted and your connection is switched to the new
+            cluster. When the new connection is complete, the original access
+            permissions for the cluster are restored. You can use
+            DescribeResize to track the progress of the resize request.
+
+        Valid Values: `dw.hs1.xlarge` | `dw.hs1.8xlarge`
+
+        :type number_of_nodes: integer
+        :param number_of_nodes: The new number of nodes of the cluster. If you
+            specify a new number of nodes, you must also specify the node type
+            parameter.
+        When you submit your request to resize a cluster, Amazon Redshift sets
+            access permissions for the cluster to read-only. After Amazon
+            Redshift provisions a new cluster according to your resize
+            requirements, there will be a temporary outage while the old
+            cluster is deleted and your connection is switched to the new
+            cluster. When the new connection is complete, the original access
+            permissions for the cluster are restored. You can use
+            DescribeResize to track the progress of the resize request.
+
+        Valid Values: Integer greater than `0`.
+
+        :type cluster_security_groups: list
+        :param cluster_security_groups:
+        A list of cluster security groups to be authorized on this cluster.
+            This change is asynchronously applied as soon as possible.
+
+        Security groups currently associated with the cluster, and not in the
+            list of groups to apply, will be revoked from the cluster.
+
+        Constraints:
+
+
+        + Must be 1 to 255 alphanumeric characters or hyphens
+        + First character must be a letter
+        + Cannot end with a hyphen or contain two consecutive hyphens
+
+        :type vpc_security_group_ids: list
+        :param vpc_security_group_ids: A list of Virtual Private Cloud (VPC)
+            security groups to be associated with the cluster.
+
+        :type master_user_password: string
+        :param master_user_password:
+        The new password for the cluster master user. This change is
+            asynchronously applied as soon as possible. Between the time of the
+            request and the completion of the request, the `MasterUserPassword`
+            element exists in the `PendingModifiedValues` element of the
+            operation response.
+        Operations never return the password, so this operation provides a way
+            to regain access to the master user account for a cluster if the
+            password is lost.
+
+
+        Default: Uses existing setting.
+
+        Constraints:
+
+
+        + Must be between 8 and 64 characters in length.
+        + Must contain at least one uppercase letter.
+        + Must contain at least one lowercase letter.
+        + Must contain one number.
+
+        :type cluster_parameter_group_name: string
+        :param cluster_parameter_group_name: The name of the cluster parameter
+            group to apply to this cluster. This change is applied only after
+            the cluster is rebooted. To reboot a cluster use RebootCluster.
+        Default: Uses existing setting.
+
+        Constraints: The cluster parameter group must be in the same parameter
+            group family that matches the cluster version.
+
+        :type automated_snapshot_retention_period: integer
+        :param automated_snapshot_retention_period: The number of days that
+            automated snapshots are retained. If the value is 0, automated
+            snapshots are disabled. Even if automated snapshots are disabled,
+            you can still create manual snapshots when you want with
+            CreateClusterSnapshot.
+        If you decrease the automated snapshot retention period from its
+            current value, existing automated snapshots which fall outside of
+            the new retention period will be immediately deleted.
+
+        Default: Uses existing setting.
+
+        Constraints: Must be a value from 0 to 35.
+
+        :type preferred_maintenance_window: string
+        :param preferred_maintenance_window: The weekly time range (in UTC)
+            during which system maintenance can occur, if necessary. If system
+            maintenance is necessary during the window, it may result in an
+            outage.
+        This maintenance window change is made immediately. If the new
+            maintenance window indicates the current time, there must be at
+            least 120 minutes between the current time and the end of the
+            window in order to ensure that pending changes are applied.
+
+        Default: Uses existing setting.
+
+        Format: ddd:hh24:mi-ddd:hh24:mi, for example `wed:07:30-wed:08:00`.
+
+        Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun
+
+        Constraints: Must be at least 30 minutes.
+
+        :type cluster_version: string
+        :param cluster_version: The new version number of the Amazon Redshift
+            engine to upgrade to.
+        For major version upgrades, if a non-default cluster parameter group is
+            currently in use, a new cluster parameter group in the cluster
+            parameter group family for the new version must be specified. The
+            new cluster parameter group can be the default for that cluster
+            parameter group family. For more information about managing
+            parameter groups, go to `Amazon Redshift Parameter Groups`_ in the
+            Amazon Redshift Management Guide.
+
+        Example: `1.0`
+
+        :type allow_version_upgrade: boolean
+        :param allow_version_upgrade: If `True`, upgrades will be applied
+            automatically to the cluster during the maintenance window.
+        Default: `False`
+
+        """
+        params = {'ClusterIdentifier': cluster_identifier, }
+        if cluster_type is not None:
+            params['ClusterType'] = cluster_type
+        if node_type is not None:
+            params['NodeType'] = node_type
+        if number_of_nodes is not None:
+            params['NumberOfNodes'] = number_of_nodes
+        if cluster_security_groups is not None:
+            self.build_list_params(params,
+                                   cluster_security_groups,
+                                   'ClusterSecurityGroups.member')
+        if vpc_security_group_ids is not None:
+            self.build_list_params(params,
+                                   vpc_security_group_ids,
+                                   'VpcSecurityGroupIds.member')
+        if master_user_password is not None:
+            params['MasterUserPassword'] = master_user_password
+        if cluster_parameter_group_name is not None:
+            params['ClusterParameterGroupName'] = cluster_parameter_group_name
+        if automated_snapshot_retention_period is not None:
+            params['AutomatedSnapshotRetentionPeriod'] = automated_snapshot_retention_period
+        if preferred_maintenance_window is not None:
+            params['PreferredMaintenanceWindow'] = preferred_maintenance_window
+        if cluster_version is not None:
+            params['ClusterVersion'] = cluster_version
+        if allow_version_upgrade is not None:
+            params['AllowVersionUpgrade'] = str(
+                allow_version_upgrade).lower()
+        return self._make_request(
+            action='ModifyCluster',
+            verb='POST',
+            path='/', params=params)
+
+    def modify_cluster_parameter_group(self, parameter_group_name,
+                                       parameters):
+        """
+        Modifies the parameters of a parameter group.
+
+        For more information about managing parameter groups, go to
+        `Amazon Redshift Parameter Groups`_ in the Amazon Redshift
+        Management Guide.
+
+        :type parameter_group_name: string
+        :param parameter_group_name: The name of the parameter group to be
+            modified.
+
+        :type parameters: list
+        :param parameters: An array of parameters to be modified. A maximum of
+            20 parameters can be modified in a single request.
+        For each parameter to be modified, you must supply at least the
+            parameter name and parameter value; other name-value pairs of the
+            parameter are optional.
+
+        """
+        params = {'ParameterGroupName': parameter_group_name, }
+        self.build_complex_list_params(
+            params, parameters,
+            'Parameters.member',
+            ('ParameterName', 'ParameterValue', 'Description', 'Source', 'DataType', 'AllowedValues', 'IsModifiable', 'MinimumEngineVersion'))
+        return self._make_request(
+            action='ModifyClusterParameterGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def modify_cluster_subnet_group(self, cluster_subnet_group_name,
+                                    subnet_ids, description=None):
+        """
+        Modifies a cluster subnet group to include the specified list
+        of VPC subnets. The operation replaces the existing list of
+        subnets with the new list of subnets.
+
+        :type cluster_subnet_group_name: string
+        :param cluster_subnet_group_name: The name of the subnet group to be
+            modified.
+
+        :type description: string
+        :param description: A text description of the subnet group to be
+            modified.
+
+        :type subnet_ids: list
+        :param subnet_ids: An array of VPC subnet IDs. A maximum of 20 subnets
+            can be modified in a single request.
+
+        """
+        params = {
+            'ClusterSubnetGroupName': cluster_subnet_group_name,
+        }
+        self.build_list_params(params,
+                               subnet_ids,
+                               'SubnetIds.member')
+        if description is not None:
+            params['Description'] = description
+        return self._make_request(
+            action='ModifyClusterSubnetGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def purchase_reserved_node_offering(self, reserved_node_offering_id,
+                                        node_count=None):
+        """
+        Allows you to purchase reserved nodes. Amazon Redshift offers
+        a predefined set of reserved node offerings, and you can
+        purchase one of them. Call the DescribeReservedNodeOfferings
+        API to obtain the available offerings, then call this API with
+        a specific reserved node offering and the number of nodes you
+        want to reserve.
+
+        For more information about reserved nodes, go to `Purchasing
+        Reserved Nodes`_ in the Amazon Redshift Management Guide.
+
+        :type reserved_node_offering_id: string
+        :param reserved_node_offering_id: The unique identifier of the reserved
+            node offering you want to purchase.
+
+        :type node_count: integer
+        :param node_count: The number of reserved nodes you want to purchase.
+        Default: `1`
+
+        """
+        params = {
+            'ReservedNodeOfferingId': reserved_node_offering_id,
+        }
+        if node_count is not None:
+            params['NodeCount'] = node_count
+        return self._make_request(
+            action='PurchaseReservedNodeOffering',
+            verb='POST',
+            path='/', params=params)
+
+    def reboot_cluster(self, cluster_identifier):
+        """
+        Reboots a cluster. This action is taken as soon as possible.
+        It results in a momentary outage to the cluster, during which
+        the cluster status is set to `rebooting`. A cluster event is
+        created when the reboot is completed. Any pending cluster
+        modifications (see ModifyCluster) are applied at this reboot.
+        For more information about managing clusters, go to `Amazon
+        Redshift Clusters`_ in the Amazon Redshift Management Guide.
+
+        :type cluster_identifier: string
+        :param cluster_identifier: The cluster identifier.
+
+        """
+        params = {'ClusterIdentifier': cluster_identifier, }
+        return self._make_request(
+            action='RebootCluster',
+            verb='POST',
+            path='/', params=params)
+
+    def reset_cluster_parameter_group(self, parameter_group_name,
+                                      reset_all_parameters=None,
+                                      parameters=None):
+        """
+        Sets one or more parameters of the specified parameter group
+        to their default values and sets the source values of the
+        parameters to "engine-default". To reset the entire parameter
+        group, specify the ResetAllParameters parameter. For parameter
+        changes to take effect, you must reboot any associated
+        clusters.
+
+        :type parameter_group_name: string
+        :param parameter_group_name: The name of the cluster parameter group to
+            be reset.
+
+        :type reset_all_parameters: boolean
+        :param reset_all_parameters: If `True`, all parameters in the specified
+            parameter group will be reset to their default values.
+        Default: `True`
+
+        :type parameters: list
+        :param parameters: An array of names of parameters to be reset. If
+            the ResetAllParameters option is not used, then at least one
+            parameter name must be supplied.
+        Constraints: A maximum of 20 parameters can be reset in a single
+            request.
+
+        """
+        params = {'ParameterGroupName': parameter_group_name, }
+        if reset_all_parameters is not None:
+            params['ResetAllParameters'] = str(
+                reset_all_parameters).lower()
+        if parameters is not None:
+            self.build_complex_list_params(
+                params, parameters,
+                'Parameters.member',
+                ('ParameterName', 'ParameterValue', 'Description', 'Source', 'DataType', 'AllowedValues', 'IsModifiable', 'MinimumEngineVersion'))
+        return self._make_request(
+            action='ResetClusterParameterGroup',
+            verb='POST',
+            path='/', params=params)
+
+    def restore_from_cluster_snapshot(self, cluster_identifier,
+                                      snapshot_identifier, port=None,
+                                      availability_zone=None,
+                                      allow_version_upgrade=None,
+                                      cluster_subnet_group_name=None,
+                                      publicly_accessible=None):
+        """
+        Creates a new cluster from a snapshot. Amazon Redshift creates
+        the resulting cluster with the same configuration as the
+        original cluster from which the snapshot was created, except
+        that the new cluster is created with the default cluster
+        security and parameter group. After Amazon Redshift creates
+        the cluster you can use the ModifyCluster API to associate a
+        different security group and different parameter group with
+        the restored cluster.
+
+        If a snapshot is taken of a cluster in VPC, you can restore it
+        only in VPC. In this case, you must provide a cluster subnet
+        group where you want the cluster restored. If a snapshot is
+        taken of a cluster outside VPC, then you can restore it only
+        outside VPC.
+
+        For more information about working with snapshots, go to
+        `Amazon Redshift Snapshots`_ in the Amazon Redshift Management
+        Guide.
+
+        :type cluster_identifier: string
+        :param cluster_identifier: The identifier of the cluster that will be
+            created from restoring the snapshot.
+
+        Constraints:
+
+
+        + Must contain from 1 to 63 alphanumeric characters or hyphens.
+        + Alphabetic characters must be lowercase.
+        + First character must be a letter.
+        + Cannot end with a hyphen or contain two consecutive hyphens.
+        + Must be unique for all clusters within an AWS account.
+
+        :type snapshot_identifier: string
+        :param snapshot_identifier: The name of the snapshot from which to
+            create the new cluster. This parameter isn't case sensitive.
+        Example: `my-snapshot-id`
+
+        :type port: integer
+        :param port: The port number on which the cluster accepts connections.
+        Default: The same port as the original cluster.
+
+        Constraints: Must be between `1115` and `65535`.
+
+        :type availability_zone: string
+        :param availability_zone: The Amazon EC2 Availability Zone in which to
+            restore the cluster.
+        Default: A random, system-chosen Availability Zone.
+
+        Example: `us-east-1a`
+
+        :type allow_version_upgrade: boolean
+        :param allow_version_upgrade: If `True`, upgrades can be applied during
+            the maintenance window to the Amazon Redshift engine that is
+            running on the cluster.
+        Default: `True`
+
+        :type cluster_subnet_group_name: string
+        :param cluster_subnet_group_name: The name of the subnet group where
+            you want the cluster restored.
+        A snapshot of a cluster in VPC can be restored only in VPC. Therefore,
+            you must provide the subnet group name where you want the cluster
+            restored.
+
+        :type publicly_accessible: boolean
+        :param publicly_accessible: If `True`, the cluster can be accessed from
+            a public network.
+
+        """
+        params = {
+            'ClusterIdentifier': cluster_identifier,
+            'SnapshotIdentifier': snapshot_identifier,
+        }
+        if port is not None:
+            params['Port'] = port
+        if availability_zone is not None:
+            params['AvailabilityZone'] = availability_zone
+        if allow_version_upgrade is not None:
+            params['AllowVersionUpgrade'] = str(
+                allow_version_upgrade).lower()
+        if cluster_subnet_group_name is not None:
+            params['ClusterSubnetGroupName'] = cluster_subnet_group_name
+        if publicly_accessible is not None:
+            params['PubliclyAccessible'] = str(
+                publicly_accessible).lower()
+        return self._make_request(
+            action='RestoreFromClusterSnapshot',
+            verb='POST',
+            path='/', params=params)
+
+    def revoke_cluster_security_group_ingress(self,
+                                              cluster_security_group_name,
+                                              cidrip=None,
+                                              ec2_security_group_name=None,
+                                              ec2_security_group_owner_id=None):
+        """
+        Revokes an ingress rule in an Amazon Redshift security group
+        for a previously authorized IP range or Amazon EC2 security
+        group. To add an ingress rule, see
+        AuthorizeClusterSecurityGroupIngress. For information about
+        managing security groups, go to `Amazon Redshift Cluster
+        Security Groups`_ in the Amazon Redshift Management Guide.
+
+        :type cluster_security_group_name: string
+        :param cluster_security_group_name: The name of the security group from
+            which to revoke the ingress rule.
+
+        :type cidrip: string
+        :param cidrip: The IP range for which to revoke access. This range must
+            be a valid Classless Inter-Domain Routing (CIDR) block of IP
+            addresses. If `CIDRIP` is specified, `EC2SecurityGroupName` and
+            `EC2SecurityGroupOwnerId` cannot be provided.
+
+        :type ec2_security_group_name: string
+        :param ec2_security_group_name: The name of the EC2 Security Group
+            whose access is to be revoked. If `EC2SecurityGroupName` is
+            specified, `EC2SecurityGroupOwnerId` must also be provided and
+            `CIDRIP` cannot be provided.
+
+        :type ec2_security_group_owner_id: string
+        :param ec2_security_group_owner_id: The AWS account number of the owner
+            of the security group specified in the `EC2SecurityGroupName`
+            parameter. The AWS access key ID is not an acceptable value. If
+            `EC2SecurityGroupOwnerId` is specified, `EC2SecurityGroupName` must
+            also be provided, and `CIDRIP` cannot be provided.
+        Example: `111122223333`
+
+        """
+        params = {
+            'ClusterSecurityGroupName': cluster_security_group_name,
+        }
+        if cidrip is not None:
+            params['CIDRIP'] = cidrip
+        if ec2_security_group_name is not None:
+            params['EC2SecurityGroupName'] = ec2_security_group_name
+        if ec2_security_group_owner_id is not None:
+            params['EC2SecurityGroupOwnerId'] = ec2_security_group_owner_id
+        return self._make_request(
+            action='RevokeClusterSecurityGroupIngress',
+            verb='POST',
+            path='/', params=params)
+
+    def _make_request(self, action, verb, path, params):
+        params['ContentType'] = 'JSON'
+        response = self.make_request(action=action, verb='POST',
+                                     path='/', params=params)
+        body = response.read()
+        boto.log.debug(body)
+        if response.status == 200:
+            return json.loads(body)
+        else:
+            json_body = json.loads(body)
+            fault_name = json_body.get('Error', {}).get('Code', None)
+            exception_class = self._faults.get(fault_name, self.ResponseError)
+            raise exception_class(response.status, response.reason,
+                                  body=json_body)
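
As a quick illustration of how the Layer-1 calls above are typically driven,
here is a minimal, hypothetical sketch. It assumes the surrounding class is
the Redshift Layer-1 connection reachable through
``boto.redshift.connect_to_region`` and that AWS credentials are already
configured; the region and cluster names are placeholders::

    import boto.redshift

    conn = boto.redshift.connect_to_region('us-east-1')

    # Events from the last hour, 20 records per page. The call returns the
    # parsed JSON body; when more records exist it carries a marker that can
    # be passed back in as ``marker`` to fetch the next page.
    page = conn.describe_events(duration=60, max_records=20)

    # Resizing requires both the node type and the number of nodes, even if
    # only one of them changes (see modify_cluster above).
    conn.modify_cluster('examplecluster',
                        cluster_type='multi-node',
                        node_type='dw.hs1.xlarge',
                        number_of_nodes=4)
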
diff --git a/boto/resultset.py b/boto/resultset.py
index 080290e..f89ddbc 100644
--- a/boto/resultset.py
+++ b/boto/resultset.py
@@ -54,6 +54,7 @@
         self.next_key_marker = None
         self.next_upload_id_marker = None
         self.next_version_id_marker = None
+        self.next_generation_marker = None
         self.version_id_marker = None
         self.is_truncated = False
         self.next_token = None
@@ -94,6 +95,8 @@
             self.version_id_marker = value
         elif name == 'NextVersionIdMarker':
             self.next_version_id_marker = value
+        elif name == 'NextGenerationMarker':
+            self.next_generation_marker = value
         elif name == 'UploadIdMarker':
             self.upload_id_marker = value
         elif name == 'NextUploadIdMarker':
@@ -164,4 +167,3 @@
             self.request_id = value
         else:
             setattr(self, name, value)
-
diff --git a/boto/route53/connection.py b/boto/route53/connection.py
index 9e6b38d..221b29b 100644
--- a/boto/route53/connection.py
+++ b/boto/route53/connection.py
@@ -1,5 +1,8 @@
 # Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
+# Copyright (c) 2011 Blue Pines Technologies LLC, Brad Carleton
+# www.bluepines.org
+# Copyright (c) 2012 42 Lines Inc., Jim Browne
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -15,23 +18,22 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
+
 import xml.sax
-import time
 import uuid
 import urllib
-
 import boto
 from boto.connection import AWSAuthConnection
 from boto import handler
-from boto.resultset import ResultSet
+from boto.route53.record import ResourceRecordSets
+from boto.route53.zone import Zone
 import boto.jsonresponse
 import exception
-import hostedzone
 
 HZXML = """<?xml version="1.0" encoding="UTF-8"?>
 <CreateHostedZoneRequest xmlns="%(xmlns)s">
@@ -41,7 +43,7 @@
     <Comment>%(comment)s</Comment>
   </HostedZoneConfig>
 </CreateHostedZoneRequest>"""
-
+        
 #boto.set_stream_logger('dns')
 
 
@@ -116,7 +118,7 @@
     def get_hosted_zone(self, hosted_zone_id):
         """
         Get detailed information about a particular Hosted Zone.
-
+        
         :type hosted_zone_id: str
         :param hosted_zone_id: The unique identifier for the Hosted Zone
 
@@ -141,7 +143,7 @@
 
         :type hosted_zone_name: str
         :param hosted_zone_name: The fully qualified domain name for the Hosted
-        Zone
+            Zone
 
         """
         if hosted_zone_name[-1] != '.':
@@ -156,7 +158,7 @@
         """
         Create a new Hosted Zone.  Returns a Python data structure with
         information about the newly created Hosted Zone.
-
+        
         :type domain_name: str
         :param domain_name: The name of the domain. This should be a
             fully-specified domain, and should end with a final period
@@ -176,7 +178,7 @@
             use that.
 
         :type comment: str
-        :param comment: Any comments you want to include about the hosted
+        :param comment: Any comments you want to include about the hosted      
             zone.
 
         """
@@ -186,10 +188,10 @@
                   'caller_ref': caller_ref,
                   'comment': comment,
                   'xmlns': self.XMLNameSpace}
-        xml = HZXML % params
+        xml_body = HZXML % params
         uri = '/%s/hostedzone' % self.Version
         response = self.make_request('POST', uri,
-                                     {'Content-Type': 'text/xml'}, xml)
+                                     {'Content-Type': 'text/xml'}, xml_body)
         body = response.read()
         boto.log.debug(body)
         if response.status == 201:
@@ -202,7 +204,7 @@
             raise exception.DNSServerError(response.status,
                                            response.reason,
                                            body)
-
+        
     def delete_hosted_zone(self, hosted_zone_id):
         uri = '/%s/hostedzone/%s' % (self.Version, hosted_zone_id)
         response = self.make_request('DELETE', uri)
@@ -224,7 +226,7 @@
         """
         Retrieve the Resource Record Sets defined for this Hosted Zone.
         Returns the raw XML data returned by the Route53 call.
-
+        
         :type hosted_zone_id: str
         :param hosted_zone_id: The unique identifier for the Hosted Zone
 
@@ -271,7 +273,6 @@
         :param maxitems: The maximum number of records
 
         """
-        from boto.route53.record import ResourceRecordSets
         params = {'type': type, 'name': name,
                   'Identifier': identifier, 'maxitems': maxitems}
         uri = '/%s/hostedzone/%s/rrset' % (self.Version, hosted_zone_id)
@@ -341,3 +342,62 @@
         h = boto.jsonresponse.XmlHandler(e, None)
         h.parse(body)
         return e
+
+    def create_zone(self, name):
+        """
+        Create a new Hosted Zone.  Returns a Zone object for the newly
+        created Hosted Zone.
+
+        :type name: str
+        :param name: The name of the domain. This should be a
+            fully-specified domain, and should end with a final period
+            as the last label indication.  If you omit the final period,
+            Amazon Route 53 assumes the domain is relative to the root.
+            This is the name you have registered with your DNS registrar.
+            It is also the name you will delegate from your registrar to
+            the Amazon Route 53 delegation servers returned in
+            response to this request.
+        """
+        zone = self.create_hosted_zone(name)
+        return Zone(self, zone['CreateHostedZoneResponse']['HostedZone'])
+
+    def get_zone(self, name):
+        """
+        Returns a Zone object for the specified Hosted Zone.
+
+        :param name: The name of the domain. This should be a
+            fully-specified domain, and should end with a final period
+            as the last label indication.
+        """
+        name = self._make_qualified(name)
+        for zone in self.get_zones():
+            if name == zone.name:
+                return zone
+
+    def get_zones(self):
+        """
+        Returns a list of Zone objects, one for each of the Hosted
+        Zones defined for the AWS account.
+        """
+        zones = self.get_all_hosted_zones()
+        return [Zone(self, zone) for zone in
+                zones['ListHostedZonesResponse']['HostedZones']]
+
+    def _make_qualified(self, value):
+        """
+        Ensure passed domain names end in a period (.) character.
+        This will usually make a domain fully qualified.
+        """
+        if type(value) in [list, tuple, set]:
+            new_list = []
+            for record in value:
+                if record and not record[-1] == '.':
+                    new_list.append("%s." % record)
+                else:
+                    new_list.append(record)
+            return new_list
+        else:
+            value = value.strip()
+            if value and not value[-1] == '.':
+                value = "%s." % value
+            return value
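
The new ``create_zone``, ``get_zone`` and ``get_zones`` helpers return Zone
objects rather than raw response structures. A minimal usage sketch, assuming
credentials are configured and using a placeholder domain name::

    import boto

    conn = boto.connect_route53()

    # Domain names should be fully qualified; get_zone qualifies them for you
    # (a trailing dot is appended if missing).
    zone = conn.create_zone('example.com.')

    # Later, look the zone up by name again, or list every hosted zone.
    zone = conn.get_zone('example.com')
    for z in conn.get_zones():
        print('%s %s' % (z.name, z.id))
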
diff --git a/boto/route53/record.py b/boto/route53/record.py
index f954645..643af2a 100644
--- a/boto/route53/record.py
+++ b/boto/route53/record.py
@@ -57,7 +57,12 @@
         ResultSet.__init__(self, [('ResourceRecordSet', Record)])
 
     def __repr__(self):
-        return '<ResourceRecordSets: %s>' % self.hosted_zone_id
+        if self.changes:
+            record_list = ','.join([c.__repr__() for c in self.changes])
+        else:
+            record_list = ','.join([record.__repr__() for record in self])
+        return '<ResourceRecordSets:%s [%s]>' % (self.hosted_zone_id,
+                                                 record_list)
 
     def add_change(self, action, name, type, ttl=600,
             alias_hosted_zone_id=None, alias_dns_name=None, identifier=None,
@@ -121,6 +126,11 @@
         self.changes.append([action, change])
         return change
 
+    def add_change_record(self, action, change):
+        """Add an existing record to a change set with the specified action"""
+        self.changes.append([action, change])
+        return
+
     def to_xml(self):
         """Convert this ResourceRecordSet into XML
         to be saved via the ChangeResourceRecordSetsRequest"""
@@ -214,6 +224,9 @@
         self.weight = weight
         self.region = region
 
+    def __repr__(self):
+        return '<Record:%s:%s:%s>' % (self.name, self.type, self.to_print())
+
     def add_value(self, value):
         """Add a resource record value"""
         self.resource_records.append(value)
@@ -264,6 +277,8 @@
 
         if self.identifier != None and self.weight != None:
             rr += ' (WRR id=%s, w=%s)' % (self.identifier, self.weight)
+        elif self.identifier != None and self.region != None:
+            rr += ' (LBR id=%s, region=%s)' % (self.identifier, self.region)
 
         return rr
 
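
``add_change_record`` appends an existing Record object (for example one
returned by ``get_all_rrsets``) to a change batch unchanged, which is how the
new Zone helpers implement delete and update. A rough sketch, where ``conn``,
``hosted_zone_id`` and ``old_record`` are assumed to exist already::

    from boto.route53.record import ResourceRecordSets

    changes = ResourceRecordSets(conn, hosted_zone_id,
                                 comment='rotate A record')

    # Delete the record exactly as it currently exists...
    changes.add_change_record('DELETE', old_record)

    # ...and create its replacement in the same batch.
    change = changes.add_change('CREATE', 'www.example.com.', 'A', ttl=60)
    change.add_value('192.0.2.10')

    changes.commit()
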
diff --git a/boto/route53/status.py b/boto/route53/status.py
new file mode 100644
index 0000000..782372a
--- /dev/null
+++ b/boto/route53/status.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2011 Blue Pines Technologies LLC, Brad Carleton
+# www.bluepines.org
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+
+class Status(object):
+    def __init__(self, route53connection, change_dict):
+        self.route53connection = route53connection
+        for key in change_dict:
+            if key == 'Id':
+                self.__setattr__(key.lower(),
+                                 change_dict[key].replace('/change/', ''))
+            else:
+                self.__setattr__(key.lower(), change_dict[key])
+
+    def update(self):
+        """ Update the status of this request."""
+        status = self.route53connection.get_change(self.id)['GetChangeResponse']['ChangeInfo']['Status']
+        self.status = status
+        return status
+
+    def __repr__(self):
+        return '<Status:%s>' % self.status
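
Status wraps the ChangeInfo block returned by Route53 change operations, and
``update()`` re-queries the change until it has propagated. A small sketch,
assuming ``status`` was returned by one of the Zone methods below::

    import time

    # Route53 reports 'PENDING' until the change reaches all name servers,
    # then 'INSYNC'.
    while status.update() != 'INSYNC':
        time.sleep(10)
    print(status)  # e.g. <Status:INSYNC>
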
diff --git a/boto/route53/zone.py b/boto/route53/zone.py
new file mode 100644
index 0000000..75cefd4
--- /dev/null
+++ b/boto/route53/zone.py
@@ -0,0 +1,412 @@
+# Copyright (c) 2011 Blue Pines Technologies LLC, Brad Carleton
+# www.bluepines.org
+# Copyright (c) 2012 42 Lines Inc., Jim Browne
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+import copy
+from boto.exception import TooManyRecordsException
+from boto.route53.record import ResourceRecordSets
+from boto.route53.status import Status
+
+default_ttl = 60
+
+
+class Zone(object):
+    """
+    A Route53 Zone.
+
+    :ivar Route53Connection route53connection: The connection this zone was
+        retrieved or created with.
+    :ivar str id: The ID of the hosted zone.
+    """
+    def __init__(self, route53connection, zone_dict):
+        self.route53connection = route53connection
+        for key in zone_dict:
+            if key == 'Id':
+                self.id = zone_dict['Id'].replace('/hostedzone/', '')
+            else:
+                self.__setattr__(key.lower(), zone_dict[key])
+
+    def __repr__(self):
+        return '<Zone:%s>' % self.name
+
+    def _commit(self, changes):
+        """
+        Commit a set of changes and return the ChangeInfo portion of
+        the response.
+
+        :type changes: ResourceRecordSets
+        :param changes: changes to be committed
+        """
+        response = changes.commit()
+        return response['ChangeResourceRecordSetsResponse']['ChangeInfo']
+
+    def _new_record(self, changes, resource_type, name, value, ttl, identifier,
+                   comment=""):
+        """
+        Add a CREATE change record to an existing ResourceRecordSets
+
+        :type changes: ResourceRecordSets
+        :param changes: change set to append to
+
+        :type name: str
+        :param name: The name of the resource record you want to
+            perform the action on.
+
+        :type resource_type: str
+        :param resource_type: The DNS record type
+
+        :param value: Appropriate value for resource_type
+
+        :type ttl: int
+        :param ttl: The resource record cache time to live (TTL), in seconds.
+
+        :type identifier: tuple
+        :param identifier: A tuple for setting WRR or LBR attributes.  Valid
+           forms are:
+
+           * (str, int): WRR record [e.g. ('foo',10)]
+           * (str, str): LBR record [e.g. ('foo','us-east-1')]
+
+        :type comment: str
+        :param comment: A comment that will be stored with the change.
+        """
+        weight = None
+        region = None
+        if identifier is not None:
+            try:
+                int(identifier[1])
+                weight = identifier[1]
+                identifier = identifier[0]
+            except:
+                region = identifier[1]
+                identifier = identifier[0]
+        change = changes.add_change("CREATE", name, resource_type, ttl,
+                                    identifier=identifier, weight=weight,
+                                    region=region)
+        if type(value) in [list, tuple, set]:
+            for record in value:
+                change.add_value(record)
+        else:
+            change.add_value(value)
+
+    def add_record(self, resource_type, name, value, ttl=60, identifier=None,
+                   comment=""):
+        """
+        Add a new record to this Zone.  See _new_record for parameter
+        documentation.  Returns a Status object.
+        """
+        changes = ResourceRecordSets(self.route53connection, self.id, comment)
+        self._new_record(changes, resource_type, name, value, ttl, identifier,
+                         comment)
+        return Status(self.route53connection, self._commit(changes))
+
+    def update_record(self, old_record, new_value, new_ttl=None,
+                      new_identifier=None, comment=""):
+        """
+        Update an existing record in this Zone.  Returns a Status object.
+
+        :type old_record: ResourceRecord
+        :param old_record: A ResourceRecord (e.g. returned by find_records)
+
+        See _new_record for additional parameter documentation.
+        """
+        new_ttl = new_ttl or default_ttl
+        record = copy.copy(old_record)
+        changes = ResourceRecordSets(self.route53connection, self.id, comment)
+        changes.add_change_record("DELETE", record)
+        self._new_record(changes, record.type, record.name,
+                         new_value, new_ttl, new_identifier, comment)
+        return Status(self.route53connection, self._commit(changes))
+
+    def delete_record(self, record, comment=""):
+        """
+        Delete one or more records from this Zone.  Returns a Status object.
+
+        :param record: A ResourceRecord (e.g. returned by
+           find_records) or list, tuple, or set of ResourceRecords.
+
+        :type comment: str
+        :param comment: A comment that will be stored with the change.
+        """
+        changes = ResourceRecordSets(self.route53connection, self.id, comment)
+        if type(record) in [list, tuple, set]:
+            for r in record:
+                changes.add_change_record("DELETE", r)
+        else:
+            changes.add_change_record("DELETE", record)
+        return Status(self.route53connection, self._commit(changes))
+
+    def add_cname(self, name, value, ttl=None, identifier=None, comment=""):
+        """
+        Add a new CNAME record to this Zone.  See _new_record for
+        parameter documentation.  Returns a Status object.
+        """
+        ttl = ttl or default_ttl
+        name = self.route53connection._make_qualified(name)
+        value = self.route53connection._make_qualified(value)
+        return self.add_record(resource_type='CNAME',
+                               name=name,
+                               value=value,
+                               ttl=ttl,
+                               identifier=identifier,
+                               comment=comment)
+
+    def add_a(self, name, value, ttl=None, identifier=None, comment=""):
+        """
+        Add a new A record to this Zone.  See _new_record for
+        parameter documentation.  Returns a Status object.
+        """
+        ttl = ttl or default_ttl
+        name = self.route53connection._make_qualified(name)
+        return self.add_record(resource_type='A',
+                               name=name,
+                               value=value,
+                               ttl=ttl,
+                               identifier=identifier,
+                               comment=comment)
+
+    def add_mx(self, name, records, ttl=None, identifier=None, comment=""):
+        """
+        Add a new MX record to this Zone.  See _new_record for
+        parameter documentation.  Returns a Status object.
+        """
+        ttl = ttl or default_ttl
+        records = self.route53connection._make_qualified(records)
+        return self.add_record(resource_type='MX',
+                               name=name,
+                               value=records,
+                               ttl=ttl,
+                               identifier=identifier,
+                               comment=comment)
+
+    def find_records(self, name, type, desired=1, all=False, identifier=None):
+        """
+        Search this Zone for records that match given parameters.
+        Returns None if no results, a ResourceRecord if one result, or
+        a ResourceRecordSets if more than one result.
+
+        :type name: str
+        :param name: The name that the records should match
+
+        :type type: str
+        :param type: The type that the records should match
+
+        :type desired: int
+        :param desired: The number of desired results.  If the number of
+           matching records in the Zone exceeds the value of this parameter,
+           throw TooManyRecordsException
+
+        :type all: Boolean
+        :param all: If True, return all records that match the name, type, and
+          identifier parameters
+
+        :type identifier: Tuple
+        :param identifier: A tuple specifying WRR or LBR attributes.  Valid
+           forms are:
+
+           * (str, int): WRR record [e.g. ('foo',10)]
+           * (str, str): LBR record [e.g. ('foo','us-east-1')]
+
+        """
+        name = self.route53connection._make_qualified(name)
+        returned = self.route53connection.get_all_rrsets(self.id, name=name,
+                                                         type=type)
+
+        # name/type for get_all_rrsets sets the starting record; they
+        # are not a filter
+        results = [r for r in returned if r.name == name and r.type == type]
+
+        weight = None
+        region = None
+        if identifier is not None:
+            try:
+                int(identifier[1])
+                weight = identifier[1]
+            except:
+                region = identifier[1]
+
+        if weight is not None:
+            results = [r for r in results if (r.weight == weight and
+                                              r.identifier == identifier[0])]
+        if region is not None:
+            results = [r for r in results if (r.region == region and
+                                              r.identifier == identifier[0])]
+
+        if ((not all) and (len(results) > desired)):
+            message = "Search: name %s type %s" % (name, type)
+            message += "\nFound: "
+            message += ", ".join(["%s %s %s" % (r.name, r.type, r.to_print())
+                                  for r in results])
+            raise TooManyRecordsException(message)
+        elif len(results) > 1:
+            return results
+        elif len(results) == 1:
+            return results[0]
+        else:
+            return None
+
+    def get_cname(self, name, all=False):
+        """
+        Search this Zone for CNAME records that match name.
+
+        Returns a ResourceRecord.
+
+        If there is more than one match, return all as a
+        ResourceRecordSets if all is True; otherwise throw
+        TooManyRecordsException.
+        """
+        return self.find_records(name, 'CNAME', all=all)
+
+    def get_a(self, name, all=False):
+        """
+        Search this Zone for A records that match name.
+
+        Returns a ResourceRecord.
+
+        If there is more than one match, return all as a
+        ResourceRecordSets if all is True; otherwise throw
+        TooManyRecordsException.
+        """
+        return self.find_records(name, 'A', all=all)
+
+    def get_mx(self, name, all=False):
+        """
+        Search this Zone for MX records that match name.
+
+        Returns a ResourceRecord.
+
+        If there is more than one match, return all as a
+        ResourceRecordSets if all is True; otherwise throw
+        TooManyRecordsException.
+        """
+        return self.find_records(name, 'MX', all=all)
+
+    def update_cname(self, name, value, ttl=None, identifier=None, comment=""):
+        """
+        Update the given CNAME record in this Zone to a new value, ttl,
+        and identifier.  Returns a Status object.
+
+        Will throw TooManyRecordsException if name, value does not match
+        a single record.
+        """
+        name = self.route53connection._make_qualified(name)
+        value = self.route53connection._make_qualified(value)
+        old_record = self.get_cname(name)
+        ttl = ttl or old_record.ttl
+        return self.update_record(old_record,
+                                  new_value=value,
+                                  new_ttl=ttl,
+                                  new_identifier=identifier,
+                                  comment=comment)
+
+    def update_a(self, name, value, ttl=None, identifier=None, comment=""):
+        """
+        Update the given A record in this Zone to a new value, ttl,
+        and identifier.  Returns a Status object.
+
+        Will throw TooManyRecordsException if name, value does not match
+        a single record.
+        """
+        name = self.route53connection._make_qualified(name)
+        old_record = self.get_a(name)
+        ttl = ttl or old_record.ttl
+        return self.update_record(old_record,
+                                  new_value=value,
+                                  new_ttl=ttl,
+                                  new_identifier=identifier,
+                                  comment=comment)
+
+    def update_mx(self, name, value, ttl=None, identifier=None, comment=""):
+        """
+        Update the given MX record in this Zone to a new value, ttl,
+        and identifier.  Returns a Status object.
+
+        Will throw TooManyRecordsException if name, value does not match
+        a single record.
+        """
+        name = self.route53connection._make_qualified(name)
+        value = self.route53connection._make_qualified(value)
+        old_record = self.get_mx(name)
+        ttl = ttl or old_record.ttl
+        return self.update_record(old_record,
+                                  new_value=value,
+                                  new_ttl=ttl,
+                                  new_identifier=identifier,
+                                  comment=comment)
+
+    def delete_cname(self, name, identifier=None, all=False):
+        """
+        Delete a CNAME record matching name and identifier from
+        this Zone.  Returns a Status object.
+
+        If there is more than one match delete all matching records if
+        all is True, otherwise throws TooManyRecordsException.
+        """
+        name = self.route53connection._make_qualified(name)
+        record = self.find_records(name, 'CNAME', identifier=identifier,
+                                   all=all)
+        return self.delete_record(record)
+
+    def delete_a(self, name, identifier=None, all=False):
+        """
+        Delete an A record matching name and identifier from this
+        Zone.  Returns a Status object.
+
+        If there is more than one match delete all matching records if
+        all is True, otherwise throws TooManyRecordsException.
+        """
+        name = self.route53connection._make_qualified(name)
+        record = self.find_records(name, 'A', identifier=identifier,
+                                   all=all)
+        return self.delete_record(record)
+
+    def delete_mx(self, name, identifier=None, all=False):
+        """
+        Delete an MX record matching name and identifier from this
+        Zone.  Returns a Status object.
+
+        If there is more than one match delete all matching records if
+        all is True, otherwise throws TooManyRecordsException.
+        """
+        name = self.route53connection._make_qualified(name)
+        record = self.find_records(name, 'MX', identifier=identifier,
+                                   all=all)
+        return self.delete_record(record)
+
+    def get_records(self):
+        """
+        Return a ResourceRecordsSets for all of the records in this zone.
+        """
+        return self.route53connection.get_all_rrsets(self.id)
+
+    def delete(self):
+        """
+        Request that this zone be deleted by Amazon.
+        """
+        self.route53connection.delete_hosted_zone(self.id)
+
+    def get_nameservers(self):
+        """ Get the list of nameservers for this zone."""
+        ns = self.find_records(self.name, 'NS')
+        if ns is not None:
+            ns = ns.resource_records
+        return ns
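
The new Zone helpers above wrap find_records()/update_record() for the
common record types. A minimal usage sketch, assuming a hosted zone named
example.com. and valid Route 53 credentials; the record names and address
below are made up:

    # Sketch only: zone/record names and addresses are hypothetical.
    from boto.route53.connection import Route53Connection

    conn = Route53Connection()              # credentials from env/boto config
    zone = conn.get_zone('example.com.')    # a boto.route53.zone.Zone

    # Single-record lookup; raises TooManyRecordsException if several match
    # and all is False.
    rec = zone.get_a('www.example.com.')
    if rec is not None:
        print rec.name, rec.resource_records

    # Update in place; ttl defaults to the old record's ttl.
    zone.update_a('www.example.com.', '192.0.2.10', ttl=300)

    # Delete every matching CNAME instead of raising on multiple matches.
    zone.delete_cname('old-alias.example.com.', all=True)

    print zone.get_nameservers()
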
diff --git a/boto/s3/__init__.py b/boto/s3/__init__.py
index 5db0d62..30d610d 100644
--- a/boto/s3/__init__.py
+++ b/boto/s3/__init__.py
@@ -65,9 +65,15 @@
             S3RegionInfo(name='ap-southeast-1',
                          endpoint='s3-ap-southeast-1.amazonaws.com',
                          connection_cls=S3Connection),
+            S3RegionInfo(name='ap-southeast-2',
+                         endpoint='s3-ap-southeast-2.amazonaws.com',
+                         connection_cls=S3Connection),
             S3RegionInfo(name='eu-west-1',
                          endpoint='s3-eu-west-1.amazonaws.com',
                          connection_cls=S3Connection),
+            S3RegionInfo(name='sa-east-1',
+                         endpoint='s3-sa-east-1.amazonaws.com',
+                         connection_cls=S3Connection),
             ]
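
With ap-southeast-2 and sa-east-1 now listed in regions(), those endpoints
can be reached through connect_to_region(). A small sketch, assuming
credentials are configured; the bucket name is hypothetical:

    # Sketch only: bucket name is hypothetical; credentials are assumed.
    import boto.s3

    conn = boto.s3.connect_to_region('ap-southeast-2')
    bucket = conn.lookup('my-sydney-bucket')     # None if it does not exist
    if bucket is None:
        bucket = conn.create_bucket('my-sydney-bucket',
                                    location='ap-southeast-2')
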
 
 
diff --git a/boto/s3/bucket.py b/boto/s3/bucket.py
index 078f056..335e9fa 100644
--- a/boto/s3/bucket.py
+++ b/boto/s3/bucket.py
@@ -40,6 +40,7 @@
 from boto.s3.tagging import Tags
 from boto.s3.cors import CORSConfiguration
 from boto.s3.bucketlogging import BucketLogging
+from boto.s3 import website
 import boto.jsonresponse
 import boto.utils
 import xml.sax
@@ -85,14 +86,6 @@
          <MfaDelete>%s</MfaDelete>
        </VersioningConfiguration>"""
 
-    WebsiteBody = """<?xml version="1.0" encoding="UTF-8"?>
-      <WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
-        <IndexDocument><Suffix>%s</Suffix></IndexDocument>
-        %s
-      </WebsiteConfiguration>"""
-
-    WebsiteErrorFragment = """<ErrorDocument><Key>%s</Key></ErrorDocument>"""
-
     VersionRE = '<Status>([A-Za-z]+)</Status>'
     MFADeleteRE = '<MfaDelete>([A-Za-z]+)</MfaDelete>'
 
@@ -166,16 +159,18 @@
         :rtype: :class:`boto.s3.key.Key`
         :returns: A Key object from this bucket.
         """
-        query_args = []
+        query_args_l = []
         if version_id:
-            query_args.append('versionId=%s' % version_id)
+            query_args_l.append('versionId=%s' % version_id)
         if response_headers:
             for rk, rv in response_headers.iteritems():
-                query_args.append('%s=%s' % (rk, urllib.quote(rv)))
-        if query_args:
-            query_args = '&'.join(query_args)
-        else:
-            query_args = None
+                query_args_l.append('%s=%s' % (rk, urllib.quote(rv)))
+
+        key, resp = self._get_key_internal(key_name, headers, query_args_l)
+        return key
+
+    def _get_key_internal(self, key_name, headers, query_args_l):
+        query_args = '&'.join(query_args_l) or None
         response = self.connection.make_request('HEAD', self.name, key_name,
                                                 headers=headers,
                                                 query_args=query_args)
@@ -205,10 +200,12 @@
             k.name = key_name
             k.handle_version_headers(response)
             k.handle_encryption_headers(response)
-            return k
+            k.handle_restore_headers(response)
+            k.handle_addl_headers(response.getheaders())
+            return k, response
         else:
             if response.status == 404:
-                return None
+                return None, response
             else:
                 raise self.connection.provider.storage_response_error(
                     response.status, response.reason, '')
@@ -240,9 +237,7 @@
         :type delimiter: string
         :param delimiter: can be used in conjunction with the prefix
             to allow you to organize and browse your keys
-            hierarchically. See:
-            http://docs.amazonwebservices.com/AmazonS3/2006-03-01/ for
-            more details.
+            hierarchically. See http://goo.gl/Xx63h for more details.
 
         :type marker: string
         :param marker: The "marker" of where you are in the result set
@@ -272,8 +267,10 @@
         :param delimiter: can be used in conjunction with the prefix
             to allow you to organize and browse your keys
             hierarchically. See:
-            http://docs.amazonwebservices.com/AmazonS3/2006-03-01/ for
-            more details.
+
+            http://aws.amazon.com/releasenotes/Amazon-S3/213
+
+            for more details.
 
         :type marker: string
         :param marker: The "marker" of where you are in the result set
@@ -304,24 +301,35 @@
                                             upload_id_marker,
                                             headers)
 
+    def _get_all_query_args(self, params, initial_query_string=''):
+        pairs = []
+
+        if initial_query_string:
+            pairs.append(initial_query_string)
+
+        for key, value in params.items():
+            key = key.replace('_', '-')
+            if key == 'maxkeys':
+                key = 'max-keys'
+            if isinstance(value, unicode):
+                value = value.encode('utf-8')
+            if value is not None and value != '':
+                pairs.append('%s=%s' % (
+                    urllib.quote(key),
+                    urllib.quote(str(value)
+                )))
+
+        return '&'.join(pairs)
+
     def _get_all(self, element_map, initial_query_string='',
                  headers=None, **params):
-        l = []
-        for k, v in params.items():
-            k = k.replace('_', '-')
-            if  k == 'maxkeys':
-                k = 'max-keys'
-            if isinstance(v, unicode):
-                v = v.encode('utf-8')
-            if v is not None and v != '':
-                l.append('%s=%s' % (urllib.quote(k), urllib.quote(str(v))))
-        if len(l):
-            s = initial_query_string + '&' + '&'.join(l)
-        else:
-            s = initial_query_string
+        query_args = self._get_all_query_args(
+            params,
+            initial_query_string=initial_query_string
+        )
         response = self.connection.make_request('GET', self.name,
                                                 headers=headers,
-                                                query_args=s)
+                                                query_args=query_args)
         body = response.read()
         boto.log.debug(body)
         if response.status == 200:
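
_get_all_query_args() factors the query-string building out of _get_all():
underscores become hyphens, maxkeys becomes max-keys, unicode values are
UTF-8 encoded, and empty values are dropped. A standalone sketch of that
mapping (not a call into boto):

    # Standalone illustration of the mapping done by _get_all_query_args().
    import urllib

    def to_query_args(params, initial_query_string=''):
        pairs = [initial_query_string] if initial_query_string else []
        for key, value in params.items():
            key = key.replace('_', '-')
            if key == 'maxkeys':
                key = 'max-keys'
            if isinstance(value, unicode):
                value = value.encode('utf-8')
            if value is not None and value != '':
                pairs.append('%s=%s' % (urllib.quote(key),
                                        urllib.quote(str(value))))
        return '&'.join(pairs)

    print to_query_args({'maxkeys': 10, 'key_marker': u'caf\xe9'},
                        initial_query_string='versions')
    # e.g. versions&max-keys=10&key-marker=caf%C3%A9 (pair order may vary)
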
@@ -563,8 +571,8 @@
             pass
         return result
 
-    def delete_key(self, key_name, headers=None,
-                   version_id=None, mfa_token=None):
+    def delete_key(self, key_name, headers=None, version_id=None,
+                   mfa_token=None):
         """
         Deletes a key from the bucket.  If a version_id is provided,
         only that version of the key will be deleted.
@@ -588,11 +596,20 @@
             created or removed and what version_id the delete created
             or removed.
         """
+        if not key_name:
+            raise ValueError('Empty key names are not allowed')
+        return self._delete_key_internal(key_name, headers=headers,
+                                         version_id=version_id,
+                                         mfa_token=mfa_token,
+                                         query_args_l=None)
+
+    def _delete_key_internal(self, key_name, headers=None, version_id=None,
+                             mfa_token=None, query_args_l=None):
+        query_args_l = query_args_l or []
         provider = self.connection.provider
         if version_id:
-            query_args = 'versionId=%s' % version_id
-        else:
-            query_args = None
+            query_args_l.append('versionId=%s' % version_id)
+        query_args = '&'.join(query_args_l) or None
         if mfa_token:
             if not headers:
                 headers = {}
@@ -609,6 +626,7 @@
             k = self.key_class(self)
             k.name = key_name
             k.handle_version_headers(response)
+            k.handle_addl_headers(response.getheaders())
             return k
 
     def copy_key(self, new_key_name, src_bucket_name,
@@ -702,6 +720,7 @@
             if hasattr(key, 'Error'):
                 raise provider.storage_copy_error(key.Code, key.Message, body)
             key.handle_version_headers(response)
+            key.handle_addl_headers(response.getheaders())
             if preserve_acl:
                 self.set_xml_acl(acl, new_key_name)
             return key
@@ -1152,7 +1171,9 @@
         :param lifecycle_config: The lifecycle configuration you want
             to configure for this bucket.
         """
-        fp = StringIO.StringIO(lifecycle_config.to_xml())
+        xml = lifecycle_config.to_xml()
+        xml = xml.encode('utf-8')
+        fp = StringIO.StringIO(xml)
         md5 = boto.utils.compute_md5(fp)
         if headers is None:
             headers = {}
@@ -1205,7 +1226,10 @@
             raise self.connection.provider.storage_response_error(
                 response.status, response.reason, body)
 
-    def configure_website(self, suffix, error_key='', headers=None):
+    def configure_website(self, suffix=None, error_key=None,
+                          redirect_all_requests_to=None,
+                          routing_rules=None,
+                          headers=None):
         """
         Configure this bucket to act as a website
 
@@ -1221,13 +1245,35 @@
         :param error_key: The object key name to use when a 4XX class
             error occurs.  This is optional.
 
+        :type redirect_all_requests_to: :class:`boto.s3.website.RedirectLocation`
+        :param redirect_all_requests_to: Describes the redirect behavior for
+            every request to this bucket's website endpoint. If this value is
+            non None, no other values are considered when configuring the
+            website configuration for the bucket. This is an instance of
+            ``RedirectLocation``.
+
+        :type routing_rules: :class:`boto.s3.website.RoutingRules`
+        :param routing_rules: Object which specifies conditions
+            and redirects that apply when the conditions are met.
+
         """
-        if error_key:
-            error_frag = self.WebsiteErrorFragment % error_key
-        else:
-            error_frag = ''
-        body = self.WebsiteBody % (suffix, error_frag)
-        response = self.connection.make_request('PUT', self.name, data=body,
+        config = website.WebsiteConfiguration(
+                suffix, error_key, redirect_all_requests_to,
+                routing_rules)
+        return self.set_website_configuration(config, headers=headers)
+
+    def set_website_configuration(self, config, headers=None):
+        """
+        :type config: boto.s3.website.WebsiteConfiguration
+        :param config: Configuration data
+        """
+        return self.set_website_configuration_xml(config.to_xml(),
+          headers=headers)
+
+
+    def set_website_configuration_xml(self, xml, headers=None):
+        """Upload xml website configuration"""
+        response = self.connection.make_request('PUT', self.name, data=xml,
                                                 query_args='website',
                                                 headers=headers)
         body = response.read()
@@ -1255,7 +1301,17 @@
 
               * Key : name of object to serve when an error occurs
         """
-        return self.get_website_configuration_xml(self, headers)[0]
+        return self.get_website_configuration_with_xml(headers)[0]
+
+    def get_website_configuration_obj(self, headers=None):
+        """Get the website configuration as a
+        :class:`boto.s3.website.WebsiteConfiguration` object.
+        """
+        config_xml = self.get_website_configuration_xml(headers=headers)
+        config = website.WebsiteConfiguration()
+        h = handler.XmlHandler(config, self)
+        xml.sax.parseString(config_xml, h)
+        return config
 
     def get_website_configuration_with_xml(self, headers=None):
         """
@@ -1265,7 +1321,7 @@
         :rtype: 2-Tuple
         :returns: 2-tuple containing:
         1) A dictionary containing a Python representation
-                  of the XML response from GCS. The overall structure is:
+                  of the XML response. The overall structure is:
           * WebsiteConfiguration
             * IndexDocument
               * Suffix : suffix that is appended to request that
@@ -1274,6 +1330,15 @@
                 * Key : name of object to serve when an error occurs
         2) unparsed XML describing the bucket's website configuration.
         """
+
+        body = self.get_website_configuration_xml(headers=headers)
+        e = boto.jsonresponse.Element()
+        h = boto.jsonresponse.XmlHandler(e, None)
+        h.parse(body)
+        return e, body
+
+    def get_website_configuration_xml(self, headers=None):
+        """Get raw website configuration xml"""
         response = self.connection.make_request('GET', self.name,
                 query_args='website', headers=headers)
         body = response.read()
@@ -1282,11 +1347,7 @@
         if response.status != 200:
             raise self.connection.provider.storage_response_error(
                 response.status, response.reason, body)
-
-        e = boto.jsonresponse.Element()
-        h = boto.jsonresponse.XmlHandler(e, None)
-        h.parse(body)
-        return e, body
+        return body
 
     def delete_website_configuration(self, headers=None):
         """
diff --git a/boto/s3/bucketlistresultset.py b/boto/s3/bucketlistresultset.py
index 73b60c9..e11eb49 100644
--- a/boto/s3/bucketlistresultset.py
+++ b/boto/s3/bucketlistresultset.py
@@ -137,5 +137,3 @@
                                        key_marker=self.key_marker,
                                        upload_id_marker=self.upload_id_marker,
                                        headers=self.headers)
-
-    
diff --git a/boto/s3/connection.py b/boto/s3/connection.py
index f17ab40..583fa16 100644
--- a/boto/s3/connection.py
+++ b/boto/s3/connection.py
@@ -1,4 +1,5 @@
-# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
 # All rights reserved.
 #
@@ -142,21 +143,25 @@
     SAEast = 'sa-east-1'
     APNortheast = 'ap-northeast-1'
     APSoutheast = 'ap-southeast-1'
+    APSoutheast2 = 'ap-southeast-2'
 
 
 class S3Connection(AWSAuthConnection):
 
-    DefaultHost = 's3.amazonaws.com'
+    DefaultHost = boto.config.get('s3', 'host', 's3.amazonaws.com')
+    DefaultCallingFormat = boto.config.get('s3', 'calling_format', 'boto.s3.connection.SubdomainCallingFormat')
     QueryString = 'Signature=%s&Expires=%d&AWSAccessKeyId=%s'
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None,
                  host=DefaultHost, debug=0, https_connection_factory=None,
-                 calling_format=SubdomainCallingFormat(), path='/',
+                 calling_format=DefaultCallingFormat, path='/',
                  provider='aws', bucket_class=Bucket, security_token=None,
                  suppress_consec_slashes=True, anon=False,
                  validate_certs=None):
+        if isinstance(calling_format, str):
+            calling_format=boto.utils.find_class(calling_format)()
         self.calling_format = calling_format
         self.bucket_class = bucket_class
         self.anon = anon
@@ -204,11 +209,12 @@
         return '{"expiration": "%s",\n"conditions": [%s]}' % \
             (time.strftime(boto.utils.ISO8601, expiration_time), ",".join(conditions))
 
-    def build_post_form_args(self, bucket_name, key, expires_in = 6000,
-                             acl = None, success_action_redirect = None,
-                             max_content_length = None,
-                             http_method = "http", fields=None,
-                             conditions=None):
+    def build_post_form_args(self, bucket_name, key, expires_in=6000,
+                             acl=None, success_action_redirect=None,
+                             max_content_length=None,
+                             http_method='http', fields=None,
+                             conditions=None, storage_class='STANDARD',
+                             server_side_encryption=None):
         """
         Taken from the AWS book Python examples and modified for use with boto
         This only returns the arguments required for the post form, not the
@@ -226,8 +232,14 @@
         :param expires_in: Time (in seconds) before this expires, defaults
             to 6000
 
-        :type acl: :class:`boto.s3.acl.ACL`
-        :param acl: ACL rule to use, if any
+        :type acl: string
+        :param acl: A canned ACL.  One of:
+            * private
+            * public-read
+            * public-read-write
+            * authenticated-read
+            * bucket-owner-read
+            * bucket-owner-full-control
 
         :type success_action_redirect: string
         :param success_action_redirect: URL to redirect to on success
@@ -238,25 +250,21 @@
         :type http_method: string
         :param http_method:  HTTP Method to use, "http" or "https"
 
+        :type storage_class: string
+        :param storage_class: Storage class to use for storing the object.
+            Valid values: STANDARD | REDUCED_REDUNDANCY
+
+        :type server_side_encryption: string
+        :param server_side_encryption: Specifies server-side encryption
+            algorithm to use when Amazon S3 creates an object.
+            Valid values: None | AES256
+
         :rtype: dict
         :return: A dictionary containing field names/values as well as
             a url to POST to
 
             .. code-block:: python
 
-                {
-                    "action": action_url_to_post_to,
-                    "fields": [
-                        {
-                            "name": field_name,
-                            "value":  field_value
-                        },
-                        {
-                            "name": field_name2,
-                            "value": field_value2
-                        }
-                    ]
-                }
 
         """
         if fields == None:
@@ -273,13 +281,27 @@
             conditions.append('{"key": "%s"}' % key)
         if acl:
             conditions.append('{"acl": "%s"}' % acl)
-            fields.append({ "name": "acl", "value": acl})
+            fields.append({"name": "acl", "value": acl})
         if success_action_redirect:
             conditions.append('{"success_action_redirect": "%s"}' % success_action_redirect)
-            fields.append({ "name": "success_action_redirect", "value": success_action_redirect})
+            fields.append({"name": "success_action_redirect", "value": success_action_redirect})
         if max_content_length:
             conditions.append('["content-length-range", 0, %i]' % max_content_length)
-            fields.append({"name":'content-length-range', "value": "0,%i" % max_content_length})
+
+        if self.provider.security_token:
+            fields.append({'name': 'x-amz-security-token',
+                           'value': self.provider.security_token})
+            conditions.append('{"x-amz-security-token": "%s"}' % self.provider.security_token)
+
+        if storage_class:
+            fields.append({'name': 'x-amz-storage-class',
+                           'value': storage_class})
+            conditions.append('{"x-amz-storage-class": "%s"}' % storage_class)
+
+        if server_side_encryption:
+            fields.append({'name': 'x-amz-server-side-encryption',
+                           'value': server_side_encryption})
+            conditions.append('{"x-amz-server-side-encryption": "%s"}' % server_side_encryption)
 
         policy = self.build_post_policy(expiration, conditions)
 
@@ -291,7 +313,8 @@
         fields.append({"name": "AWSAccessKeyId",
                        "value": self.aws_access_key_id})
 
-        # Add signature for encoded policy document as the 'AWSAccessKeyId' field
+        # Add signature for encoded policy document as the
+        # 'signature' field
         signature = self._auth_handler.sign_string(policy_b64)
         fields.append({"name": "signature", "value": signature})
         fields.append({"name": "key", "value": key})
@@ -384,12 +407,49 @@
         return rs.owner.id
 
     def get_bucket(self, bucket_name, validate=True, headers=None):
+        """
+        Retrieves a bucket by name.
+
+        If the bucket does not exist, an ``S3ResponseError`` will be raised. If
+        you are unsure if the bucket exists or not, you can use the
+        ``S3Connection.lookup`` method, which will either return a valid bucket
+        or ``None``.
+
+        :type bucket_name: string
+        :param bucket_name: The name of the bucket
+
+        :type headers: dict
+        :param headers: Additional headers to pass along with the request to
+            AWS.
+
+        :type validate: boolean
+        :param validate: If ``True``, it will try to fetch all keys within the
+            given bucket. (Default: ``True``)
+        """
         bucket = self.bucket_class(self, bucket_name)
         if validate:
             bucket.get_all_keys(headers, maxkeys=0)
         return bucket
 
     def lookup(self, bucket_name, validate=True, headers=None):
+        """
+        Attempts to get a bucket from S3.
+
+        Works identically to ``S3Connection.get_bucket``, save for that it
+        will return ``None`` if the bucket does not exist instead of throwing
+        an exception.
+
+        :type bucket_name: string
+        :param bucket_name: The name of the bucket
+
+        :type headers: dict
+        :param headers: Additional headers to pass along with the request to
+            AWS.
+
+        :type validate: boolean
+        :param validate: If ``True``, it will try to fetch all keys within the
+            given bucket. (Default: ``True``)
+        """
         try:
             bucket = self.get_bucket(bucket_name, validate, headers=headers)
         except:
@@ -400,7 +460,8 @@
                       location=Location.DEFAULT, policy=None):
         """
         Creates a new located bucket. By default it's in the USA. You can pass
-        Location.EU to create an European bucket.
+        Location.EU to create a European bucket (S3) or European Union bucket
+        (GCS).
 
         :type bucket_name: string
         :param bucket_name: The name of the new bucket
@@ -408,8 +469,10 @@
         :type headers: dict
         :param headers: Additional headers to pass along with the request to AWS.
 
-        :type location: :class:`boto.s3.connection.Location`
-        :param location: The location of the new bucket
+        :type location: str
+        :param location: The location of the new bucket.  You can use one of the
+            constants in :class:`boto.s3.connection.Location` (e.g. Location.EU,
+            Location.USWest, etc.).
 
         :type policy: :class:`boto.s3.acl.CannedACLStrings`
         :param policy: A canned ACL policy that will be applied to the
@@ -441,6 +504,19 @@
                 response.status, response.reason, body)
 
     def delete_bucket(self, bucket, headers=None):
+        """
+        Removes an S3 bucket.
+
+        In order to remove the bucket, it must first be empty. If the bucket is
+        not empty, an ``S3ResponseError`` will be raised.
+
+        :type bucket: string
+        :param bucket: The name of the bucket to delete
+
+        :type headers: dict
+        :param headers: Additional headers to pass along with the request to
+            AWS.
+        """
         response = self.make_request('DELETE', bucket, headers=headers)
         body = response.read()
         if response.status != 204:
@@ -448,7 +524,8 @@
                 response.status, response.reason, body)
 
     def make_request(self, method, bucket='', key='', headers=None, data='',
-            query_args=None, sender=None, override_num_retries=None):
+                     query_args=None, sender=None, override_num_retries=None,
+                     retry_handler=None):
         if isinstance(bucket, self.bucket_class):
             bucket = bucket.name
         if isinstance(key, Key):
@@ -463,6 +540,9 @@
             boto.log.debug('path=%s' % path)
             auth_path += '?' + query_args
             boto.log.debug('auth_path=%s' % auth_path)
-        return AWSAuthConnection.make_request(self, method, path, headers,
-                data, host, auth_path, sender,
-                override_num_retries=override_num_retries)
+        return AWSAuthConnection.make_request(
+            self, method, path, headers,
+            data, host, auth_path, sender,
+            override_num_retries=override_num_retries,
+            retry_handler=retry_handler
+        )
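
S3Connection's default host and calling format can now be overridden from
the [s3] section of the boto config file, and calling_format may be given as
a dotted class path that is resolved with boto.utils.find_class(). For
example:

    # The same values can come from the boto config file, e.g.:
    #
    #   [s3]
    #   host = s3-eu-west-1.amazonaws.com
    #   calling_format = boto.s3.connection.OrdinaryCallingFormat
    #
    from boto.s3.connection import S3Connection

    conn = S3Connection(
        calling_format='boto.s3.connection.OrdinaryCallingFormat')
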
diff --git a/boto/s3/deletemarker.py b/boto/s3/deletemarker.py
index c2dac19..5db4343 100644
--- a/boto/s3/deletemarker.py
+++ b/boto/s3/deletemarker.py
@@ -53,5 +53,3 @@
             self.version_id = value
         else:
             setattr(self, name, value)
-
-
diff --git a/boto/s3/key.py b/boto/s3/key.py
index c8ec4ef..7fead3a 100644
--- a/boto/s3/key.py
+++ b/boto/s3/key.py
@@ -21,17 +21,22 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+import errno
 import mimetypes
 import os
 import re
 import rfc822
 import StringIO
 import base64
+import binascii
 import math
 import urllib
 import boto.utils
 from boto.exception import BotoClientError
+from boto.exception import StorageDataError
+from boto.exception import PleaseRetryException
 from boto.provider import Provider
+from boto.s3.keyfile import KeyFile
 from boto.s3.user import User
 from boto import UserAgent
 from boto.utils import compute_md5
@@ -42,10 +47,55 @@
 
 
 class Key(object):
+    """
+    Represents a key (object) in an S3 bucket.
+
+    :ivar bucket: The parent :class:`boto.s3.bucket.Bucket`.
+    :ivar name: The name of this Key object.
+    :ivar metadata: A dictionary containing user metadata that you
+        wish to store with the object or that has been retrieved from
+        an existing object.
+    :ivar cache_control: The value of the `Cache-Control` HTTP header.
+    :ivar content_type: The value of the `Content-Type` HTTP header.
+    :ivar content_encoding: The value of the `Content-Encoding` HTTP header.
+    :ivar content_disposition: The value of the `Content-Disposition` HTTP
+        header.
+    :ivar content_language: The value of the `Content-Language` HTTP header.
+    :ivar etag: The `etag` associated with this object.
+    :ivar last_modified: The string timestamp representing the last
+        time this object was modified in S3.
+    :ivar owner: The ID of the owner of this object.
+    :ivar storage_class: The storage class of the object.  Currently, one of:
+        STANDARD | REDUCED_REDUNDANCY | GLACIER
+    :ivar md5: The MD5 hash of the contents of the object.
+    :ivar size: The size, in bytes, of the object.
+    :ivar version_id: The version ID of this object, if it is a versioned
+        object.
+    :ivar encrypted: Whether the object is encrypted while at rest on
+        the server.
+    """
 
     DefaultContentType = 'application/octet-stream'
 
-    BufferSize = 8192
+    RestoreBody = """<?xml version="1.0" encoding="UTF-8"?>
+      <RestoreRequest xmlns="http://s3.amazonaws.com/doc/2006-03-01">
+        <Days>%s</Days>
+      </RestoreRequest>"""
+
+
+    BufferSize = boto.config.getint('Boto', 'key_buffer_size', 8192)
+
+    # The object metadata fields a user can set, other than custom metadata
+    # fields (i.e., those beginning with a provider-specific prefix like
+    # x-amz-meta).
+    base_user_settable_fields = set(["cache-control", "content-disposition",
+                                    "content-encoding", "content-language",
+                                    "content-md5", "content-type"])
+    _underscore_base_user_settable_fields = set()
+    for f in base_user_settable_fields:
+      _underscore_base_user_settable_fields.add(f.replace('-', '_'))
+
+
 
     def __init__(self, bucket=None, name=None):
         self.bucket = bucket
@@ -62,8 +112,6 @@
         self.last_modified = None
         self.owner = None
         self.storage_class = 'STANDARD'
-        self.md5 = None
-        self.base64md5 = None
         self.path = None
         self.resp = None
         self.mode = None
@@ -72,6 +120,14 @@
         self.source_version_id = None
         self.delete_marker = False
         self.encrypted = None
+        # If the object is being restored, this attribute will be set to True.
+        # If the object is restored, it will be set to False.  Otherwise this
+        # value will be None. If the restore is completed (ongoing_restore =
+        # False), the expiry_date will be populated with the expiry date of the
+        # restored object.
+        self.ongoing_restore = None
+        self.expiry_date = None
+        self.local_hashes = {}
 
     def __repr__(self):
         if self.bucket:
@@ -79,35 +135,53 @@
         else:
             return '<Key: None,%s>' % self.name
 
-    def __getattr__(self, name):
-        if name == 'key':
-            return self.name
-        else:
-            raise AttributeError
-
-    def __setattr__(self, name, value):
-        if name == 'key':
-            self.__dict__['name'] = value
-        else:
-            self.__dict__[name] = value
-
     def __iter__(self):
         return self
 
     @property
     def provider(self):
         provider = None
-        if self.bucket:
-            if self.bucket.connection:
-                provider = self.bucket.connection.provider
+        if self.bucket and self.bucket.connection:
+            provider = self.bucket.connection.provider
         return provider
 
+    @property
+    def key(self):
+        return self.name
+
+    @key.setter
+    def key(self, value):
+        self.name = value
+
+    @property
+    def md5(self):
+        if 'md5' in self.local_hashes and self.local_hashes['md5']:
+            return binascii.b2a_hex(self.local_hashes['md5'])
+
+    @md5.setter
+    def md5(self, value):
+        if value:
+            self.local_hashes['md5'] = binascii.a2b_hex(value)
+        elif 'md5' in self.local_hashes:
+            self.local_hashes.pop('md5', None)
+
+    @property
+    def base64md5(self):
+        if 'md5' in self.local_hashes and self.local_hashes['md5']:
+            return binascii.b2a_base64(self.local_hashes['md5']).rstrip('\n')
+
+    @base64md5.setter
+    def base64md5(self, value):
+        if value:
+            self.local_hashes['md5'] = binascii.a2b_base64(value)
+        elif 'md5' in self.local_hashes:
+            del self.local_hashes['md5']
+
     def get_md5_from_hexdigest(self, md5_hexdigest):
         """
         A utility function to create the 2-tuple (md5hexdigest, base64md5)
         from just having a precalculated md5_hexdigest.
         """
-        import binascii
         digest = binascii.unhexlify(md5_hexdigest)
         base64md5 = base64.encodestring(digest)
         if base64md5[-1] == '\n':
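
md5 and base64md5 are now properties derived from a single raw digest kept
in local_hashes['md5']. A standalone sketch of the hex/base64 relationship
these properties maintain (not a call into boto):

    # Standalone illustration; not a call into boto itself.
    import binascii
    from hashlib import md5

    raw = md5('hello world').digest()     # 16 raw bytes, as in local_hashes
    hex_md5 = binascii.b2a_hex(raw)       # what Key.md5 returns
    b64_md5 = binascii.b2a_base64(raw).rstrip('\n')  # what Key.base64md5 returns

    assert binascii.a2b_hex(hex_md5) == raw      # Key.md5 setter path
    assert binascii.a2b_base64(b64_md5) == raw   # Key.base64md5 setter path
    print hex_md5, b64_md5
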
@@ -117,7 +191,8 @@
     def handle_encryption_headers(self, resp):
         provider = self.bucket.connection.provider
         if provider.server_side_encryption_header:
-            self.encrypted = resp.getheader(provider.server_side_encryption_header, None)
+            self.encrypted = resp.getheader(
+                provider.server_side_encryption_header, None)
         else:
             self.encrypted = None
 
@@ -137,6 +212,26 @@
         else:
             self.delete_marker = False
 
+    def handle_restore_headers(self, response):
+        header = response.getheader('x-amz-restore')
+        if header is None:
+            return
+        parts = header.split(',', 1)
+        for part in parts:
+            key, val = [i.strip() for i in part.split('=')]
+            val = val.replace('"', '')
+            if key == 'ongoing-request':
+                self.ongoing_restore = True if val.lower() == 'true' else False
+            elif key == 'expiry-date':
+                self.expiry_date = val
+
+    def handle_addl_headers(self, headers):
+        """
+        Used by Key subclasses to do additional, provider-specific
+        processing of response headers. No-op for this base class.
+        """
+        pass
+
     def open_read(self, headers=None, query_args='',
                   override_num_retries=None, response_headers=None):
         """
@@ -200,6 +295,7 @@
                     self.content_disposition = value
             self.handle_version_headers(self.resp)
             self.handle_encryption_headers(self.resp)
+            self.handle_addl_headers(self.resp.getheaders())
 
     def open_write(self, headers=None, override_num_retries=None):
         """
@@ -230,8 +326,23 @@
 
     closed = False
 
-    def close(self):
-        if self.resp:
+    def close(self, fast=False):
+        """
+        Close this key.
+
+        :type fast: bool
+        :param fast: True if you want the connection to be closed without first
+        reading the content. This should only be used in cases where subsequent
+        calls don't need to return the content from the open HTTP connection.
+        Note: As explained at
+        http://docs.python.org/2/library/httplib.html#httplib.HTTPConnection.getresponse,
+        callers must read the whole response before sending a new request to the
+        server. Calling Key.close(fast=True) and making a subsequent request to
+        the server will work because boto will get an httplib exception and
+        close/reopen the connection.
+
+        """
+        if self.resp and not fast:
             self.resp.read()
         self.resp = None
         self.mode = None
@@ -433,6 +544,46 @@
     def set_canned_acl(self, acl_str, headers=None):
         return self.bucket.set_canned_acl(acl_str, self.name, headers)
 
+    def get_redirect(self):
+        """Return the redirect location configured for this key.
+
+        If no redirect is configured (via set_redirect), then None
+        will be returned.
+
+        """
+        response = self.bucket.connection.make_request(
+            'HEAD', self.bucket.name, self.name)
+        if response.status == 200:
+            return response.getheader('x-amz-website-redirect-location')
+        else:
+            raise self.provider.storage_response_error(
+                response.status, response.reason, response.read())
+
+    def set_redirect(self, redirect_location, headers=None):
+        """Configure this key to redirect to another location.
+
+        When the bucket associated with this key is accessed from the website
+        endpoint, a 301 redirect will be issued to the specified
+        `redirect_location`.
+
+        :type redirect_location: string
+        :param redirect_location: The location to redirect to.
+
+        """
+        if headers is None:
+            headers = {}
+        else:
+            headers = headers.copy()
+
+        headers['x-amz-website-redirect-location'] = redirect_location
+        response = self.bucket.connection.make_request('PUT', self.bucket.name,
+                                                       self.name, headers)
+        if response.status == 200:
+            return True
+        else:
+            raise self.provider.storage_response_error(
+                response.status, response.reason, response.read())
+
     def make_public(self, headers=None):
         return self.bucket.set_canned_acl('public-read', self.name, headers)
 
@@ -526,20 +677,12 @@
             point point at the offset from which you wish to upload.
             ie. if uploading the full file, it should point at the
             start of the file. Normally when a file is opened for
-            reading, the fp will point at the first byte. See the
+            reading, the fp will point at the first byte.  See the
             bytes parameter below for more info.
 
         :type headers: dict
         :param headers: The headers to pass along with the PUT request
 
-        :type cb: function
-        :param cb: a callback function that will be called to report
-            progress on the upload.  The callback should accept two
-            integer parameters, the first representing the number of
-            bytes that have been successfully transmitted to S3 and
-            the second representing the size of the to be transmitted
-            object.
-
         :type num_cb: int
         :param num_cb: (optional) If a callback is specified with the
             cb parameter this parameter determines the granularity of
@@ -548,6 +691,13 @@
             transfer. Providing a negative integer will cause your
             callback to be called with each buffer read.
 
+        :type query_args: string
+        :param query_args: (optional) Arguments to pass in the query string.
+
+        :type chunked_transfer: boolean
+        :param chunked_transfer: (optional) If true, we use chunked
+            Transfer-Encoding.
+
         :type size: int
         :param size: (optional) The Maximum number of bytes to read
             from the file pointer (fp). This is useful when uploading
@@ -556,6 +706,13 @@
             the default behaviour is to read all bytes from the file
             pointer. Less bytes may be available.
         """
+        self._send_file_internal(fp, headers=headers, cb=cb, num_cb=num_cb,
+                                 query_args=query_args,
+                                 chunked_transfer=chunked_transfer, size=size)
+
+    def _send_file_internal(self, fp, headers=None, cb=None, num_cb=10,
+                            query_args=None, chunked_transfer=False, size=None,
+                            hash_algs=None):
         provider = self.bucket.connection.provider
         try:
             spos = fp.tell()
@@ -563,6 +720,12 @@
             spos = None
             self.read_from_stream = False
 
+        # If hash_algs is unset and the MD5 hasn't already been computed,
+        # default to an MD5 hash_alg to hash the data on-the-fly.
+        if hash_algs is None and not self.md5:
+            hash_algs = {'md5': md5}
+        digesters = dict((alg, hash_algs[alg]()) for alg in hash_algs or {})
+
         def sender(http_conn, method, path, data, headers):
             # This function is called repeatedly for temporary retries
             # so we must be sure the file pointer is pointing at the
@@ -581,19 +744,13 @@
                 http_conn.putheader(key, headers[key])
             http_conn.endheaders()
 
-            # Calculate all MD5 checksums on the fly, if not already computed
-            if not self.base64md5:
-                m = md5()
-            else:
-                m = None
-
             save_debug = self.bucket.connection.debug
             self.bucket.connection.debug = 0
-            # If the debuglevel < 3 we don't want to show connection
+            # If the debuglevel < 4 we don't want to show connection
             # payload, so turn off HTTP connection-level debug output (to
             # be restored below).
             # Use the getattr approach to allow this to work in AppEngine.
-            if getattr(http_conn, 'debuglevel', 0) < 3:
+            if getattr(http_conn, 'debuglevel', 0) < 4:
                 http_conn.set_debuglevel(0)
 
             data_len = 0
@@ -609,7 +766,8 @@
                     # of data transferred, except when we know size.
                     cb_count = (1024 * 1024) / self.BufferSize
                 elif num_cb > 1:
-                    cb_count = int(math.ceil(cb_size / self.BufferSize / (num_cb - 1.0)))
+                    cb_count = int(
+                        math.ceil(cb_size / self.BufferSize / (num_cb - 1.0)))
                 elif num_cb < 0:
                     cb_count = -1
                 else:
@@ -634,8 +792,8 @@
                     http_conn.send('\r\n')
                 else:
                     http_conn.send(chunk)
-                if m:
-                    m.update(chunk)
+                for alg in digesters:
+                    digesters[alg].update(chunk)
                 if bytes_togo:
                     bytes_togo -= chunk_len
                     if bytes_togo <= 0:
@@ -652,10 +810,8 @@
 
             self.size = data_len
 
-            if m:
-                # Use the chunked trailer for the digest
-                hd = m.hexdigest()
-                self.md5, self.base64md5 = self.get_md5_from_hexdigest(hd)
+            for alg in digesters:
+                self.local_hashes[alg] = digesters[alg].digest()
 
             if chunked_transfer:
                 http_conn.send('0\r\n')
@@ -665,24 +821,17 @@
             if cb and (cb_count <= 1 or i > 0) and data_len > 0:
                 cb(data_len, cb_size)
 
-            response = http_conn.getresponse()
-            body = response.read()
             http_conn.set_debuglevel(save_debug)
             self.bucket.connection.debug = save_debug
-            if ((response.status == 500 or response.status == 503 or
-                    response.getheader('location')) and not chunked_transfer):
-                # we'll try again.
-                return response
-            elif response.status >= 200 and response.status <= 299:
-                self.etag = response.getheader('etag')
-                if self.etag != '"%s"' % self.md5:
-                    raise provider.storage_data_error(
-                        'ETag from S3 did not match computed MD5')
-                return response
-            else:
+            response = http_conn.getresponse()
+            body = response.read()
+
+            if not self.should_retry(response, chunked_transfer):
                 raise provider.storage_response_error(
                     response.status, response.reason, body)
 
+            return response
+
         if not headers:
             headers = {}
         else:
@@ -721,11 +870,57 @@
             headers['Content-Length'] = str(self.size)
         headers['Expect'] = '100-Continue'
         headers = boto.utils.merge_meta(headers, self.metadata, provider)
-        resp = self.bucket.connection.make_request('PUT', self.bucket.name,
-                                                   self.name, headers,
-                                                   sender=sender,
-                                                   query_args=query_args)
+        resp = self.bucket.connection.make_request(
+            'PUT',
+            self.bucket.name,
+            self.name,
+            headers,
+            sender=sender,
+            query_args=query_args
+        )
         self.handle_version_headers(resp, force=True)
+        self.handle_addl_headers(resp.getheaders())
+
+    def should_retry(self, response, chunked_transfer=False):
+        provider = self.bucket.connection.provider
+
+        if not chunked_transfer:
+            if response.status in [500, 503]:
+                # 500 & 503 can be plain retries.
+                return True
+
+            if response.getheader('location'):
+                # If there's a redirect, plain retry.
+                return True
+
+        if 200 <= response.status <= 299:
+            self.etag = response.getheader('etag')
+
+            if self.etag != '"%s"' % self.md5:
+                raise provider.storage_data_error(
+                    'ETag from S3 did not match computed MD5')
+
+            return True
+
+        if response.status == 400:
+            # The 400 must be trapped so the retry handler can check to
+            # see if it was a timeout.
+            # If ``RequestTimeout`` is present, we'll retry. Otherwise, bomb
+            # out.
+            body = response.read()
+            err = provider.storage_response_error(
+                response.status,
+                response.reason,
+                body
+            )
+
+            if err.error_code in ['RequestTimeout']:
+                raise PleaseRetryException(
+                    "Saw %s, retrying" % err.error_code,
+                    response=response
+                )
+
+        return False
 
     def compute_md5(self, fp, size=None):
         """
@@ -738,14 +933,9 @@
         :param size: (optional) The Maximum number of bytes to read
             from the file pointer (fp). This is useful when uploading
             a file in multiple parts where the file is being split
-            inplace into different parts. Less bytes may be available.
-
-        :rtype: tuple
-        :return: A tuple containing the hex digest version of the MD5
-            hash as the first element and the base64 encoded version
-            of the plain digest as the second element.
+            in place into different parts. Fewer bytes may be available.
         """
-        tup = compute_md5(fp, size=size)
+        hex_digest, b64_digest, data_size = compute_md5(fp, size=size)
         # Returned values are MD5 hash, base64 encoded MD5 hash, and data size.
         # The internal implementation of compute_md5() needs to return the
         # data size but we don't want to return that value to the external
@@ -753,8 +943,8 @@
         # break some code) so we consume the third tuple value here and
         # return the remainder of the tuple to the caller, thereby preserving
         # the existing interface.
-        self.size = tup[2]
-        return tup[0:2]
+        self.size = data_size
+        return (hex_digest, b64_digest)
 
     def set_contents_from_stream(self, fp, headers=None, replace=True,
                                  cb=None, num_cb=10, policy=None,
@@ -875,7 +1065,7 @@
             the second representing the size of the to be transmitted
             object.
 
-        :type cb: int
+        :type num_cb: int
         :param num_cb: (optional) If a callback is specified with the
             cb parameter this parameter determines the granularity of
             the callback by defining the maximum number of times the
@@ -934,18 +1124,34 @@
             # caller requests reading from beginning of fp.
             fp.seek(0, os.SEEK_SET)
         else:
-            spos = fp.tell()
-            fp.seek(0, os.SEEK_END)
-            if fp.tell() == spos:
-                fp.seek(0, os.SEEK_SET)
-                if fp.tell() != spos:
-                    # Raise an exception as this is likely a programming error
-                    # whereby there is data before the fp but nothing after it.
-                    fp.seek(spos)
-                    raise AttributeError(
-                     'fp is at EOF. Use rewind option or seek() to data start.')
-            # seek back to the correct position.
-            fp.seek(spos)
+            # The following seek/tell/seek logic is intended
+            # to detect applications using the older interface to
+            # set_contents_from_file(), which automatically rewound the
+            # file each time the Key was reused. This changed with commit
+            # 14ee2d03f4665fe20d19a85286f78d39d924237e, to support uploads
+            # split into multiple parts and uploaded in parallel, and at
+            # the time of that commit this check was added because otherwise
+            # older programs would get a success status and upload an empty
+            # object. Unfortunately, it's very inefficient for fp's implemented
+            # by KeyFile (used, for example, by gsutil when copying between
+            # providers). So, we skip the check for the KeyFile case.
+            # TODO: At some point consider removing this seek/tell/seek
+            # logic, after enough time has passed that it's unlikely any
+            # programs remain that assume the older auto-rewind interface.
+            if not isinstance(fp, KeyFile):
+                spos = fp.tell()
+                fp.seek(0, os.SEEK_END)
+                if fp.tell() == spos:
+                    fp.seek(0, os.SEEK_SET)
+                    if fp.tell() != spos:
+                        # Raise an exception as this is likely a programming
+                        # error whereby there is data before the fp but nothing
+                        # after it.
+                        fp.seek(spos)
+                        raise AttributeError('fp is at EOF. Use rewind option '
+                                             'or seek() to data start.')
+                # seek back to the correct position.
+                fp.seek(spos)
 
         if reduced_redundancy:
             self.storage_class = 'REDUCED_REDUNDANCY'
@@ -955,7 +1161,6 @@
                 # What if different providers provide different classes?
         if hasattr(fp, 'name'):
             self.path = fp.name
-
         if self.bucket != None:
             if not md5 and provider.supports_chunked_transfer():
                 # defer md5 calculation to on the fly and
@@ -964,6 +1169,18 @@
                 self.size = None
             else:
                 chunked_transfer = False
+                if isinstance(fp, KeyFile):
+                    # Avoid EOF seek for KeyFile case as it's very inefficient.
+                    key = fp.getkey()
+                    size = key.size - fp.tell()
+                    self.size = size
+                    # At present both GCS and S3 use MD5 for the etag for
+                    # non-multipart-uploaded objects. If the etag is 32 hex
+                    # chars use it as an MD5, to avoid having to read the file
+                    # twice while transferring.
+                    if (re.match('^"[a-fA-F0-9]{32}"$', key.etag)):
+                        etag = key.etag.strip('"')
+                        md5 = (etag, base64.b64encode(binascii.unhexlify(etag)))
                 if not md5:
                     # compute_md5() and also set self.size to actual
                     # size of the bytes read computing the md5.
@@ -1052,12 +1269,15 @@
             :param encrypt_key: If True, the new copy of the object
             will be encrypted on the server-side by S3 and will be
             stored in an encrypted form while at rest in S3.
+
+        :rtype: int
+        :return: The number of bytes written to the key.
         """
-        fp = open(filename, 'rb')
-        self.set_contents_from_file(fp, headers, replace, cb, num_cb,
-                                    policy, md5, reduced_redundancy,
-                                    encrypt_key=encrypt_key)
-        fp.close()
+        with open(filename, 'rb') as fp:
+            return self.set_contents_from_file(fp, headers, replace, cb,
+                                               num_cb, policy, md5,
+                                               reduced_redundancy,
+                                               encrypt_key=encrypt_key)
 
     def set_contents_from_string(self, s, headers=None, replace=True,
                                  cb=None, num_cb=10, policy=None, md5=None,
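
set_contents_from_filename() now opens the file with a context manager and,
per the new docstring, returns the byte count, while
get_contents_to_filename() (further below) removes a partially written file
before re-raising. A hedged round-trip sketch with made-up bucket, key, and
paths:

    # Sketch only: bucket, key, and file paths are made up.
    from boto.s3.connection import S3Connection

    conn = S3Connection()
    bucket = conn.get_bucket('my-data-bucket')

    key = bucket.new_key('backups/notes.txt')
    key.set_contents_from_filename('/tmp/notes.txt', reduced_redundancy=True)

    # On a failed download the partially written local file is removed
    # before the exception propagates.
    key.get_contents_to_filename('/tmp/notes-copy.txt')
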
@@ -1164,18 +1384,30 @@
             with the stored object in the response.  See
             http://goo.gl/EWOPb for details.
         """
+        self._get_file_internal(fp, headers=headers, cb=cb, num_cb=num_cb,
+                                torrent=torrent, version_id=version_id,
+                                override_num_retries=override_num_retries,
+                                response_headers=response_headers,
+                                hash_algs=None,
+                                query_args=None)
+
+    def _get_file_internal(self, fp, headers=None, cb=None, num_cb=10,
+                 torrent=False, version_id=None, override_num_retries=None,
+                 response_headers=None, hash_algs=None, query_args=None):
         if headers is None:
             headers = {}
         save_debug = self.bucket.connection.debug
         if self.bucket.connection.debug == 1:
             self.bucket.connection.debug = 0
 
-        query_args = []
+        query_args = query_args or []
         if torrent:
             query_args.append('torrent')
-            m = None
-        else:
-            m = md5()
+
+        if hash_algs is None and not torrent:
+            hash_algs = {'md5': md5}
+        digesters = dict((alg, hash_algs[alg]()) for alg in hash_algs or {})
+
         # If a version_id is passed in, use that.  If not, check to see
         # if the Key object has an explicit version_id and, if so, use that.
         # Otherwise, don't pass a version_id query param.
@@ -1185,7 +1417,8 @@
             query_args.append('versionId=%s' % version_id)
         if response_headers:
             for key in response_headers:
-                query_args.append('%s=%s' % (key, urllib.quote(response_headers[key])))
+                query_args.append('%s=%s' % (
+                    key, urllib.quote(response_headers[key])))
         query_args = '&'.join(query_args)
         self.open('r', headers, query_args=query_args,
                   override_num_retries=override_num_retries)
@@ -1208,22 +1441,28 @@
                 cb_count = 0
             i = 0
             cb(data_len, cb_size)
-        for bytes in self:
-            fp.write(bytes)
-            data_len += len(bytes)
-            if m:
-                m.update(bytes)
-            if cb:
-                if cb_size > 0 and data_len >= cb_size:
-                    break
-                i += 1
-                if i == cb_count or cb_count == -1:
-                    cb(data_len, cb_size)
-                    i = 0
+        try:
+            for bytes in self:
+                fp.write(bytes)
+                data_len += len(bytes)
+                for alg in digesters:
+                    digesters[alg].update(bytes)
+                if cb:
+                    if cb_size > 0 and data_len >= cb_size:
+                        break
+                    i += 1
+                    if i == cb_count or cb_count == -1:
+                        cb(data_len, cb_size)
+                        i = 0
+        except IOError as e:
+            if e.errno == errno.ENOSPC:
+                raise StorageDataError('Out of space for destination file '
+                                       '%s' % fp.name)
+            raise
         if cb and (cb_count <= 1 or i > 0) and data_len > 0:
             cb(data_len, cb_size)
-        if m:
-            self.md5 = m.hexdigest()
+        for alg in digesters:
+            self.local_hashes[alg] = digesters[alg].digest()
         if self.size is None and not torrent and "Range" not in headers:
             self.size = data_len
         self.close()
@@ -1359,11 +1598,16 @@
             http://goo.gl/EWOPb for details.
         """
         fp = open(filename, 'wb')
-        self.get_contents_to_file(fp, headers, cb, num_cb, torrent=torrent,
-                                  version_id=version_id,
-                                  res_download_handler=res_download_handler,
-                                  response_headers=response_headers)
-        fp.close()
+        try:
+            self.get_contents_to_file(fp, headers, cb, num_cb, torrent=torrent,
+                                      version_id=version_id,
+                                      res_download_handler=res_download_handler,
+                                      response_headers=response_headers)
+        except Exception:
+            os.remove(filename)
+            raise
+        finally:
+            fp.close()
         # if last_modified date was sent from s3, try to set file's timestamp
         if self.last_modified != None:
             try:
@@ -1467,7 +1711,88 @@
         :param display_name: An option string containing the user's
             Display Name.  Only required on Walrus.
         """
-        policy = self.get_acl()
+        policy = self.get_acl(headers=headers)
         policy.acl.add_user_grant(permission, user_id,
                                   display_name=display_name)
         self.set_acl(policy, headers=headers)
+
+    def _normalize_metadata(self, metadata):
+        if type(metadata) == set:
+            norm_metadata = set()
+            for k in metadata:
+                norm_metadata.add(k.lower())
+        else:
+            norm_metadata = {}
+            for k in metadata:
+                norm_metadata[k.lower()] = metadata[k]
+        return norm_metadata
+
+    def _get_remote_metadata(self, headers=None):
+        """
+        Extracts metadata from existing URI into a dict, so we can
+        overwrite/delete from it to form the new set of metadata to apply to a
+        key.
+        """
+        metadata = {}
+        for underscore_name in self._underscore_base_user_settable_fields:
+            if hasattr(self, underscore_name):
+                value = getattr(self, underscore_name)
+                if value:
+                    # Generate HTTP field name corresponding to "_" named field.
+                    field_name = underscore_name.replace('_', '-')
+                    metadata[field_name.lower()] = value
+        # self.metadata contains custom metadata, which are all user-settable.
+        prefix = self.provider.metadata_prefix
+        for underscore_name in self.metadata:
+            field_name = underscore_name.replace('_', '-')
+            metadata['%s%s' % (prefix, field_name.lower())] = (
+                self.metadata[underscore_name])
+        return metadata
+
+    def set_remote_metadata(self, metadata_plus, metadata_minus, preserve_acl,
+                            headers=None):
+        metadata_plus = self._normalize_metadata(metadata_plus)
+        metadata_minus = self._normalize_metadata(metadata_minus)
+        metadata = self._get_remote_metadata()
+        metadata.update(metadata_plus)
+        for h in metadata_minus:
+            if h in metadata:
+                del metadata[h]
+        src_bucket = self.bucket
+        # Boto prepends the meta prefix when adding headers, so strip prefix in
+        # metadata before sending back in to copy_key() call.
+        rewritten_metadata = {}
+        for h in metadata:
+            if (h.startswith('x-goog-meta-') or h.startswith('x-amz-meta-')):
+                rewritten_h = (h.replace('x-goog-meta-', '')
+                               .replace('x-amz-meta-', ''))
+            else:
+                rewritten_h = h
+            rewritten_metadata[rewritten_h] = metadata[h]
+        metadata = rewritten_metadata
+        src_bucket.copy_key(self.name, self.bucket.name, self.name,
+                            metadata=metadata, preserve_acl=preserve_acl,
+                            headers=headers)
+
+    def restore(self, days, headers=None):
+        """Restore an object from an archive.
+
+        :type days: int
+        :param days: The lifetime of the restored object (must
+            be at least 1 day).  If the object is already restored
+            then this parameter can be used to readjust the lifetime
+            of the restored object.  In this case, the days
+            param is with respect to the initial time of the request.
+            If the object has not been restored, this param is with
+            respect to the completion time of the request.
+
+        """
+        response = self.bucket.connection.make_request(
+            'POST', self.bucket.name, self.name,
+            data=self.RestoreBody % days,
+            headers=headers, query_args='restore')
+        if response.status not in (200, 202):
+            provider = self.bucket.connection.provider
+            raise provider.storage_response_error(response.status,
+                                                  response.reason,
+                                                  response.read())
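
As an illustrative sketch of the Key changes above (bucket and key names are
hypothetical, not part of the patch), the updated API could be exercised like
this:

import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('example-bucket')

# set_contents_from_filename() now returns the number of bytes written.
key = bucket.new_key('reports/2013-05.csv')
num_bytes = key.set_contents_from_filename('/tmp/2013-05.csv')

# restore() issues a POST ?restore request for an archived (Glacier) object.
archived = bucket.get_key('reports/2012-01.csv')
archived.restore(days=7)
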
diff --git a/boto/s3/keyfile.py b/boto/s3/keyfile.py
new file mode 100644
index 0000000..4245413
--- /dev/null
+++ b/boto/s3/keyfile.py
@@ -0,0 +1,134 @@
+# Copyright 2013 Google Inc.
+# Copyright 2011, Nexenta Systems Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Wrapper class to expose a Key being read via a partial implementation of the
+Python file interface. The only functions supported are those needed for seeking
+in a Key open for reading.
+"""
+
+import os
+from boto.exception import StorageResponseError
+
+class KeyFile():
+
+  def __init__(self, key):
+    self.key = key
+    self.key.open_read()
+    self.location = 0
+    self.closed = False
+    self.softspace = -1 # Not implemented.
+    self.mode = 'r'
+    self.encoding = 'Undefined in KeyFile'
+    self.errors = 'Undefined in KeyFile'
+    self.newlines = 'Undefined in KeyFile'
+    self.name = key.name
+
+  def tell(self):
+    if self.location is None:
+      raise ValueError("I/O operation on closed file")
+    return self.location
+
+  def seek(self, pos, whence=os.SEEK_SET):
+    self.key.close(fast=True)
+    if whence == os.SEEK_END:
+      # We need special handling for this case because sending an HTTP range GET
+      # with EOF for the range start would cause an invalid range error. Instead
+      # we position to one before EOF (plus pos) and then read one byte to
+      # position at EOF.
+      if self.key.size == 0:
+        # Don't try to seek with an empty key.
+        return
+      pos = self.key.size + pos - 1
+      if pos < 0:
+        raise IOError("Invalid argument")
+      self.key.open_read(headers={"Range": "bytes=%d-" % pos})
+      self.key.read(1)
+      self.location = pos + 1
+      return
+
+    if whence == os.SEEK_SET:
+      if pos < 0:
+        raise IOError("Invalid argument")
+    elif whence == os.SEEK_CUR:
+      pos += self.location
+    else:
+      raise IOError('Invalid whence param (%d) passed to seek' % whence)
+    try:
+      self.key.open_read(headers={"Range": "bytes=%d-" % pos})
+    except StorageResponseError as e:
+      # 416 Invalid Range means that the given starting byte was past the end
+      # of file. We catch this because the Python file interface allows silently
+      # seeking past the end of the file.
+      if e.status != 416:
+        raise
+
+    self.location = pos
+
+  def read(self, size):
+    self.location += size
+    return self.key.read(size)
+
+  def close(self):
+    self.key.close()
+    self.location = None
+    self.closed = True
+
+  def isatty(self):
+    return False
+
+  # Non-file interface, useful for code that wants to dig into underlying Key
+  # state.
+  def getkey(self):
+    return self.key
+
+  # Unimplemented interfaces below here.
+
+  def write(self, buf):
+    raise NotImplementedError('write not implemented in KeyFile')
+
+  def fileno(self):
+    raise NotImplementedError('fileno not implemented in KeyFile')
+
+  def flush(self):
+    raise NotImplementedError('flush not implemented in KeyFile')
+
+  def next(self):
+    raise NotImplementedError('next not implemented in KeyFile')
+
+  def readinto(self):
+    raise NotImplementedError('readinto not implemented in KeyFile')
+
+  def readline(self):
+    raise NotImplementedError('readline not implemented in KeyFile')
+
+  def readlines(self):
+    raise NotImplementedError('readlines not implemented in KeyFile')
+
+  def truncate(self):
+    raise NotImplementedError('truncate not implemented in KeyFile')
+
+  def writelines(self):
+    raise NotImplementedError('writelines not implemented in KeyFile')
+
+  def xreadlines(self):
+    raise NotImplementedError('xreadlines not implemented in KeyFile')
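
A rough usage sketch for the new KeyFile wrapper, assuming a bucket and an
object of at least a few hundred bytes purely for illustration; it exposes
just enough of the file interface to seek within a key that is open for
reading:

import os
import boto
from boto.s3.keyfile import KeyFile

conn = boto.connect_s3()
key = conn.get_bucket('example-bucket').get_key('logs/big-object.bin')

kf = KeyFile(key)
kf.seek(-128, os.SEEK_END)   # ranged GET positions 128 bytes before EOF
tail = kf.read(128)
kf.close()
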
diff --git a/boto/s3/lifecycle.py b/boto/s3/lifecycle.py
index fa5c5cf..58126e6 100644
--- a/boto/s3/lifecycle.py
+++ b/boto/s3/lifecycle.py
@@ -34,20 +34,36 @@
     :ivar status: If Enabled, the rule is currently being applied.
         If Disabled, the rule is not currently being applied.
 
-    :ivar expiration: Indicates the lifetime, in days, of the objects
-        that are subject to the rule. The value must be a non-zero
-        positive integer.
+    :ivar expiration: An instance of `Expiration`. This indicates
+        the lifetime of the objects that are subject to the rule.
+
+    :ivar transition: An instance of `Transition`.  This indicates
+        when to transition to a different storage class.
+
     """
-    def __init__(self, id=None, prefix=None, status=None, expiration=None):
+    def __init__(self, id=None, prefix=None, status=None, expiration=None,
+                 transition=None):
         self.id = id
         self.prefix = prefix
         self.status = status
-        self.expiration = expiration
+        if isinstance(expiration, (int, long)):
+            # Retain backwards compatibility with plain integer day counts.
+            self.expiration = Expiration(days=expiration)
+        else:
+            # None or object
+            self.expiration = expiration
+        self.transition = transition
 
     def __repr__(self):
-        return '<CORSRule: %s>' % self.id
+        return '<Rule: %s>' % self.id
 
     def startElement(self, name, attrs, connection):
+        if name == 'Transition':
+            self.transition = Transition()
+            return self.transition
+        elif name == 'Expiration':
+            self.expiration = Expiration()
+            return self.expiration
         return None
 
     def endElement(self, name, value, connection):
@@ -57,8 +73,6 @@
             self.prefix = value
         elif name == 'Status':
             self.status = value
-        elif name == 'Days':
-            self.expiration = int(value)
         else:
             setattr(self, name, value)
 
@@ -67,10 +81,96 @@
         s += '<ID>%s</ID>' % self.id
         s += '<Prefix>%s</Prefix>' % self.prefix
         s += '<Status>%s</Status>' % self.status
-        s += '<Expiration><Days>%d</Days></Expiration>' % self.expiration
+        if self.expiration is not None:
+            s += self.expiration.to_xml()
+        if self.transition is not None:
+            s += self.transition.to_xml()
         s += '</Rule>'
         return s
 
+class Expiration(object):
+    """
+    When an object will expire.
+
+    :ivar days: The number of days until the object expires
+
+    :ivar date: The date when the object will expire. Must be
+        in ISO 8601 format.
+    """
+    def __init__(self, days=None, date=None):
+        self.days = days
+        self.date = date
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'Days':
+            self.days = int(value)
+        elif name == 'Date':
+            self.date = value
+
+    def __repr__(self):
+        if self.days is None:
+            how_long = "on: %s" % self.date
+        else:
+            how_long = "in: %s days" % self.days
+        return '<Expiration: %s>' % how_long
+
+    def to_xml(self):
+        s = '<Expiration>'
+        if self.days is not None:
+            s += '<Days>%s</Days>' % self.days
+        elif self.date is not None:
+            s += '<Date>%s</Date>' % self.date
+        s += '</Expiration>'
+        return s
+
+class Transition(object):
+    """
+    A transition to a different storage class.
+
+    :ivar days: The number of days until the object should be moved.
+
+    :ivar date: The date when the object should be moved.  Should be
+        in ISO 8601 format.
+
+    :ivar storage_class: The storage class to transition to.  Valid
+        values are GLACIER.
+
+    """
+    def __init__(self, days=None, date=None, storage_class=None):
+        self.days = days
+        self.date = date
+        self.storage_class = storage_class
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'Days':
+            self.days = int(value)
+        elif name == 'Date':
+            self.date = value
+        elif name == 'StorageClass':
+            self.storage_class = value
+
+    def __repr__(self):
+        if self.days is None:
+            how_long = "on: %s" % self.date
+        else:
+            how_long = "in: %s days" % self.days
+        return '<Transition: %s, %s>' % (how_long, self.storage_class)
+
+    def to_xml(self):
+        s = '<Transition>'
+        s += '<StorageClass>%s</StorageClass>' % self.storage_class
+        if self.days is not None:
+            s += '<Days>%s</Days>' % self.days
+        elif self.date is not None:
+            s += '<Date>%s</Date>' % self.date
+        s += '</Transition>'
+        return s
 
 class Lifecycle(list):
     """
@@ -92,13 +192,14 @@
         Returns a string containing the XML version of the Lifecycle
         configuration as defined by S3.
         """
-        s = '<LifecycleConfiguration>'
+        s = '<?xml version="1.0" encoding="UTF-8"?>'
+        s += '<LifecycleConfiguration>'
         for rule in self:
             s += rule.to_xml()
         s += '</LifecycleConfiguration>'
         return s
 
-    def add_rule(self, id, prefix, status, expiration):
+    def add_rule(self, id, prefix, status, expiration, transition=None):
         """
         Add a rule to this Lifecycle configuration.  This only adds
         the rule to the local copy.  To install the new rule(s) on
@@ -120,7 +221,11 @@
         :type expiration: int
         :param expiration: Indicates the lifetime, in days, of the objects
             that are subject to the rule. The value must be a non-zero
-            positive integer.
+            positive integer. An Expiration instance may also be passed.
+
+        :type transition: Transition
+        :param transition: Indicates when an object transitions to a
+            different storage class.
         """
-        rule = Rule(id, prefix, status, expiration)
+        rule = Rule(id, prefix, status, expiration, transition)
         self.append(rule)
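
A sketch of the extended lifecycle API with a hypothetical bucket: rules can
now carry an Expiration and a Transition object instead of a bare day count.

import boto
from boto.s3.lifecycle import Lifecycle, Expiration, Transition

lifecycle = Lifecycle()
lifecycle.add_rule('archive-logs', 'logs/', 'Enabled',
                   expiration=Expiration(days=365),
                   transition=Transition(days=30, storage_class='GLACIER'))

conn = boto.connect_s3()
conn.get_bucket('example-bucket').configure_lifecycle(lifecycle)
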
diff --git a/boto/s3/multipart.py b/boto/s3/multipart.py
index 9b62430..1292678 100644
--- a/boto/s3/multipart.py
+++ b/boto/s3/multipart.py
@@ -1,5 +1,7 @@
-# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
+# All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -15,7 +17,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -25,6 +27,7 @@
 from boto import handler
 import xml.sax
 
+
 class CompleteMultiPartUpload(object):
     """
     Represents a completed MultiPart Upload.  Contains the
@@ -51,7 +54,7 @@
     def __repr__(self):
         return '<CompleteMultiPartUpload: %s.%s>' % (self.bucket_name,
                                                      self.key_name)
-        
+
     def startElement(self, name, attrs, connection):
         return None
 
@@ -66,7 +69,8 @@
             self.etag = value
         else:
             setattr(self, name, value)
-        
+
+
 class Part(object):
     """
     Represents a single part in a MultiPart upload.
@@ -90,7 +94,7 @@
             return '<Part %d>' % self.part_number
         else:
             return '<Part %s>' % None
-        
+
     def startElement(self, name, attrs, connection):
         return None
 
@@ -105,7 +109,8 @@
             self.size = int(value)
         else:
             setattr(self, name, value)
-        
+
+
 def part_lister(mpupload, part_number_marker=None):
     """
     A generator function for listing parts of a multipart upload.
@@ -117,13 +122,14 @@
         for part in parts:
             yield part
         part_number_marker = mpupload.next_part_number_marker
-        more_results= mpupload.is_truncated
-        
+        more_results = mpupload.is_truncated
+
+
 class MultiPartUpload(object):
     """
     Represents a MultiPart Upload operation.
     """
-    
+
     def __init__(self, bucket=None):
         self.bucket = bucket
         self.bucket_name = None
@@ -217,14 +223,13 @@
             return self._parts
 
     def upload_part_from_file(self, fp, part_num, headers=None, replace=True,
-                              cb=None, num_cb=10, policy=None, md5=None,
-                              size=None):
+                              cb=None, num_cb=10, md5=None, size=None):
         """
         Upload another part of this MultiPart Upload.
-        
+
         :type fp: file
         :param fp: The file object you want to upload.
-        
+
         :type part_num: int
         :param part_num: The number of this part.
 
@@ -235,12 +240,14 @@
             raise ValueError('Part numbers must be greater than zero')
         query_args = 'uploadId=%s&partNumber=%d' % (self.id, part_num)
         key = self.bucket.new_key(self.key_name)
-        key.set_contents_from_file(fp, headers, replace, cb, num_cb, policy,
-                                   md5, reduced_redundancy=False,
+        key.set_contents_from_file(fp, headers=headers, replace=replace,
+                                   cb=cb, num_cb=num_cb, md5=md5,
+                                   reduced_redundancy=False,
                                    query_args=query_args, size=size)
 
     def copy_part_from_key(self, src_bucket_name, src_key_name, part_num,
-                           start=None, end=None):
+                           start=None, end=None, src_version_id=None,
+                           headers=None):
         """
         Copy another part of this MultiPart Upload.
 
@@ -258,6 +265,12 @@
 
         :type end: int
         :param end: Zero-based byte offset to copy to
+
+        :type src_version_id: string
+        :param src_version_id: version_id of source object to copy from
+
+        :type headers: dict
+        :param headers: Any headers to pass along in the request
         """
         if part_num < 1:
             raise ValueError('Part numbers must be greater than zero')
@@ -265,11 +278,15 @@
         if start is not None and end is not None:
             rng = 'bytes=%s-%s' % (start, end)
             provider = self.bucket.connection.provider
-            headers = {provider.copy_source_range_header: rng}
-        else:
-            headers = None
+            if headers is None:
+                headers = {}
+            else:
+                headers = headers.copy()
+            headers[provider.copy_source_range_header] = rng
         return self.bucket.copy_key(self.key_name, src_bucket_name,
-                                    src_key_name, storage_class=None,
+                                    src_key_name,
+                                    src_version_id=src_version_id,
+                                    storage_class=None,
                                     headers=headers,
                                     query_args=query_args)
 
@@ -296,5 +313,3 @@
         completely free all storage consumed by all parts.
         """
         self.bucket.cancel_multipart_upload(self.key_name, self.id)
-
-
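
For the multipart changes, a sketch of upload-part-copy using the new
src_version_id parameter; the bucket and key names are made up:

import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('example-bucket')

mpu = bucket.initiate_multipart_upload('assembled/object.bin')
# Copy the first 5 MiB of an existing key as part 1 of this upload.
mpu.copy_part_from_key('source-bucket', 'source/object.bin', part_num=1,
                       start=0, end=5 * 1024 * 1024 - 1,
                       src_version_id=None)
mpu.complete_upload()
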
diff --git a/boto/s3/prefix.py b/boto/s3/prefix.py
index 0b0196c..adf28e9 100644
--- a/boto/s3/prefix.py
+++ b/boto/s3/prefix.py
@@ -33,3 +33,10 @@
         else:
             setattr(self, name, value)
 
+    @property
+    def provider(self):
+        provider = None
+        if self.bucket and self.bucket.connection:
+            provider = self.bucket.connection.provider
+        return provider
+
diff --git a/boto/s3/resumable_download_handler.py b/boto/s3/resumable_download_handler.py
index ffa2095..cf18279 100644
--- a/boto/s3/resumable_download_handler.py
+++ b/boto/s3/resumable_download_handler.py
@@ -30,6 +30,8 @@
 from boto.connection import AWSAuthConnection
 from boto.exception import ResumableDownloadException
 from boto.exception import ResumableTransferDisposition
+from boto.s3.keyfile import KeyFile
+from boto.gs.key import Key as GSKey
 
 """
 Resumable download handler.
@@ -72,6 +74,9 @@
     """
     Returns size of file, optionally leaving fp positioned at EOF.
     """
+    if isinstance(fp, KeyFile) and not position_to_eof:
+        # Avoid EOF seek for KeyFile case as it's very inefficient.
+        return fp.getkey().size
     if not position_to_eof:
         cur_pos = fp.tell()
     fp.seek(0, os.SEEK_END)
@@ -86,7 +91,7 @@
     Handler for resumable downloads.
     """
 
-    ETAG_REGEX = '([a-z0-9]{32})\n'
+    MIN_ETAG_LEN = 5
 
     RETRYABLE_EXCEPTIONS = (httplib.HTTPException, IOError, socket.error,
                             socket.gaierror)
@@ -123,11 +128,11 @@
         f = None
         try:
             f = open(self.tracker_file_name, 'r')
-            etag_line = f.readline()
-            m = re.search(self.ETAG_REGEX, etag_line)
-            if m:
-                self.etag_value_for_current_download = m.group(1)
-            else:
+            self.etag_value_for_current_download = f.readline().rstrip('\n')
+            # We used to match an MD5-based regex to ensure that the etag was
+            # read correctly. Since ETags need not be MD5s, we now do a simple
+            # length sanity check instead.
+            if len(self.etag_value_for_current_download) < self.MIN_ETAG_LEN:
                 print('Couldn\'t read etag in tracker file (%s). Restarting '
                       'download from scratch.' % self.tracker_file_name)
         except IOError, e:
@@ -169,7 +174,7 @@
                 os.unlink(self.tracker_file_name)
 
     def _attempt_resumable_download(self, key, fp, headers, cb, num_cb,
-                                    torrent, version_id):
+                                    torrent, version_id, hash_algs):
         """
         Attempts a resumable download.
 
@@ -208,12 +213,16 @@
 
         # Disable AWSAuthConnection-level retry behavior, since that would
         # cause downloads to restart from scratch.
-        key.get_file(fp, headers, cb, num_cb, torrent, version_id,
-                     override_num_retries=0)
+        if isinstance(key, GSKey):
+            key.get_file(fp, headers, cb, num_cb, torrent, version_id,
+                         override_num_retries=0, hash_algs=hash_algs)
+        else:
+            key.get_file(fp, headers, cb, num_cb, torrent, version_id,
+                         override_num_retries=0)
         fp.flush()
 
     def get_file(self, key, fp, headers, cb=None, num_cb=10, torrent=False,
-                 version_id=None):
+                 version_id=None, hash_algs=None):
         """
         Retrieves a file from a Key
         :type key: :class:`boto.s3.key.Key` or subclass
@@ -245,6 +254,11 @@
         :type version_id: string
         :param version_id: The version ID (optional)
 
+        :type hash_algs: dictionary
+        :param hash_algs: (optional) Dictionary of hash algorithms and
+            corresponding hashing class that implements update() and digest().
+            Defaults to {'md5': md5} (hashlib.md5 or, on older Pythons,
+            md5.md5).
+
         Raises ResumableDownloadException if a problem occurs during
             the transfer.
         """
@@ -254,16 +268,16 @@
             headers = {}
 
         # Use num-retries from constructor if one was provided; else check
-        # for a value specified in the boto config file; else default to 5.
+        # for a value specified in the boto config file; else default to 6.
         if self.num_retries is None:
-            self.num_retries = config.getint('Boto', 'num_retries', 5)
+            self.num_retries = config.getint('Boto', 'num_retries', 6)
         progress_less_iterations = 0
 
         while True:  # Retry as long as we're making progress.
             had_file_bytes_before_attempt = get_cur_file_size(fp)
             try:
                 self._attempt_resumable_download(key, fp, headers, cb, num_cb,
-                                                 torrent, version_id)
+                                                 torrent, version_id, hash_algs)
                 # Download succceded, so remove the tracker file (if have one).
                 self._remove_tracker_file()
                 # Previously, check_final_md5() was called here to validate 
@@ -281,8 +295,12 @@
                     # close the socket (http://bugs.python.org/issue5542),
                     # so we need to close and reopen the key before resuming
                     # the download.
-                    key.get_file(fp, headers, cb, num_cb, torrent, version_id,
-                                 override_num_retries=0)
+                    if isinstance(key, GSKey):
+                        key.get_file(fp, headers, cb, num_cb, torrent,
+                                     version_id, override_num_retries=0,
+                                     hash_algs=hash_algs)
+                    else:
+                        key.get_file(fp, headers, cb, num_cb, torrent,
+                                     version_id, override_num_retries=0)
             except ResumableDownloadException, e:
                 if (e.disposition ==
                     ResumableTransferDisposition.ABORT_CUR_PROCESS):
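
A resumable-download sketch with illustrative file and key names; hash_algs
defaults to MD5 and, per the code above, is only forwarded for Google Storage
keys:

import boto
from boto.s3.resumable_download_handler import ResumableDownloadHandler

conn = boto.connect_s3()
key = conn.get_bucket('example-bucket').get_key('backups/large.tar')

handler = ResumableDownloadHandler(tracker_file_name='/tmp/large.tar.tracker')
key.get_contents_to_filename('/tmp/large.tar',
                             res_download_handler=handler)
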
diff --git a/boto/s3/tagging.py b/boto/s3/tagging.py
index 3f0ce8b..0af6406 100644
--- a/boto/s3/tagging.py
+++ b/boto/s3/tagging.py
@@ -20,6 +20,9 @@
         return '<Tag><Key>%s</Key><Value>%s</Value></Tag>' % (
             self.key, self.value)
 
+    def __eq__(self, other):
+        return (self.key == other.key and self.value == other.value)
+
 
 class TagSet(list):
     def startElement(self, name, attrs, connection):
diff --git a/boto/s3/website.py b/boto/s3/website.py
new file mode 100644
index 0000000..c307f3e
--- /dev/null
+++ b/boto/s3/website.py
@@ -0,0 +1,293 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+def tag(key, value):
+    start = '<%s>' % key
+    end = '</%s>' % key
+    return '%s%s%s' % (start, value, end)
+
+
+class WebsiteConfiguration(object):
+    """
+    Website configuration for a bucket.
+
+    :ivar suffix: Suffix that is appended to a request that is for a
+        "directory" on the website endpoint (e.g. if the suffix is
+        index.html and you make a request to samplebucket/images/
+        the data that is returned will be for the object with the
+        key name images/index.html).  The suffix must not be empty
+        and must not include a slash character.
+
+    :ivar error_key: The object key name to use when a 4xx class error
+        occurs.  This key identifies the page that is returned when
+        such an error occurs.
+
+    :ivar redirect_all_requests_to: Describes the redirect behavior for every
+        request to this bucket's website endpoint. If this value is non None,
+        request to this bucket's website endpoint. If this value is not None,
+        configuration for the bucket. This is an instance of
+        ``RedirectLocation``.
+
+    :ivar routing_rules: ``RoutingRules`` object which specifies conditions
+        and redirects that apply when the conditions are met.
+
+    """
+
+    def __init__(self, suffix=None, error_key=None,
+                 redirect_all_requests_to=None, routing_rules=None):
+        self.suffix = suffix
+        self.error_key = error_key
+        self.redirect_all_requests_to = redirect_all_requests_to
+        if routing_rules is not None:
+            self.routing_rules = routing_rules
+        else:
+            self.routing_rules = RoutingRules()
+
+    def startElement(self, name, attrs, connection):
+        if name == 'RoutingRules':
+            self.routing_rules = RoutingRules()
+            return self.routing_rules
+        elif name == 'IndexDocument':
+            return _XMLKeyValue([('Suffix', 'suffix')], container=self)
+        elif name == 'ErrorDocument':
+            return _XMLKeyValue([('Key', 'error_key')], container=self)
+
+    def endElement(self, name, value, connection):
+        pass
+
+    def to_xml(self):
+        parts = ['<?xml version="1.0" encoding="UTF-8"?>',
+          '<WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">']
+        if self.suffix is not None:
+            parts.append(tag('IndexDocument', tag('Suffix', self.suffix)))
+        if self.error_key is not None:
+            parts.append(tag('ErrorDocument', tag('Key', self.error_key)))
+        if self.redirect_all_requests_to is not None:
+            parts.append(self.redirect_all_requests_to.to_xml())
+        if self.routing_rules:
+            parts.append(self.routing_rules.to_xml())
+        parts.append('</WebsiteConfiguration>')
+        return ''.join(parts)
+
+
+class _XMLKeyValue(object):
+    def __init__(self, translator, container=None):
+        self.translator = translator
+        if container:
+            self.container = container
+        else:
+            self.container = self
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        for xml_key, attr_name in self.translator:
+            if name == xml_key:
+                setattr(self.container, attr_name, value)
+
+    def to_xml(self):
+        parts = []
+        for xml_key, attr_name in self.translator:
+            content = getattr(self.container, attr_name)
+            if content is not None:
+                parts.append(tag(xml_key, content))
+        return ''.join(parts)
+
+
+class RedirectLocation(_XMLKeyValue):
+    """Specify redirect behavior for every request to a bucket's endpoint.
+
+    :ivar hostname: Name of the host where requests will be redirected.
+
+    :ivar protocol: Protocol to use (http, https) when redirecting requests.
+        The default is the protocol that is used in the original request.
+
+    """
+    TRANSLATOR = [('HostName', 'hostname'),
+                  ('Protocol', 'protocol'),
+                 ]
+
+    def __init__(self, hostname=None, protocol=None):
+        self.hostname = hostname
+        self.protocol = protocol
+        super(RedirectLocation, self).__init__(self.TRANSLATOR)
+
+    def to_xml(self):
+        return tag('RedirectAllRequestsTo',
+            super(RedirectLocation, self).to_xml())
+
+
+class RoutingRules(list):
+
+    def add_rule(self, rule):
+        """
+
+        :type rule: :class:`boto.s3.website.RoutingRule`
+        :param rule: A routing rule.
+
+        :return: This ``RoutingRules`` object is returned,
+            so that it can chain subsequent calls.
+
+        """
+        self.append(rule)
+        return self
+
+    def startElement(self, name, attrs, connection):
+        if name == 'RoutingRule':
+            rule = RoutingRule(Condition(), Redirect())
+            self.add_rule(rule)
+            return rule
+
+    def endElement(self, name, value, connection):
+        pass
+
+    def __repr__(self):
+        return "RoutingRules(%s)" % super(RoutingRules, self).__repr__()
+
+    def to_xml(self):
+        inner_text = []
+        for rule in self:
+            inner_text.append(rule.to_xml())
+        return tag('RoutingRules', '\n'.join(inner_text))
+
+
+class RoutingRule(object):
+    """Represents a single routing rule.
+
+    There are convenience methods that make creating rules
+    more concise::
+
+        rule = RoutingRule.when(key_prefix='foo/').then_redirect('example.com')
+
+    :ivar condition: Describes condition that must be met for the
+        specified redirect to apply.
+
+    :ivar redirect: Specifies redirect behavior.  You can redirect requests to
+        another host, to another page, or with another protocol. In the event
+        of an error, you can specify a different error code to return.
+
+    """
+    def __init__(self, condition=None, redirect=None):
+        self.condition = condition
+        self.redirect = redirect
+
+    def startElement(self, name, attrs, connection):
+        if name == 'Condition':
+            return self.condition
+        elif name == 'Redirect':
+            return self.redirect
+
+    def endElement(self, name, value, connection):
+        pass
+
+    def to_xml(self):
+        parts = []
+        if self.condition:
+            parts.append(self.condition.to_xml())
+        if self.redirect:
+            parts.append(self.redirect.to_xml())
+        return tag('RoutingRule', '\n'.join(parts))
+
+    @classmethod
+    def when(cls, key_prefix=None, http_error_code=None):
+        return cls(Condition(key_prefix=key_prefix,
+                             http_error_code=http_error_code), None)
+
+    def then_redirect(self, hostname=None, protocol=None, replace_key=None,
+                      replace_key_prefix=None, http_redirect_code=None):
+        self.redirect = Redirect(
+                hostname=hostname, protocol=protocol,
+                replace_key=replace_key,
+                replace_key_prefix=replace_key_prefix,
+                http_redirect_code=http_redirect_code)
+        return self
+
+
+class Condition(_XMLKeyValue):
+    """
+    :ivar key_prefix: The object key name prefix when the redirect is applied.
+        For example, to redirect requests for ExamplePage.html, the key prefix
+        will be ExamplePage.html. To redirect requests for all pages with the
+        prefix docs/, the key prefix will be docs/, which identifies all
+        objects in the docs/ folder.
+
+    :ivar http_error_code: The HTTP error code when the redirect is applied. In
+        the event of an error, if the error code equals this value, then the
+        specified redirect is applied.
+
+    """
+    TRANSLATOR = [
+        ('KeyPrefixEquals', 'key_prefix'),
+        ('HttpErrorCodeReturnedEquals', 'http_error_code'),
+        ]
+
+    def __init__(self, key_prefix=None, http_error_code=None):
+        self.key_prefix = key_prefix
+        self.http_error_code = http_error_code
+        super(Condition, self).__init__(self.TRANSLATOR)
+
+    def to_xml(self):
+        return tag('Condition', super(Condition, self).to_xml())
+
+
+class Redirect(_XMLKeyValue):
+    """
+    :ivar hostname: The host name to use in the redirect request.
+
+    :ivar protocol: The protocol to use in the redirect request.  Can be either
+        'http' or 'https'.
+
+    :ivar replace_key: The specific object key to use in the redirect request.
+        For example, redirect request to error.html.
+
+    :ivar replace_key_prefix: The object key prefix to use in the redirect
+        request. For example, to redirect requests for all pages with prefix
+        docs/ (objects in the docs/ folder) to documents/, you can set a
+        condition block with KeyPrefixEquals set to docs/ and in the Redirect
+        set ReplaceKeyPrefixWith to /documents.
+
+    :ivar http_redirect_code: The HTTP redirect code to use on the response.
+
+    """
+
+    TRANSLATOR = [
+        ('Protocol', 'protocol'),
+        ('HostName', 'hostname'),
+        ('ReplaceKeyWith', 'replace_key'),
+        ('ReplaceKeyPrefixWith', 'replace_key_prefix'),
+        ('HttpRedirectCode', 'http_redirect_code'),
+        ]
+
+    def __init__(self, hostname=None, protocol=None, replace_key=None,
+                 replace_key_prefix=None, http_redirect_code=None):
+        self.hostname = hostname
+        self.protocol = protocol
+        self.replace_key = replace_key
+        self.replace_key_prefix = replace_key_prefix
+        self.http_redirect_code = http_redirect_code
+        super(Redirect, self).__init__(self.TRANSLATOR)
+
+    def to_xml(self):
+        return tag('Redirect', super(Redirect, self).to_xml())
+
+
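
The new website module can be exercised roughly as follows (the hostname and
prefixes are hypothetical); applying the resulting XML to a bucket goes
through the bucket's website-configuration call:

from boto.s3.website import WebsiteConfiguration, RoutingRules, RoutingRule

rules = RoutingRules().add_rule(
    RoutingRule.when(key_prefix='docs/').then_redirect(
        hostname='www.example.com', http_redirect_code='301'))

config = WebsiteConfiguration(suffix='index.html', error_key='error.html',
                              routing_rules=rules)
print config.to_xml()
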
diff --git a/boto/sdb/__init__.py b/boto/sdb/__init__.py
index 7f29b69..bebc152 100644
--- a/boto/sdb/__init__.py
+++ b/boto/sdb/__init__.py
@@ -43,7 +43,9 @@
             SDBRegionInfo(name='ap-northeast-1',
                           endpoint='sdb.ap-northeast-1.amazonaws.com'),
             SDBRegionInfo(name='ap-southeast-1',
-                          endpoint='sdb.ap-southeast-1.amazonaws.com')
+                          endpoint='sdb.ap-southeast-1.amazonaws.com'),
+            SDBRegionInfo(name='ap-southeast-2',
+                          endpoint='sdb.ap-southeast-2.amazonaws.com')
             ]
 
 
diff --git a/boto/sdb/db/test_db.py b/boto/sdb/db/test_db.py
index b872f7f..b582bce 100644
--- a/boto/sdb/db/test_db.py
+++ b/boto/sdb/db/test_db.py
@@ -1,11 +1,17 @@
+import logging
+import time
+from datetime import datetime
+
 from boto.sdb.db.model import Model
 from boto.sdb.db.property import StringProperty, IntegerProperty, BooleanProperty
 from boto.sdb.db.property import DateTimeProperty, FloatProperty, ReferenceProperty
 from boto.sdb.db.property import PasswordProperty, ListProperty, MapProperty
-from datetime import datetime
-import time
 from boto.exception import SDBPersistenceError
 
+logging.basicConfig()
+log = logging.getLogger('test_db')
+log.setLevel(logging.DEBUG)
+
 _objects = {}
 
 #
@@ -70,11 +76,11 @@
     t.size = -42
     t.foo = True
     t.date = datetime.now()
-    print 'saving object'
+    log.debug('saving object')
     t.put()
     _objects['test_basic_t'] = t
     time.sleep(5)
-    print 'now try retrieving it'
+    log.debug('now try retrieving it')
     tt = TestBasic.get_by_id(t.id)
     _objects['test_basic_tt'] = tt
     assert tt.id == t.id
@@ -92,11 +98,11 @@
     t = TestFloat()
     t.name = 'float object'
     t.value = 98.6
-    print 'saving object'
+    log.debug('saving object')
     t.save()
     _objects['test_float_t'] = t
     time.sleep(5)
-    print 'now try retrieving it'
+    log.debug('now try retrieving it')
     tt = TestFloat.get_by_id(t.id)
     _objects['test_float_tt'] = tt
     assert tt.id == t.id
@@ -123,7 +129,7 @@
     _objects['test_reference_tt'] = tt
     assert tt.ref.id == t.id
     for o in t.refs:
-        print o
+        log.debug(o)
 
 def test_subclass():
     global _objects
@@ -202,23 +208,23 @@
     assert tt.create_date.timetuple() == t.create_date.timetuple()
 
 def test():
-    print 'test_basic'
+    log.info('test_basic')
     t1 = test_basic()
-    print 'test_required'
+    log.info('test_required')
     test_required()
-    print 'test_reference'
+    log.info('test_reference')
     test_reference(t1)
-    print 'test_subclass'
+    log.info('test_subclass')
     test_subclass()
-    print 'test_password'
+    log.info('test_password')
     test_password()
-    print 'test_list'
+    log.info('test_list')
     test_list()
-    print 'test_list_reference'
+    log.info('test_list_reference')
     test_list_reference()
-    print "test_datetime"
+    log.info("test_datetime")
     test_datetime()
-    print 'test_unique'
+    log.info('test_unique')
     test_unique()
 
 if __name__ == "__main__":
diff --git a/boto/ses/connection.py b/boto/ses/connection.py
index 902e288..b5bd157 100644
--- a/boto/ses/connection.py
+++ b/boto/ses/connection.py
@@ -29,7 +29,6 @@
 import boto
 import boto.jsonresponse
 from boto.ses import exceptions as ses_exceptions
-from boto.exception import BotoServerError
 
 
 class SESConnection(AWSAuthConnection):
@@ -103,8 +102,12 @@
         )
         body = response.read()
         if response.status == 200:
-            list_markers = ('VerifiedEmailAddresses', 'SendDataPoints')
-            e = boto.jsonresponse.Element(list_marker=list_markers)
+            list_markers = ('VerifiedEmailAddresses', 'Identities',
+                            'VerificationAttributes', 'SendDataPoints')
+            item_markers = ('member', 'item', 'entry')
+
+            e = boto.jsonresponse.Element(list_marker=list_markers,
+                                          item_marker=item_markers)
             h = boto.jsonresponse.XmlHandler(e, None)
             h.parse(body)
             return e
@@ -444,3 +447,75 @@
         params = {}
         self._build_list_params(params, identities, 'Identities.member')
         return self._make_request('GetIdentityDkimAttributes', params)
+
+    def list_identities(self):
+        """Returns a list containing all of the identities (email addresses
+        and domains) for a specific AWS Account, regardless of
+        verification status.
+
+        :rtype: dict
+        :returns: A ListIdentitiesResponse structure. Note that
+                  keys must be unicode strings.
+        """
+        return self._make_request('ListIdentities')
+
+    def get_identity_verification_attributes(self, identities):
+        """Given a list of identities (email addresses and/or domains),
+        returns the verification status and (for domain identities)
+        the verification token for each identity.
+
+        :type identities: list of strings or string
+        :param identities: List of identities.
+
+        :rtype: dict
+        :returns: A GetIdentityVerificationAttributesResponse structure.
+                  Note that keys must be unicode strings.
+        """
+        params = {}
+        self._build_list_params(params, identities,
+                               'Identities.member')
+        return self._make_request('GetIdentityVerificationAttributes', params)
+
+    def verify_domain_identity(self, domain):
+        """Verifies a domain.
+
+        :type domain: string
+        :param domain: The domain to be verified.
+
+        :rtype: dict
+        :returns: A VerifyDomainIdentityResponse structure. Note that
+                  keys must be unicode strings.
+        """
+        return self._make_request('VerifyDomainIdentity', {
+            'Domain': domain,
+        })
+
+    def verify_email_identity(self, email_address):
+        """Verifies an email address. This action causes a confirmation
+        email message to be sent to the specified address.
+
+        :type email_address: string
+        :param email_address: The email address to be verified.
+
+        :rtype: dict
+        :returns: A VerifyEmailIdentityResponse structure. Note that keys must
+                  be unicode strings.
+        """
+        return self._make_request('VerifyEmailIdentity', {
+            'EmailAddress': email_address,
+        })
+
+    def delete_identity(self, identity):
+        """Deletes the specified identity (email address or domain) from
+        the list of verified identities.
+
+        :type identity: string
+        :param identity: The identity to be deleted.
+
+        :rtype: dict
+        :returns: A DeleteIdentityResponse structure. Note that keys must
+                  be unicode strings.
+        """
+        return self._make_request('DeleteIdentity', {
+            'Identity': identity,
+        })
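
A short sketch of the new SES identity calls; the domain and address are
placeholders:

import boto

ses = boto.connect_ses()
ses.verify_domain_identity('example.com')
ses.verify_email_identity('admin@example.com')

print ses.list_identities()
print ses.get_identity_verification_attributes(
    ['example.com', 'admin@example.com'])
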
diff --git a/boto/sns/__init__.py b/boto/sns/__init__.py
index 64d0295..565317a 100644
--- a/boto/sns/__init__.py
+++ b/boto/sns/__init__.py
@@ -54,6 +54,9 @@
             RegionInfo(name='ap-southeast-1',
                        endpoint='sns.ap-southeast-1.amazonaws.com',
                        connection_cls=SNSConnection),
+            RegionInfo(name='ap-southeast-2',
+                       endpoint='sns.ap-southeast-2.amazonaws.com',
+                       connection_cls=SNSConnection),
             ]
 
 
diff --git a/boto/sns/connection.py b/boto/sns/connection.py
index bf528a3..1f29c19 100644
--- a/boto/sns/connection.py
+++ b/boto/sns/connection.py
@@ -20,14 +20,13 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+import uuid
+import hashlib
+
 from boto.connection import AWSQueryConnection
 from boto.regioninfo import RegionInfo
+from boto.compat import json
 import boto
-import uuid
-try:
-    import simplejson as json
-except ImportError:
-    import json
 
 
 class SNSConnection(AWSQueryConnection):
@@ -257,7 +256,7 @@
         Subscribe to a Topic.
 
         :type topic: string
-        :param topic: The name of the new topic.
+        :param topic: The ARN of the new topic.
 
         :type protocol: string
         :param protocol: The protocol used to communicate with
@@ -304,27 +303,37 @@
           that policy.  If no policy exists, a new policy will be created.
 
         :type topic: string
-        :param topic: The name of the new topic.
+        :param topic: The ARN of the new topic.
 
         :type queue: A boto Queue object
         :param queue: The queue you wish to subscribe to the SNS Topic.
         """
         t = queue.id.split('/')
-        q_arn = 'arn:aws:sqs:%s:%s:%s' % (queue.connection.region.name,
-                                          t[1], t[2])
+        q_arn = queue.arn
+        sid = hashlib.md5(topic + q_arn).hexdigest()
+        sid_exists = False
         resp = self.subscribe(topic, 'sqs', q_arn)
-        policy = queue.get_attributes('Policy')
+        attr = queue.get_attributes('Policy')
+        if 'Policy' in attr:
+            policy = json.loads(attr['Policy'])
+        else:
+            policy = {}
         if 'Version' not in policy:
             policy['Version'] = '2008-10-17'
         if 'Statement' not in policy:
             policy['Statement'] = []
-        statement = {'Action': 'SQS:SendMessage',
-                     'Effect': 'Allow',
-                     'Principal': {'AWS': '*'},
-                     'Resource': q_arn,
-                     'Sid': str(uuid.uuid4()),
-                     'Condition': {'StringLike': {'aws:SourceArn': topic}}}
-        policy['Statement'].append(statement)
+        # See if a Statement with the Sid exists already.
+        for s in policy['Statement']:
+            if s['Sid'] == sid:
+                sid_exists = True
+        if not sid_exists:
+            statement = {'Action': 'SQS:SendMessage',
+                         'Effect': 'Allow',
+                         'Principal': {'AWS': '*'},
+                         'Resource': q_arn,
+                         'Sid': sid,
+                         'Condition': {'StringLike': {'aws:SourceArn': topic}}}
+            policy['Statement'].append(statement)
         queue.set_attribute('Policy', json.dumps(policy))
         return resp
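
Since the policy statement Sid is now derived from the topic and queue ARNs,
subscribe_sqs_queue() is safe to call repeatedly; a sketch with hypothetical
names:

import boto
import boto.sqs

sns = boto.connect_sns()
sqs = boto.sqs.connect_to_region('us-east-1')

resp = sns.create_topic('alerts')
topic_arn = resp['CreateTopicResponse']['CreateTopicResult']['TopicArn']
queue = sqs.create_queue('alerts-queue')

# Calling this twice no longer appends a duplicate policy statement.
sns.subscribe_sqs_queue(topic_arn, queue)
sns.subscribe_sqs_queue(topic_arn, queue)
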
 
diff --git a/boto/sqs/__init__.py b/boto/sqs/__init__.py
index b05ea6d..b59a457 100644
--- a/boto/sqs/__init__.py
+++ b/boto/sqs/__init__.py
@@ -43,7 +43,9 @@
             SQSRegionInfo(name='ap-northeast-1',
                           endpoint='ap-northeast-1.queue.amazonaws.com'),
             SQSRegionInfo(name='ap-southeast-1',
-                          endpoint='ap-southeast-1.queue.amazonaws.com')
+                          endpoint='ap-southeast-1.queue.amazonaws.com'),
+            SQSRegionInfo(name='ap-southeast-2',
+                          endpoint='ap-southeast-2.queue.amazonaws.com')
             ]
 
 
diff --git a/boto/sqs/connection.py b/boto/sqs/connection.py
index 90ecf0f..e076de1 100644
--- a/boto/sqs/connection.py
+++ b/boto/sqs/connection.py
@@ -34,9 +34,10 @@
     """
     DefaultRegionName = 'us-east-1'
     DefaultRegionEndpoint = 'queue.amazonaws.com'
-    APIVersion = '2011-10-01'
+    APIVersion = '2012-11-05'
     DefaultContentType = 'text/plain'
     ResponseError = SQSError
+    AuthServiceName = 'sqs'
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
@@ -56,9 +57,10 @@
                                     https_connection_factory, path,
                                     security_token=security_token,
                                     validate_certs=validate_certs)
+        self.auth_region_name = self.region.name
 
     def _required_auth_capability(self):
-        return ['sqs']
+        return ['hmac-v4']
 
     def create_queue(self, queue_name, visibility_timeout=None):
         """
@@ -98,12 +100,8 @@
         :param queue: The SQS queue to be deleted
 
         :type force_deletion: Boolean
-        :param force_deletion: Normally, SQS will not delete a queue
-            that contains messages.  However, if the force_deletion
-            argument is True, the queue will be deleted regardless of
-            whether there are messages in the queue or not.  USE WITH
-            CAUTION.  This will delete all messages in the queue as
-            well.
+        :param force_deletion: A deprecated parameter that is no longer used by
+            SQS's API.
 
         :rtype: bool
         :return: True if the command succeeded, False otherwise
@@ -122,12 +120,13 @@
             supplied, the default is to return all attributes.  Valid
             attributes are:
 
-            * ApproximateNumberOfMessages|
-            * ApproximateNumberOfMessagesNotVisible|
-            * VisibilityTimeout|
-            * CreatedTimestamp|
-            * LastModifiedTimestamp|
+            * ApproximateNumberOfMessages
+            * ApproximateNumberOfMessagesNotVisible
+            * VisibilityTimeout
+            * CreatedTimestamp
+            * LastModifiedTimestamp
             * Policy
+            * ReceiveMessageWaitTimeSeconds
 
         :rtype: :class:`boto.sqs.attributes.Attributes`
         :return: An Attributes object containing request value(s).
@@ -141,7 +140,8 @@
         return self.get_status('SetQueueAttributes', params, queue.id)
 
     def receive_message(self, queue, number_messages=1,
-                        visibility_timeout=None, attributes=None):
+                        visibility_timeout=None, attributes=None,
+                        wait_time_seconds=None):
         """
         Read messages from an SQS Queue.
 
@@ -168,14 +168,23 @@
             * ApproximateReceiveCount
             * ApproximateFirstReceiveTimestamp
 
+        :type wait_time_seconds: int
+        :param wait_time_seconds: The duration (in seconds) for which the call
+            will wait for a message to arrive in the queue before returning.
+            If a message is available, the call will return sooner than
+            wait_time_seconds.
+
         :rtype: list
         :return: A list of :class:`boto.sqs.message.Message` objects.
+
         """
         params = {'MaxNumberOfMessages' : number_messages}
-        if visibility_timeout:
+        if visibility_timeout is not None:
             params['VisibilityTimeout'] = visibility_timeout
-        if attributes:
+        if attributes is not None:
             self.build_list_params(params, attributes, 'AttributeName')
+        if wait_time_seconds is not None:
+            params['WaitTimeSeconds'] = wait_time_seconds
         return self.get_list('ReceiveMessage', params,
                              [('Message', queue.message_class)],
                              queue.id, queue)
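
With the API version bump to 2012-11-05, long polling is available through
the new wait_time_seconds parameter; a sketch with a hypothetical queue:

import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')
queue = conn.create_queue('work-queue')

# Wait up to 20 seconds for a message instead of returning immediately.
messages = conn.receive_message(queue, number_messages=1,
                                wait_time_seconds=20)
for m in messages:
    conn.delete_message(queue, m)
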
diff --git a/boto/sqs/jsonmessage.py b/boto/sqs/jsonmessage.py
index fb0a4c3..0eb3a13 100644
--- a/boto/sqs/jsonmessage.py
+++ b/boto/sqs/jsonmessage.py
@@ -14,18 +14,16 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
+import base64
 
 from boto.sqs.message import MHMessage
 from boto.exception import SQSDecodeError
-import base64
-try:
-    import simplejson as json
-except ImportError:
-    import json
+from boto.compat import json
+
 
 class JSONMessage(MHMessage):
     """
diff --git a/boto/sqs/queue.py b/boto/sqs/queue.py
index ca5593c..603faaa 100644
--- a/boto/sqs/queue.py
+++ b/boto/sqs/queue.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -54,6 +54,12 @@
         return  val
     name = property(_name)
 
+    def _arn(self):
+        parts = self.id.split('/')
+        return 'arn:aws:sqs:%s:%s:%s' % (
+            self.connection.region.name, parts[1], parts[2])
+    arn = property(_arn)
+
     def startElement(self, name, attrs, connection):
         return None
 
@@ -90,6 +96,7 @@
                            CreatedTimestamp,
                            LastModifiedTimestamp,
                            Policy
+                           ReceiveMessageWaitTimeSeconds
         :rtype: Attribute object
         :return: An Attribute object which is a mapping type holding the
                  requested name/value pairs
@@ -99,7 +106,7 @@
     def set_attribute(self, attribute, value):
         """
         Set a new value for an attribute of the Queue.
-        
+
         :type attribute: String
         :param attribute: The name of the attribute you want to set.  The
                            only valid value at this time is: VisibilityTimeout
@@ -116,7 +123,7 @@
     def get_timeout(self):
         """
         Get the visibility timeout for the queue.
-        
+
         :rtype: int
         :return: The number of seconds as an integer.
         """
@@ -152,8 +159,8 @@
 
         :type action_name: str or unicode
         :param action_name: The action.  Valid choices are:
-            *|SendMessage|ReceiveMessage|DeleteMessage|
-            ChangeMessageVisibility|GetQueueAttributes
+            SendMessage|ReceiveMessage|DeleteMessage|
+            ChangeMessageVisibility|GetQueueAttributes|*
 
         :rtype: bool
         :return: True if successful, False otherwise.
@@ -175,17 +182,24 @@
         """
         return self.connection.remove_permission(self, label)
 
-    def read(self, visibility_timeout=None):
+    def read(self, visibility_timeout=None, wait_time_seconds=None):
         """
         Read a single message from the queue.
-        
+
         :type visibility_timeout: int
         :param visibility_timeout: The timeout for this message in seconds
 
+        :type wait_time_seconds: int
+        :param wait_time_seconds: The duration (in seconds) for which the call
+            will wait for a message to arrive in the queue before returning.
+            If a message is available, the call will return sooner than
+            wait_time_seconds.
+
         :rtype: :class:`boto.sqs.message.Message`
         :return: A single message or None if queue is empty
         """
-        rs = self.get_messages(1, visibility_timeout)
+        rs = self.get_messages(1, visibility_timeout,
+                               wait_time_seconds=wait_time_seconds)
         if len(rs) == 1:
             return rs[0]
         else:
@@ -240,14 +254,14 @@
 
     # get a variable number of messages, returns a list of messages
     def get_messages(self, num_messages=1, visibility_timeout=None,
-                     attributes=None):
+                     attributes=None, wait_time_seconds=None):
         """
         Get a variable number of messages.
 
         :type num_messages: int
         :param num_messages: The maximum number of messages to read from
             the queue.
-        
+
         :type visibility_timeout: int
         :param visibility_timeout: The VisibilityTimeout for the messages read.
 
@@ -257,13 +271,20 @@
             default is to return no additional attributes.  Valid
             values: All SenderId SentTimestamp ApproximateReceiveCount
             ApproximateFirstReceiveTimestamp
-                           
+
+        :type wait_time_seconds: int
+        :param wait_time_seconds: The duration (in seconds) for which the call
+            will wait for a message to arrive in the queue before returning.
+            If a message is available, the call will return sooner than
+            wait_time_seconds.
+
         :rtype: list
         :return: A list of :class:`boto.sqs.message.Message` objects.
         """
-        return self.connection.receive_message(self, number_messages=num_messages,
-                                               visibility_timeout=visibility_timeout,
-                                               attributes=attributes)
+        return self.connection.receive_message(
+            self, number_messages=num_messages,
+            visibility_timeout=visibility_timeout, attributes=attributes,
+            wait_time_seconds=wait_time_seconds)
 
     def delete_message(self, message):
         """
@@ -396,9 +417,9 @@
         """
         Read all messages from the queue and persist them to S3.
         Messages are stored in the S3 bucket using a naming scheme of::
-        
+
             <queue_id>/<message_id>
-        
+
         Messages are deleted from the queue after being saved to S3.
         Returns the number of messages saved.
         """
diff --git a/boto/storage_uri.py b/boto/storage_uri.py
index ca0d7cb..9a6b2bf 100755
--- a/boto/storage_uri.py
+++ b/boto/storage_uri.py
@@ -23,6 +23,8 @@
 import boto
 import os
 import sys
+import textwrap
+from boto.s3.deletemarker import DeleteMarker
 from boto.exception import BotoClientError
 from boto.exception import InvalidUriError
 
@@ -65,12 +67,11 @@
 
     def check_response(self, resp, level, uri):
         if resp is None:
-            raise InvalidUriError('Attempt to get %s for "%s" failed.\nThis '
-                                  'can happen if the URI refers to a non-'
-                                  'existent object or if you meant to\noperate '
-                                  'on a directory (e.g., leaving off -R option '
-                                  'on gsutil cp, mv, or ls of a\nbucket)' %
-                                  (level, uri))
+            raise InvalidUriError('\n'.join(textwrap.wrap(
+                'Attempt to get %s for "%s" failed. This can happen if '
+                'the URI refers to a non-existent object or if you meant to '
+                'operate on a directory (e.g., leaving off -R option on gsutil '
+                'cp, mv, or ls of a bucket)' % (level, uri), 80)))
 
     def _check_bucket_uri(self, function_name):
         if issubclass(type(self), BucketStorageUri) and not self.bucket_name:
@@ -100,15 +101,7 @@
         @return: A connection to storage service provider of the given URI.
         """
         connection_args = dict(self.connection_args or ())
-        # Use OrdinaryCallingFormat instead of boto-default
-        # SubdomainCallingFormat because the latter changes the hostname
-        # that's checked during cert validation for HTTPS connections,
-        # which will fail cert validation (when cert validation is enabled).
-        # Note: the following import can't be moved up to the start of
-        # this file else it causes a config import failure when run from
-        # the resumable upload/download tests.
-        from boto.s3.connection import OrdinaryCallingFormat
-        connection_args['calling_format'] = OrdinaryCallingFormat()
+
         if (hasattr(self, 'suppress_consec_slashes') and
             'suppress_consec_slashes' not in connection_args):
             connection_args['suppress_consec_slashes'] = (
@@ -125,6 +118,23 @@
                 self.provider_pool[self.scheme] = self.connection
             elif self.scheme == 'gs':
                 from boto.gs.connection import GSConnection
+                # Use OrdinaryCallingFormat instead of boto-default
+                # SubdomainCallingFormat because the latter changes the hostname
+                # that's checked during cert validation for HTTPS connections,
+                # which will fail cert validation (when cert validation is
+                # enabled).
+                #
+                # The same is not true for S3's HTTPS certificates. In fact,
+                # we don't want to do this for S3 because S3 requires the
+                # subdomain to match the location of the bucket. If the proper
+                # subdomain is not used, the server will return a 301 redirect
+                # with no Location header.
+                #
+                # Note: the following import can't be moved up to the
+                # start of this file else it causes a config import failure when
+                # run from the resumable upload/download tests.
+                from boto.s3.connection import OrdinaryCallingFormat
+                connection_args['calling_format'] = OrdinaryCallingFormat()
                 self.connection = GSConnection(access_key_id,
                                                secret_access_key,
                                                **connection_args)
@@ -138,6 +148,11 @@
         self.connection.debug = self.debug
         return self.connection
 
+    def has_version(self):
+        return (issubclass(type(self), BucketStorageUri)
+                and ((self.version_id is not None)
+                     or (self.generation is not None)))
+
     def delete_key(self, validate=False, headers=None, version_id=None,
                    mfa_token=None):
         self._check_object_uri('delete_key')
@@ -145,11 +160,17 @@
         return bucket.delete_key(self.object_name, headers, version_id,
                                  mfa_token)
 
-    def list_bucket(self, prefix='', delimiter='', headers=None):
+    def list_bucket(self, prefix='', delimiter='', headers=None,
+                    all_versions=False):
         self._check_bucket_uri('list_bucket')
-        return self.get_bucket(headers=headers).list(prefix=prefix,
-                                                     delimiter=delimiter,
-                                                     headers=headers)
+        bucket = self.get_bucket(headers=headers)
+        if all_versions:
+            return (v for v in bucket.list_versions(
+                prefix=prefix, delimiter=delimiter, headers=headers)
+                    if not isinstance(v, DeleteMarker))
+        else:
+            return bucket.list(prefix=prefix, delimiter=delimiter,
+                               headers=headers)
 
     def get_all_keys(self, validate=False, headers=None, prefix=None):
         bucket = self.get_bucket(validate, headers)
@@ -183,12 +204,20 @@
 
     def get_contents_to_file(self, fp, headers=None, cb=None, num_cb=10,
                              torrent=False, version_id=None,
-                             res_download_handler=None, response_headers=None):
+                             res_download_handler=None, response_headers=None,
+                             hash_algs=None):
         self._check_object_uri('get_contents_to_file')
         key = self.get_key(None, headers)
         self.check_response(key, 'key', self.uri)
-        key.get_contents_to_file(fp, headers, cb, num_cb, torrent, version_id,
-                                 res_download_handler, response_headers)
+        if hash_algs:
+            key.get_contents_to_file(fp, headers, cb, num_cb, torrent,
+                                     version_id, res_download_handler,
+                                     response_headers,
+                                     hash_algs=hash_algs)
+        else:
+            key.get_contents_to_file(fp, headers, cb, num_cb, torrent,
+                                     version_id, res_download_handler,
+                                     response_headers)
 
     def get_contents_as_string(self, validate=False, headers=None, cb=None,
                                num_cb=10, torrent=False, version_id=None):
@@ -221,7 +250,8 @@
     capabilities = set([]) # A set of additional capabilities.
 
     def __init__(self, scheme, bucket_name=None, object_name=None,
-                 debug=0, connection_args=None, suppress_consec_slashes=True):
+                 debug=0, connection_args=None, suppress_consec_slashes=True,
+                 version_id=None, generation=None, is_latest=False):
         """Instantiate a BucketStorageUri from scheme,bucket,object tuple.
 
         @type scheme: string
@@ -229,7 +259,7 @@
         @type bucket_name: string
         @param bucket_name: bucket name
         @type object_name: string
-        @param object_name: object name
+        @param object_name: object name, excluding generation/version.
         @type debug: int
         @param debug: debug level to pass in to connection (range 0..2)
         @type connection_args: map
@@ -238,25 +268,92 @@
             https_connection_factory).
         @param suppress_consec_slashes: If provided, controls whether
             consecutive slashes will be suppressed in key paths.
+        @param version_id: Object version id (S3-specific).
+        @param generation: Object generation number (GCS-specific).
+        @param is_latest: boolean indicating that a versioned object is the
+            current version.

 
         After instantiation the components are available in the following
-        fields: uri, scheme, bucket_name, object_name.
+        fields: scheme, bucket_name, object_name, version_id, generation,
+        is_latest, versionless_uri, version_specific_uri, uri.
+        Note: If instantiated without version info, the string representation
+        for a URI stays versionless; similarly, if instantiated with version
+        info, the string representation for a URI stays version-specific. If you
+        call one of the uri.set_contents_from_xyz() methods, a specific object
+        version will be created, and its version-specific URI string can be
+        retrieved from version_specific_uri even if the URI was instantiated
+        without version info.
         """
 
         self.scheme = scheme
         self.bucket_name = bucket_name
         self.object_name = object_name
+        self.debug = debug
         if connection_args:
             self.connection_args = connection_args
         self.suppress_consec_slashes = suppress_consec_slashes
-        if self.bucket_name and self.object_name:
-            self.uri = ('%s://%s/%s' % (self.scheme, self.bucket_name,
-                                        self.object_name))
-        elif self.bucket_name:
-            self.uri = ('%s://%s/' % (self.scheme, self.bucket_name))
-        else:
-            self.uri = ('%s://' % self.scheme)
-        self.debug = debug
+        self.version_id = version_id
+        self.generation = generation and int(generation)
+        self.is_latest = is_latest
+        self.is_version_specific = bool(self.generation) or bool(version_id)
+        self._build_uri_strings()
+
+    def _build_uri_strings(self):
+      if self.bucket_name and self.object_name:
+          self.versionless_uri = '%s://%s/%s' % (self.scheme, self.bucket_name,
+                                                 self.object_name)
+          if self.generation:
+              self.version_specific_uri = '%s#%s' % (self.versionless_uri,
+                                                     self.generation)
+          elif self.version_id:
+              self.version_specific_uri = '%s#%s' % (
+                  self.versionless_uri, self.version_id)
+          if self.is_version_specific:
+              self.uri = self.version_specific_uri
+          else:
+              self.uri = self.versionless_uri
+      elif self.bucket_name:
+          self.uri = ('%s://%s/' % (self.scheme, self.bucket_name))
+      else:
+          self.uri = ('%s://' % self.scheme)
+
+    def _update_from_key(self, key):
+      self._update_from_values(
+          getattr(key, 'version_id', None),
+          getattr(key, 'generation', None),
+          getattr(key, 'is_latest', None),
+          getattr(key, 'md5', None))
+
+    def _update_from_values(self, version_id, generation, is_latest, md5):
+      self.version_id = version_id
+      self.generation = generation
+      self.is_latest = is_latest
+      self._build_uri_strings()
+      self.md5 = md5
+
+    def get_key(self, validate=False, headers=None, version_id=None):
+        self._check_object_uri('get_key')
+        bucket = self.get_bucket(validate, headers)
+        if self.get_provider().name == 'aws':
+            key = bucket.get_key(self.object_name, headers,
+                                 version_id=(version_id or self.version_id))
+        elif self.get_provider().name == 'google':
+            key = bucket.get_key(self.object_name, headers,
+                                 generation=self.generation)
+        self.check_response(key, 'key', self.uri)
+        return key
+
+    def delete_key(self, validate=False, headers=None, version_id=None,
+                   mfa_token=None):
+        self._check_object_uri('delete_key')
+        bucket = self.get_bucket(validate, headers)
+        if self.get_provider().name == 'aws':
+            version_id = version_id or self.version_id
+            return bucket.delete_key(self.object_name, headers, version_id,
+                                     mfa_token)
+        elif self.get_provider().name == 'google':
+            return bucket.delete_key(self.object_name, headers,
+                                     generation=self.generation)
 
     def clone_replace_name(self, new_name):
         """Instantiate a BucketStorageUri from the current BucketStorageUri,
@@ -271,6 +368,35 @@
             debug=self.debug,
             suppress_consec_slashes=self.suppress_consec_slashes)
 
+    def clone_replace_key(self, key):
+        """Instantiate a BucketStorageUri from the current BucketStorageUri, by
+        replacing the object name with the object name and other metadata found
+        in the given Key object (including generation).
+
+        @type key: Key
+        @param key: key for the new StorageUri to represent
+        """
+        self._check_bucket_uri('clone_replace_key')
+        version_id = None
+        generation = None
+        is_latest = False
+        if hasattr(key, 'version_id'):
+            version_id = key.version_id
+        if hasattr(key, 'generation'):
+            generation = key.generation
+        if hasattr(key, 'is_latest'):
+            is_latest = key.is_latest
+
+        return BucketStorageUri(
+                key.provider.get_provider_name(),
+                bucket_name=key.bucket.name,
+                object_name=key.name,
+                debug=self.debug,
+                suppress_consec_slashes=self.suppress_consec_slashes,
+                version_id=version_id,
+                generation=generation,
+                is_latest=is_latest)
+
     def get_acl(self, validate=False, headers=None, version_id=None):
         """returns a bucket's acl"""
         self._check_bucket_uri('get_acl')
@@ -278,7 +404,11 @@
         # This works for both bucket- and object- level ACLs (former passes
         # key_name=None):
         key_name = self.object_name or ''
-        acl = bucket.get_acl(key_name, headers, version_id)
+        if self.get_provider().name == 'aws':
+            version_id = version_id or self.version_id
+            acl = bucket.get_acl(key_name, headers, version_id)
+        else:
+            acl = bucket.get_acl(key_name, headers, generation=self.generation)
         self.check_response(acl, 'acl', self.uri)
         return acl
 
@@ -286,9 +416,7 @@
         """returns a bucket's default object acl"""
         self._check_bucket_uri('get_def_acl')
         bucket = self.get_bucket(validate, headers)
-        # This works for both bucket- and object- level ACLs (former passes
-        # key_name=None):
-        acl = bucket.get_def_acl('', headers)
+        acl = bucket.get_def_acl(headers)
         self.check_response(acl, 'acl', self.uri)
         return acl
 
@@ -311,6 +439,16 @@
         bucket = self.get_bucket(validate, headers)
         return bucket.get_location()
 
+    def get_storage_class(self, validate=False, headers=None):
+        self._check_bucket_uri('get_storage_class')
+        # StorageClass is defined as a bucket param for GCS, but as a key
+        # param for S3.
+        if self.scheme != 'gs':
+            raise ValueError('get_storage_class() not supported for %s '
+                             'URIs.' % self.scheme)
+        bucket = self.get_bucket(validate, headers)
+        return bucket.get_storage_class()
+
     def get_subresource(self, subresource, validate=False, headers=None,
                         version_id=None):
         self._check_bucket_uri('get_subresource')
@@ -322,8 +460,8 @@
                               validate=False, headers=None):
         self._check_bucket_uri('add_group_email_grant')
         if self.scheme != 'gs':
-              raise ValueError('add_group_email_grant() not supported for %s '
-                               'URIs.' % self.scheme)
+            raise ValueError('add_group_email_grant() not supported for %s '
+                             'URIs.' % self.scheme)
         if self.object_name:
             if recursive:
               raise ValueError('add_group_email_grant() on key-ful URI cannot '
@@ -397,7 +535,7 @@
 
     def names_bucket(self):
         """Returns True if this URI names a bucket."""
-        return self.names_container()
+        return bool(self.bucket_name) and bool(not self.object_name)
 
     def names_file(self):
         """Returns True if this URI names a file."""
@@ -411,10 +549,17 @@
         """Returns True if this URI represents input/output stream."""
         return False
 
-    def create_bucket(self, headers=None, location='', policy=None):
+    def create_bucket(self, headers=None, location='', policy=None,
+                      storage_class=None):
         self._check_bucket_uri('create_bucket ')
         conn = self.connect()
-        return conn.create_bucket(self.bucket_name, headers, location, policy)
+        # Pass storage_class param only if this is a GCS bucket. (In S3 the
+        # storage class is specified on the key object.)
+        if self.scheme == 'gs':
+          return conn.create_bucket(self.bucket_name, headers, location, policy,
+                                    storage_class)
+        else:
+          return conn.create_bucket(self.bucket_name, headers, location, policy)
 
     def delete_bucket(self, headers=None):
         self._check_bucket_uri('delete_bucket')
@@ -432,22 +577,48 @@
         return provider
 
     def set_acl(self, acl_or_str, key_name='', validate=False, headers=None,
-                version_id=None):
-        """sets or updates a bucket's acl"""
+                version_id=None, if_generation=None, if_metageneration=None):
+        """Sets or updates a bucket's ACL."""
         self._check_bucket_uri('set_acl')
         key_name = key_name or self.object_name or ''
-        self.get_bucket(validate, headers).set_acl(acl_or_str, key_name,
-                                                   headers, version_id)
+        bucket = self.get_bucket(validate, headers)
+        if self.generation:
+          bucket.set_acl(
+              acl_or_str, key_name, headers, generation=self.generation,
+              if_generation=if_generation, if_metageneration=if_metageneration)
+        else:
+          version_id = version_id or self.version_id
+          bucket.set_acl(acl_or_str, key_name, headers, version_id)
 
-    def set_def_acl(self, acl_or_str, key_name='', validate=False,
-                    headers=None, version_id=None):
-        """sets or updates a bucket's default object acl"""
+    def set_xml_acl(self, xmlstring, key_name='', validate=False, headers=None,
+            version_id=None, if_generation=None, if_metageneration=None):
+        """Sets or updates a bucket's ACL with an XML string."""
+        self._check_bucket_uri('set_xml_acl')
+        key_name = key_name or self.object_name or ''
+        bucket = self.get_bucket(validate, headers)
+        if self.generation:
+          bucket.set_xml_acl(
+              xmlstring, key_name, headers, generation=self.generation,
+              if_generation=if_generation, if_metageneration=if_metageneration)
+        else:
+          version_id = version_id or self.version_id
+          bucket.set_xml_acl(xmlstring, key_name, headers,
+                             version_id=version_id)
+
+    def set_def_xml_acl(self, xmlstring, validate=False, headers=None):
+        """Sets or updates a bucket's default object ACL with an XML string."""
+        self._check_bucket_uri('set_def_xml_acl')
+        self.get_bucket(validate, headers).set_def_xml_acl(xmlstring, headers)
+
+    def set_def_acl(self, acl_or_str, validate=False, headers=None,
+                    version_id=None):
+        """Sets or updates a bucket's default object ACL."""
         self._check_bucket_uri('set_def_acl')
-        self.get_bucket(validate, headers).set_def_acl(acl_or_str, '', headers)
+        self.get_bucket(validate, headers).set_def_acl(acl_or_str, headers)
 
     def set_canned_acl(self, acl_str, validate=False, headers=None,
                        version_id=None):
-        """sets or updates a bucket's acl to a predefined (canned) value"""
+        """Sets or updates a bucket's acl to a predefined (canned) value."""
         self._check_object_uri('set_canned_acl')
         self._warn_about_args('set_canned_acl', version_id=version_id)
         key = self.get_key(validate, headers)
@@ -456,8 +627,8 @@
 
     def set_def_canned_acl(self, acl_str, validate=False, headers=None,
                            version_id=None):
-        """sets or updates a bucket's default object acl to a predefined
-           (canned) value"""
+        """Sets or updates a bucket's default object acl to a predefined
+           (canned) value."""
         self._check_bucket_uri('set_def_canned_acl ')
         key = self.get_key(validate, headers)
         self.check_response(key, 'key', self.uri)
@@ -480,11 +651,14 @@
                 sys.stderr.write('Warning: GCS does not support '
                                  'reduced_redundancy; argument ignored by '
                                  'set_contents_from_string')
-            key.set_contents_from_string(s, headers, replace, cb, num_cb,
-                                         policy, md5)
+            result = key.set_contents_from_string(
+                s, headers, replace, cb, num_cb, policy, md5)
         else:
-            key.set_contents_from_string(s, headers, replace, cb, num_cb,
-                                         policy, md5, reduced_redundancy)
+            result = key.set_contents_from_string(
+                s, headers, replace, cb, num_cb, policy, md5,
+                reduced_redundancy)
+        self._update_from_key(key)
+        return result
 
     def set_contents_from_file(self, fp, headers=None, replace=True, cb=None,
                                num_cb=10, policy=None, md5=None, size=None,
@@ -492,37 +666,51 @@
         self._check_object_uri('set_contents_from_file')
         key = self.new_key(headers=headers)
         if self.scheme == 'gs':
-            return key.set_contents_from_file(
-                    fp, headers, replace, cb, num_cb, policy, md5, size=size,
-                    rewind=rewind, res_upload_handler=res_upload_handler)
+            result = key.set_contents_from_file(
+                fp, headers, replace, cb, num_cb, policy, md5, size=size,
+                rewind=rewind, res_upload_handler=res_upload_handler)
+            if res_upload_handler:
+                self._update_from_values(None, res_upload_handler.generation,
+                                         None, md5)
         else:
             self._warn_about_args('set_contents_from_file',
                                   res_upload_handler=res_upload_handler)
-            return key.set_contents_from_file(fp, headers, replace, cb, num_cb,
-                                              policy, md5, size=size,
-                                              rewind=rewind)
+            result = key.set_contents_from_file(
+                fp, headers, replace, cb, num_cb, policy, md5, size=size,
+                rewind=rewind)
+        self._update_from_key(key)
+        return result
 
     def set_contents_from_stream(self, fp, headers=None, replace=True, cb=None,
                                  policy=None, reduced_redundancy=False):
         self._check_object_uri('set_contents_from_stream')
         dst_key = self.new_key(False, headers)
-        dst_key.set_contents_from_stream(fp, headers, replace, cb,
-                                         policy=policy,
-                                         reduced_redundancy=reduced_redundancy)
+        result = dst_key.set_contents_from_stream(
+            fp, headers, replace, cb, policy=policy,
+            reduced_redundancy=reduced_redundancy)
+        self._update_from_key(dst_key)
+        return result
 
     def copy_key(self, src_bucket_name, src_key_name, metadata=None,
                  src_version_id=None, storage_class='STANDARD',
                  preserve_acl=False, encrypt_key=False, headers=None,
-                 query_args=None):
+                 query_args=None, src_generation=None):
+        """Returns newly created key."""
         self._check_object_uri('copy_key')
         dst_bucket = self.get_bucket(validate=False, headers=headers)
-        dst_bucket.copy_key(new_key_name=self.object_name,
-                            src_bucket_name=src_bucket_name,
-                            src_key_name=src_key_name, metadata=metadata,
-                            src_version_id=src_version_id,
-                            storage_class=storage_class,
-                            preserve_acl=preserve_acl, encrypt_key=encrypt_key,
-                            headers=headers, query_args=query_args)
+        if src_generation:
+            return dst_bucket.copy_key(new_key_name=self.object_name,
+                src_bucket_name=src_bucket_name,
+                src_key_name=src_key_name, metadata=metadata,
+                storage_class=storage_class, preserve_acl=preserve_acl,
+                encrypt_key=encrypt_key, headers=headers, query_args=query_args,
+                src_generation=src_generation)
+        else:
+            return dst_bucket.copy_key(new_key_name=self.object_name,
+                src_bucket_name=src_bucket_name, src_key_name=src_key_name,
+                metadata=metadata, src_version_id=src_version_id,
+                storage_class=storage_class, preserve_acl=preserve_acl,
+                encrypt_key=encrypt_key, headers=headers, query_args=query_args)
 
     def enable_logging(self, target_bucket, target_prefix=None, validate=False,
                        headers=None, version_id=None):
@@ -535,8 +723,14 @@
         bucket = self.get_bucket(validate, headers)
         bucket.disable_logging(headers=headers)
 
+    def get_logging_config(self, validate=False, headers=None, version_id=None):
+        self._check_bucket_uri('get_logging_config')
+        bucket = self.get_bucket(validate, headers)
+        return bucket.get_logging_config(headers=headers)
+
     def set_website_config(self, main_page_suffix=None, error_key=None,
                            validate=False, headers=None):
+        self._check_bucket_uri('set_website_config')
         bucket = self.get_bucket(validate, headers)
         if not (main_page_suffix or error_key):
             bucket.delete_website_configuration(headers)
@@ -544,9 +738,43 @@
             bucket.configure_website(main_page_suffix, error_key, headers)
 
     def get_website_config(self, validate=False, headers=None):
+        self._check_bucket_uri('get_website_config')
         bucket = self.get_bucket(validate, headers)
-        return bucket.get_website_configuration_with_xml(headers)
+        return bucket.get_website_configuration(headers)
 
+    def get_versioning_config(self, headers=None):
+        self._check_bucket_uri('get_versioning_config')
+        bucket = self.get_bucket(False, headers)
+        return bucket.get_versioning_status(headers)
+
+    def configure_versioning(self, enabled, headers=None):
+        self._check_bucket_uri('configure_versioning')
+        bucket = self.get_bucket(False, headers)
+        return bucket.configure_versioning(enabled, headers)
+
+    def set_metadata(self, metadata_plus, metadata_minus, preserve_acl,
+                     headers=None):
+        return self.get_key(False).set_remote_metadata(metadata_plus,
+                                                       metadata_minus,
+                                                       preserve_acl,
+                                                       headers=headers)
+
+    def compose(self, components, content_type=None, headers=None):
+        self._check_object_uri('compose')
+        component_keys = []
+        for suri in components:
+            component_keys.append(suri.new_key())
+            component_keys[-1].generation = suri.generation
+        self.new_key().compose(
+                component_keys, content_type=content_type, headers=headers)
+
+    def exists(self, headers=None):
+      """Returns True if the object exists or False if it doesn't"""
+      if not self.object_name:
+        raise InvalidUriError('exists on object-less URI (%s)' % self.uri)
+      bucket = self.get_bucket()
+      key = bucket.get_key(self.object_name, headers=headers)
+      return bool(key)
 
 class FileStorageUri(StorageUri):
     """
@@ -634,3 +862,10 @@
         """Closes the underlying file.
         """
         self.get_key().close()
+
+    def exists(self, _headers_not_used=None):
+      """Returns True if the file exists or False if it doesn't"""
+      # The _headers_not_used parameter is ignored. It is only there to ensure
+      # that this method's signature is identical to the exists method on the
+      # BucketStorageUri class.
+      return os.path.exists(self.object_name)
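BucketStorageUri now tracks version_id (S3) and generation (GCS), and the set_contents_from_*() methods record the version of the object they created so that version_specific_uri can be read back afterwards, as described in the constructor docstring above. A hedged sketch with made-up bucket and object names:

    import boto

    uri = boto.storage_uri('gs://example-bucket/example-object')
    uri.set_contents_from_string('hello world')

    # The versionless form never changes; the version-specific form picks
    # up the generation created by the upload (per the docstring above).
    print uri.versionless_uri        # gs://example-bucket/example-object
    print uri.version_specific_uri   # gs://example-bucket/example-object#<generation>

    # exists() is a plain key lookup on the bucket.
    if uri.exists():
        print uri.get_contents_as_string()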
diff --git a/boto/sts/connection.py b/boto/sts/connection.py
index 42835b0..5a1fdf2 100644
--- a/boto/sts/connection.py
+++ b/boto/sts/connection.py
@@ -22,7 +22,7 @@
 
 from boto.connection import AWSQueryConnection
 from boto.regioninfo import RegionInfo
-from credentials import Credentials, FederationToken
+from credentials import Credentials, FederationToken, AssumedRole
 import boto
 import boto.utils
 import datetime
@@ -32,7 +32,33 @@
 
 
 class STSConnection(AWSQueryConnection):
+    """
+    AWS Security Token Service
+    The AWS Security Token Service is a web service that enables you
+    to request temporary, limited-privilege credentials for AWS
+    Identity and Access Management (IAM) users or for users that you
+    authenticate (federated users). This guide provides descriptions
+    of the AWS Security Token Service API.
 
+    For more detailed information about using this service, go to
+    `Using Temporary Security Credentials`_.
+
+    For information about setting up signatures and authorization
+    through the API, go to `Signing AWS API Requests`_ in the AWS
+    General Reference. For general information about the Query API,
+    go to `Making Query Requests`_ in Using IAM. For information
+    about using security tokens with other AWS products, go to `Using
+    Temporary Security Credentials to Access AWS`_ in Using Temporary
+    Security Credentials.
+
+    If you're new to AWS and need additional technical information
+    about a specific AWS product, you can find the product's technical
+    documentation at `http://aws.amazon.com/documentation/`_.
+
+    We will refer to Amazon Identity and Access Management using the
+    abbreviated form IAM. All copyrights and legal protections still
+    apply.
+    """
     DefaultRegionName = 'us-east-1'
     DefaultRegionEndpoint = 'sts.amazonaws.com'
     APIVersion = '2011-06-15'
@@ -133,16 +159,65 @@
 
     def get_federation_token(self, name, duration=None, policy=None):
         """
-        :type name: str
-        :param name: The name of the Federated user associated with
-                     the credentials.
+        Returns a set of temporary security credentials (consisting of
+        an access key ID, a secret access key, and a security token)
+        for a federated user. A typical use is in a proxy application
+        that is getting temporary security credentials on behalf of
+        distributed applications inside a corporate network. Because
+        you must call the `GetFederationToken` action using the long-
+        term security credentials of an IAM user, this call is
+        appropriate in contexts where those credentials can be safely
+        stored, usually in a server-based application.
 
-        :type duration: int
-        :param duration: The number of seconds the credentials should
-                         remain valid.
+        **Note:** Do not use this call in mobile applications or
+        client-based web applications that directly get temporary
+        security credentials. For those types of applications, use
+        `AssumeRoleWithWebIdentity`.
 
-        :type policy: str
-        :param policy: A JSON policy to associate with these credentials.
+        The `GetFederationToken` action must be called by using the
+        long-term AWS security credentials of the AWS account or an
+        IAM user. Credentials that are created by IAM users are valid
+        for the specified duration, between 900 seconds (15 minutes)
+        and 129600 seconds (36 hours); credentials that are created by
+        using account credentials have a maximum duration of 3600
+        seconds (1 hour).
+
+        The permissions that are granted to the federated user are the
+        intersection of the policy that is passed with the
+        `GetFederationToken` request and policies that are associated
+        with the entity making the `GetFederationToken` call.
+
+        For more information about how permissions work, see
+        `Controlling Permissions in Temporary Credentials`_ in Using
+        Temporary Security Credentials. For information about using
+        `GetFederationToken` to create temporary security credentials,
+        see `Creating Temporary Credentials to Enable Access for
+        Federated Users`_ in Using Temporary Security Credentials.
+
+        :type name: string
+        :param name: The name of the federated user. The name is used as an
+            identifier for the temporary security credentials (such as `Bob`).
+            For example, you can reference the federated user name in a
+            resource-based policy, such as in an Amazon S3 bucket policy.
+
+        :type policy: string
+        :param policy: A policy that specifies the permissions that are granted
+            to the federated user. By default, federated users have no
+            permissions; they do not inherit any from the IAM user. When you
+            specify a policy, the federated user's permissions are the
+            intersection of the specified policy and the IAM user's policy.
+            If you don't specify a policy, federated users can only access
+            AWS resources that explicitly allow those federated users in a
+            resource policy, such as in an Amazon S3 bucket policy.
+
+        :type duration: integer
+        :param duration: The duration, in seconds, that the session
+            should last. Acceptable durations for federation sessions range
+            from 900 seconds (15 minutes) to 129600 seconds (36 hours), with
+            43200 seconds (12 hours) as the default. Sessions for AWS account
+            owners are restricted to a maximum of 3600 seconds (one hour). If
+            the duration is longer than one hour, the session for AWS account
+            owners defaults to one hour.
 
         """
         params = {'Name': name}
@@ -152,3 +227,213 @@
             params['Policy'] = policy
         return self.get_object('GetFederationToken', params,
                                 FederationToken, verb='POST')
+
+    def assume_role(self, role_arn, role_session_name, policy=None,
+                    duration_seconds=None, external_id=None):
+        """
+        Returns a set of temporary security credentials (consisting of
+        an access key ID, a secret access key, and a security token)
+        that you can use to access AWS resources that you might not
+        normally have access to. Typically, you use `AssumeRole` for
+        cross-account access or federation.
+
+        For cross-account access, imagine that you own multiple
+        accounts and need to access resources in each account. You
+        could create long-term credentials in each account to access
+        those resources. However, managing all those credentials and
+        remembering which one can access which account can be time
+        consuming. Instead, you can create one set of long-term
+        credentials in one account and then use temporary security
+        credentials to access all the other accounts by assuming roles
+        in those accounts. For more information about roles, see
+        `Roles`_ in Using IAM.
+
+        For federation, you can, for example, grant single sign-on
+        access to the AWS Management Console. If you already have an
+        identity and authentication system in your corporate network,
+        you don't have to recreate user identities in AWS in order to
+        grant those user identities access to AWS. Instead, after a
+        user has been authenticated, you call `AssumeRole` (and
+        specify the role with the appropriate permissions) to get
+        temporary security credentials for that user. With those
+        temporary security credentials, you construct a sign-in URL
+        that users can use to access the console. For more
+        information, see `Scenarios for Granting Temporary Access`_ in
+        AWS Security Token Service.
+
+        The temporary security credentials are valid for the duration
+        that you specified when calling `AssumeRole`, which can be
+        from 900 seconds (15 minutes) to 3600 seconds (1 hour). The
+        default is 1 hour.
+
+        The temporary security credentials that are returned from the
+        `AssumeRoleWithWebIdentity` response have the permissions that
+        are associated with the access policy of the role being
+        assumed and any policies that are associated with the AWS
+        resource being accessed. You can further restrict the
+        permissions of the temporary security credentials by passing a
+        policy in the request. The resulting permissions are an
+        intersection of the role's access policy and the policy that
+        you passed. These policies and any applicable resource-based
+        policies are evaluated when calls to AWS service APIs are made
+        using the temporary security credentials.
+
+        To assume a role, your AWS account must be trusted by the
+        role. The trust relationship is defined in the role's trust
+        policy when the IAM role is created. You must also have a
+        policy that allows you to call `sts:AssumeRole`.
+
+        **Important:** You cannot call `AssumeRole` by using AWS
+        account credentials; access will be denied. You must use IAM
+        user credentials to call `AssumeRole`.
+
+        :type role_arn: string
+        :param role_arn: The Amazon Resource Name (ARN) of the role that the
+            caller is assuming.
+
+        :type role_session_name: string
+        :param role_session_name: An identifier for the assumed role session.
+            The session name is included as part of the `AssumedRoleUser`.
+
+        :type policy: string
+        :param policy: A supplemental policy that is associated with the
+            temporary security credentials from the `AssumeRole` call. The
+            resulting permissions of the temporary security credentials are an
+            intersection of this policy and the access policy that is
+            associated with the role. Use this policy to further restrict the
+            permissions of the temporary security credentials.
+
+        :type duration_seconds: integer
+        :param duration_seconds: The duration, in seconds, of the role session.
+            The value can range from 900 seconds (15 minutes) to 3600 seconds
+            (1 hour). By default, the value is set to 3600 seconds.
+
+        :type external_id: string
+        :param external_id: A unique identifier that is used by third parties
+            to assume a role in their customers' accounts. For each role that
+            the third party can assume, they should instruct their customers to
+            create a role with the external ID that the third party generated.
+            Each time the third party assumes the role, they must pass the
+            customer's external ID. The external ID is useful in order to help
+            third parties bind a role to the customer who created it. For more
+            information about the external ID, see `About the External ID`_ in
+            Using Temporary Security Credentials.
+
+        """
+        params = {
+            'RoleArn': role_arn,
+            'RoleSessionName': role_session_name
+        }
+        if policy is not None:
+            params['Policy'] = policy
+        if duration_seconds is not None:
+            params['DurationSeconds'] = duration_seconds
+        if external_id is not None:
+            params['ExternalId'] = external_id
+        return self.get_object('AssumeRole', params, AssumedRole, verb='POST')
+
+    def assume_role_with_web_identity(self, role_arn, role_session_name,
+                                      web_identity_token, provider_id=None,
+                                      policy=None, duration_seconds=None):
+        """
+        Returns a set of temporary security credentials for users who
+        have been authenticated in a mobile or web application with a
+        web identity provider, such as Login with Amazon, Facebook, or
+        Google. `AssumeRoleWithWebIdentity` is an API call that does
+        not require the use of AWS security credentials. Therefore,
+        you can distribute an application (for example, on mobile
+        devices) that requests temporary security credentials without
+        including long-term AWS credentials in the application or by
+        deploying server-based proxy services that use long-term AWS
+        credentials. For more information, see `Creating a Mobile
+        Application with Third-Party Sign-In`_ in AWS Security Token
+        Service.
+
+        The temporary security credentials consist of an access key
+        ID, a secret access key, and a security token. Applications
+        can use these temporary security credentials to sign calls to
+        AWS service APIs. The credentials are valid for the duration
+        that you specified when calling `AssumeRoleWithWebIdentity`,
+        which can be from 900 seconds (15 minutes) to 3600 seconds (1
+        hour). By default, the temporary security credentials are
+        valid for 1 hour.
+
+        The temporary security credentials that are returned from the
+        `AssumeRoleWithWebIdentity` response have the permissions that
+        are associated with the access policy of the role being
+        assumed. You can further restrict the permissions of the
+        temporary security credentials by passing a policy in the
+        request. The resulting permissions are an intersection of the
+        role's access policy and the policy that you passed. These
+        policies and any applicable resource-based policies are
+        evaluated when calls to AWS service APIs are made using the
+        temporary security credentials.
+
+        Before your application can call `AssumeRoleWithWebIdentity`,
+        you must have an identity token from a supported identity
+        provider and create a role that the application can assume.
+        The role that your application assumes must trust the identity
+        provider that is associated with the identity token. In other
+        words, the identity provider must be specified in the role's
+        trust policy. For more information, see `Creating Temporary
+        Security Credentials for Mobile Apps Using Third-Party
+        Identity Providers`_.
+
+        :type role_arn: string
+        :param role_arn: The Amazon Resource Name (ARN) of the role that the
+            caller is assuming.
+
+        :type role_session_name: string
+        :param role_session_name: An identifier for the assumed role session.
+            Typically, you pass the name or identifier that is associated with
+            the user who is using your application. That way, the temporary
+            security credentials that your application will use are associated
+            with that user. This session name is included as part of the ARN
+            and assumed role ID in the `AssumedRoleUser` response element.
+
+        :type web_identity_token: string
+        :param web_identity_token: The OAuth 2.0 access token or OpenID Connect
+            ID token that is provided by the identity provider. Your
+            application must get this token by authenticating the user who is
+            using your application with a web identity provider before the
+            application makes an `AssumeRoleWithWebIdentity` call.
+
+        :type provider_id: string
+        :param provider_id: Specify this value only for OAuth access tokens. Do
+            not specify this value for OpenID Connect ID tokens, such as
+            `accounts.google.com`. This is the fully-qualified host component
+            of the domain name of the identity provider. Do not include URL
+            schemes and port numbers. Currently, `www.amazon.com` and
+            `graph.facebook.com` are supported.
+
+        :type policy: string
+        :param policy: A supplemental policy that is associated with the
+            temporary security credentials from the `AssumeRoleWithWebIdentity`
+            call. The resulting permissions of the temporary security
+            credentials are an intersection of this policy and the access
+            policy that is associated with the role. Use this policy to further
+            restrict the permissions of the temporary security credentials.
+
+        :type duration_seconds: integer
+        :param duration_seconds: The duration, in seconds, of the role session.
+            The value can range from 900 seconds (15 minutes) to 3600 seconds
+            (1 hour). By default, the value is set to 3600 seconds.
+
+        """
+        params = {
+            'RoleArn': role_arn,
+            'RoleSessionName': role_session_name,
+            'WebIdentityToken': web_identity_token,
+        }
+        if provider_id is not None:
+            params['ProviderId'] = provider_id
+        if policy is not None:
+            params['Policy'] = policy
+        if duration_seconds is not None:
+            params['DurationSeconds'] = duration_seconds
+        return self.get_object(
+            'AssumeRoleWithWebIdentity',
+            params,
+            AssumedRole,
+            verb='POST'
+        )
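STSConnection.assume_role() returns temporary credentials for a role the caller is trusted to assume; the response is parsed into the AssumedRole object added in boto/sts/credentials.py below. A sketch under assumed placeholder values (the role ARN, session name, and region are examples only):

    import boto
    import boto.sts

    sts = boto.sts.connect_to_region('us-east-1')
    role = sts.assume_role(
        role_arn='arn:aws:iam::123456789012:role/demo-role',
        role_session_name='demo-session',
        duration_seconds=900)

    # Feed the temporary credentials to another service connection.
    creds = role.credentials
    s3 = boto.connect_s3(
        aws_access_key_id=creds.access_key,
        aws_secret_access_key=creds.secret_key,
        security_token=creds.session_token)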
diff --git a/boto/sts/credentials.py b/boto/sts/credentials.py
index f6d5174..33fe4ee 100644
--- a/boto/sts/credentials.py
+++ b/boto/sts/credentials.py
@@ -15,18 +15,17 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
-import boto.utils
 import os
 import datetime
-try:
-    import simplejson as json
-except ImportError:
-    import json
+
+import boto.utils
+from boto.compat import json
+
 
 class Credentials(object):
     """
@@ -138,7 +137,7 @@
         ts = boto.utils.parse_ts(self.expiration)
         delta = ts - now
         return delta.total_seconds() <= 0
-    
+
 class FederationToken(object):
     """
     :ivar credentials: A Credentials object containing the credentials.
@@ -173,4 +172,44 @@
             self.request_id = value
         else:
             pass
-        
+
+
+class AssumedRole(object):
+    """
+    :ivar user: The assumed role user.
+    :ivar credentials: A Credentials object containing the credentials.
+    """
+    def __init__(self, connection=None, credentials=None, user=None):
+        self._connection = connection
+        self.credentials = credentials
+        self.user = user
+
+    def startElement(self, name, attrs, connection):
+        if name == 'Credentials':
+            self.credentials = Credentials()
+            return self.credentials
+        elif name == 'AssumedRoleUser':
+            self.user = User()
+            return self.user
+
+    def endElement(self, name, value, connection):
+        pass
+
+
+class User(object):
+    """
+    :ivar arn: The arn of the user assuming the role.
+    :ivar assume_role_id: The identifier of the assumed role.
+    """
+    def __init__(self, arn=None, assume_role_id=None):
+        self.arn = arn
+        self.assume_role_id = assume_role_id
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'Arn':
+            self.arn = value
+        elif name == 'AssumedRoleId':
+            self.assume_role_id = value
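AssumedRole and User are simple containers populated by boto's SAX-style startElement/endElement handlers while the AssumeRole response is parsed. Continuing the sketch above, the parsed fields can be read directly:

    # role is the AssumedRole returned by STSConnection.assume_role().
    print role.user.arn              # ARN of the assumed-role user
    print role.user.assume_role_id   # assumed-role id
    print role.credentials.expiration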
diff --git a/boto/support/__init__.py b/boto/support/__init__.py
new file mode 100644
index 0000000..6d59b37
--- /dev/null
+++ b/boto/support/__init__.py
@@ -0,0 +1,47 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from boto.regioninfo import RegionInfo
+
+
+def regions():
+    """
+    Get all available regions for the Amazon Support service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo`
+    """
+    from boto.support.layer1 import SupportConnection
+    return [
+        RegionInfo(
+            name='us-east-1',
+            endpoint='support.us-east-1.amazonaws.com',
+            connection_cls=SupportConnection
+        ),
+    ]
+
+
+def connect_to_region(region_name, **kw_params):
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
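boto.support currently advertises a single region, and connect_to_region() resolves a region name to a SupportConnection (returning None for unknown names). A minimal sketch:

    from boto.support import connect_to_region

    support = connect_to_region('us-east-1')
    if support is None:
        raise ValueError('AWS Support is not available in that region')
    # Case-management and Trusted Advisor calls are exposed as methods on
    # this connection (see the actions described in layer1.py below).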
diff --git a/boto/support/exceptions.py b/boto/support/exceptions.py
new file mode 100644
index 0000000..f4e33d0
--- /dev/null
+++ b/boto/support/exceptions.py
@@ -0,0 +1,34 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.exception import JSONResponseError
+
+
+class CaseIdNotFound(JSONResponseError):
+    pass
+
+
+class CaseCreationLimitExceeded(JSONResponseError):
+    pass
+
+
+class InternalServerError(JSONResponseError):
+    pass
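An illustrative sketch of catching these faults once the connection class
below maps them; the case id is a placeholder and credentials are assumed to
be configured:

    import boto.support
    from boto.exception import JSONResponseError
    from boto.support.exceptions import CaseIdNotFound

    conn = boto.support.connect_to_region('us-east-1')
    try:
        conn.describe_cases(case_id_list=['case-123456789'])  # placeholder
    except CaseIdNotFound as err:
        print('No such case: %s' % err)
    except JSONResponseError as err:
        print('Other service fault: %s' % err)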
diff --git a/boto/support/layer1.py b/boto/support/layer1.py
new file mode 100644
index 0000000..5e73db2
--- /dev/null
+++ b/boto/support/layer1.py
@@ -0,0 +1,529 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+import json
+import boto
+from boto.connection import AWSQueryConnection
+from boto.regioninfo import RegionInfo
+from boto.exception import JSONResponseError
+from boto.support import exceptions
+
+
+class SupportConnection(AWSQueryConnection):
+    """
+    AWS Support
+    The AWS Support API reference is intended for programmers who need
+    detailed information about the AWS Support actions and data types.
+    This service enables you to manage your AWS Support cases
+    programmatically. It is built on the AWS Query API programming
+    model and provides HTTP methods that take parameters and return
+    results in JSON format.
+
+    The AWS Support service also exposes a set of `Trusted Advisor`_
+    features. You can retrieve a list of checks you can run on your
+    resources, specify checks to run and refresh, and check the status
+    of checks you have submitted.
+
+    The following list describes the AWS Support case management
+    actions:
+
+
+    + **Service names, issue categories, and available severity
+      levels**. The actions `DescribeServices`_ and
+      `DescribeSeverityLevels`_ enable you to obtain AWS service names,
+      service codes, service categories, and problem severity levels.
+      You use these values when you call the `CreateCase`_ action.
+    + **Case Creation, case details, and case resolution**. The
+      actions `CreateCase`_, `DescribeCases`_, and `ResolveCase`_ enable
+      you to create AWS Support cases, retrieve them, and resolve them.
+    + **Case communication**. The actions
+      `DescribeCaseCommunications`_ and `AddCommunicationToCase`_ enable
+      you to retrieve and add communication to AWS Support cases.
+
+
+    The following list describes the actions available from the AWS
+    Support service for Trusted Advisor:
+
+
+    + `DescribeTrustedAdvisorChecks`_ returns the list of checks that you
+      can run against your AWS resources.
+    + Using the CheckId for a specific check returned by
+      `DescribeTrustedAdvisorChecks`_, you can call
+      `DescribeTrustedAdvisorCheckResult`_ and obtain a new result for the
+      check you specified.
+    + Using `DescribeTrustedAdvisorCheckSummaries`_, you can get
+      summaries for a set of Trusted Advisor checks.
+    + `RefreshTrustedAdvisorCheck`_ enables you to request that
+      Trusted Advisor run the check again.
+    + `DescribeTrustedAdvisorCheckRefreshStatuses`_ gets statuses on the
+      checks you are running.
+
+
+    For authentication of requests, AWS Support uses the `Signature
+    Version 4 Signing Process`_.
+
+    See the AWS Support Developer Guide for information about how to
+    use this service to create and manage your support cases, and how
+    to call Trusted Advisor for results of checks on your resources.
+    """
+    APIVersion = "2013-04-15"
+    DefaultRegionName = "us-east-1"
+    DefaultRegionEndpoint = "support.us-east-1.amazonaws.com"
+    ServiceName = "Support"
+    TargetPrefix = "AWSSupport_20130415"
+    ResponseError = JSONResponseError
+
+    _faults = {
+        "CaseIdNotFound": exceptions.CaseIdNotFound,
+        "CaseCreationLimitExceeded": exceptions.CaseCreationLimitExceeded,
+        "InternalServerError": exceptions.InternalServerError,
+    }
+
+    def __init__(self, **kwargs):
+        region = kwargs.pop('region', None)
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint)
+        kwargs['host'] = region.endpoint
+        AWSQueryConnection.__init__(self, **kwargs)
+        self.region = region
+
+    def _required_auth_capability(self):
+        return ['hmac-v4']
+
+    def add_communication_to_case(self, communication_body, case_id=None,
+                                  cc_email_addresses=None):
+        """
+        This action adds additional customer communication to an AWS
+        Support case. You use the CaseId value to identify the case to
+        which you want to add communication. You can list a set of
+        email addresses to copy on the communication using the
+        CcEmailAddresses value. The CommunicationBody value contains
+        the text of the communication.
+
+        This action's response indicates the success or failure of the
+        request.
+
+        This action implements a subset of the behavior on the AWS
+        Support `Your Support Cases`_ web form.
+
+        :type case_id: string
+        :param case_id:
+
+        :type communication_body: string
+        :param communication_body:
+
+        :type cc_email_addresses: list
+        :param cc_email_addresses:
+
+        """
+        params = {'communicationBody': communication_body, }
+        if case_id is not None:
+            params['caseId'] = case_id
+        if cc_email_addresses is not None:
+            params['ccEmailAddresses'] = cc_email_addresses
+        return self.make_request(action='AddCommunicationToCase',
+                                 body=json.dumps(params))
+
+    def create_case(self, subject, service_code, category_code,
+                    communication_body, severity_code=None,
+                    cc_email_addresses=None, language=None, issue_type=None):
+        """
+        Creates a new case in the AWS Support Center. This action is
+        modeled on the behavior of the AWS Support Center `Open a new
+        case`_ page. Its parameters require you to specify the
+        following information:
+
+
+        #. **ServiceCode.** Represents a code for an AWS service. You
+           obtain the ServiceCode by calling `DescribeServices`_.
+        #. **CategoryCode**. Represents a category for the service
+           defined for the ServiceCode value. You also obtain the
+           category code for a service by calling `DescribeServices`_.
+           Each AWS service defines its own set of category codes.
+        #. **SeverityCode**. Represents a value that specifies the
+           urgency of the case, and the time interval in which your
+           service level agreement specifies a response from AWS Support.
+           You obtain the SeverityCode by calling
+           `DescribeSeverityLevels`_.
+        #. **Subject**. Represents the **Subject** field on the AWS
+           Support Center `Open a new case`_ page.
+        #. **CommunicationBody**. Represents the **Description** field
+           on the AWS Support Center `Open a new case`_ page.
+        #. **Language**. Specifies the human language in which AWS
+           Support handles the case. The API currently supports English
+           and Japanese.
+        #. **CcEmailAddresses**. Represents the AWS Support Center
+           **CC** field on the `Open a new case`_ page. You can list
+           email addresses to be copied on any correspondence about the
+           case. The account that opens the case is already identified by
+           passing the AWS Credentials in the HTTP POST method or in a
+           method or function call from one of the programming languages
+           supported by an `AWS SDK`_.
+
+
+        The AWS Support API does not currently support the ability to
+        add attachments to cases. You can, however, call
+        `AddCommunicationToCase`_ to add information to an open case.
+
+        A successful `CreateCase`_ request returns an AWS Support case
+        number. Case numbers are used by the `DescribeCases`_ request to
+        retrieve existing AWS Support cases.
+
+        :type subject: string
+        :param subject:
+
+        :type service_code: string
+        :param service_code:
+
+        :type severity_code: string
+        :param severity_code:
+
+        :type category_code: string
+        :param category_code:
+
+        :type communication_body: string
+        :param communication_body:
+
+        :type cc_email_addresses: list
+        :param cc_email_addresses:
+
+        :type language: string
+        :param language:
+
+        :type issue_type: string
+        :param issue_type:
+
+        """
+        params = {
+            'subject': subject,
+            'serviceCode': service_code,
+            'categoryCode': category_code,
+            'communicationBody': communication_body,
+        }
+        if severity_code is not None:
+            params['severityCode'] = severity_code
+        if cc_email_addresses is not None:
+            params['ccEmailAddresses'] = cc_email_addresses
+        if language is not None:
+            params['language'] = language
+        if issue_type is not None:
+            params['issueType'] = issue_type
+        return self.make_request(action='CreateCase',
+                                 body=json.dumps(params))
+
+    def describe_cases(self, case_id_list=None, display_id=None,
+                       after_time=None, before_time=None,
+                       include_resolved_cases=None, next_token=None,
+                       max_results=None, language=None):
+        """
+        This action returns a list of cases that you specify by
+        passing one or more CaseIds. In addition, you can filter the
+        cases by date by setting values for the AfterTime and
+        BeforeTime request parameters.
+        The response returns the following in JSON format:
+
+        #. One or more `CaseDetails`_ data types.
+        #. One or more NextToken objects, strings that specify where
+           to paginate the returned records represented by CaseDetails.
+
+        :type case_id_list: list
+        :param case_id_list:
+
+        :type display_id: string
+        :param display_id:
+
+        :type after_time: string
+        :param after_time:
+
+        :type before_time: string
+        :param before_time:
+
+        :type include_resolved_cases: boolean
+        :param include_resolved_cases:
+
+        :type next_token: string
+        :param next_token:
+
+        :type max_results: integer
+        :param max_results:
+
+        :type language: string
+        :param language:
+
+        """
+        params = {}
+        if case_id_list is not None:
+            params['caseIdList'] = case_id_list
+        if display_id is not None:
+            params['displayId'] = display_id
+        if after_time is not None:
+            params['afterTime'] = after_time
+        if before_time is not None:
+            params['beforeTime'] = before_time
+        if include_resolved_cases is not None:
+            params['includeResolvedCases'] = include_resolved_cases
+        if next_token is not None:
+            params['nextToken'] = next_token
+        if max_results is not None:
+            params['maxResults'] = max_results
+        if language is not None:
+            params['language'] = language
+        return self.make_request(action='DescribeCases',
+                                 body=json.dumps(params))
+
+    def describe_communications(self, case_id, before_time=None,
+                                after_time=None, next_token=None,
+                                max_results=None):
+        """
+        This action returns communications regarding the support case.
+        You can use the AfterTime and BeforeTime parameters to filter
+        by date. The CaseId parameter enables you to identify a
+        specific case by its CaseId number.
+
+        The MaxResults and NextToken parameters enable you to control
+        the pagination of the result set. Set MaxResults to the number
+        of cases you want displayed on each page, and use NextToken to
+        specify the resumption of pagination.
+
+        :type case_id: string
+        :param case_id:
+
+        :type before_time: string
+        :param before_time:
+
+        :type after_time: string
+        :param after_time:
+
+        :type next_token: string
+        :param next_token:
+
+        :type max_results: integer
+        :param max_results:
+
+        """
+        params = {'caseId': case_id, }
+        if before_time is not None:
+            params['beforeTime'] = before_time
+        if after_time is not None:
+            params['afterTime'] = after_time
+        if next_token is not None:
+            params['nextToken'] = next_token
+        if max_results is not None:
+            params['maxResults'] = max_results
+        return self.make_request(action='DescribeCommunications',
+                                 body=json.dumps(params))
+
+    def describe_services(self, service_code_list=None, language=None):
+        """
+        Returns the current list of AWS services and a list of service
+        categories that apply to each one. You then use service
+        names and categories in your `CreateCase`_ requests. Each AWS
+        service has its own set of categories.
+
+        The service codes and category codes correspond to the values
+        that are displayed in the **Service** and **Category** drop-
+        down lists on the AWS Support Center `Open a new case`_ page.
+        The values in those fields, however, do not necessarily match
+        the service codes and categories returned by the
+        `DescribeServices` request. Always use the service codes and
+        categories obtained programmatically. This practice ensures
+        that you always have the most recent set of service and
+        category codes.
+
+        :type service_code_list: list
+        :param service_code_list:
+
+        :type language: string
+        :param language:
+
+        """
+        params = {}
+        if service_code_list is not None:
+            params['serviceCodeList'] = service_code_list
+        if language is not None:
+            params['language'] = language
+        return self.make_request(action='DescribeServices',
+                                 body=json.dumps(params))
+
+    def describe_severity_levels(self, language=None):
+        """
+        This action returns the list of severity levels that you can
+        assign to an AWS Support case. The severity level for a case
+        is also a field in the `CaseDetails`_ data type included in
+        any `CreateCase`_ request.
+
+        :type language: string
+        :param language:
+
+        """
+        params = {}
+        if language is not None:
+            params['language'] = language
+        return self.make_request(action='DescribeSeverityLevels',
+                                 body=json.dumps(params))
+
+    def resolve_case(self, case_id=None):
+        """
+        Takes a CaseId and returns the initial state of the case along
+        with the state of the case after the call to `ResolveCase`_
+        completes.
+
+        :type case_id: string
+        :param case_id:
+
+        """
+        params = {}
+        if case_id is not None:
+            params['caseId'] = case_id
+        return self.make_request(action='ResolveCase',
+                                 body=json.dumps(params))
+
+    def describe_trusted_advisor_check_refresh_statuses(self, check_ids):
+        """
+        Returns the status of all refresh requests for Trusted Advisor
+        checks made using `RefreshTrustedAdvisorCheck`_.
+
+        :type check_ids: list
+        :param check_ids:
+
+        """
+        params = {'checkIds': check_ids, }
+        return self.make_request(action='DescribeTrustedAdvisorCheckRefreshStatuses',
+                                 body=json.dumps(params))
+
+    def describe_trusted_advisor_check_result(self, check_id, language=None):
+        """
+        This action responds with the results of a Trusted Advisor
+        check. Once you have obtained the list of available Trusted
+        Advisor checks by calling `DescribeTrustedAdvisorChecks`_, you
+        specify the CheckId for the check you want to retrieve from
+        AWS Support.
+
+        The response for this action contains a JSON-formatted
+        `TrustedAdvisorCheckResult`_ object, which is a container for the
+        following three objects:
+
+
+        #. `TrustedAdvisorCategorySpecificSummary`_
+        #. `TrustedAdvisorResourceDetail`_
+        #. `TrustedAdvisorResourcesSummary`_
+
+
+        In addition, the response contains the following fields:
+
+
+        #. **Status**. Overall status of the check.
+        #. **Timestamp**. Time at which Trusted Advisor last ran the
+           check.
+        #. **CheckId**. Unique identifier for the specific check
+           returned by the request.
+
+        :type check_id: string
+        :param check_id:
+
+        :type language: string
+        :param language:
+
+        """
+        params = {'checkId': check_id, }
+        if language is not None:
+            params['language'] = language
+        return self.make_request(action='DescribeTrustedAdvisorCheckResult',
+                                 body=json.dumps(params))
+
+    def describe_trusted_advisor_check_summaries(self, check_ids):
+        """
+        This action enables you to get the latest summaries for
+        Trusted Advisor checks that you specify in your request. You
+        submit the list of Trusted Advisor checks for which you want
+        summaries. You obtain these CheckIds by submitting a
+        `DescribeTrustedAdvisorChecks`_ request.
+
+        The response body contains an array of
+        `TrustedAdvisorCheckSummary`_ objects.
+
+        :type check_ids: list
+        :param check_ids:
+
+        """
+        params = {'checkIds': check_ids, }
+        return self.make_request(action='DescribeTrustedAdvisorCheckSummaries',
+                                 body=json.dumps(params))
+
+    def describe_trusted_advisor_checks(self, language):
+        """
+        This action enables you to get a list of the available Trusted
+        Advisor checks. You must specify a language code. English
+        ("en") and Japanese ("jp") are currently supported. The
+        response contains a list of `TrustedAdvisorCheckDescription`_
+        objects.
+
+        :type language: string
+        :param language:
+
+        """
+        params = {'language': language, }
+        return self.make_request(action='DescribeTrustedAdvisorChecks',
+                                 body=json.dumps(params))
+
+    def refresh_trusted_advisor_check(self, check_id):
+        """
+        This action enables you to query the service to request a
+        refresh for a specific Trusted Advisor check. Your request
+        body contains a CheckId for which you are querying. The
+        response body contains a `RefreshTrustedAdvisorCheckResult`_
+        object containing Status and TimeUntilNextRefresh fields.
+
+        :type check_id: string
+        :param check_id:
+
+        """
+        params = {'checkId': check_id, }
+        return self.make_request(action='RefreshTrustedAdvisorCheck',
+                                 body=json.dumps(params))
+
+    def make_request(self, action, body):
+        headers = {
+            'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action),
+            'Host': self.region.endpoint,
+            'Content-Type': 'application/x-amz-json-1.1',
+            'Content-Length': str(len(body)),
+        }
+        http_request = self.build_base_http_request(
+            method='POST', path='/', auth_path='/', params={},
+            headers=headers, data=body)
+        response = self._mexe(http_request, sender=None,
+                              override_num_retries=10)
+        response_body = response.read()
+        boto.log.debug(response_body)
+        if response.status == 200:
+            if response_body:
+                return json.loads(response_body)
+        else:
+            json_body = json.loads(response_body)
+            fault_name = json_body.get('__type', None)
+            exception_class = self._faults.get(fault_name, self.ResponseError)
+            raise exception_class(response.status, response.reason,
+                                  body=json_body)
+
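A rough sketch of the intended call flow for the connection above; the
service, category and severity codes are placeholders that a real caller would
first obtain from describe_services() and describe_severity_levels(), and
credentials are assumed to be configured:

    from boto.support.layer1 import SupportConnection

    conn = SupportConnection()  # defaults to support.us-east-1.amazonaws.com

    print(conn.describe_services())
    print(conn.describe_severity_levels())

    case = conn.create_case(
        subject='Instance will not boot',
        service_code='amazon-ec2',          # placeholder code
        category_code='instance-issue',     # placeholder code
        communication_body='Describe the problem here.',
        severity_code='low')                # placeholder code
    conn.add_communication_to_case('More details.',
                                   case_id=case.get('caseId'))

    # Trusted Advisor: the parsed JSON response lists available checks.
    print(conn.describe_trusted_advisor_checks('en'))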
diff --git a/boto/swf/__init__.py b/boto/swf/__init__.py
index 34abc1d..5eab6bc 100644
--- a/boto/swf/__init__.py
+++ b/boto/swf/__init__.py
@@ -25,19 +25,28 @@
 from boto.ec2.regioninfo import RegionInfo
 import boto.swf.layer1
 
+REGION_ENDPOINTS = {
+    'us-east-1': 'swf.us-east-1.amazonaws.com',
+    'us-west-1': 'swf.us-west-1.amazonaws.com',
+    'us-west-2': 'swf.us-west-2.amazonaws.com',
+    'sa-east-1': 'swf.sa-east-1.amazonaws.com',
+    'eu-west-1': 'swf.eu-west-1.amazonaws.com',
+    'ap-northeast-1': 'swf.ap-northeast-1.amazonaws.com',
+    'ap-southeast-1': 'swf.ap-southeast-1.amazonaws.com',
+    'ap-southeast-2': 'swf.ap-southeast-2.amazonaws.com',
+}
 
-def regions():
+
+def regions(**kw_params):
     """
     Get all available regions for the Amazon Simple Workflow service.
 
     :rtype: list
     :return: A list of :class:`boto.regioninfo.RegionInfo`
     """
-    import boto.dynamodb.layer2
-    return [RegionInfo(name='us-east-1',
-                       endpoint='swf.us-east-1.amazonaws.com',
-                       connection_cls=boto.swf.layer1.Layer1),
-            ]
+    return [RegionInfo(name=region_name, endpoint=REGION_ENDPOINTS[region_name],
+                       connection_cls=boto.swf.layer1.Layer1)
+            for region_name in REGION_ENDPOINTS]
 
 
 def connect_to_region(region_name, **kw_params):
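With the endpoint table above, every listed region is now reachable through
the usual helpers; a brief sketch, assuming configured credentials:

    import boto.swf

    # All eight regions from REGION_ENDPOINTS are returned.
    for region in boto.swf.regions():
        print('%s %s' % (region.name, region.endpoint))

    # Connecting outside us-east-1, which the old hard-coded list did not
    # allow.
    conn = boto.swf.connect_to_region('eu-west-1')
    print(conn.list_domains('REGISTERED'))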
diff --git a/boto/swf/layer1.py b/boto/swf/layer1.py
index f11963b..8e1af90 100644
--- a/boto/swf/layer1.py
+++ b/boto/swf/layer1.py
@@ -22,17 +22,14 @@
 # IN THE SOFTWARE.
 #
 
+import time
+
 import boto
 from boto.connection import AWSAuthConnection
 from boto.provider import Provider
 from boto.exception import SWFResponseError
 from boto.swf import exceptions as swf_exceptions
-
-import time
-try:
-    import simplejson as json
-except ImportError:
-    import json
+from boto.compat import json
 
 #
 # To get full debug output, uncomment the following line and set the
@@ -90,6 +87,36 @@
     def _required_auth_capability(self):
         return ['hmac-v3-http']
 
+    @classmethod
+    def _normalize_request_dict(cls, data):
+        """
+        This class method recurses through the request data dictionary and
+        removes any default values (entries that are None or empty dicts).
+
+        :type data: dict
+        :param data: Specifies request parameters with default values to be removed.
+        """
+        for item in data.keys():
+            if isinstance(data[item], dict):
+                cls._normalize_request_dict(data[item])
+            if data[item] in (None, {}):
+                del data[item]
+
+    def json_request(self, action, data, object_hook=None):
+        """
+        This method wraps around make_request() to normalize and serialize the
+        dictionary with request parameters.
+
+        :type action: string
+        :param action: Specifies an SWF action.
+
+        :type data: dict
+        :param data: Specifies request parameters associated with the action.
+        """ 
+        self._normalize_request_dict(data)
+        json_input = json.dumps(data)
+        return self.make_request(action, json_input, object_hook)
+
     def make_request(self, action, body='', object_hook=None):
         """
         :raises: ``SWFResponseError`` if response status is not 200.
@@ -147,11 +174,11 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain, 'taskList': {'name': task_list}}
-        if identity:
-            data['identity'] = identity
-        json_input = json.dumps(data)
-        return self.make_request('PollForActivityTask', json_input)
+        return self.json_request('PollForActivityTask', {
+            'domain': domain, 
+            'taskList': {'name': task_list},
+            'identity': identity,
+        })
 
     def respond_activity_task_completed(self, task_token, result=None):
         """
@@ -168,11 +195,10 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'taskToken': task_token}
-        if result:
-            data['result'] = result
-        json_input = json.dumps(data)
-        return self.make_request('RespondActivityTaskCompleted', json_input)
+        return self.json_request('RespondActivityTaskCompleted', {
+            'taskToken': task_token,
+            'result': result,
+        })
 
     def respond_activity_task_failed(self, task_token,
                                      details=None, reason=None):
@@ -192,13 +218,11 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'taskToken': task_token}
-        if details:
-            data['details'] = details
-        if reason:
-            data['reason'] = reason
-        json_input = json.dumps(data)
-        return self.make_request('RespondActivityTaskFailed', json_input)
+        return self.json_request('RespondActivityTaskFailed', {
+            'taskToken': task_token,
+            'details': details,
+            'reason': reason,
+        })
 
     def respond_activity_task_canceled(self, task_token, details=None):
         """
@@ -215,12 +239,11 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'taskToken': task_token}
-        if details:
-            data['details'] = details
-        json_input = json.dumps(data)
-        return self.make_request('RespondActivityTaskCanceled', json_input)
-
+        return self.json_request('RespondActivityTaskCanceled', {
+            'taskToken': task_token,
+            'details': details,
+        })
+
     def record_activity_task_heartbeat(self, task_token, details=None):
         """
         Used by activity workers to report to the service that the
@@ -242,11 +265,10 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'taskToken': task_token}
-        if details:
-            data['details'] = details
-        json_input = json.dumps(data)
-        return self.make_request('RecordActivityTaskHeartbeat', json_input)
+        return self.json_request('RecordActivityTaskHeartbeat', {
+            'taskToken': task_token,
+            'details': details,
+        })
 
     # Actions related to Deciders
 
@@ -294,17 +316,14 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain, 'taskList': {'name': task_list}}
-        if identity:
-            data['identity'] = identity
-        if maximum_page_size:
-            data['maximumPageSize'] = maximum_page_size
-        if next_page_token:
-            data['nextPageToken'] = next_page_token
-        if reverse_order:
-            data['reverseOrder'] = 'true'
-        json_input = json.dumps(data)
-        return self.make_request('PollForDecisionTask', json_input)
+        return self.json_request('PollForDecisionTask', {
+            'domain': domain, 
+            'taskList': {'name': task_list},
+            'identity': identity,
+            'maximumPageSize': maximum_page_size,
+            'nextPageToken': next_page_token,
+            'reverseOrder': reverse_order,
+        })
 
     def respond_decision_task_completed(self, task_token,
                                         decisions=None,
@@ -329,13 +348,11 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'taskToken': task_token}
-        if decisions:
-            data['decisions'] = decisions
-        if execution_context:
-            data['executionContext'] = execution_context
-        json_input = json.dumps(data)
-        return self.make_request('RespondDecisionTaskCompleted', json_input)
+        return self.json_request('RespondDecisionTaskCompleted', {
+            'taskToken': task_token,
+            'decisions': decisions,
+            'executionContext': execution_context, 
+        })
 
     def request_cancel_workflow_execution(self, domain, workflow_id,
                                           run_id=None):
@@ -360,11 +377,11 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain, 'workflowId': workflow_id}
-        if run_id:
-            data['runId'] = run_id
-        json_input = json.dumps(data)
-        return self.make_request('RequestCancelWorkflowExecution', json_input)
+        return self.json_request('RequestCancelWorkflowExecution', {
+            'domain': domain, 
+            'workflowId': workflow_id,
+            'runId': run_id,
+        })
 
     def start_workflow_execution(self, domain, workflow_id,
                                  workflow_name, workflow_version,
@@ -447,23 +464,19 @@
             SWFWorkflowExecutionAlreadyStartedError, SWFLimitExceededError,
             SWFOperationNotPermittedError, DefaultUndefinedFault
         """
-        data = {'domain': domain, 'workflowId': workflow_id}
-        data['workflowType'] = {'name': workflow_name,
-                                'version': workflow_version}
-        if task_list:
-            data['taskList'] = {'name': task_list}
-        if child_policy:
-            data['childPolicy'] = child_policy
-        if execution_start_to_close_timeout:
-            data['executionStartToCloseTimeout'] = execution_start_to_close_timeout
-        if input:
-            data['input'] = input
-        if tag_list:
-            data['tagList'] = tag_list
-        if task_start_to_close_timeout:
-            data['taskStartToCloseTimeout'] = task_start_to_close_timeout
-        json_input = json.dumps(data)
-        return self.make_request('StartWorkflowExecution', json_input)
+        return self.json_request('StartWorkflowExecution', {
+            'domain': domain, 
+            'workflowId': workflow_id,
+            'workflowType': {'name': workflow_name,
+                             'version': workflow_version},
+            'taskList': {'name': task_list},
+            'childPolicy': child_policy,
+            'executionStartToCloseTimeout': execution_start_to_close_timeout,
+            'input': input,
+            'tagList': tag_list,
+            'taskStartToCloseTimeout': task_start_to_close_timeout,
+        })
 
     def signal_workflow_execution(self, domain, signal_name, workflow_id,
                                   input=None, run_id=None):
@@ -495,14 +508,13 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain, 'signalName': signal_name,
-                'workflowId': workflow_id}
-        if input:
-            data['input'] = input
-        if run_id:
-            data['runId'] = run_id
-        json_input = json.dumps(data)
-        return self.make_request('SignalWorkflowExecution', json_input)
+        return self.json_request('SignalWorkflowExecution', {
+            'domain': domain, 
+            'signalName': signal_name,
+            'workflowId': workflow_id,
+            'input': input,
+            'runId': run_id,
+        })
 
     def terminate_workflow_execution(self, domain, workflow_id,
                                      child_policy=None, details=None,
@@ -554,17 +566,14 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain, 'workflowId': workflow_id}
-        if child_policy:
-            data['childPolicy'] = child_policy
-        if details:
-            data['details'] = details
-        if reason:
-            data['reason'] = reason
-        if run_id:
-            data['runId'] = run_id
-        json_input = json.dumps(data)
-        return self.make_request('TerminateWorkflowExecution', json_input)
+        return self.json_request('TerminateWorkflowExecution', {
+            'domain': domain, 
+            'workflowId': workflow_id,
+            'childPolicy': child_policy,
+            'details': details,
+            'reason': reason,
+            'runId': run_id,
+        })
 
 # Actions related to Administration
 
@@ -637,23 +646,17 @@
         :raises: SWFTypeAlreadyExistsError, SWFLimitExceededError,
             UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain,
-                'name': name,
-                'version': version}
-        if task_list:
-            data['defaultTaskList'] = {'name': task_list}
-        if default_task_heartbeat_timeout:
-            data['defaultTaskHeartbeatTimeout'] = default_task_heartbeat_timeout
-        if default_task_schedule_to_close_timeout:
-            data['defaultTaskScheduleToCloseTimeout'] = default_task_schedule_to_close_timeout
-        if default_task_schedule_to_start_timeout:
-            data['defaultTaskScheduleToStartTimeout'] = default_task_schedule_to_start_timeout
-        if default_task_start_to_close_timeout:
-            data['defaultTaskStartToCloseTimeout'] = default_task_start_to_close_timeout
-        if description:
-            data['description'] = description
-        json_input = json.dumps(data)
-        return self.make_request('RegisterActivityType', json_input)
+        return self.json_request('RegisterActivityType', {
+            'domain': domain,
+            'name': name,
+            'version': version,
+            'defaultTaskList': {'name': task_list},
+            'defaultTaskHeartbeatTimeout': default_task_heartbeat_timeout,
+            'defaultTaskScheduleToCloseTimeout': default_task_schedule_to_close_timeout,
+            'defaultTaskScheduleToStartTimeout': default_task_schedule_to_start_timeout,
+            'defaultTaskStartToCloseTimeout': default_task_start_to_close_timeout,
+            'description': description,
+        })
 
     def deprecate_activity_type(self, domain, activity_name, activity_version):
         """
@@ -674,12 +677,12 @@
         :raises: UnknownResourceFault, TypeDeprecatedFault,
             SWFOperationNotPermittedError
         """
-        data = {'domain': domain}
-        data['activityType'] = {'name': activity_name,
-                                'version': activity_version}
-        json_input = json.dumps(data)
-        return self.make_request('DeprecateActivityType', json_input)
-
+        return self.json_request('DeprecateActivityType', {
+            'domain': domain,
+            'activityType': {'name': activity_name,
+                             'version': activity_version}
+        })
+
 ## Workflow Management
 
     def register_workflow_type(self, domain, name, version,
@@ -752,20 +755,17 @@
         :raises: SWFTypeAlreadyExistsError, SWFLimitExceededError,
             UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain, 'name': name, 'version': version}
-        if task_list:
-            data['defaultTaskList'] = {'name': task_list}
-        if default_child_policy:
-            data['defaultChildPolicy'] = default_child_policy
-        if default_execution_start_to_close_timeout:
-            data['defaultExecutionStartToCloseTimeout'] = default_execution_start_to_close_timeout
-        if default_task_start_to_close_timeout:
-            data['defaultTaskStartToCloseTimeout'] = default_task_start_to_close_timeout
-        if description:
-            data['description'] = description
-        json_input = json.dumps(data)
-        return self.make_request('RegisterWorkflowType', json_input)
-
+        return self.json_request('RegisterWorkflowType', {
+            'domain': domain, 
+            'name': name, 
+            'version': version,
+            'defaultTaskList':  {'name': task_list},
+            'defaultChildPolicy': default_child_policy,
+            'defaultExecutionStartToCloseTimeout': default_execution_start_to_close_timeout,
+            'defaultTaskStartToCloseTimeout': default_task_start_to_close_timeout,
+            'description': description,
+        })
+
     def deprecate_workflow_type(self, domain, workflow_name, workflow_version):
         """
         Deprecates the specified workflow type. After a workflow type
@@ -787,11 +787,11 @@
         :raises: UnknownResourceFault, TypeDeprecatedFault,
             SWFOperationNotPermittedError
         """
-        data = {'domain': domain}
-        data['workflowType'] = {'name': workflow_name,
-                                'version': workflow_version}
-        json_input = json.dumps(data)
-        return self.make_request('DeprecateWorkflowType', json_input)
+        return self.json_request('DeprecateWorkflowType', {
+            'domain': domain,
+            'workflowType': {'name': workflow_name,
+                             'version': workflow_version},
+        })
 
 ## Domain Management
 
@@ -820,12 +820,11 @@
         :raises: SWFDomainAlreadyExistsError, SWFLimitExceededError,
             SWFOperationNotPermittedError
         """
-        data = {'name': name,
-                'workflowExecutionRetentionPeriodInDays': workflow_execution_retention_period_in_days}
-        if description:
-            data['description'] = description
-        json_input = json.dumps(data)
-        return self.make_request('RegisterDomain', json_input)
+        return self.json_request('RegisterDomain', {
+            'name': name,
+            'workflowExecutionRetentionPeriodInDays': workflow_execution_retention_period_in_days,
+            'description': description,
+        })
 
     def deprecate_domain(self, name):
         """
@@ -843,9 +842,7 @@
         :raises: UnknownResourceFault, DomainDeprecatedFault,
             SWFOperationNotPermittedError
         """
-        data = {'name': name}
-        json_input = json.dumps(data)
-        return self.make_request('DeprecateDomain', json_input)
+        return self.json_request('DeprecateDomain', {'name': name})
 
 # Visibility Actions
 
@@ -900,18 +897,15 @@
 
         :raises: SWFOperationNotPermittedError, UnknownResourceFault
         """
-        data = {'domain': domain, 'registrationStatus': registration_status}
-        if name:
-            data['name'] = name
-        if maximum_page_size:
-            data['maximumPageSize'] = maximum_page_size
-        if next_page_token:
-            data['nextPageToken'] = next_page_token
-        if reverse_order:
-            data['reverseOrder'] = 'true'
-        json_input = json.dumps(data)
-        return self.make_request('ListActivityTypes', json_input)
-
+        return self.json_request('ListActivityTypes', {
+            'domain': domain,
+            'name': name,
+            'registrationStatus': registration_status,
+            'maximumPageSize': maximum_page_size,
+            'nextPageToken': next_page_token,
+            'reverseOrder': reverse_order,
+        })
+
     def describe_activity_type(self, domain, activity_name, activity_version):
         """
         Returns information about the specified activity type. This
@@ -930,11 +924,11 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain}
-        data['activityType'] = {'name': activity_name,
-                                'version': activity_version}
-        json_input = json.dumps(data)
-        return self.make_request('DescribeActivityType', json_input)
+        return self.json_request('DescribeActivityType', {
+            'domain': domain,
+            'activityType': {'name': activity_name,
+                             'version': activity_version}
+        })
 
 ## Workflow Visibility
 
@@ -980,17 +974,14 @@
 
         :raises: SWFOperationNotPermittedError, UnknownResourceFault
         """
-        data = {'domain': domain, 'registrationStatus': registration_status}
-        if maximum_page_size:
-            data['maximumPageSize'] = maximum_page_size
-        if name:
-            data['name'] = name
-        if next_page_token:
-            data['nextPageToken'] = next_page_token
-        if reverse_order:
-            data['reverseOrder'] = 'true'
-        json_input = json.dumps(data)
-        return self.make_request('ListWorkflowTypes', json_input)
+        return self.json_request('ListWorkflowTypes', {
+            'domain': domain, 
+            'name': name,
+            'registrationStatus': registration_status,
+            'maximumPageSize': maximum_page_size,
+            'nextPageToken': next_page_token,
+            'reverseOrder': reverse_order,
+        })
 
     def describe_workflow_type(self, domain, workflow_name, workflow_version):
         """
@@ -1011,11 +1002,11 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain}
-        data['workflowType'] = {'name': workflow_name,
-                                'version': workflow_version}
-        json_input = json.dumps(data)
-        return self.make_request('DescribeWorkflowType', json_input)
+        return self.json_request('DescribeWorkflowType', {
+            'domain': domain,
+            'workflowType': {'name': workflow_name,
+                             'version': workflow_version}
+        })
 
 ## Workflow Execution Visibility
 
@@ -1038,10 +1029,11 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain}
-        data['execution'] = {'runId': run_id, 'workflowId': workflow_id}
-        json_input = json.dumps(data)
-        return self.make_request('DescribeWorkflowExecution', json_input)
+        return self.json_request('DescribeWorkflowExecution', {
+            'domain': domain,
+            'execution': {'runId': run_id, 
+                          'workflowId': workflow_id},
+        })
 
     def get_workflow_execution_history(self, domain, run_id, workflow_id,
                                        maximum_page_size=None,
@@ -1086,17 +1078,15 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain}
-        data['execution'] = {'runId': run_id, 'workflowId': workflow_id}
-        if maximum_page_size:
-            data['maximumPageSize'] = maximum_page_size
-        if next_page_token:
-            data['nextPageToken'] = next_page_token
-        if reverse_order:
-            data['reverseOrder'] = 'true'
-        json_input = json.dumps(data)
-        return self.make_request('GetWorkflowExecutionHistory', json_input)
-
+        return self.json_request('GetWorkflowExecutionHistory', {
+            'domain': domain,
+            'execution': {'runId': run_id, 
+                          'workflowId': workflow_id},
+            'maximumPageSize': maximum_page_size,
+            'nextPageToken': next_page_token,
+            'reverseOrder': reverse_order,
+        })
+
     def count_open_workflow_executions(self, domain, latest_date, oldest_date,
                                        tag=None,
                                        workflow_id=None,
@@ -1138,22 +1128,19 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain}
-        data['startTimeFilter'] = {'oldestDate': oldest_date,
-                                   'latestDate': latest_date}
-        if workflow_name and workflow_version:
-            data['typeFilter'] = {'name': workflow_name,
-                                  'version': workflow_version}
-        if workflow_id:
-            data['executionFilter'] = {'workflowId': workflow_id}
-        if tag:
-            data['tagFilter'] = {'tag': tag}
-        json_input = json.dumps(data)
-        return self.make_request('CountOpenWorkflowExecutions', json_input)
+        return self.json_request('CountOpenWorkflowExecutions', {
+            'domain': domain,
+            'startTimeFilter': {'oldestDate': oldest_date,
+                                'latestDate': latest_date},
+            'typeFilter': {'name': workflow_name,
+                           'version': workflow_version},
+            'executionFilter': {'workflowId': workflow_id},
+            'tagFilter': {'tag': tag},
+        })
 
     def list_open_workflow_executions(self, domain,
+                                      oldest_date,
                                       latest_date=None,
-                                      oldest_date=None,
                                       tag=None,
                                       workflow_id=None,
                                       workflow_name=None,
@@ -1217,25 +1204,18 @@
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
 
         """
-        data = {'domain': domain}
-        data['startTimeFilter'] = {'oldestDate': oldest_date,
-                                   'latestDate': latest_date}
-        if tag:
-            data['tagFilter'] = {'tag': tag}
-        if workflow_name and workflow_version:
-            data['typeFilter'] = {'name': workflow_name,
-                                  'version': workflow_version}
-        if workflow_id:
-            data['executionFilter'] = {'workflowId': workflow_id}
-
-        if maximum_page_size:
-            data['maximumPageSize'] = maximum_page_size
-        if next_page_token:
-            data['nextPageToken'] = next_page_token
-        if reverse_order:
-            data['reverseOrder'] = 'true'
-        json_input = json.dumps(data)
-        return self.make_request('ListOpenWorkflowExecutions', json_input)
+        return self.json_request('ListOpenWorkflowExecutions', {
+            'domain': domain,
+            'startTimeFilter': {'oldestDate': oldest_date,
+                                'latestDate': latest_date},
+            'tagFilter': {'tag': tag},
+            'typeFilter': {'name': workflow_name,
+                           'version': workflow_version},
+            'executionFilter': {'workflowId': workflow_id},
+            'maximumPageSize': maximum_page_size,
+            'nextPageToken': next_page_token,
+            'reverseOrder': reverse_order,
+        })
 
     def count_closed_workflow_executions(self, domain,
                                          start_latest_date=None,
@@ -1309,25 +1289,18 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain}
-        if start_latest_date and start_oldest_date:
-            data['startTimeFilter'] = {'oldestDate': start_oldest_date,
-                                       'latestDate': start_latest_date}
-        if close_latest_date and close_oldest_date:
-            data['closeTimeFilter'] = {'oldestDate': close_oldest_date,
-                                       'latestDate': close_latest_date}
-        if close_status:
-            data['closeStatusFilter'] = {'status': close_status}
-        if tag:
-            data['tagFilter'] = {'tag': tag}
-        if workflow_name and workflow_version:
-            data['typeFilter'] = {'name': workflow_name,
-                                  'version': workflow_version}
-        if workflow_id:
-            data['executionFilter'] = {'workflowId': workflow_id}
-
-        json_input = json.dumps(data)
-        return self.make_request('CountClosedWorkflowExecutions', json_input)
+        return self.json_request('CountClosedWorkflowExecutions', {
+            'domain': domain,
+            'startTimeFilter': {'oldestDate': start_oldest_date,
+                                'latestDate': start_latest_date},
+            'closeTimeFilter': {'oldestDate': close_oldest_date,
+                                'latestDate': close_latest_date},
+            'closeStatusFilter': {'status': close_status},
+            'tagFilter': {'tag': tag},
+            'typeFilter': {'name': workflow_name,
+                           'version': workflow_version},
+            'executionFilter': {'workflowId': workflow_id}
+        })
 
     def list_closed_workflow_executions(self, domain,
                                         start_latest_date=None,
@@ -1422,32 +1395,21 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain}
-        if start_latest_date and start_oldest_date:
-            data['startTimeFilter'] = {'oldestDate': start_oldest_date,
-                                       'latestDate': start_latest_date}
-        if close_latest_date and close_oldest_date:
-            data['closeTimeFilter'] = {'oldestDate': close_oldest_date,
-                                       'latestDate': close_latest_date}
-
-        if workflow_id:
-            data['executionFilter'] = {'workflowId': workflow_id}
-
-        if close_status:
-            data['closeStatusFilter'] = {'status': close_status}
-        if tag:
-            data['tagFilter'] = {'tag': tag}
-        if workflow_name and workflow_version:
-            data['typeFilter'] = {'name': workflow_name,
-                                  'version': workflow_version}
-        if maximum_page_size:
-            data['maximumPageSize'] = maximum_page_size
-        if next_page_token:
-            data['nextPageToken'] = next_page_token
-        if reverse_order:
-            data['reverseOrder'] = 'true'
-        json_input = json.dumps(data)
-        return self.make_request('ListClosedWorkflowExecutions', json_input)
+        return self.json_request('ListClosedWorkflowExecutions', {
+            'domain': domain,
+            'startTimeFilter': {'oldestDate': start_oldest_date,
+                                'latestDate': start_latest_date},
+            'closeTimeFilter': {'oldestDate': close_oldest_date,
+                                'latestDate': close_latest_date},
+            'executionFilter': {'workflowId': workflow_id},
+            'closeStatusFilter': {'status': close_status},
+            'tagFilter': {'tag': tag},
+            'typeFilter': {'name': workflow_name,
+                           'version': workflow_version},
+            'maximumPageSize': maximum_page_size,
+            'nextPageToken': next_page_token,
+            'reverseOrder': reverse_order,
+        })
 
 ## Domain Visibility
 
@@ -1486,16 +1448,13 @@
 
         :raises: SWFOperationNotPermittedError
         """
-        data = {'registrationStatus': registration_status}
-        if maximum_page_size:
-            data['maximumPageSize'] = maximum_page_size
-        if next_page_token:
-            data['nextPageToken'] = next_page_token
-        if reverse_order:
-            data['reverseOrder'] = 'true'
-        json_input = json.dumps(data)
-        return self.make_request('ListDomains', json_input)
-
+        return self.json_request('ListDomains', {
+            'registrationStatus': registration_status,
+            'maximumPageSize': maximum_page_size,
+            'nextPageToken': next_page_token,
+            'reverseOrder': reverse_order,
+        })
+
     def describe_domain(self, name):
         """
         Returns information about the specified domain including
@@ -1506,9 +1465,7 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'name': name}
-        json_input = json.dumps(data)
-        return self.make_request('DescribeDomain', json_input)
+        return self.json_request('DescribeDomain', {'name': name})
 
 ## Task List Visibility
 
@@ -1528,9 +1485,10 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain, 'taskList': {'name': task_list}}
-        json_input = json.dumps(data)
-        return self.make_request('CountPendingDecisionTasks', json_input)
+        return self.json_request('CountPendingDecisionTasks', {
+            'domain': domain, 
+            'taskList': {'name': task_list}
+        })
 
     def count_pending_activity_tasks(self, domain, task_list):
         """
@@ -1548,6 +1506,7 @@
 
         :raises: UnknownResourceFault, SWFOperationNotPermittedError
         """
-        data = {'domain': domain, 'taskList': {'name': task_list}}
-        json_input = json.dumps(data)
-        return self.make_request('CountPendingActivityTasks', json_input)
+        return self.json_request('CountPendingActivityTasks', {
+            'domain': domain, 
+            'taskList': {'name': task_list}
+        })
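The refactoring throughout this file funnels every action through
json_request(), which strips parameters left at their defaults before
serializing. A standalone sketch of that normalization (no credentials needed,
since _normalize_request_dict() is a classmethod):

    from boto.swf.layer1 import Layer1

    data = {
        'domain': 'example-domain',      # placeholder
        'taskList': {'name': 'main'},
        'identity': None,                # default value, will be dropped
        'maximumPageSize': None,         # default value, will be dropped
    }
    Layer1._normalize_request_dict(data)
    print(data)  # only 'domain' and 'taskList' remain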
diff --git a/boto/swf/layer2.py b/boto/swf/layer2.py
new file mode 100644
index 0000000..cb3298e
--- /dev/null
+++ b/boto/swf/layer2.py
@@ -0,0 +1,342 @@
+"""Object-oriented interface to SWF wrapping boto.swf.layer1.Layer1"""
+
+import time
+from functools import wraps
+from boto.swf.layer1 import Layer1
+from boto.swf.layer1_decisions import Layer1Decisions
+
+DEFAULT_CREDENTIALS = {
+    'aws_access_key_id': None,
+    'aws_secret_access_key': None
+}
+
+def set_default_credentials(aws_access_key_id, aws_secret_access_key):
+    """Set default credentials."""
+    DEFAULT_CREDENTIALS.update({
+        'aws_access_key_id': aws_access_key_id,
+        'aws_secret_access_key': aws_secret_access_key,
+    })
+
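A minimal sketch of the intent here, with placeholder keys and a hypothetical
domain name: credentials set once at module level are picked up by every
object built on SWFBase below (Domain, ActivityType, WorkflowType, ...):

    import boto.swf.layer2 as swf

    swf.set_default_credentials('AKIA...', 'secret-key')  # placeholders
    dom = swf.Domain(name='example-domain', retention=7)  # hypothetical name
    dom.register()
    print(dom.describe())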
+class SWFBase(object):
+
+    """SWFBase."""
+
+    name = None
+    domain = None
+    aws_access_key_id = None
+    aws_secret_access_key = None
+
+    def __init__(self, **kwargs):
+        """Construct an SWF object."""
+        # Set default credentials.
+        for credkey in ('aws_access_key_id', 'aws_secret_access_key'):
+            if DEFAULT_CREDENTIALS.get(credkey):
+                setattr(self, credkey, DEFAULT_CREDENTIALS[credkey])
+        # Override attributes with keyword args.
+        for kwarg in kwargs:
+            setattr(self, kwarg, kwargs[kwarg])
+
+        self._swf = Layer1(self.aws_access_key_id, 
+                              self.aws_secret_access_key)
+
+    def __repr__(self):
+        """Generate string representation."""
+        rep_str = str(self.name)
+        if hasattr(self, 'version'):
+            rep_str += '-' + str(getattr(self, 'version'))
+        return '<%s %r at 0x%x>' % (self.__class__.__name__, rep_str, id(self))
+
+class Domain(SWFBase):
+
+    """Simple Workflow Domain."""
+
+    description = None
+    retention = 30
+
+    @wraps(Layer1.describe_domain)
+    def describe(self):
+        """DescribeDomain."""
+        return self._swf.describe_domain(self.name)
+
+    @wraps(Layer1.deprecate_domain)
+    def deprecate(self):
+        """DeprecateDomain"""
+        self._swf.deprecate_domain(self.name)
+
+    @wraps(Layer1.register_domain)
+    def register(self):
+        """RegisterDomain."""
+        self._swf.register_domain(self.name, str(self.retention), 
+                                  self.description)
+
+    @wraps(Layer1.list_activity_types)
+    def activities(self, status='REGISTERED', **kwargs):
+        """ListActivityTypes."""
+        act_types = self._swf.list_activity_types(self.name, status, **kwargs)
+        act_objects = []
+        for act_args in act_types['typeInfos']:
+            act_ident = act_args['activityType']
+            del act_args['activityType']
+            act_args.update(act_ident)
+            act_args.update({
+                'aws_access_key_id': self.aws_access_key_id,
+                'aws_secret_access_key': self.aws_secret_access_key,
+                'domain': self.name,
+            })
+            act_objects.append(ActivityType(**act_args))
+        return act_objects
+
+    @wraps(Layer1.list_workflow_types)
+    def workflows(self, status='REGISTERED', **kwargs):
+        """ListWorkflowTypes."""
+        wf_types = self._swf.list_workflow_types(self.name, status, **kwargs)
+        wf_objects = []
+        for wf_args in wf_types['typeInfos']:
+            wf_ident = wf_args['workflowType']
+            del wf_args['workflowType']
+            wf_args.update(wf_ident)
+            wf_args.update({
+                'aws_access_key_id': self.aws_access_key_id,
+                'aws_secret_access_key': self.aws_secret_access_key,
+                'domain': self.name,
+            })
+            
+            wf_objects.append(WorkflowType(**wf_args))
+        return wf_objects
+
+    def executions(self, closed=False, **kwargs):
+        """List list open/closed executions.
+
+        For more info, try:
+        >>> help(boto.swf.layer1.Layer1.list_closed_workflow_executions)
+        >>> help(boto.swf.layer1.Layer1.list_open_workflow_executions)
+        """
+        if closed:
+            executions = self._swf.list_closed_workflow_executions(self.name,
+                                                                   **kwargs)
+        else:
+            if 'oldest_date' not in kwargs:
+                # Last 24 hours.
+                kwargs['oldest_date'] = time.time() - (3600 * 24)
+            executions = self._swf.list_open_workflow_executions(self.name, 
+                                                                 **kwargs)
+        exe_objects = []
+        for exe_args in executions['executionInfos']:
+            for nested_key in ('execution', 'workflowType'):
+                nested_dict = exe_args[nested_key]
+                del exe_args[nested_key]
+                exe_args.update(nested_dict)
+            
+            exe_args.update({
+                'aws_access_key_id': self.aws_access_key_id,
+                'aws_secret_access_key': self.aws_secret_access_key,
+                'domain': self.name,
+            })
+            
+            exe_objects.append(WorkflowExecution(**exe_args))
+        return exe_objects
+
+    @wraps(Layer1.count_pending_activity_tasks)
+    def count_pending_activity_tasks(self, task_list):
+        """CountPendingActivityTasks."""
+        return self._swf.count_pending_activity_tasks(self.name, task_list)
+
+    @wraps(Layer1.count_pending_decision_tasks)
+    def count_pending_decision_tasks(self, task_list):
+        """CountPendingDecisionTasks."""
+        return self._swf.count_pending_decision_tasks(self.name, task_list)
+ 
+
+class Actor(SWFBase):
+
+    """Simple Workflow Actor interface."""
+
+    task_list = None
+    last_tasktoken = None
+    domain = None
+
+    def run(self):
+        """To be overloaded by subclasses."""
+        raise NotImplementedError()
+
+class ActivityWorker(Actor):
+
+    """ActivityWorker."""
+
+    @wraps(Layer1.respond_activity_task_canceled)
+    def cancel(self, task_token=None, details=None):
+        """RespondActivityTaskCanceled."""
+        if task_token is None:
+            task_token = self.last_tasktoken
+        return self._swf.respond_activity_task_canceled(task_token, details)
+
+    @wraps(Layer1.respond_activity_task_completed)
+    def complete(self, task_token=None, result=None):
+        """RespondActivityTaskCompleted."""
+        if task_token is None:
+            task_token = self.last_tasktoken
+        return self._swf.respond_activity_task_completed(task_token, result)
+
+    @wraps(Layer1.respond_activity_task_failed)
+    def fail(self, task_token=None, details=None, reason=None):
+        """RespondActivityTaskFailed."""
+        if task_token is None:
+            task_token = self.last_tasktoken
+        return self._swf.respond_activity_task_failed(task_token, details,
+                                                      reason)
+
+    @wraps(Layer1.record_activity_task_heartbeat)
+    def heartbeat(self, task_token=None, details=None):
+        """RecordActivityTaskHeartbeat."""
+        if task_token is None:
+            task_token = self.last_tasktoken
+        return self._swf.record_activity_task_heartbeat(task_token, details)
+
+    @wraps(Layer1.poll_for_activity_task)
+    def poll(self, **kwargs):
+        """PollForActivityTask."""
+        task = self._swf.poll_for_activity_task(self.domain, self.task_list,
+                                                **kwargs)
+        self.last_tasktoken = task.get('taskToken')
+        return task
+
+class Decider(Actor):
+
+    """Simple Workflow Decider."""
+
+    @wraps(Layer1.respond_decision_task_completed)
+    def complete(self, task_token=None, decisions=None, **kwargs):
+        """RespondDecisionTaskCompleted."""
+        if isinstance(decisions, Layer1Decisions):
+            # Extract decision list from a Layer1Decisions instance.
+            decisions = decisions._data
+        if task_token is None:
+            task_token = self.last_tasktoken
+        return self._swf.respond_decision_task_completed(task_token, decisions,
+                                                         **kwargs)
+
+    @wraps(Layer1.poll_for_decision_task)
+    def poll(self, **kwargs):
+        """PollForDecisionTask."""
+        result = self._swf.poll_for_decision_task(self.domain, self.task_list,
+                                                  **kwargs)
+        # Record task token.
+        self.last_tasktoken = result.get('taskToken')
+        # Record the last event.
+        return result
+
+class WorkflowType(SWFBase):
+
+    """WorkflowType."""
+
+    version = None
+    task_list = None
+    child_policy = 'TERMINATE'
+
+    @wraps(Layer1.describe_workflow_type)
+    def describe(self):
+        """DescribeWorkflowType."""
+        return self._swf.describe_workflow_type(self.domain, self.name,
+                                                self.version)
+
+    @wraps(Layer1.register_workflow_type)
+    def register(self, **kwargs):
+        """RegisterWorkflowType."""
+        args = {
+            'default_execution_start_to_close_timeout': '3600',
+            'default_task_start_to_close_timeout': '300',
+            'default_child_policy': 'TERMINATE',
+        }
+        args.update(kwargs)
+        self._swf.register_workflow_type(self.domain, self.name, self.version,
+                                         **args)
+
+    @wraps(Layer1.deprecate_workflow_type)
+    def deprecate(self):
+        """DeprecateWorkflowType."""
+        self._swf.deprecate_workflow_type(self.domain, self.name, self.version)
+    
+    @wraps(Layer1.start_workflow_execution)
+    def start(self, **kwargs):
+        """StartWorkflowExecution."""
+        if 'workflow_id' in kwargs:
+            workflow_id = kwargs['workflow_id']
+            del kwargs['workflow_id']
+        else:
+            workflow_id = '%s-%s-%i' % (self.name, self.version, time.time())
+
+        for def_attr in ('task_list', 'child_policy'):
+            kwargs[def_attr] = kwargs.get(def_attr, getattr(self, def_attr))
+        run_id = self._swf.start_workflow_execution(self.domain, workflow_id, 
+                                    self.name, self.version, **kwargs)['runId']
+        return WorkflowExecution(name=self.name, version=self.version,
+               runId=run_id, domain=self.domain, workflowId=workflow_id,
+               aws_access_key_id=self.aws_access_key_id,
+               aws_secret_access_key=self.aws_secret_access_key)
+
+class WorkflowExecution(SWFBase):
+
+    """WorkflowExecution."""
+
+    workflowId = None
+    runId = None
+
+    @wraps(Layer1.signal_workflow_execution)
+    def signal(self, signame, **kwargs):
+        """SignalWorkflowExecution."""
+        self._swf.signal_workflow_execution(self.domain, signame, 
+                                            self.workflowId, **kwargs)
+
+    @wraps(Layer1.terminate_workflow_execution)
+    def terminate(self, **kwargs):
+        """TerminateWorkflowExecution (p. 103)."""
+        return self._swf.terminate_workflow_execution(self.domain, 
+                                        self.workflowId, **kwargs)
+
+    @wraps(Layer1.get_workflow_execution_history)
+    def history(self, **kwargs):
+        """GetWorkflowExecutionHistory."""
+        return self._swf.get_workflow_execution_history(self.domain, self.runId,
+                                            self.workflowId, **kwargs)['events']
+
+    @wraps(Layer1.describe_workflow_execution)
+    def describe(self):
+        """DescribeWorkflowExecution."""
+        return self._swf.describe_workflow_execution(self.domain, self.runId,
+                                                             self.workflowId)
+
+    @wraps(Layer1.request_cancel_workflow_execution)
+    def request_cancel(self):
+        """RequestCancelWorkflowExecution."""
+        return self._swf.request_cancel_workflow_execution(self.domain,
+                                                   self.workflowId, self.runId)
+
+
+class ActivityType(SWFBase):
+
+    """ActivityType."""
+
+    version = None
+
+    @wraps(Layer1.deprecate_activity_type)
+    def deprecate(self):
+        """DeprecateActivityType."""
+        return self._swf.deprecate_activity_type(self.domain, self.name,
+                                                 self.version)
+
+    @wraps(Layer1.describe_activity_type)
+    def describe(self):
+        """DescribeActivityType."""
+        return self._swf.describe_activity_type(self.domain, self.name,
+                                                self.version)
+
+    @wraps(Layer1.register_activity_type)
+    def register(self, **kwargs):
+        """RegisterActivityType."""
+        args = {
+            'default_task_heartbeat_timeout': '600',
+            'default_task_schedule_to_close_timeout': '3900',
+            'default_task_schedule_to_start_timeout': '300',
+            'default_task_start_to_close_timeout': '3600',
+        }
+        args.update(kwargs)
+        self._swf.register_activity_type(self.domain, self.name, self.version,
+                                         **args)
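
A short usage sketch for the new layer2 module above. The domain, workflow, and
task-list names are illustrative placeholders and the credentials are dummies;
none of these identifiers come from the patch itself:

    from boto.swf.layer2 import (Domain, WorkflowType, Decider, ActivityWorker,
                                 set_default_credentials)

    # Dummy credentials; normally these come from boto config or environment.
    set_default_credentials('ACCESS-KEY-PLACEHOLDER', 'SECRET-KEY-PLACEHOLDER')

    # Register a domain and a workflow type inside it.
    dom = Domain(name='example-domain', retention=7, description='layer2 demo')
    dom.register()

    wf = WorkflowType(domain=dom.name, name='example-workflow', version='1.0',
                      task_list='default')
    wf.register()

    # Start an execution, then poll for work as a decider and as a worker.
    execution = wf.start()
    decider = Decider(domain=dom.name, task_list='default')
    decision_task = decider.poll()

    worker = ActivityWorker(domain=dom.name, task_list='default')
    activity_task = worker.poll()
    if activity_task.get('taskToken'):
        worker.complete(result='done')
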
diff --git a/boto/utils.py b/boto/utils.py
index 0945364..97fdd2d 100644
--- a/boto/utils.py
+++ b/boto/utils.py
@@ -53,11 +53,11 @@
 import smtplib
 import datetime
 import re
-from email.MIMEMultipart import MIMEMultipart
-from email.MIMEBase import MIMEBase
-from email.MIMEText import MIMEText
-from email.Utils import formatdate
-from email import Encoders
+import email.mime.multipart
+import email.mime.base
+import email.mime.text
+import email.utils
+import email.encoders
 import gzip
 import base64
 try:
@@ -73,10 +73,7 @@
     import md5
     _hashfn = md5.md5
 
-try:
-    import simplejson as json
-except:
-    import json
+from boto.compat import json
 
 # List of Query String Arguments of Interest
 qsa_of_interest = ['acl', 'cors', 'defaultObjectAcl', 'location', 'logging',
@@ -86,7 +83,16 @@
                    'response-content-language', 'response-expires',
                    'response-cache-control', 'response-content-disposition',
                    'response-content-encoding', 'delete', 'lifecycle',
-                   'tagging']
+                   'tagging', 'restore',
+                   # storageClass is a QSA for buckets in Google Cloud Storage.
+                   # (StorageClass is associated to individual keys in S3, but
+                   # having it listed here should cause no problems because
+                   # GET bucket?storageClass is not part of the S3 API.)
+                   'storageClass',
+                   # websiteConfig is a QSA for buckets in Google Cloud Storage.
+                   'websiteConfig',
+                   # compose is a QSA for objects in Google Cloud Storage.
+                   'compose']
 
 
 _first_cap_regex = re.compile('(.)([A-Z][a-z]+)')
@@ -100,9 +106,12 @@
     else:
         return (nv[0], urllib.unquote(nv[1]))
 
-# generates the aws canonical string for the given parameters
+
 def canonical_string(method, path, headers, expires=None,
                      provider=None):
+    """
+    Generates the aws canonical string for the given parameters
+    """
     if not provider:
         provider = boto.provider.get_default()
     interesting_headers = {}
@@ -110,7 +119,7 @@
         lk = key.lower()
         if headers[key] != None and (lk in ['content-md5', 'content-type', 'date'] or
                                      lk.startswith(provider.header_prefix)):
-            interesting_headers[lk] = headers[key].strip()
+            interesting_headers[lk] = str(headers[key]).strip()
 
     # these keys get empty strings if they don't exist
     if 'content-type' not in interesting_headers:
@@ -154,6 +163,7 @@
 
     return buf
 
+
 def merge_meta(headers, metadata, provider=None):
     if not provider:
         provider = boto.provider.get_default()
@@ -162,13 +172,14 @@
     for k in metadata.keys():
         if k.lower() in ['cache-control', 'content-md5', 'content-type',
                          'content-encoding', 'content-disposition',
-                         'date', 'expires']:
+                         'expires']:
             final_headers[k] = metadata[k]
         else:
             final_headers[metadata_prefix + k] = metadata[k]
 
     return final_headers
 
+
 def get_aws_metadata(headers, provider=None):
     if not provider:
         provider = boto.provider.get_default()
@@ -184,12 +195,22 @@
             del headers[hkey]
     return metadata
 
+
 def retry_url(url, retry_on_404=True, num_retries=10):
+    """
+    Retry a url.  This is specifically used for accessing the metadata
+    service on an instance.  Since this address should never be proxied
+    (for security reasons), we create a ProxyHandler with a NULL
+    dictionary to override any proxy settings in the environment.
+    """
     for i in range(0, num_retries):
         try:
+            proxy_handler = urllib2.ProxyHandler({})
+            opener = urllib2.build_opener(proxy_handler)
             req = urllib2.Request(url)
-            resp = urllib2.urlopen(req)
-            return resp.read()
+            r = opener.open(req)
+            result = r.read()
+            return result
         except urllib2.HTTPError, e:
             # in 2.6 you use getcode(), in 2.5 and earlier you use code
             if hasattr(e, 'getcode'):
@@ -198,18 +219,20 @@
                 code = e.code
             if code == 404 and not retry_on_404:
                 return ''
-        except urllib2.URLError, e:
-            raise e
         except Exception, e:
             pass
         boto.log.exception('Caught exception reading instance data')
-        time.sleep(2**i)
+        # If not on the last iteration of the loop then sleep.
+        if i + 1 != num_retries:
+            time.sleep(2 ** i)
     boto.log.error('Unable to read instance data, giving up')
     return ''
 
+
 def _get_instance_metadata(url, num_retries):
     return LazyLoadMetadata(url, num_retries)
 
+
 class LazyLoadMetadata(dict):
     def __init__(self, url, num_retries):
         self._url = url
@@ -287,8 +310,24 @@
         self._materialize()
         return super(LazyLoadMetadata, self).__repr__()
 
+
+def _build_instance_metadata_url(url, version, path):
+    """
+    Builds an EC2 metadata URL for fetching information about an instance.
+
+    Requires the following arguments: a URL, a version and a path.
+
+    Example:
+
+        >>> _build_instance_metadata_url('http://169.254.169.254', 'latest', 'meta-data')
+        'http://169.254.169.254/latest/meta-data/'
+
+    """
+    return '%s/%s/%s/' % (url, version, path)
+
+
 def get_instance_metadata(version='latest', url='http://169.254.169.254',
-                          timeout=None, num_retries=5):
+                          data='meta-data', timeout=None, num_retries=5):
     """
     Returns the instance metadata as a nested Python dictionary.
     Simple values (e.g. local_hostname, hostname, etc.) will be
@@ -304,21 +343,22 @@
         original = socket.getdefaulttimeout()
         socket.setdefaulttimeout(timeout)
     try:
-        return _get_instance_metadata('%s/%s/meta-data/' % (url, version),
-                                      num_retries=num_retries)
+        metadata_url = _build_instance_metadata_url(url, version, data)
+        return _get_instance_metadata(metadata_url, num_retries=num_retries)
     except urllib2.URLError, e:
         return None
     finally:
         if timeout is not None:
             socket.setdefaulttimeout(original)
 
+
 def get_instance_identity(version='latest', url='http://169.254.169.254',
                           timeout=None, num_retries=5):
     """
     Returns the instance identity as a nested Python dictionary.
     """
     iid = {}
-    base_url = 'http://169.254.169.254/latest/dynamic/instance-identity'
+    base_url = _build_instance_metadata_url(url, version, 'dynamic/instance-identity')
     if timeout is not None:
         original = socket.getdefaulttimeout()
         socket.setdefaulttimeout(timeout)
@@ -338,9 +378,10 @@
         if timeout is not None:
             socket.setdefaulttimeout(original)
 
+
 def get_instance_userdata(version='latest', sep=None,
                           url='http://169.254.169.254'):
-    ud_url = '%s/%s/user-data' % (url, version)
+    ud_url = _build_instance_metadata_url(url, version, 'user-data')
     user_data = retry_url(ud_url, retry_on_404=False)
     if user_data:
         if sep:
@@ -353,20 +394,26 @@
 
 ISO8601 = '%Y-%m-%dT%H:%M:%SZ'
 ISO8601_MS = '%Y-%m-%dT%H:%M:%S.%fZ'
+RFC1123 = '%a, %d %b %Y %H:%M:%S %Z'
 
 def get_ts(ts=None):
     if not ts:
         ts = time.gmtime()
     return time.strftime(ISO8601, ts)
 
+
 def parse_ts(ts):
     ts = ts.strip()
     try:
         dt = datetime.datetime.strptime(ts, ISO8601)
         return dt
     except ValueError:
-        dt = datetime.datetime.strptime(ts, ISO8601_MS)
-        return dt
+        try:
+            dt = datetime.datetime.strptime(ts, ISO8601_MS)
+            return dt
+        except ValueError:
+            dt = datetime.datetime.strptime(ts, RFC1123)
+            return dt
 
 def find_class(module_name, class_name=None):
     if class_name:
@@ -384,6 +431,7 @@
     except:
         return None
 
+
 def update_dme(username, password, dme_id, ip_address):
     """
     Update your Dynamic DNS record with DNSMadeEasy.com
@@ -393,6 +441,7 @@
     s = urllib2.urlopen(dme_url % (username, password, dme_id, ip_address))
     return s.read()
 
+
 def fetch_file(uri, file=None, username=None, password=None):
     """
     Fetch a file based on the URI provided. If you do not pass in a file pointer
@@ -406,7 +455,8 @@
     try:
         if uri.startswith('s3://'):
             bucket_name, key_name = uri[len('s3://'):].split('/', 1)
-            c = boto.connect_s3(aws_access_key_id=username, aws_secret_access_key=password)
+            c = boto.connect_s3(aws_access_key_id=username,
+                                aws_secret_access_key=password)
             bucket = c.get_bucket(bucket_name)
             key = bucket.get_key(key_name)
             key.get_contents_to_file(file)
@@ -426,20 +476,23 @@
         file = None
     return file
 
+
 class ShellCommand(object):
 
-    def __init__(self, command, wait=True, fail_fast=False, cwd = None):
+    def __init__(self, command, wait=True, fail_fast=False, cwd=None):
         self.exit_code = 0
         self.command = command
         self.log_fp = StringIO.StringIO()
         self.wait = wait
         self.fail_fast = fail_fast
-        self.run(cwd = cwd)
+        self.run(cwd=cwd)
 
     def run(self, cwd=None):
         boto.log.info('running:%s' % self.command)
-        self.process = subprocess.Popen(self.command, shell=True, stdin=subprocess.PIPE,
-                                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
+        self.process = subprocess.Popen(self.command, shell=True,
+                                        stdin=subprocess.PIPE,
+                                        stdout=subprocess.PIPE,
+                                        stderr=subprocess.PIPE,
                                         cwd=cwd)
         if(self.wait):
             while self.process.poll() == None:
@@ -468,6 +521,7 @@
 
     output = property(getOutput, setReadOnly, None, 'The STDIN and STDERR output of the command')
 
+
 class AuthSMTPHandler(logging.handlers.SMTPHandler):
     """
     This class extends the SMTPHandler in the standard Python logging module
@@ -482,14 +536,16 @@
     args=('localhost', 'username', 'password', 'from@abc', ['user1@abc', 'user2@xyz'], 'Logger Subject')
     """
 
-    def __init__(self, mailhost, username, password, fromaddr, toaddrs, subject):
+    def __init__(self, mailhost, username, password,
+                 fromaddr, toaddrs, subject):
         """
         Initialize the handler.
 
         We have extended the constructor to accept a username/password
         for SMTP authentication.
         """
-        logging.handlers.SMTPHandler.__init__(self, mailhost, fromaddr, toaddrs, subject)
+        logging.handlers.SMTPHandler.__init__(self, mailhost, fromaddr,
+                                              toaddrs, subject)
         self.username = username
         self.password = password
 
@@ -512,7 +568,7 @@
                             self.fromaddr,
                             ','.join(self.toaddrs),
                             self.getSubject(record),
-                            formatdate(), msg)
+                            email.utils.formatdate(), msg)
             smtp.sendmail(self.fromaddr, self.toaddrs, msg)
             smtp.quit()
         except (KeyboardInterrupt, SystemExit):
@@ -520,6 +576,7 @@
         except:
             self.handleError(record)
 
+
 class LRUCache(dict):
     """A dictionary-like object that stores only a certain number of items, and
     discards its least recently used item when full.
@@ -553,11 +610,9 @@
     C
 
     This code is based on the LRUCache class from Genshi which is based on
-    Mighty's LRUCache from ``myghtyutils.util``, written
-    by Mike Bayer and released under the MIT license (Genshi uses the
-    BSD License). See:
-
-      http://svn.myghty.org/myghtyutils/trunk/lib/myghtyutils/util.py
+    `Myghty <http://www.myghty.org>`_'s LRUCache from ``myghtyutils.util``,
+    written by Mike Bayer and released under the MIT license (Genshi uses the
+    BSD License).
     """
 
     class _Item(object):
@@ -565,6 +620,7 @@
             self.previous = self.next = None
             self.key = key
             self.value = value
+
         def __repr__(self):
             return repr(self.value)
 
@@ -639,15 +695,18 @@
         item.next = self.head
         self.head.previous = self.head = item
 
+
 class Password(object):
     """
     Password object that stores itself as hashed.
     Hash defaults to SHA512 if available, MD5 otherwise.
     """
-    hashfunc=_hashfn
+    hashfunc = _hashfn
+
     def __init__(self, str=None, hashfunc=None):
         """
-        Load the string from an initial value, this should be the raw hashed password.
+        Load the string from an initial value, this should be the
+        raw hashed password.
         """
         self.str = str
         if hashfunc:
@@ -670,7 +729,9 @@
         else:
             return 0
 
-def notify(subject, body=None, html_body=None, to_string=None, attachments=None, append_instance_id=True):
+
+def notify(subject, body=None, html_body=None, to_string=None,
+           attachments=None, append_instance_id=True):
     attachments = attachments or []
     if append_instance_id:
         subject = "[%s] %s" % (boto.config.get_value("Instance", "instance-id"), subject)
@@ -679,20 +740,20 @@
     if to_string:
         try:
             from_string = boto.config.get_value('Notification', 'smtp_from', 'boto')
-            msg = MIMEMultipart()
+            msg = email.mime.multipart.MIMEMultipart()
             msg['From'] = from_string
             msg['Reply-To'] = from_string
             msg['To'] = to_string
-            msg['Date'] = formatdate(localtime=True)
+            msg['Date'] = email.utils.formatdate(localtime=True)
             msg['Subject'] = subject
 
             if body:
-                msg.attach(MIMEText(body))
+                msg.attach(email.mime.text.MIMEText(body))
 
             if html_body:
-                part = MIMEBase('text', 'html')
+                part = email.mime.base.MIMEBase('text', 'html')
                 part.set_payload(html_body)
-                Encoders.encode_base64(part)
+                email.encoders.encode_base64(part)
                 msg.attach(part)
 
             for part in attachments:
@@ -720,6 +781,7 @@
         except:
             boto.log.exception('notify failed')
 
+
 def get_utf8_value(value):
     if not isinstance(value, str) and not isinstance(value, unicode):
         value = str(value)
@@ -728,6 +790,7 @@
     else:
         return value
 
+
 def mklist(value):
     if not isinstance(value, list):
         if isinstance(value, tuple):
@@ -736,6 +799,7 @@
             value = [value]
     return value
 
+
 def pythonize_name(name):
     """Convert camel case to a "pythonic" name.
 
@@ -772,17 +836,17 @@
     :return: Final mime multipart
     :rtype: str:
     """
-    wrapper = MIMEMultipart()
+    wrapper = email.mime.multipart.MIMEMultipart()
     for name, con in content:
         definite_type = guess_mime_type(con, deftype)
         maintype, subtype = definite_type.split('/', 1)
         if maintype == 'text':
-            mime_con = MIMEText(con, _subtype=subtype)
+            mime_con = email.mime.text.MIMEText(con, _subtype=subtype)
         else:
-            mime_con = MIMEBase(maintype, subtype)
+            mime_con = email.mime.base.MIMEBase(maintype, subtype)
             mime_con.set_payload(con)
             # Encode the payload using Base64
-            Encoders.encode_base64(mime_con)
+            email.encoders.encode_base64(mime_con)
         mime_con.add_header('Content-Disposition', 'attachment', filename=name)
         wrapper.attach(mime_con)
     rcontent = wrapper.as_string()
@@ -798,6 +862,7 @@
 
     return rcontent
 
+
 def guess_mime_type(content, deftype):
     """Description: Guess the mime type of a block of text
     :param content: content we're finding the type of
@@ -810,13 +875,13 @@
     :return: <description>
     """
     #Mappings recognized by cloudinit
-    starts_with_mappings={
-        '#include' : 'text/x-include-url',
-        '#!' : 'text/x-shellscript',
-        '#cloud-config' : 'text/cloud-config',
-        '#upstart-job'  : 'text/upstart-job',
-        '#part-handler' : 'text/part-handler',
-        '#cloud-boothook' : 'text/cloud-boothook'
+    starts_with_mappings = {
+        '#include': 'text/x-include-url',
+        '#!': 'text/x-shellscript',
+        '#cloud-config': 'text/cloud-config',
+        '#upstart-job': 'text/upstart-job',
+        '#part-handler': 'text/part-handler',
+        '#cloud-boothook': 'text/cloud-boothook'
     }
     rtype = deftype
     for possible_type, mimetype in starts_with_mappings.items():
@@ -825,6 +890,7 @@
             break
     return(rtype)
 
+
 def compute_md5(fp, buf_size=8192, size=None):
     """
     Compute MD5 hash on passed file and return results in a tuple of values.
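
A brief sketch of how the boto/utils.py changes above are intended to be used;
the metadata calls only resolve when run on an EC2 instance, and the timestamps
are examples:

    import boto.utils

    # Metadata URLs are now built by _build_instance_metadata_url, and the
    # tree to read is selectable through the new `data` argument.
    meta = boto.utils.get_instance_metadata()               # latest/meta-data/
    dyn = boto.utils.get_instance_metadata(data='dynamic')  # latest/dynamic/

    # parse_ts now falls back to RFC 1123 as well as the two ISO 8601 forms.
    boto.utils.parse_ts('2013-05-20T10:15:30Z')
    boto.utils.parse_ts('Mon, 20 May 2013 10:15:30 GMT')
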
diff --git a/boto/vpc/__init__.py b/boto/vpc/__init__.py
index e5c0eef..e529b6f 100644
--- a/boto/vpc/__init__.py
+++ b/boto/vpc/__init__.py
@@ -33,6 +33,47 @@
 from boto.vpc.dhcpoptions import DhcpOptions
 from boto.vpc.subnet import Subnet
 from boto.vpc.vpnconnection import VpnConnection
+from boto.ec2 import RegionData
+from boto.regioninfo import RegionInfo
+
+def regions(**kw_params):
+    """
+    Get all available regions for the EC2 service.
+    You may pass any of the arguments accepted by the VPCConnection
+    object's constructor as keyword arguments and they will be
+    passed along to the VPCConnection object.
+
+    :rtype: list
+    :return: A list of :class:`boto.ec2.regioninfo.RegionInfo`
+    """
+    regions = []
+    for region_name in RegionData:
+        region = RegionInfo(name=region_name,
+                            endpoint=RegionData[region_name],
+                            connection_cls=VPCConnection)
+        regions.append(region)
+    return regions
+
+
+def connect_to_region(region_name, **kw_params):
+    """
+    Given a valid region name, return a
+    :class:`boto.vpc.VPCConnection`.
+    Any additional parameters after the region_name are passed on to
+    the connect method of the region object.
+
+    :type: str
+    :param region_name: The name of the region to connect to.
+
+    :rtype: :class:`boto.vpc.VPCConnection` or ``None``
+    :return: A connection to the given region, or None if an invalid region
+             name is given
+    """
+    for region in regions(**kw_params):
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
+
 
 class VPCConnection(EC2Connection):
 
@@ -93,6 +134,39 @@
         params = {'VpcId': vpc_id}
         return self.get_status('DeleteVpc', params)
 
+    def modify_vpc_attribute(self, vpc_id,
+                             enable_dns_support=None,
+                             enable_dns_hostnames=None):
+        """
+        Modifies the specified attribute of the specified VPC.
+        You can only modify one attribute at a time.
+
+        :type vpc_id: str
+        :param vpc_id: The ID of the VPC to be modified.
+
+        :type enable_dns_support: bool
+        :param enable_dns_support: Specifies whether the DNS server
+            provided by Amazon is enabled for the VPC.
+
+        :type enable_dns_hostnames: bool
+        :param enable_dns_hostnames: Specifies whether DNS hostnames are
+            provided for the instances launched in this VPC. You can only
+            set this attribute to ``true`` if EnableDnsSupport
+            is also ``true``.
+        """
+        params = {'VpcId': vpc_id}
+        if enable_dns_support is not None:
+            if enable_dns_support:
+                params['EnableDnsSupport.Value'] = 'true'
+            else:
+                params['EnableDnsSupport.Value'] = 'false'
+        if enable_dns_hostnames is not None:
+            if enable_dns_hostnames:
+                params['EnableDnsHostnames.Value'] = 'true'
+            else:
+                params['EnableDnsHostnames.Value'] = 'false'
+        return self.get_status('ModifyVpcAttribute', params)
+
     # Route Tables
 
     def get_all_route_tables(self, route_table_ids=None, filters=None):
@@ -118,7 +192,8 @@
             self.build_list_params(params, route_table_ids, "RouteTableId")
         if filters:
             self.build_filter_params(params, dict(filters))
-        return self.get_list('DescribeRouteTables', params, [('item', RouteTable)])
+        return self.get_list('DescribeRouteTables', params,
+                             [('item', RouteTable)])
 
     def associate_route_table(self, route_table_id, subnet_id):
         """
@@ -182,7 +257,8 @@
         params = { 'RouteTableId': route_table_id }
         return self.get_status('DeleteRouteTable', params)
 
-    def create_route(self, route_table_id, destination_cidr_block, gateway_id=None, instance_id=None):
+    def create_route(self, route_table_id, destination_cidr_block,
+                     gateway_id=None, instance_id=None):
         """
         Creates a new route in the route table within a VPC. The route's target
         can be either a gateway attached to the VPC or a NAT instance in the
@@ -216,6 +292,44 @@
 
         return self.get_status('CreateRoute', params)
 
+    def replace_route(self, route_table_id, destination_cidr_block,
+                     gateway_id=None, instance_id=None, interface_id=None):
+        """
+        Replaces an existing route within a route table in a VPC.
+
+        :type route_table_id: str
+        :param route_table_id: The ID of the route table for the route.
+
+        :type destination_cidr_block: str
+        :param destination_cidr_block: The CIDR address block used for the
+                                       destination match.
+
+        :type gateway_id: str
+        :param gateway_id: The ID of the gateway attached to your VPC.
+
+        :type instance_id: str
+        :param instance_id: The ID of a NAT instance in your VPC.
+
+        :type interface_id: str
+        :param interface_id: Allows routing to network interface attachments.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = {
+            'RouteTableId': route_table_id,
+            'DestinationCidrBlock': destination_cidr_block
+        }
+
+        if gateway_id is not None:
+            params['GatewayId'] = gateway_id
+        elif instance_id is not None:
+            params['InstanceId'] = instance_id
+        elif interface_id is not None:
+            params['NetworkInterfaceId'] = interface_id
+
+        return self.get_status('ReplaceRoute', params)
+
     def delete_route(self, route_table_id, destination_cidr_block):
         """
         Deletes a route from a route table within a VPC.
@@ -239,7 +353,8 @@
 
     # Internet Gateways
 
-    def get_all_internet_gateways(self, internet_gateway_ids=None, filters=None):
+    def get_all_internet_gateways(self, internet_gateway_ids=None,
+                                  filters=None):
         """
         Get a list of internet gateways. You can filter results to return information
         about only those gateways that you're interested in.
@@ -254,10 +369,12 @@
         params = {}
 
         if internet_gateway_ids:
-            self.build_list_params(params, internet_gateway_ids, 'InternetGatewayId')
+            self.build_list_params(params, internet_gateway_ids,
+                                   'InternetGatewayId')
         if filters:
             self.build_filter_params(params, dict(filters))
-        return self.get_list('DescribeInternetGateways', params, [('item', InternetGateway)])
+        return self.get_list('DescribeInternetGateways', params,
+                             [('item', InternetGateway)])
 
     def create_internet_gateway(self):
         """
@@ -286,7 +403,7 @@
         Attach an internet gateway to a specific VPC.
 
         :type internet_gateway_id: str
-        :param internet_gateway_id: The ID of the internet gateway to delete.
+        :param internet_gateway_id: The ID of the internet gateway to attach.
 
         :type vpc_id: str
         :param vpc_id: The ID of the VPC to attach to.
@@ -323,15 +440,17 @@
 
     # Customer Gateways
 
-    def get_all_customer_gateways(self, customer_gateway_ids=None, filters=None):
+    def get_all_customer_gateways(self, customer_gateway_ids=None,
+                                  filters=None):
         """
-        Retrieve information about your CustomerGateways.  You can filter results to
-        return information only about those CustomerGateways that match your search
-        parameters.  Otherwise, all CustomerGateways associated with your account
-        are returned.
+        Retrieve information about your CustomerGateways.  You can filter
+        results to return information only about those CustomerGateways that
+        match your search parameters.  Otherwise, all CustomerGateways
+        associated with your account are returned.
 
         :type customer_gateway_ids: list
-        :param customer_gateway_ids: A list of strings with the desired CustomerGateway ID's
+        :param customer_gateway_ids: A list of strings with the desired
+            CustomerGateway IDs.
 
         :type filters: list of tuples
         :param filters: A list of tuples containing filters.  Each tuple
@@ -349,18 +468,20 @@
         """
         params = {}
         if customer_gateway_ids:
-            self.build_list_params(params, customer_gateway_ids, 'CustomerGatewayId')
+            self.build_list_params(params, customer_gateway_ids,
+                                   'CustomerGatewayId')
         if filters:
             self.build_filter_params(params, dict(filters))
 
-        return self.get_list('DescribeCustomerGateways', params, [('item', CustomerGateway)])
+        return self.get_list('DescribeCustomerGateways', params,
+                             [('item', CustomerGateway)])
 
     def create_customer_gateway(self, type, ip_address, bgp_asn):
         """
         Create a new Customer Gateway
 
         :type type: str
-        :param type: Type of VPN Connection.  Only valid valid currently is 'ipsec.1'
+        :param type: Type of VPN Connection.  Only valid value currently is 'ipsec.1'
 
         :type ip_address: str
         :param ip_address: Internet-routable IP address for customer's gateway.
@@ -422,14 +543,15 @@
             self.build_list_params(params, vpn_gateway_ids, 'VpnGatewayId')
         if filters:
             self.build_filter_params(params, dict(filters))
-        return self.get_list('DescribeVpnGateways', params, [('item', VpnGateway)])
+        return self.get_list('DescribeVpnGateways', params,
+                             [('item', VpnGateway)])
 
     def create_vpn_gateway(self, type, availability_zone=None):
         """
         Create a new Vpn Gateway
 
         :type type: str
-        :param type: Type of VPN Connection.  Only valid valid currently is 'ipsec.1'
+        :param type: Type of VPN Connection.  Only valid value currently is 'ipsec.1'
 
         :type availability_zone: str
         :param availability_zone: The Availability Zone where you want the VPN gateway.
@@ -491,7 +613,7 @@
 
                         - *state*, a list of states of the Subnet
                           (pending,available)
-                        - *vpdId*, a list of IDs of teh VPC the subnet is in.
+                        - *vpcId*, a list of IDs of the VPC the subnet is in.
                         - *cidrBlock*, a list of CIDR blocks of the subnet
                         - *availabilityZone*, list of the Availability Zones
                           the subnet is in.
@@ -558,29 +680,77 @@
         params = {}
         if dhcp_options_ids:
             self.build_list_params(params, dhcp_options_ids, 'DhcpOptionsId')
-        return self.get_list('DescribeDhcpOptions', params, [('item', DhcpOptions)])
+        return self.get_list('DescribeDhcpOptions', params,
+                             [('item', DhcpOptions)])
 
-    def create_dhcp_options(self, vpc_id, cidr_block, availability_zone=None):
+    def create_dhcp_options(self, domain_name=None, domain_name_servers=None,
+                            ntp_servers=None, netbios_name_servers=None,
+                            netbios_node_type=None):
         """
         Create a new DhcpOption
 
-        :type vpc_id: str
-        :param vpc_id: The ID of the VPC where you want to create the subnet.
+        This corresponds to
+        http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ApiReference-query-CreateDhcpOptions.html
 
-        :type cidr_block: str
-        :param cidr_block: The CIDR block you want the subnet to cover.
+        :type domain_name: str
+        :param domain_name: A domain name of your choice (for example,
+            example.com)
 
-        :type availability_zone: str
-        :param availability_zone: The AZ you want the subnet in
+        :type domain_name_servers: list of strings
+        :param domain_name_servers: The IP address of a domain name server. You
+            can specify up to four addresses.
+
+        :type ntp_servers: list of strings
+        :param ntp_servers: The IP address of a Network Time Protocol (NTP)
+            server. You can specify up to four addresses.
+
+        :type netbios_name_servers: list of strings
+        :param netbios_name_servers: The IP address of a NetBIOS name server.
+            You can specify up to four addresses.
+
+        :type netbios_node_type: str
+        :param netbios_node_type: The NetBIOS node type (1, 2, 4, or 8). For
+            more information about the values, see RFC 2132. We recommend you
+            only use 2 at this time (broadcast and multicast are currently not
+            supported).
 
         :rtype: The newly created DhcpOption
         :return: A :class:`boto.vpc.customergateway.DhcpOption` object
         """
-        params = {'VpcId' : vpc_id,
-                  'CidrBlock' : cidr_block}
-        if availability_zone:
-            params['AvailabilityZone'] = availability_zone
-        return self.get_object('CreateDhcpOption', params, DhcpOptions)
+
+        key_counter = 1
+        params = {}
+
+        def insert_option(params, name, value):
+            params['DhcpConfiguration.%d.Key' % (key_counter,)] = name
+            if isinstance(value, (list, tuple)):
+                for idx, value in enumerate(value, 1):
+                    key_name = 'DhcpConfiguration.%d.Value.%d' % (
+                        key_counter, idx)
+                    params[key_name] = value
+            else:
+                key_name = 'DhcpConfiguration.%d.Value.1' % (key_counter,)
+                params[key_name] = value
+
+            return key_counter + 1
+
+        if domain_name:
+            key_counter = insert_option(params,
+                'domain-name', domain_name)
+        if domain_name_servers:
+            key_counter = insert_option(params,
+                'domain-name-servers', domain_name_servers)
+        if ntp_servers:
+            key_counter = insert_option(params,
+                'ntp-servers', ntp_servers)
+        if netbios_name_servers:
+            key_counter = insert_option(params,
+                'netbios-name-servers', netbios_name_servers)
+        if netbios_node_type:
+            key_counter = insert_option(params,
+                'netbios-node-type', netbios_node_type)
+
+        return self.get_object('CreateDhcpOptions', params, DhcpOptions)
 
     def delete_dhcp_options(self, dhcp_options_id):
         """
@@ -642,10 +812,12 @@
         """
         params = {}
         if vpn_connection_ids:
-            self.build_list_params(params, vpn_connection_ids, 'Vpn_ConnectionId')
+            self.build_list_params(params, vpn_connection_ids,
+                                   'Vpn_ConnectionId')
         if filters:
             self.build_filter_params(params, dict(filters))
-        return self.get_list('DescribeVpnConnections', params, [('item', VpnConnection)])
+        return self.get_list('DescribeVpnConnections', params,
+                             [('item', VpnConnection)])
 
     def create_vpn_connection(self, type, customer_gateway_id, vpn_gateway_id):
         """
@@ -681,3 +853,91 @@
         """
         params = {'VpnConnectionId': vpn_connection_id}
         return self.get_status('DeleteVpnConnection', params)
+
+    def disable_vgw_route_propagation(self, route_table_id, gateway_id):
+        """
+        Disables a virtual private gateway (VGW) from propagating routes to the
+        routing tables of an Amazon VPC.
+
+        :type route_table_id: str
+        :param route_table_id: The ID of the routing table.
+
+        :type gateway_id: str
+        :param gateway_id: The ID of the virtual private gateway.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = {
+            'RouteTableId': route_table_id,
+            'GatewayId': gateway_id,
+        }
+        return self.get_status('DisableVgwRoutePropagation', params)
+
+    def enable_vgw_route_propagation(self, route_table_id, gateway_id):
+        """
+        Enables a virtual private gateway (VGW) to propagate routes to the
+        routing tables of an Amazon VPC.
+
+        :type route_table_id: str
+        :param route_table_id: The ID of the routing table.
+
+        :type gateway_id: str
+        :param gateway_id: The ID of the virtual private gateway.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = {
+            'RouteTableId': route_table_id,
+            'GatewayId': gateway_id,
+        }
+        return self.get_status('EnableVgwRoutePropagation', params)
+
+    def create_vpn_connection_route(self, destination_cidr_block,
+                                    vpn_connection_id):
+        """
+        Creates a new static route associated with a VPN connection between an
+        existing virtual private gateway and a VPN customer gateway. The static
+        route allows traffic to be routed from the virtual private gateway to
+        the VPN customer gateway.
+
+        :type destination_cidr_block: str
+        :param destination_cidr_block: The CIDR block associated with the local
+            subnet of the customer data center.
+
+        :type vpn_connection_id: str
+        :param vpn_connection_id: The ID of the VPN connection.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = {
+            'DestinationCidrBlock': destination_cidr_block,
+            'VpnConnectionId': vpn_connection_id,
+        }
+        return self.get_status('CreateVpnConnectionRoute', params)
+
+    def delete_vpn_connection_route(self, destination_cidr_block,
+                                    vpn_connection_id):
+        """
+        Deletes a static route associated with a VPN connection between an
+        existing virtual private gateway and a VPN customer gateway. The static
+        route allows traffic to be routed from the virtual private gateway to
+        the VPN customer gateway.
+
+        :type destination_cidr_block: str
+        :param destination_cidr_block: The CIDR block associated with the local
+            subnet of the customer data center.
+
+        :type vpn_connection_id: str
+        :param vpn_connection_id: The ID of the VPN connection.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = {
+            'DestinationCidrBlock': destination_cidr_block,
+            'VpnConnectionId': vpn_connection_id,
+        }
+        return self.get_status('DeleteVpnConnectionRoute', params)
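
A usage sketch for the new VPC connection helpers above; the region name and
the VPC, route-table, gateway, and VPN connection IDs are placeholders:

    import boto.vpc

    conn = boto.vpc.connect_to_region('us-east-1')

    # create_dhcp_options now takes the individual DHCP options rather than
    # a vpc_id/cidr_block pair.
    opts = conn.create_dhcp_options(domain_name='example.internal',
                                    domain_name_servers=['10.0.0.2'])

    # Only one VPC attribute may be modified per call.
    conn.modify_vpc_attribute('vpc-12345678', enable_dns_support=True)
    conn.modify_vpc_attribute('vpc-12345678', enable_dns_hostnames=True)

    # Repoint an existing route, and let a virtual private gateway propagate
    # routes into a routing table.
    conn.replace_route('rtb-12345678', '0.0.0.0/0', gateway_id='igw-12345678')
    conn.enable_vgw_route_propagation('rtb-12345678', 'vgw-12345678')

    # Static routes for a VPN connection whose customer gateway lacks BGP.
    conn.create_vpn_connection_route('192.168.0.0/24', 'vpn-12345678')
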
diff --git a/boto/vpc/vpc.py b/boto/vpc/vpc.py
index 0539acd..8fdaa62 100644
--- a/boto/vpc/vpc.py
+++ b/boto/vpc/vpc.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -28,15 +28,28 @@
 class VPC(TaggedEC2Object):
 
     def __init__(self, connection=None):
+        """
+        Represents a VPC.
+
+        :ivar id: The unique ID of the VPC.
+        :ivar dhcp_options_id: The ID of the set of DHCP options you've associated with the VPC
+                                (or default if the default options are associated with the VPC).
+        :ivar state: The current state of the VPC.
+        :ivar cidr_block: The CIDR block for the VPC.
+        :ivar is_default: Indicates whether the VPC is the default VPC.
+        :ivar instance_tenancy: The allowed tenancy of instances launched into the VPC.
+        """
         TaggedEC2Object.__init__(self, connection)
         self.id = None
         self.dhcp_options_id = None
         self.state = None
         self.cidr_block = None
+        self.is_default = None
+        self.instance_tenancy = None
 
     def __repr__(self):
         return 'VPC:%s' % self.id
-    
+
     def endElement(self, name, value, connection):
         if name == 'vpcId':
             self.id = value
@@ -46,9 +59,24 @@
             self.state = value
         elif name == 'cidrBlock':
             self.cidr_block = value
+        elif name == 'isDefault':
+            self.is_default = True if value == 'true' else False
+        elif name == 'instanceTenancy':
+            self.instance_tenancy = value
         else:
             setattr(self, name, value)
 
     def delete(self):
         return self.connection.delete_vpc(self.id)
 
+    def _update(self, updated):
+        self.__dict__.update(updated.__dict__)
+
+    def update(self, validate=False):
+        vpc_list = self.connection.get_all_vpcs([self.id])
+        if len(vpc_list):
+            updated_vpc = vpc_list[0]
+            self._update(updated_vpc)
+        elif validate:
+            raise ValueError('%s is not a valid VPC ID' % (self.id,))
+        return self.state
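
The new update() method supports the usual poll-until-ready pattern. A small
sketch, assuming the existing VPCConnection.create_vpc call; the CIDR block and
sleep interval are illustrative:

    import time
    import boto.vpc

    conn = boto.vpc.connect_to_region('us-east-1')
    vpc = conn.create_vpc('10.0.0.0/16')

    # update() re-fetches the VPC and returns its current state;
    # validate=True raises ValueError if the ID no longer resolves.
    while vpc.update() != 'available':
        time.sleep(5)

    print vpc.id, vpc.is_default, vpc.instance_tenancy
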
diff --git a/boto/vpc/vpnconnection.py b/boto/vpc/vpnconnection.py
index 3979238..aa49c36 100644
--- a/boto/vpc/vpnconnection.py
+++ b/boto/vpc/vpnconnection.py
@@ -14,31 +14,173 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
+import boto
+from datetime import datetime
+from boto.resultset import ResultSet
 
 """
 Represents a VPN Connectionn
 """
 
-from boto.ec2.ec2object import EC2Object
+from boto.ec2.ec2object import TaggedEC2Object
 
-class VpnConnection(EC2Object):
+class VpnConnectionOptions(object):
+    """
+    Represents VPN connection options
 
+    :ivar static_routes_only: Indicates whether the VPN connection uses static
+        routes only.  Static routes must be used for devices that don't support
+        BGP.
+
+    """
+    def __init__(self, static_routes_only=None):
+        self.static_routes_only = static_routes_only
+
+    def __repr__(self):
+        return 'VpnConnectionOptions'
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'staticRoutesOnly':
+            self.static_routes_only = True if value == 'true' else False
+        else:
+            setattr(self, name, value)
+
+class VpnStaticRoute(object):
+    """
+    Represents a static route for a VPN connection.
+
+    :ivar destination_cidr_block: The CIDR block associated with the local
+        subnet of the customer data center.
+    :ivar source: Indicates how the routes were provided.
+    :ivar state: The current state of the static route.
+    """
+    def __init__(self, destination_cidr_block=None, source=None, state=None):
+        self.destination_cidr_block = destination_cidr_block
+        self.source = source
+        self.state = state
+
+    def __repr__(self):
+        return 'VpnStaticRoute: %s' % self.destination_cidr_block
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'destinationCidrBlock':
+            self.destination_cidr_block = value
+        elif name == 'source':
+            self.source = value
+        elif name == 'state':
+            self.state = value
+        else:
+            setattr(self, name, value)
+
+class VpnTunnel(object):
+    """
+    Represents telemetry for a VPN tunnel
+
+    :ivar outside_ip_address: The Internet-routable IP address of the
+        virtual private gateway's outside interface.
+    :ivar status: The status of the VPN tunnel. Valid values: UP | DOWN
+    :ivar last_status_change: The date and time of the last change in status.
+    :ivar status_message: If an error occurs, a description of the error.
+    :ivar accepted_route_count: The number of accepted routes.
+    """
+    def __init__(self, outside_ip_address=None, status=None, last_status_change=None,
+                 status_message=None, accepted_route_count=None):
+        self.outside_ip_address = outside_ip_address
+        self.status = status
+        self.last_status_change = last_status_change
+        self.status_message = status_message
+        self.accepted_route_count = accepted_route_count
+
+    def __repr__(self):
+        return 'VpnTunnel: %s' % self.outside_ip_address
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'outsideIpAddress':
+            self.outside_ip_address = value
+        elif name == 'status':
+            self.status = value
+        elif name == 'lastStatusChange':
+            self.last_status_change = datetime.strptime(value,
+                                        '%Y-%m-%dT%H:%M:%S.%fZ')
+        elif name == 'statusMessage':
+            self.status_message = value
+        elif name == 'acceptedRouteCount':
+            try:
+                value = int(value)
+            except ValueError:
+                boto.log.warning('Error converting code (%s) to int' % value)
+            self.accepted_route_count = value
+        else:
+            setattr(self, name, value)
+
+class VpnConnection(TaggedEC2Object):
+    """
+    Represents a VPN Connection
+
+    :ivar id: The ID of the VPN connection.
+    :ivar state: The current state of the VPN connection.
+        Valid values: pending | available | deleting | deleted
+    :ivar customer_gateway_configuration: The configuration information for the
+        VPN connection's customer gateway (in the native XML format). This
+        element is always present in the
+        :class:`boto.vpc.VPCConnection.create_vpn_connection` response;
+        however, it's present in the
+        :class:`boto.vpc.VPCConnection.get_all_vpn_connections` response only
+        if the VPN connection is in the pending or available state.
+    :ivar type: The type of VPN connection (ipsec.1).
+    :ivar customer_gateway_id: The ID of the customer gateway at your end of
+        the VPN connection.
+    :ivar vpn_gateway_id: The ID of the virtual private gateway
+        at the AWS side of the VPN connection.
+    :ivar tunnels: A list of the vpn tunnels (always 2)
+    :ivar options: The option set describing the VPN connection.
+    :ivar static_routes: A list of static routes associated with a VPN
+        connection.
+
+    """
     def __init__(self, connection=None):
-        EC2Object.__init__(self, connection)
+        TaggedEC2Object.__init__(self, connection)
         self.id = None
         self.state = None
         self.customer_gateway_configuration = None
         self.type = None
         self.customer_gateway_id = None
         self.vpn_gateway_id = None
+        self.tunnels = []
+        self.options = None
+        self.static_routes = []
 
     def __repr__(self):
         return 'VpnConnection:%s' % self.id
-    
+
+    def startElement(self, name, attrs, connection):
+        retval = super(VpnConnection, self).startElement(name, attrs, connection)
+        if retval is not None:
+            return retval
+        if name == 'vgwTelemetry':
+            self.tunnels = ResultSet([('item', VpnTunnel)])
+            return self.tunnels
+        elif name == 'routes':
+            self.static_routes = ResultSet([('item', VpnStaticRoute)])
+            return self.static_routes
+        elif name == 'options':
+            self.options = VpnConnectionOptions()
+            return self.options
+        return None
+
     def endElement(self, name, value, connection):
         if name == 'vpnConnectionId':
             self.id = value
@@ -57,4 +199,3 @@
 
     def delete(self):
         return self.connection.delete_vpn_connection(self.id)
-
diff --git a/docs/source/autoscale_tut.rst b/docs/source/autoscale_tut.rst
index 879d522..86fc529 100644
--- a/docs/source/autoscale_tut.rst
+++ b/docs/source/autoscale_tut.rst
@@ -32,9 +32,6 @@
 >>> from boto.ec2.autoscale import AutoScaleConnection
 >>> conn = AutoScaleConnection('<aws access key>', '<aws secret key>')
 
-Alternatively, you can use the shortcut:
-
->>> conn = boto.connect_autoscale()
 
 A Note About Regions and Endpoints
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -42,7 +39,8 @@
 default the US endpoint is used. To choose a specific region, instantiate the
 AutoScaleConnection object with that region's endpoint.
 
->>> ec2 = boto.connect_autoscale(host='autoscaling.eu-west-1.amazonaws.com')
+>>> import boto.ec2.autoscale
+>>> autoscale = boto.ec2.autoscale.connect_to_region('eu-west-1')
 
 Alternatively, edit your boto.cfg with the default Autoscale endpoint to use::
 
@@ -94,7 +92,8 @@
 
 >>> ag = AutoScalingGroup(group_name='my_group', load_balancers=['my-lb'],
                           availability_zones=['us-east-1a', 'us-east-1b'],
-                          launch_config=lc, min_size=4, max_size=8)
+                          launch_config=lc, min_size=4, max_size=8,
+                          connection=conn)
 >>> conn.create_auto_scaling_group(ag)
 
 We now have a new autoscaling group defined! At this point instances should be
@@ -116,14 +115,14 @@
 
 Scaling a Group Up or Down
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
-It can also be useful to scale a group up or down depending on certain criteria. 
+It can also be useful to scale a group up or down depending on certain criteria.
 For example, if the average CPU utilization of the group goes above 70%, you may
 want to scale up the number of instances to deal with demand. Likewise, you
-might want to scale down if usage drops again. 
-These rules for **how** to scale are defined by *Scaling Polices*, and the rules for
+might want to scale down if usage drops again.
+These rules for **how** to scale are defined by *Scaling Policies*, and the rules for
 **when** to scale are defined by CloudWatch *Metric Alarms*.
 
-For example, let's configure scaling for the above group based on CPU utilization. 
+For example, let's configure scaling for the above group based on CPU utilization.
 We'll say it should scale up if the average CPU usage goes above 70% and scale
 down if it goes below 40%.
 
@@ -132,6 +131,7 @@
 
 We need one policy for scaling up and one for scaling down.
 
+>>> from boto.ec2.autoscale import ScalingPolicy
 >>> scale_up_policy = ScalingPolicy(
             name='scale_up', adjustment_type='ChangeInCapacity',
             as_name='my_group', scaling_adjustment=1, cooldown=180)
@@ -147,11 +147,11 @@
 
 Now that the polices have been digested by AWS, they have extra properties
 that we aren't aware of locally. We need to refresh them by requesting them
-back again. 
+back again.
 
->>> scale_up_policy = autoscale.get_all_policies(
+>>> scale_up_policy = conn.get_all_policies(
             as_group='my_group', policy_names=['scale_up'])[0]
->>> scale_down_policy = autoscale.get_all_policies(
+>>> scale_down_policy = conn.get_all_policies(
             as_group='my_group', policy_names=['scale_down'])[0]
 
 Specifically, we'll need the Amazon Resource Name (ARN) of each policy, which
@@ -160,7 +160,8 @@
 Next we'll create CloudWatch alarms that will define when to run the
 Auto Scaling Policies.
 
->>> cloudwatch = boto.connect_cloudwatch()
+>>> import boto.ec2.cloudwatch
+>>> cloudwatch = boto.ec2.cloudwatch.connect_to_region('us-west-2')
 
 It makes sense to measure the average CPU usage across the whole Auto Scaling
 Group, rather than individual instances. We express that as CloudWatch
@@ -170,6 +171,7 @@
 
 Create an alarm for when to scale up, and one for when to scale down.
 
+>>> from boto.ec2.cloudwatch import MetricAlarm
 >>> scale_up_alarm = MetricAlarm(
             name='scale_up_on_cpu', namespace='AWS/EC2',
             metric='CPUUtilization', statistic='Average',
@@ -188,4 +190,30 @@
             dimensions=alarm_dimensions)
 >>> cloudwatch.create_alarm(scale_down_alarm)
 
-Auto Scaling will now create a new instance if the existing cluster averages more than 70% CPU for two minutes. Similarly, it will terminate an instance when CPU usage sits below 40%. Auto Scaling will not add or remove instances beyond the limits of the Scaling Group's 'max_size' and 'min_size' properties.
+Auto Scaling will now create a new instance if the existing cluster averages
+more than 70% CPU for two minutes. Similarly, it will terminate an instance
+when CPU usage sits below 40%. Auto Scaling will not add or remove instances
+beyond the limits of the Scaling Group's 'max_size' and 'min_size' properties.
+
+To retrieve the instances in your autoscale group:
+
+>>> import boto.ec2
+>>> ec2 = boto.ec2.connect_to_region('us-west-2')
+>>> group = conn.get_all_groups(names=['my_group'])[0]
+>>> instance_ids = [i.instance_id for i in group.instances]
+>>> reservations = ec2.get_all_instances(instance_ids)
+>>> instances = [i for r in reservations for i in r.instances]
+
+To delete your autoscale group, we first need to shut down all the
+instances:
+
+>>> ag.shutdown_instances()
+
+Once the instances have been shut down, you can delete the autoscale
+group:
+
+>>> ag.delete()
+
+You can also delete your launch configuration:
+
+>>> lc.delete()
diff --git a/docs/source/boto_config_tut.rst b/docs/source/boto_config_tut.rst
index 76b27b6..dc8000e 100644
--- a/docs/source/boto_config_tut.rst
+++ b/docs/source/boto_config_tut.rst
@@ -11,8 +11,8 @@
 these options can be passed into the constructors for top-level objects such as
 connections. Some options, such as credentials, can also be read from
 environment variables (e.g. ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``).
-But there is no central place to manage these options. So, the development
-version of boto has now introduced the notion of boto config files.
+It is also possible to manage these options in a central place through the use
+of boto config files.
 
 Details
 -------
@@ -33,6 +33,13 @@
 :py:class:`Config <boto.pyami.config.Config>` class defines additional
 methods that are described on the PyamiConfigMethods page.
 
+An example ``~/.boto`` file should look like::
+
+    [Credentials]
+    aws_access_key_id = <your_access_key_here>
+    aws_secret_access_key = <your_secret_key_here>
+
+
 Sections
 --------
 
@@ -50,7 +57,7 @@
 * Credentials specified as options in the config file.
 
 This section defines the following options: ``aws_access_key_id`` and
-``aws_secret_access_key``. The former being your aws key id and the latter
+``aws_secret_access_key``. The former being your AWS key id and the latter
 being the secret key.
 
 For example::
@@ -60,12 +67,38 @@
     aws_secret_access_key = <your secret key>
 
 Please notice that quote characters are not used to either side of the '='
-operator even when both your aws access key id and secret key are strings.
+operator even when both your AWS access key id and secret key are strings.
+
+For greater security, the secret key can be stored in a keyring and
+retrieved via the keyring package.  To use a keyring, use ``keyring``,
+rather than ``aws_secret_access_key``::
+
+    [Credentials]
+    aws_access_key_id = <your access key>
+    keyring = <keyring name>
+
+To use a keyring, you must have the Python `keyring
+<http://pypi.python.org/pypi/keyring>`_ package installed and in the
+Python path. To learn about setting up keyrings, see the `keyring
+documentation
+<http://pypi.python.org/pypi/keyring#installing-and-using-python-keyring-lib>`_.
+
+Credentials can also be supplied for a Eucalyptus service::
+
+    [Credentials]
+    euca_access_key_id = <your access key>
+    euca_secret_access_key = <your secret key>
+
+Finally, this section can also be used to provide credentials for the Internet Archive API::
+
+    [Credentials]
+    ia_access_key_id = <your access key>
+    ia_secret_access_key = <your secret key>
 
 Boto
 ^^^^
 
-The Boto section is used to specify options that control the operaton of
+The Boto section is used to specify options that control the operation of
 boto itself. This section defines the following options:
 
 :debug: Controls the level of debug messages that will be printed by the boto library.
@@ -84,7 +117,7 @@
   request. The default number of retries is 5 but you can change the default
   with this option.
 
-As an example::
+For example::
 
     [Boto]
     debug = 0
@@ -95,6 +128,152 @@
     proxy_user = foo
     proxy_pass = bar
 
+
+:connection_stale_duration: Amount of time to wait in seconds before a
+  connection will stop getting reused. AWS will disconnect connections which
+  have been idle for 180 seconds.
+:is_secure: Whether to use SSL for the connection. This setting will override
+  passed-in values.
+:https_validate_certificates: Validate HTTPS certificates. This is on by
+  default.
+:ca_certificates_file: Location of CA certificates.
+:http_socket_timeout: Timeout used to overwrite the system default socket
+  timeout for httplib.
+:send_crlf_after_proxy_auth_headers: Change line ending behaviour with proxies.
+  For more details see this `discussion <https://groups.google.com/forum/?fromgroups=#!topic/boto-dev/teenFvOq2Cc>`_
+
+These settings will default to::
+
+    [Boto]
+    connection_stale_duration = 180
+    is_secure = True
+    https_validate_certificates = True
+    ca_certificates_file = cacerts.txt
+    http_socket_timeout = 60
+    send_crlf_after_proxy_auth_headers = False
+
+You can control the timeouts and number of retries used when retrieving
+information from the Metadata Service (this is used for retrieving credentials
+for IAM roles on EC2 instances):
+
+:metadata_service_timeout: Number of seconds until requests to the metadata
+  service will timeout (float).
+:metadata_service_num_attempts: Number of times to attempt to retrieve
+  information from the metadata service before giving up (int).
+
+These settings will default to::
+
+    [Boto]
+    metadata_service_timeout = 1.0
+    metadata_service_num_attempts = 1
+
+
+This section is also used for specifying endpoints for non-AWS services such as
+Eucalyptus and Walrus.
+
+:eucalyptus_host: Select a default endpoint host for Eucalyptus.
+:walrus_host: Select a default host for Walrus.
+
+For example::
+
+    [Boto]
+    eucalyptus_host = somehost.example.com
+    walrus_host = somehost.example.com
+
+
+Finally, the Boto section is used to set default API versions and endpoints
+for many AWS services.
+
+AutoScale settings:
+
+:autoscale_version: Set the API version
+:autoscale_endpoint: Endpoint to use
+:autoscale_region_name: Default region to use
+
+For example::
+
+    [Boto]
+    autoscale_version = 2011-01-01
+    autoscale_endpoint = autoscaling.us-west-2.amazonaws.com
+    autoscale_region_name = us-west-2
+
+
+Cloudformation settings can also be defined:
+
+:cfn_version: Cloud formation API version
+:cfn_region_name: Default region name
+:cfn_region_endpoint: Default endpoint
+
+For example::
+
+    [Boto]
+    cfn_version = 2010-05-15
+    cfn_region_name = us-west-2
+    cfn_region_endpoint = cloudformation.us-west-2.amazonaws.com
+
+Cloudsearch settings:
+
+:cs_region_name: Default cloudsearch region
+:cs_region_endpoint: Default cloudsearch endpoint
+
+For example::
+
+    [Boto]
+    cs_region_name = us-west-2
+    cs_region_endpoint = cloudsearch.us-west-2.amazonaws.com
+
+Cloudwatch settings:
+
+:cloudwatch_version: Cloudwatch API version
+:cloudwatch_region_name: Default region name
+:cloudwatch_region_endpoint: Default endpoint
+
+For example::
+
+    [Boto]
+    cloudwatch_version = 2010-08-01
+    cloudwatch_region_name = us-west-2
+    cloudwatch_region_endpoint = monitoring.us-west-2.amazonaws.com
+
+EC2 settings:
+
+:ec2_version: EC2 API version
+:ec2_region_name: Default region name
+:ec2_region_endpoint: Default endpoint
+
+For example::
+
+    [Boto]
+    ec2_version = 2012-12-01
+    ec2_region_name = us-west-2
+    ec2_region_endpoint = ec2.us-west-2.amazonaws.com
+
+ELB settings:
+
+:elb_version: ELB API version
+:elb_region_name: Default region name
+:elb_region_endpoint: Default endpoint
+
+For example::
+
+    [Boto]
+    elb_version = 2012-06-01
+    elb_region_name = us-west-2
+    elb_region_endpoint = elasticloadbalancing.us-west-2.amazonaws.com
+
+EMR settings:
+
+:emr_version: EMR API version
+:emr_region_name: Default region name
+:emr_region_endpoint: Default endpoint
+
+For example::
+
+    [Boto]
+    emr_version = 2009-03-31
+    emr_region_name = us-west-2
+    emr_region_endpoint = elasticmapreduce.us-west-2.amazonaws.com
+
+
 Precedence
 ----------
 
@@ -102,9 +281,119 @@
 options stored in environmental variables or you can explicitly pass them to
 method calls i.e.::
 
-	>>> boto.connect_ec2('<KEY_ID>','<SECRET_KEY>')
+    >>> boto.ec2.connect_to_region(
+    ...     'us-west-2',
+    ...     aws_access_key_id='foo',
+    ...     aws_secret_access_key='bar')
 
 In these cases where these options can be found in more than one place boto
 will first use the explicitly supplied arguments, if none found it will then
 look for them amidst environment variables and if that fails it will use the
 ones in boto config.
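+
+For example, assuming ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` are
+set in your environment, the same connection can be made without passing any
+explicit credentials::
+
+    >>> import boto.ec2
+    >>> conn = boto.ec2.connect_to_region('us-west-2')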
+
+Notification
+^^^^^^^^^^^^
+
+If you are using notifications for boto.pyami, you can specify the email
+details through the following variables.
+
+:smtp_from: Used as the sender in notification emails.
+:smtp_to: Destination to which emails should be sent.
+:smtp_host: Host to connect to when sending notification emails.
+:smtp_port: Port to connect to when connecting to the ``smtp_host``.
+
+Default values are::
+
+    [notification]
+    smtp_from = boto
+    smtp_to = None
+    smtp_host = localhost
+    smtp_port = 25
+    smtp_tls = True
+    smtp_user = john
+    smtp_pass = hunter2
+
+SWF
+^^^
+
+The SWF section allows you to configure the default region to be used for the
+Amazon Simple Workflow service.
+
+:region: Set the default region
+
+Example::
+
+    [SWF]
+    region = us-west-2
+
+Pyami
+^^^^^
+
+The Pyami section is used to configure the working directory for PyAMI.
+
+:working_dir: Working directory used by PyAMI
+
+Example::
+
+    [Pyami]
+    working_dir = /home/foo/
+
+DB
+^^
+The DB section is used to configure access to databases through the
+:func:`boto.sdb.db.manager.get_manager` function.
+
+:db_type: Type of the database. Current allowed values are `SimpleDB` and
+    `XML`.
+:db_user: AWS access key id.
+:db_passwd: AWS secret access key.
+:db_name: Database that will be connected to.
+:db_table: Table name :note: This doesn't appear to be used.
+:db_host: Host to connect to
+:db_port: Port to connect to
+:enable_ssl: Use SSL
+
+More examples::
+
+    [DB]
+    db_type = SimpleDB
+    db_user = <aws access key id>
+    db_passwd = <aws secret access key>
+    db_name = my_domain
+    db_table = table
+    db_host = sdb.amazonaws.com
+    enable_ssl = True
+    debug = True
+
+    [DB_TestBasic]
+    db_type = SimpleDB
+    db_user = <another aws access key id>
+    db_passwd = <another aws secret access key>
+    db_name = basic_domain
+    db_port = 1111
+
+SDB
+^^^
+
+This section is used to configure SimpleDB
+
+:region: Set the region to which SDB should connect
+
+Example::
+
+    [SDB]
+    region = us-west-2
+
+DynamoDB
+^^^^^^^^
+
+This section is used to configure DynamoDB
+
+:region: Choose the default region
+:validate_checksums: Check checksums returned by DynamoDB
+
+Example::
+
+    [DynamoDB]
+    region = us-west-2
+    validate_checksums = True
diff --git a/docs/source/cloudsearch_tut.rst b/docs/source/cloudsearch_tut.rst
index 6916eac..f29bcca 100644
--- a/docs/source/cloudsearch_tut.rst
+++ b/docs/source/cloudsearch_tut.rst
@@ -9,38 +9,275 @@
 
 .. _Cloudsearch: http://aws.amazon.com/cloudsearch/
 
+Creating a Connection
+---------------------
+The first step in accessing CloudSearch is to create a connection to the service.
+
+The recommended method of doing this is as follows::
+
+    >>> import boto.cloudsearch
+    >>> conn = boto.cloudsearch.connect_to_region("us-west-2",
+    ...             aws_access_key_id='<aws access key>',
+    ...             aws_secret_access_key='<aws secret key>')
+
+At this point, the variable conn will point to a CloudSearch connection object
+in the us-west-2 region. Currently, this is the only region which has the
+CloudSearch service. In this example, the AWS access key and AWS secret key are
+passed in to the method explicitly. Alternatively, you can set the environment
+variables:
+
+* `AWS_ACCESS_KEY_ID` - Your AWS Access Key ID
+* `AWS_SECRET_ACCESS_KEY` - Your AWS Secret Access Key
+
+and then simply call::
+
+   >>> import boto.cloudsearch
+   >>> conn = boto.cloudsearch.connect_to_region("us-west-2")
+
+In either case, conn will point to the Connection object which we will use
+throughout the remainder of this tutorial.
+
 Creating a Domain
 -----------------
 
-    >>> import boto
+Once you have a connection established with the CloudSearch service, you will
+want to create a domain. A domain encapsulates the data that you wish to index,
+as well as indexes and metadata relating to it::
+
+    >>> from boto.cloudsearch.domain import Domain
+    >>> domain = Domain(conn, conn.create_domain('demo'))
+
+This domain can be used to control access policies, indexes, and the actual
+document service, which you will use to index and search.
+
+Setting access policies
+-----------------------
+
+Before you can connect to a document service, you need to set the correct
+access properties.  For example, if you were connecting from 192.168.1.0, you
+could give yourself access as follows::
 
     >>> our_ip = '192.168.1.0'
 
-    >>> conn = boto.connect_cloudsearch()
-    >>> domain = conn.create_domain('demo')
-
     >>> # Allow our IP address to access the document and search services
     >>> policy = domain.get_access_policies()
     >>> policy.allow_search_ip(our_ip)
     >>> policy.allow_doc_ip(our_ip)
 
+You can use the :py:meth:`allow_search_ip
+<boto.cloudsearch.optionstatus.ServicePoliciesStatus.allow_search_ip>` and
+:py:meth:`allow_doc_ip <boto.cloudsearch.optionstatus.ServicePoliciesStatus.allow_doc_ip>`
+methods to give different CIDR blocks access to searching and the document
+service respectively.
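+
+For example, to open the search service to a whole subnet while restricting
+the document service to a single address (the addresses here are purely
+illustrative)::
+
+    >>> policy.allow_search_ip('192.168.1.0/24')
+    >>> policy.allow_doc_ip('192.168.1.5')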
+
+Creating index fields
+---------------------
+
+Each domain can have up to twenty index fields which are indexed by the
+CloudSearch service. For each index field, you will need to specify whether
+it's a text or integer field, as well as, optionally, a default value::
+
     >>> # Create an 'text' index field called 'username'
     >>> uname_field = domain.create_index_field('username', 'text')
-    
-    >>> # But it would be neat to drill down into different countries    
-    >>> loc_field = domain.create_index_field('location', 'text', facet=True)
-    
-    >>> # Epoch time of when the user last did something
-    >>> time_field = domain.create_index_field('last_activity', 'uint', default=0)
-    
-    >>> follower_field = domain.create_index_field('follower_count', 'uint', default=0)
 
-    >>> domain.create_rank_expression('recently_active', 'last_activity')  # We'll want to be able to just show the most recently active users
-    
-    >>> domain.create_rank_expression('activish', 'text_relevance + ((follower_count/(time() - last_activity))*1000)')  # Let's get trickier and combine text relevance with a really dynamic expression
+    >>> # Epoch time of when the user last did something
+    >>> time_field = domain.create_index_field('last_activity',
+    ...                                        'uint',
+    ...                                        default=0)
+
+It is also possible to mark an index field as a facet. Doing so allows a search
+query to return categories into which results can be grouped, or to create
+drill-down categories::
+
+    >>> # But it would be neat to drill down into different countries
+    >>> loc_field = domain.create_index_field('location', 'text', facet=True)
+
+Finally, you can also mark a snippet of text as being returnable directly in
+your search results by using the ``result`` option::
+
+    >>> # Directly insert user snippets in our results
+    >>> snippet_field = domain.create_index_field('snippet', 'text', result=True)
+
+You can add up to 20 index fields in this manner::
+
+    >>> follower_field = domain.create_index_field('follower_count',
+    ...                                            'uint',
+    ...                                            default=0)
+
+Adding Documents to the Index
+-----------------------------
+
+Now, we can add some documents to our new search domain. First, you will need a
+document service object through which queries are sent::
+
+    >>> doc_service = domain.get_document_service()
+
+For this example, we will use a pre-populated list of sample content for our
+import. You would normally pull such data from your database or another
+document store::
+
+    >>> users = [
+        {
+            'id': 1,
+            'username': 'dan',
+            'last_activity': 1334252740,
+            'follower_count': 20,
+            'location': 'USA',
+            'snippet': 'Dan likes watching sunsets and rock climbing',
+        },
+        {
+            'id': 2,
+            'username': 'dankosaur',
+            'last_activity': 1334252904,
+            'follower_count': 1,
+            'location': 'UK',
+            'snippet': 'Likes to dress up as a dinosaur.',
+        },
+        {
+            'id': 3,
+            'username': 'danielle',
+            'last_activity': 1334252969,
+            'follower_count': 100,
+            'location': 'DE',
+            'snippet': 'Just moved to Germany!'
+        },
+        {
+            'id': 4,
+            'username': 'daniella',
+            'last_activity': 1334253279,
+            'follower_count': 7,
+            'location': 'USA',
+            'snippet': 'Just like Dan, I like to watch a good sunset, but heights scare me.',
+        }
+    ]
+
+When adding documents to our document service, we will batch them together. You
+can schedule a document to be added by using the :py:meth:`add
+<boto.cloudsearch.document.DocumentServiceConnection.add>` method. Whenever you are adding a
+document, you must provide a unique ID, a version ID, and the actual document
+to be indexed. In this case, we are using the user ID as our unique ID. The
+version ID is used to determine which is the latest version of an object to be
+indexed. If you wish to update a document, you must use a higher version ID. In
+this case, we are using the time of the user's last activity as a version
+number::
+
+    >>> for user in users:
+    >>>     doc_service.add(user['id'], user['last_activity'], user)
+
+When you are ready to send the batched request to the document service, you can
+do with the :py:meth:`commit
+<boto.cloudsearch.document.DocumentServiceConnection.commit>` method. Note that
+cloudsearch will charge per 1000 batch uploads. Each batch upload must be under
+5MB::
+
+    >>> result = doc_service.commit()
+
+The result is an instance of :py:class:`CommitResponse
+<boto.cloudsearch.document.CommitResponse>` which will make the plain
+dictionary response a nice object (ie result.adds, result.deletes) and raise an
+exception for us if all of our documents weren't actually committed.
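+
+For example, assuming the four sample users above were all added successfully,
+the response can be inspected like this::
+
+    >>> result.adds
+    4
+    >>> result.deletes
+    0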
+
+If you wish to reuse the same document service connection after a successful
+commit, you must call :py:meth:`clear_sdf
+<boto.cloudsearch.document.DocumentServiceConnection.clear_sdf>` so that its
+internal cache is cleared.
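+
+For example, a minimal sketch of reusing the connection for a second batch::
+
+    >>> doc_service.clear_sdf()
+    >>> # ...then add() and commit() the next batch as shown above.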
+
+Searching Documents
+-------------------
+
+Now, let's try performing a search. First, we will need a
+SearchServiceConnection::
+
+    >>> search_service = domain.get_search_service()
+
+A standard search will return documents which contain the exact words being
+searched for::
+
+    >>> results = search_service.search(q="dan")
+    >>> results.hits
+    2
+    >>> map(lambda x: x['id'], results)
+    [u'1', u'4']
+
+The standard search does not look at word order::
+
+    >>> results = search_service.search(q="dinosaur dress")
+    >>> results.hits
+    1
+    >>> map(lambda x: x['id'], results)
+    [u'2']
+
+It's also possible to do more complex queries using the bq argument (Boolean
+Query). When you are using bq, your search terms must be enclosed in single
+quotes::
+
+    >>> results = search_service.search(bq="'dan'")
+    >>> results.hits
+    2
+    >>> map(lambda x: x['id'], results)
+    [u'1', u'4']
+
+When you are using boolean queries, it's also possible to use wildcards to
+extend your search to all words which start with your search terms::
+
+    >>> results = search_service.search(bq="'dan*'")
+    >>> results.hits
+    4
+    >>> map(lambda x: x['id'], results)
+    [u'1', u'2', u'3', u'4']
+
+The boolean query also allows you to create more complex queries. You can OR
+terms together using "|", AND terms together using "+" or a space, and you can
+remove words from the query using the "-" operator::
+
+    >>> results = search_service.search(bq="'watched|moved'")
+    >>> results.hits
+    2
+    >>> map(lambda x: x['id'], results)
+    [u'3', u'4']
+
+By default, the search will return 10 results, but it is possible to adjust this
+by using the size argument as follows::
+
+    >>> results = search_service.search(bq="'dan*'", size=2)
+    >>> results.hits
+    4
+    >>> map(lambda x: x['id'], results)
+    [u'1', u'2']
+
+It is also possible to offset the start of the search by using the start
+argument as follows::
+
+    >>> results = search_service.search(bq="'dan*'", start=2)
+    >>> results.hits
+    4
+    >>> map(lambda x: x['id'], results)
+    [u'3', u'4']
+
+
+Ordering search results and rank expressions
+--------------------------------------------
+
+If your search query is going to return many results, it is good to be able to
+sort them. You can order your search results by using the rank argument. You are
+able to sort on any fields which have the results option turned on::
+
+    >>> results = search_service.search(bq="'dan*'", rank=['-follower_count'])
+
+You can also create your own rank expressions to sort your results according to
+other criteria, such as showing most recently active user, or combining the
+recency score with the text_relevance::
+
+    >>> domain.create_rank_expression('recently_active', 'last_activity')
+
+    >>> domain.create_rank_expression('activish',
+    ...   'text_relevance + ((follower_count/(time() - last_activity))*1000)')
+
+    >>> results = search_service.search(bq="'dan*'", rank=['-recently_active'])
 
 Viewing and Adjusting Stemming for a Domain
---------------------------------------------
+-------------------------------------------
 
 A stemming dictionary maps related words to a common stem. A stem is
 typically the root or base word from which variants are derived. For
@@ -53,7 +290,7 @@
 the request matches documents that contain run as well as running.
 
 To get the current stemming dictionary defined for a domain, use the
-``get_stemming`` method of the Domain object.
+:py:meth:`get_stemming <boto.cloudsearch.domain.Domain.get_stemming>` method::
 
     >>> stems = domain.get_stemming()
     >>> stems
@@ -62,7 +299,7 @@
 
 This returns a dictionary object that can be manipulated directly to
 add additional stems for your search domain by adding pairs of term:stem
-to the stems dictionary.
+to the stems dictionary::
 
     >>> stems['stems']['running'] = 'run'
     >>> stems['stems']['ran'] = 'run'
@@ -71,12 +308,12 @@
     >>>
 
 This has changed the value locally.  To update the information in
-Amazon CloudSearch, you need to save the data.
+Amazon CloudSearch, you need to save the data::
 
     >>> stems.save()
 
 You can also access certain CloudSearch-specific attributes related to
-the stemming dictionary defined for your domain.
+the stemming dictionary defined for your domain::
 
     >>> stems.status
     u'RequiresIndexDocuments'
@@ -101,7 +338,7 @@
 matches.
 
 To view the stopwords currently defined for your domain, use the
-``get_stopwords`` method of the Domain object.
+:py:meth:`get_stopwords <boto.cloudsearch.domain.Domain.get_stopwords>` method::
 
     >>> stopwords = domain.get_stopwords()
     >>> stopwords
@@ -124,17 +361,18 @@
      u'the',
      u'to',
      u'was']}
-     >>>
+    >>>
 
 You can add additional stopwords by simply appending the values to the
-list.
+list::
 
     >>> stopwords['stopwords'].append('foo')
     >>> stopwords['stopwords'].append('bar')
     >>> stopwords
 
 Similarly, you could remove currently defined stopwords from the list.
-To save the changes, use the ``save`` method.
+To save the changes, use the :py:meth:`save
+<boto.cloudsearch.optionstatus.OptionStatus.save>` method::
 
     >>> stopwords.save()
 
@@ -151,114 +389,44 @@
 indexed term.
 
 If you want two terms to match the same documents, you must define
-them as synonyms of each other. For example:
+them as synonyms of each other. For example::
 
     cat, feline
     feline, cat
 
 To view the synonyms currently defined for your domain, use the
-``get_synonyms`` method of the Domain object.
+:py:meth:`get_synonyms <boto.cloudsearch.domain.Domain.get_synonyms>` method::
 
-    >>> synonyms = domain.get_synsonyms()
+    >>> synonyms = domain.get_synonyms()
     >>> synonyms
     {u'synonyms': {}}
     >>>
 
 You can define new synonyms by adding new term:synonyms entries to the
-synonyms dictionary object.
+synonyms dictionary object::
 
     >>> synonyms['synonyms']['cat'] = ['feline', 'kitten']
     >>> synonyms['synonyms']['dog'] = ['canine', 'puppy']
 
-To save the changes, use the ``save`` method.
+To save the changes, use the :py:meth:`save
+<boto.cloudsearch.optionstatus.OptionStatus.save>` method::
 
     >>> synonyms.save()
 
 The synonyms object has similar attributes defined above for stemming
 that provide additional information about the stopwords in your domain.
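+
+For example, mirroring the stemming attributes shown earlier (the output here
+is illustrative)::
+
+    >>> synonyms.status
+    u'RequiresIndexDocuments'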
 
-Adding Documents to the Index
------------------------------
-
-Now, we can add some documents to our new search domain.
-
-    >>> doc_service = domain.get_document_service()
-
-    >>> # Presumably get some users from your db of choice.
-    >>> users = [
-        {
-            'id': 1,
-            'username': 'dan',
-            'last_activity': 1334252740,
-            'follower_count': 20,
-            'location': 'USA'
-        },
-        {
-            'id': 2,
-            'username': 'dankosaur',
-            'last_activity': 1334252904,
-            'follower_count': 1,
-            'location': 'UK'
-        },
-        {
-            'id': 3,
-            'username': 'danielle',
-            'last_activity': 1334252969,
-            'follower_count': 100,
-            'location': 'DE'
-        },
-        {
-            'id': 4,
-            'username': 'daniella',
-            'last_activity': 1334253279,
-            'follower_count': 7,
-            'location': 'USA'
-        }
-    ]
-
-    >>> for user in users:
-    >>>     doc_service.add(user['id'], user['last_activity'], user)
-
-    >>> result = doc_service.commit()  # Actually post the SDF to the document service
-
-The result is an instance of `cloudsearch.CommitResponse` which will
-makes the plain dictionary response a nice object (ie result.adds,
-result.deletes) and raise an exception for us if all of our documents
-weren't actually committed.
-
-
-Searching Documents
--------------------
-
-Now, let's try performing a search.
-
-    >>> # Get an instance of cloudsearch.SearchServiceConnection
-    >>> search_service = domain.get_search_service()
-
-    >>> # Horray wildcard search
-    >>> query = "username:'dan*'"
-
-
-    >>> results = search_service.search(bq=query, rank=['-recently_active'], start=0, size=10)
-    
-    >>> # Results will give us back a nice cloudsearch.SearchResults object that looks as
-    >>> # close as possible to pysolr.Results
-
-    >>> print "Got %s results back." % results.hits
-    >>> print "User ids are:"
-    >>> for result in results:
-    >>>     print result['id']
-
-
 Deleting Documents
 ------------------
 
+It is also possible to delete documents::
+
     >>> import time
     >>> from datetime import datetime
 
     >>> doc_service = domain.get_document_service()
 
     >>> # Again we'll cheat and use the current epoch time as our version number
-     
+
     >>> doc_service.delete(4, int(time.mktime(datetime.utcnow().timetuple())))
     >>> service.commit()
diff --git a/docs/source/cloudwatch_tut.rst b/docs/source/cloudwatch_tut.rst
index 5639c04..c930209 100644
--- a/docs/source/cloudwatch_tut.rst
+++ b/docs/source/cloudwatch_tut.rst
@@ -12,8 +12,8 @@
 It takes a while for the monitoring data to start accumulating but once
 it does, you can do this::
 
-    >>> import boto
-    >>> c = boto.connect_cloudwatch()
+    >>> import boto.ec2.cloudwatch
+    >>> c = boto.ec2.cloudwatch.connect_to_region('us-west-2')
     >>> metrics = c.list_metrics()
     >>> metrics
     [Metric:NetworkIn,
@@ -113,4 +113,4 @@
      u'Timestamp': u'2009-05-21T19:55:00Z',
      u'Unit': u'Percent'}
 
-My server obviously isn't very busy right now!
\ No newline at end of file
+My server obviously isn't very busy right now!
diff --git a/docs/source/conf.py b/docs/source/conf.py
index fa1d0c2..4fbbf3f 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -2,9 +2,13 @@
 
 import os
 import boto
+import sys
 
-extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.todo']
-autoclass_content="both"
+sys.path.append(os.path.join(os.path.dirname(__file__), 'extensions'))
+
+extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.todo',
+              'githublinks']
+autoclass_content = "both"
 templates_path = ['_templates']
 source_suffix = '.rst'
 master_doc = 'index'
@@ -22,6 +26,7 @@
    u'Mitch Garnaat', 'manual'),
 ]
 intersphinx_mapping = {'http://docs.python.org/': None}
+github_project_url = 'https://github.com/boto/boto/'
 
 try:
     release = os.environ.get('SVN_REVISION', 'HEAD')
diff --git a/docs/source/dynamodb2_tut.rst b/docs/source/dynamodb2_tut.rst
new file mode 100644
index 0000000..52f8018
--- /dev/null
+++ b/docs/source/dynamodb2_tut.rst
@@ -0,0 +1,555 @@
+.. _dynamodb2_tut:
+
+===============================================
+An Introduction to boto's DynamoDB v2 interface
+===============================================
+
+This tutorial focuses on the boto interface to AWS' DynamoDB_ v2. This tutorial
+assumes that you have boto already downloaded and installed.
+
+.. _DynamoDB: http://aws.amazon.com/dynamodb/
+
+.. warning::
+
+    This tutorial covers the **SECOND** major release of DynamoDB (including
+    local secondary index support). The documentation for the original
+    version of DynamoDB (& boto's support for it) is at
+    :doc:`DynamoDB v1 <dynamodb_tut>`.
+
+The v2 DynamoDB API has both a high-level & low-level component. The low-level
+API (contained primarily within ``boto.dynamodb2.layer1``) provides an
+interface that closely matches what is provided by the API. It supports
+all options available to the service.
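+
+A minimal sketch of the low-level interface (assuming credentials are
+configured as described in :doc:`boto_config_tut`; the output shown is
+illustrative)::
+
+    >>> from boto.dynamodb2.layer1 import DynamoDBConnection
+    >>> conn = DynamoDBConnection()
+    >>> conn.list_tables()
+    {u'TableNames': [u'users']}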
+
+The high-level API attempts to make interacting with the service more natural
+from Python. It supports most of the featureset.
+
+
+The High-Level API
+==================
+
+Most of the interaction centers around a single object, the ``Table``. Tables
+act as a way to effectively namespace your records. If you're familiar with
+database tables from an RDBMS, DynamoDB tables will feel somewhat similar.
+
+
+Creating a New Table
+--------------------
+
+To create a new table, you need to call ``Table.create`` & specify (at a
+minimum) both the table's name as well as the key schema for the table.
+
+Since both the key schema and local secondary indexes can not be
+modified after the table is created, you'll need to plan ahead of time how you
+think the table will be used. Both the keys & indexes are also used for
+querying, so you'll want to represent the data you'll need when querying
+there as well.
+
+For the schema, you can either have a single ``HashKey`` or a combined
+``HashKey+RangeKey``. The ``HashKey`` by itself should be thought of as a
+unique identifier (for instance, like a username or UUID). It is typically
+looked up as an exact value.
+A ``HashKey+RangeKey`` combination is slightly different, in that the
+``HashKey`` acts like a namespace/prefix & the ``RangeKey`` acts as a value
+that can be referred to by a sorted range of values.
+
+For the local secondary indexes, you can choose from an ``AllIndex``, a
+``KeysOnlyIndex`` or an ``IncludeIndex`` field. Each builds an index of values
+that can be queried on. The ``AllIndex`` duplicates all values onto the index
+(to prevent additional reads to fetch the data). The ``KeysOnlyIndex``
+duplicates only the keys from the schema onto the index. The ``IncludeIndex``
+lets you specify a list of fieldnames to duplicate over.
+
+Simple example::
+
+    >>> from boto.dynamodb2.fields import HashKey
+    >>> from boto.dynamodb2.table import Table
+
+    # Uses your ``aws_access_key_id`` & ``aws_secret_access_key`` from either a
+    # config file or environment variable & the default region.
+    >>> users = Table.create('users', schema=[
+    ...     HashKey('username'),
+    ... ])
+
+A full example::
+
+    >>> from boto.dynamodb2.fields import HashKey, RangeKey, AllIndex
+    >>> from boto.dynamodb2.layer1 import DynamoDBConnection
+    >>> from boto.dynamodb2.table import Table
+    >>> from boto.dynamodb2.types import NUMBER
+
+    >>> users = Table.create('users', schema=[
+    ...     HashKey('account_type', data_type=NUMBER),
+    ...     RangeKey('last_name'),
+    ... ], throughput={
+    ...     'read': 5,
+    ...     'write': 15,
+    ... }, indexes=[
+    ...     AllIndex('EverythingIndex', parts=[
+    ...         HashKey('account_type', data_type=NUMBER),
+    ...     ])
+    ... ],
+    ... # If you need to specify custom parameters like keys or region info...
+    ... connection=DynamoDBConnection(
+    ...     aws_access_key_id='key',
+    ...     aws_secret_access_key='key',
+    ...     region='us-west-2'
+    ... ))
+
+
+Using an Existing Table
+-----------------------
+
+Once a table has been created, using it is relatively simple. You can either
+specify just the ``table_name`` (allowing the object to lazily do an additional
+call to get details about itself if needed) or provide the ``schema/indexes``
+again (same as what was used with ``Table.create``) to avoid extra overhead.
+
+Lazy example::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> users = Table('users')
+
+Efficient example::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> users = Table('users', schema=[
+    ...     HashKey('account_type', data_type=NUMBER),
+    ...     RangeKey('last_name'),
+    ... ], indexes=[
+    ...     AllIndex('EverythingIndex', parts=[
+    ...         HashKey('account_type', data_type=NUMBER),
+    ...     ])
+    ... ])
+
+
+Creating a New Item
+-------------------
+
+Once you have a ``Table`` instance, you can add new items to the table. There
+are two ways to do this.
+
+The first is to use the ``Table.put_item`` method. Simply hand it a dictionary
+of data & it will create the item on the server side. This dictionary should
+be relatively flat (as you can nest in other dictionaries) & **must** contain
+the keys used in the ``schema``.
+
+Example::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> users = Table('users')
+
+    # Create the new user.
+    >>> users.put_item(data={
+    ...     'username': 'johndoe',
+    ...     'first_name': 'John',
+    ...     'last_name': 'Doe',
+    ... })
+    True
+
+The alternative is to manually construct an ``Item`` instance & tell it to
+``save`` itself. This is useful if the object will be around for a while & you
+don't want to re-fetch it.
+
+Example::
+
+    >>> from boto.dynamodb2.items import Item
+    >>> from boto.dynamodb2.table import Table
+    >>> users = Table('users')
+
+    # WARNING - This doesn't save it yet!
+    >>> johndoe = Item(users, data={
+    ...     'username': 'johndoe',
+    ...     'first_name': 'John',
+    ...     'last_name': 'Doe',
+    ... })
+    # The data now gets persisted to the server.
+    >>> johndoe.save()
+    True
+
+
+Getting an Item & Accessing Data
+--------------------------------
+
+With data now in DynamoDB, if you know the key of the item, you can fetch it
+back out. Specify the key value(s) as kwargs to ``Table.get_item``.
+
+Example::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> users = Table('users')
+
+    >>> johndoe = users.get_item(username='johndoe')
+
+Once you have an ``Item`` instance, it presents a dictionary-like interface to
+the data.::
+
+    >>> johndoe = users.get_item(username='johndoe')
+
+    # Read a field out.
+    >>> johndoe['first_name']
+    'John'
+
+    # Change a field (DOESN'T SAVE YET!).
+    >>> johndoe['first_name'] = 'Johann'
+
+    # Delete data from it (DOESN'T SAVE YET!).
+    >>> del johndoe['last_name']
+
+
+Updating an Item
+----------------
+
+Just creating new items or changing only the in-memory version of the ``Item``
+isn't particularly effective. To persist the changes to DynamoDB, you have
+three choices.
+
+The first is sending all the data with the expectation nothing has changed
+since you read the data. DynamoDB will verify the data is in the original
+state and, if so, will send all of the item's data. If that expectation
+fails, the call will fail::
+
+    >>> johndoe = users.get_item(username='johndoe')
+    >>> johndoe['first_name'] = 'Johann'
+    >>> johndoe['whatever'] = "man, that's just like your opinion"
+    >>> del johndoe['last_name']
+
+    # Affects all fields, even the ones not changed locally.
+    >>> johndoe.save()
+    True
+
+The second is a full overwrite. If you can be confident your version of the
+data is the most correct, you can force an overwrite of the data.::
+
+    >>> johndoe = users.get_item(username='johndoe')
+    >>> johndoe['first_name'] = 'Johann'
+    >>> johndoe['whatever'] = "man, that's just like your opinion"
+    >>> del johndoe['last_name']
+
+    # Specify ``overwrite=True`` to fully replace the data.
+    >>> johndoe.save(overwrite=True)
+    True
+
+The last is a partial update. If you've only modified certain fields, you
+can send a partial update that only writes those fields, allowing other
+(potentially changed) fields to go untouched.::
+
+    >>> johndoe = users.get_item(username='johndoe')
+    >>> johndoe['first_name'] = 'Johann'
+    >>> johndoe['whatever'] = "man, that's just like your opinion"
+    >>> del johndoe['last_name']
+
+    # Partial update, only sending/affecting the
+    # ``first_name/whatever/last_name`` fields.
+    >>> johndoe.partial_save()
+    True
+
+
+Deleting an Item
+----------------
+
+You can also delete items from the table. You have two choices, depending on
+what data you have present.
+
+If you already have an ``Item`` instance, the easiest approach is just to call
+``Item.delete``.::
+
+    >>> johndoe.delete()
+    True
+
+If you don't have an ``Item`` instance & you don't want to incur the
+``Table.get_item`` call to get it, you can call the ``Table.delete_item`` method.::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> users = Table('users')
+
+    >>> users.delete_item(username='johndoe')
+    True
+
+
+Batch Writing
+-------------
+
+If you're loading a lot of data at a time, making use of batch writing can
+both speed up the process & reduce the number of write requests made to the
+service.
+
+Batch writing involves wrapping the calls you want batched in a context manager.
+The context manager imitates the ``Table.put_item`` & ``Table.delete_item``
+APIs. Getting & using the context manager looks like::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> users = Table('users')
+
+    >>> with users.batch_write() as batch:
+    ...     batch.put_item(data={
+    ...         'username': 'anotherdoe',
+    ...         'first_name': 'Another',
+    ...         'last_name': 'Doe',
+    ...         'date_joined': int(time.time()),
+    ...     })
+    ...     batch.put_item(data={
+    ...         'username': 'alice',
+    ...         'first_name': 'Alice',
+    ...         'date_joined': int(time.time()),
+    ...     })
+    ...     batch.delete_item(username='jane')
+
+However, there are some limitations on what you can do within the context
+manager.
+
+* It can't read data at all, nor can it batch any other operations.
+* You can't put & delete the same data within a batch request.
+
+.. note::
+
+    Additionally, the context manager can only batch 25 items at a time for a
+    request (this is a DynamoDB limitation). It is handled for you so you can
+    keep writing additional items, but you should be aware that 100 ``put_item``
+    calls is 4 batch requests, not 1.
+
+
+Querying
+--------
+
+Manually fetching each item by itself isn't tenable for large datasets.
+To cope with fetching many records, you can either perform a standard query,
+query via a local secondary index or scan the entire table.
+
+A standard query typically gets run against a hash+range key combination.
+Filter parameters are passed as kwargs & use a ``__`` to separate the fieldname
+from the operator being used to filter the value.
+
+In terms of querying, our original schema is less than optimal. For the
+following examples, we'll be using the following table setup::
+
+    >>> users = Table.create('users', schema=[
+    ...     HashKey('account_type'),
+    ...     RangeKey('last_name'),
+    ... ], indexes=[
+    ...     AllIndex('DateJoinedIndex', parts=[
+    ...         HashKey('account_type'),
+    ...         RangeKey('date_joined', data_type=NUMBER),
+    ...     ]),
+    ... ])
+
+When executing the query, you get an iterable back that contains your results.
+These results may be spread over multiple requests as DynamoDB paginates them.
+This is done transparently, but you should be aware it may take more than one
+request.
+
+To run a query for last names starting with the letter "D"::
+
+    >>> names_with_d = users.query(
+    ...     account_type__eq='standard_user',
+    ...     last_name__beginswith='D'
+    ... )
+
+    >>> for user in names_with_d:
+    ...     print user['first_name']
+    'Bob'
+    'Jane'
+    'John'
+
+You can also reverse results (``reverse=True``) as well as limiting them
+(``limit=2``)::
+
+    >>> rev_with_d = users.query(
+    ...     account_type__eq='standard_user',
+    ...     last_name__beginswith='D',
+    ...     reverse=True,
+    ...     limit=2
+    ... )
+
+    >>> for user in rev_with_d:
+    ...     print user['first_name']
+    'John'
+    'Jane'
+
+You can also run queries against the local secondary indexes. Simply provide
+the index name (``index='DateJoinedIndex'``) & filter parameters against its
+fields::
+
+    # Users within the last hour.
+    >>> recent = users.query(
+    ...     account_type__eq='standard_user',
+    ...     date_joined__gte=time.time() - (60 * 60),
+    ...     index='DateJoinedIndex'
+    ... )
+
+    >>> for user in recent:
+    ...     print user['first_name']
+    'Alice'
+    'Jane'
+
+Finally, if you need to query on data that's not in either a key or in an
+index, you can run a ``Table.scan`` across the whole table, which accepts a
+similar but expanded set of filters. If you're familiar with the Map/Reduce
+concept, this is akin to what DynamoDB does.
+
+.. warning::
+
+    Scans are consistent & run over the entire table, so relatively speaking,
+    they're more expensive than plain queries or queries against an LSI.
+
+An example scan of all records in the table looks like::
+
+    >>> all_users = users.scan()
+
+Filtering a scan looks like::
+
+    >>> owners_with_emails = users.scan(
+    ...     is_owner__eq=1,
+    ...     email__null=False,
+    ... )
+
+    >>> for user in owners_with_emails:
+    ...     print user['first_name']
+    'George'
+    'John'
+
+
+Parallel Scan
+-------------
+
+DynamoDB also includes a feature called "Parallel Scan", which allows you
+to make use of **extra** read capacity to divide up your result set & scan
+an entire table faster.
+
+This does require extra code on the user's part & you should ensure that
+you need the speed boost, have enough data to justify it and have the extra
+capacity to read it without impacting other queries/scans.
+
+To run it, you should pick the ``total_segments`` to use, which is an integer
+representing the number of temporary partitions you'd divide your table into.
+You then need to spin up a thread/process for each one, giving each
+thread/process a ``segment``, which is a zero-based integer of the segment
+you'd like to scan.
+
+An example of using parallel scan to send out email to all users might look
+something like::
+
+    #!/usr/bin/env python
+    import threading
+
+    import boto.ses
+    import boto.dynamodb2
+    from boto.dynamodb2.table import Table
+
+
+    AWS_ACCESS_KEY_ID = '<YOUR_AWS_KEY_ID>'
+    AWS_SECRET_ACCESS_KEY = '<YOUR_AWS_SECRET_KEY>'
+    APPROVED_EMAIL = 'some@address.com'
+
+
+    def send_email(email):
+        # Using Amazon's Simple Email Service, send an email to a given
+        # email address. You must already have an email you've verified with
+        # AWS before this will work.
+        conn = boto.ses.connect_to_region(
+            'us-east-1',
+            aws_access_key_id=AWS_ACCESS_KEY_ID,
+            aws_secret_access_key=AWS_SECRET_ACCESS_KEY
+        )
+        conn.send_email(
+            APPROVED_EMAIL,
+            "[OurSite] New feature alert!",
+            "We've got some exciting news! We added a new feature to...",
+            [email]
+        )
+
+
+    def process_segment(segment=0, total_segments=10):
+        # This method/function is executed in each thread, each getting its
+        # own segment to process through.
+        conn = boto.dynamodb2.connect_to_region(
+            'us-east-1',
+            aws_access_key_id=AWS_ACCESS_KEY_ID,
+            aws_secret_access_key=AWS_SECRET_ACCESS_KEY
+        )
+        table = Table('users', connection=conn)
+
+        # We pass in the segment & total_segments to scan here.
+        for user in table.scan(segment=segment, total_segments=total_segments):
+            send_email(user['email'])
+
+
+    def send_all_emails():
+        pool = []
+        # We're choosing to divide the table in 3, then...
+        pool_size = 3
+
+        # ...spinning up a thread for each segment.
+        for i in range(pool_size):
+            worker = threading.Thread(
+                target=process_segment,
+                kwargs={
+                    'segment': i,
+                    'total_segments': pool_size,
+                }
+            )
+            pool.append(worker)
+            # We start them to let them start scanning & consuming their
+            # assigned segment.
+            worker.start()
+
+        # Finally, we wait for each to finish.
+        for thread in pool:
+            thread.join()
+
+
+    if __name__ == '__main__':
+        send_all_emails()
+
+
+Batch Reading
+-------------
+
+Similar to batch writing, batch reading can also help reduce the number of
+API requests necessary to access a large number of items. The
+``Table.batch_get`` method takes a list (or any sliceable collection) of keys
+& fetches all of them, presented as an iterator interface.
+
+This is done lazily, so if you never iterate over the results, no requests are
+executed. Additionally, if you only iterate over part of the set, the minimum
+number of calls is made to fetch those results (typically max 100 per
+response).
+
+Example::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> users = Table('users')
+
+    # No request yet.
+    >>> many_users = users.batch_get(keys=[
+        {'username': 'alice'},
+        {'username': 'bob'},
+        {'username': 'fred'},
+        {'username': 'jane'},
+        {'username': 'johndoe'},
+    ])
+
+    # Now the request is performed, requesting all five in one request.
+    >>> for user in many_users:
+    ...     print user['first_name']
+    'Alice'
+    'Bobby'
+    'Fred'
+    'Jane'
+    'John'
+
+
+Deleting a Table
+----------------
+
+Deleting a table is a simple exercise. When you no longer need a table, simply
+run::
+
+    >>> users.delete()
+
+
+Next Steps
+----------
+
+You can find additional information about other calls & parameter options
+in the :doc:`API docs <ref/dynamodb2>`.
diff --git a/docs/source/dynamodb_tut.rst b/docs/source/dynamodb_tut.rst
index 3e64122..7479058 100644
--- a/docs/source/dynamodb_tut.rst
+++ b/docs/source/dynamodb_tut.rst
@@ -1,240 +1,348 @@
-.. dynamodb_tut:
-
-============================================
-An Introduction to boto's DynamoDB interface
-============================================
-
-This tutorial focuses on the boto interface to AWS' DynamoDB_. This tutorial
-assumes that you have boto already downloaded and installed.
-
-.. _DynamoDB: http://aws.amazon.com/dynamodb/
-
-Creating a Connection
----------------------
-
-The first step in accessing DynamoDB is to create a connection to the service.
-To do so, the most straight forward way is the following::
-
-    >>> import boto
-    >>> conn = boto.connect_dynamodb(
-            aws_access_key_id='<YOUR_AWS_KEY_ID>',
-            aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')
-    >>> conn
-    <boto.dynamodb.layer2.Layer2 object at 0x3fb3090>
-
-Bear in mind that if you have your credentials in boto config in your home
-directory, the two keyword arguments in the call above are not needed. More
-details on configuration can be found in :doc:`boto_config_tut`.
-
-.. note:: At this
-    time, Amazon DynamoDB is available only in the US-EAST-1 region. The
-    ``connect_dynamodb`` method automatically connect to that region.
-
-The :py:func:`boto.connect_dynamodb` functions returns a
-:py:class:`boto.dynamodb.layer2.Layer2` instance, which is a high-level API
-for working with DynamoDB. Layer2 is a set of abstractions that sit atop
-the lower level :py:class:`boto.dynamodb.layer1.Layer1` API, which closely
-mirrors the Amazon DynamoDB API. For the purpose of this tutorial, we'll
-just be covering Layer2.
-
-Listing Tables
---------------
-
-Now that we have a DynamoDB connection object, we can then query for a list of
-existing tables in that region::
-
-    >>> conn.list_tables()
-    ['test-table', 'another-table']
-
-Creating Tables
----------------
-
-DynamoDB tables are created with the
-:py:meth:`Layer2.create_table <boto.dynamodb.layer2.Layer2.create_table>`
-method. While DynamoDB's items (a rough equivalent to a relational DB's row)
-don't have a fixed schema, you do need to create a schema for the table's
-hash key element, and the optional range key element. This is explained in
-greater detail in DynamoDB's `Data Model`_ documentation.
-
-We'll start by defining a schema that has a hash key and a range key that
-are both keys::
-
-    >>> message_table_schema = conn.create_schema(
-            hash_key_name='forum_name',
-            hash_key_proto_value='S',
-            range_key_name='subject',
-            range_key_proto_value='S'
-        )
-
-The next few things to determine are table name and read/write throughput. We'll
-defer explaining throughput to the DynamoDB's `Provisioned Throughput`_ docs.
-
-We're now ready to create the table::
-
-    >>> table = conn.create_table(
-            name='messages',
-            schema=message_table_schema,
-            read_units=10,
-            write_units=10
-        )
-    >>> table
-    Table(messages)
-
-This returns a :py:class:`boto.dynamodb.table.Table` instance, which provides
-simple ways to create (put), update, and delete items.
-
-.. _Data Model: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/DataModel.html
-.. _Provisioned Throughput: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html
-
-Getting a Table
----------------
-
-To retrieve an existing table, use
-:py:meth:`Layer2.get_table <boto.dynamodb.layer2.Layer2.get_table>`::
-
-    >>> conn.list_tables()
-    ['test-table', 'another-table', 'messages']
-    >>> table = conn.get_table('messages')
-    >>> table
-    Table(messages)
-
-:py:meth:`Layer2.get_table <boto.dynamodb.layer2.Layer2.get_table>`, like
-:py:meth:`Layer2.create_table <boto.dynamodb.layer2.Layer2.create_table>`,
-returns a :py:class:`boto.dynamodb.table.Table` instance.
-
-Describing Tables
------------------
-
-To get a complete description of a table, use
-:py:meth:`Layer2.describe_table <boto.dynamodb.layer2.Layer2.describe_table>`::
-
-    >>> conn.list_tables()
-    ['test-table', 'another-table', 'messages']
-    >>> conn.describe_table('messages')
-    {
-        'Table': {
-            'CreationDateTime': 1327117581.624,
-            'ItemCount': 0,
-            'KeySchema': {
-                'HashKeyElement': {
-                    'AttributeName': 'forum_name',
-                    'AttributeType': 'S'
-                },
-                'RangeKeyElement': {
-                    'AttributeName': 'subject',
-                    'AttributeType': 'S'
-                }
-            },
-            'ProvisionedThroughput': {
-                'ReadCapacityUnits': 10,
-                'WriteCapacityUnits': 10
-            },
-            'TableName': 'messages',
-            'TableSizeBytes': 0,
-            'TableStatus': 'ACTIVE'
-        }
-    }
-
-Adding Items
-------------
-
-Continuing on with our previously created ``messages`` table, adding an::
-
-    >>> table = conn.get_table('messages')
-    >>> item_data = {
-            'Body': 'http://url_to_lolcat.gif',
-            'SentBy': 'User A',
-            'ReceivedTime': '12/9/2011 11:36:03 PM',
-        }
-    >>> item = table.new_item(
-            # Our hash key is 'forum'
-            hash_key='LOLCat Forum',
-            # Our range key is 'subject'
-            range_key='Check this out!',
-            # This has the
-            attrs=item_data
-        )
-
-The
-:py:meth:`Table.new_item <boto.dynamodb.table.Table.new_item>` method creates
-a new :py:class:`boto.dynamodb.item.Item` instance with your specified
-hash key, range key, and attributes already set.
-:py:class:`Item <boto.dynamodb.item.Item>` is a :py:class:`dict` sub-class,
-meaning you can edit your data as such::
-
-    item['a_new_key'] = 'testing'
-    del item['a_new_key']
-
-After you are happy with the contents of the item, use
-:py:meth:`Item.put <boto.dynamodb.item.Item.put>` to commit it to DynamoDB::
-
-    >>> item.put()
-
-Retrieving Items
-----------------
-
-Now, let's check if it got added correctly. Since DynamoDB works under an
-'eventual consistency' mode, we need to specify that we wish a consistent read,
-as follows::
-
-    >>> table = conn.get_table('messages')
-    >>> item = table.get_item(
-            # Your hash key was 'forum_name'
-            hash_key='LOLCat Forum',
-            # Your range key was 'subject'
-            range_key='Check this out!'
-        )
-    >>> item
-    {
-        # Note that this was your hash key attribute (forum_name)
-        'forum_name': 'LOLCat Forum',
-        # This is your range key attribute (subject)
-        'subject': 'Check this out!'
-        'Body': 'http://url_to_lolcat.gif',
-        'ReceivedTime': '12/9/2011 11:36:03 PM',
-        'SentBy': 'User A',
-    }
-
-Updating Items
---------------
-
-To update an item's attributes, simply retrieve it, modify the value, then
-:py:meth:`Item.put <boto.dynamodb.item.Item.put>` it again::
-
-    >>> table = conn.get_table('messages')
-    >>> item = table.get_item(
-            hash_key='LOLCat Forum',
-            range_key='Check this out!'
-        )
-    >>> item['SentBy'] = 'User B'
-    >>> item.put()
-
-Deleting Items
---------------

-

-To delete items, use the

-:py:meth:`Item.delete <boto.dynamodb.item.Item.delete>` method::

-

-    >>> table = conn.get_table('messages')

-    >>> item = table.get_item(

-            hash_key='LOLCat Forum',

-            range_key='Check this out!'

-        )

-    >>> item.delete()

-

-Deleting Tables

----------------

-

-.. WARNING::

-  Deleting a table will also **permanently** delete all of its contents without prompt. Use carefully.

-

-There are two easy ways to delete a table. Through your top-level

-:py:class:`Layer2 <boto.dynamodb.layer2.Layer2>` object::

-

-    >>> conn.delete_table(table)

-

-Or by getting the table, then using

-:py:meth:`Table.delete <boto.dynamodb.table.Table.delete>`::

-

-    >>> table = conn.get_table('messages')

-    >>> table.delete()

+.. _dynamodb_tut:
+
+============================================
+An Introduction to boto's DynamoDB interface
+============================================
+
+This tutorial focuses on the boto interface to AWS' DynamoDB_. This tutorial
+assumes that you have boto already downloaded and installed.
+
+.. _DynamoDB: http://aws.amazon.com/dynamodb/
+
+.. warning::
+
+    This tutorial covers the **ORIGINAL** release of DynamoDB.
+    It has since been supplanted by a second major version & an
+    updated API to talk to the new version. The documentation for the
+    new version of DynamoDB (& boto's support for it) is at
+    :doc:`DynamoDB v2 <dynamodb2_tut>`.
+
+
+Creating a Connection
+---------------------
+
+The first step in accessing DynamoDB is to create a connection to the service.
+To do so, the most straightforward way is the following::
+
+    >>> import boto.dynamodb
+    >>> conn = boto.dynamodb.connect_to_region(
+            'us-west-2',
+            aws_access_key_id='<YOUR_AWS_KEY_ID>',
+            aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')
+    >>> conn
+    <boto.dynamodb.layer2.Layer2 object at 0x3fb3090>
+
+Bear in mind that if you have your credentials in boto config in your home
+directory, the two keyword arguments in the call above are not needed. More
+details on configuration can be found in :doc:`boto_config_tut`.
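+
+For example, with your credentials already stored in the boto config file,
+the region name alone is enough::
+
+    >>> conn = boto.dynamodb.connect_to_region('us-west-2')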
+
+The :py:func:`boto.dynamodb.connect_to_region` function returns a
+:py:class:`boto.dynamodb.layer2.Layer2` instance, which is a high-level API
+for working with DynamoDB. Layer2 is a set of abstractions that sit atop
+the lower level :py:class:`boto.dynamodb.layer1.Layer1` API, which closely
+mirrors the Amazon DynamoDB API. For the purpose of this tutorial, we'll
+just be covering Layer2.
+
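+Should you ever need the raw API, the ``Layer2`` object holds a
+:py:class:`~boto.dynamodb.layer1.Layer1` instance internally (exposed as
+``conn.layer1`` at the time of writing), so a low-level call looks roughly
+like this::
+
+    >>> conn.layer1.list_tables()
+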
+
+Listing Tables
+--------------
+
+Now that we have a DynamoDB connection object, we can then query for a list of
+existing tables in that region::
+
+    >>> conn.list_tables()
+    ['test-table', 'another-table']
+
+
+Creating Tables
+---------------
+
+DynamoDB tables are created with the
+:py:meth:`Layer2.create_table <boto.dynamodb.layer2.Layer2.create_table>`
+method. While DynamoDB's items (a rough equivalent to a relational DB's row)
+don't have a fixed schema, you do need to create a schema for the table's
+hash key element, and the optional range key element. This is explained in
+greater detail in DynamoDB's `Data Model`_ documentation.
+
+We'll start by defining a schema that has a hash key and a range key that
+are both strings::
+
+    >>> message_table_schema = conn.create_schema(
+            hash_key_name='forum_name',
+            hash_key_proto_value=str,
+            range_key_name='subject',
+            range_key_proto_value=str
+        )
+
+The next few things to determine are table name and read/write throughput. We'll
+defer explaining throughput to DynamoDB's `Provisioned Throughput`_ docs.
+
+We're now ready to create the table::
+
+    >>> table = conn.create_table(
+            name='messages',
+            schema=message_table_schema,
+            read_units=10,
+            write_units=10
+        )
+    >>> table
+    Table(messages)
+
+This returns a :py:class:`boto.dynamodb.table.Table` instance, which provides
+simple ways to create (put), update, and delete items.
+
+
+Getting a Table
+---------------
+
+To retrieve an existing table, use
+:py:meth:`Layer2.get_table <boto.dynamodb.layer2.Layer2.get_table>`::
+
+    >>> conn.list_tables()
+    ['test-table', 'another-table', 'messages']
+    >>> table = conn.get_table('messages')
+    >>> table
+    Table(messages)
+
+:py:meth:`Layer2.get_table <boto.dynamodb.layer2.Layer2.get_table>`, like
+:py:meth:`Layer2.create_table <boto.dynamodb.layer2.Layer2.create_table>`,
+returns a :py:class:`boto.dynamodb.table.Table` instance.
+
+Keep in mind that :py:meth:`Layer2.get_table <boto.dynamodb.layer2.Layer2.get_table>`
+will make an API call to retrieve various attributes of the table including the
+creation time, the read and write capacity, and the table schema.  If you
+already know the schema, you can save an API call and create a
+:py:class:`boto.dynamodb.table.Table` object without making any calls to
+Amazon DynamoDB::
+
+    >>> table = conn.table_from_schema(
+        name='messages',
+        schema=message_table_schema)
+
+If you do this, the following fields will have ``None`` values:
+
+  * create_time
+  * status
+  * read_units
+  * write_units
+
+In addition, the ``item_count`` and ``size_bytes`` will be 0.
+If you create a table object directly from a schema object and
+decide later that you need to retrieve any of these additional
+attributes, you can use the
+:py:meth:`Table.refresh <boto.dynamodb.table.Table.refresh>` method::
+
+    >>> from boto.dynamodb.schema import Schema
+    >>> table = conn.table_from_schema(
+            name='messages',
+            schema=Schema.create(hash_key=('forum_name', 'S'),
+                                 range_key=('subject', 'S')))
+    >>> print table.write_units
+    None
+    >>> # Now we decide we need to know the write_units:
+    >>> table.refresh()
+    >>> print table.write_units
+    10
+
+
+The recommended best practice is to retrieve a table object once and
+use that object for the duration of your application. So, for example,
+instead of this::
+
+    class Application(object):
+        def __init__(self, layer2):
+            self._layer2 = layer2
+
+        def retrieve_item(self, table_name, key):
+            return self._layer2.get_table(table_name).get_item(key)
+
+You can do something like this instead::
+
+    class Application(object):
+        def __init__(self, layer2):
+            self._layer2 = layer2
+            self._tables_by_name = {}
+
+        def retrieve_item(self, table_name, key):
+            table = self._tables_by_name.get(table_name)
+            if table is None:
+                table = self._layer2.get_table(table_name)
+                self._tables_by_name[table_name] = table
+            return table.get_item(key)
+
+
+Describing Tables
+-----------------
+
+To get a complete description of a table, use
+:py:meth:`Layer2.describe_table <boto.dynamodb.layer2.Layer2.describe_table>`::
+
+    >>> conn.list_tables()
+    ['test-table', 'another-table', 'messages']
+    >>> conn.describe_table('messages')
+    {
+        'Table': {
+            'CreationDateTime': 1327117581.624,
+            'ItemCount': 0,
+            'KeySchema': {
+                'HashKeyElement': {
+                    'AttributeName': 'forum_name',
+                    'AttributeType': 'S'
+                },
+                'RangeKeyElement': {
+                    'AttributeName': 'subject',
+                    'AttributeType': 'S'
+                }
+            },
+            'ProvisionedThroughput': {
+                'ReadCapacityUnits': 10,
+                'WriteCapacityUnits': 10
+            },
+            'TableName': 'messages',
+            'TableSizeBytes': 0,
+            'TableStatus': 'ACTIVE'
+        }
+    }
+
+
+Adding Items
+------------
+
+Continuing on with our previously created ``messages`` table, adding an item
+works as follows::
+
+    >>> table = conn.get_table('messages')
+    >>> item_data = {
+            'Body': 'http://url_to_lolcat.gif',
+            'SentBy': 'User A',
+            'ReceivedTime': '12/9/2011 11:36:03 PM',
+        }
+    >>> item = table.new_item(
+            # Our hash key is 'forum_name'
+            hash_key='LOLCat Forum',
+            # Our range key is 'subject'
+            range_key='Check this out!',
+            # This has the rest of the item's attributes
+            attrs=item_data
+        )
+
+The
+:py:meth:`Table.new_item <boto.dynamodb.table.Table.new_item>` method creates
+a new :py:class:`boto.dynamodb.item.Item` instance with your specified
+hash key, range key, and attributes already set.
+:py:class:`Item <boto.dynamodb.item.Item>` is a :py:class:`dict` sub-class,
+meaning you can edit your data as such::
+
+    item['a_new_key'] = 'testing'
+    del item['a_new_key']
+
+After you are happy with the contents of the item, use
+:py:meth:`Item.put <boto.dynamodb.item.Item.put>` to commit it to DynamoDB::
+
+    >>> item.put()
+
+
+Retrieving Items
+----------------
+
+Now, let's check if it got added correctly. Since DynamoDB works under an
+'eventual consistency' mode, we need to specify that we wish a consistent read,
+as follows::
+
+    >>> table = conn.get_table('messages')
+    >>> item = table.get_item(
+            # Your hash key was 'forum_name'
+            hash_key='LOLCat Forum',
+            # Your range key was 'subject'
+            range_key='Check this out!',
+            # Ask for a consistent read instead of the eventually
+            # consistent default.
+            consistent_read=True
+        )
+    >>> item
+    {
+        # Note that this was your hash key attribute (forum_name)
+        'forum_name': 'LOLCat Forum',
+        # This is your range key attribute (subject)
+        'subject': 'Check this out!',
+        'Body': 'http://url_to_lolcat.gif',
+        'ReceivedTime': '12/9/2011 11:36:03 PM',
+        'SentBy': 'User A',
+    }
+
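+If you only care about a subset of the attributes, you can limit what comes
+back with the ``attributes_to_get`` argument (a short example reusing the keys
+from earlier in this tutorial)::
+
+    >>> item = table.get_item(
+            hash_key='LOLCat Forum',
+            range_key='Check this out!',
+            attributes_to_get=['Body', 'SentBy']
+        )
+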
+
+Updating Items
+--------------
+
+To update an item's attributes, simply retrieve it, modify the value, then
+:py:meth:`Item.put <boto.dynamodb.item.Item.put>` it again::
+
+    >>> table = conn.get_table('messages')
+    >>> item = table.get_item(
+            hash_key='LOLCat Forum',
+            range_key='Check this out!'
+        )
+    >>> item['SentBy'] = 'User B'
+    >>> item.put()
+
+Working with Decimals
+---------------------
+
+To avoid the loss of precision, you can stipulate that the
+``decimal.Decimal`` type be used for numeric values::
+
+    >>> import decimal
+    >>> conn.use_decimals()
+    >>> table = conn.get_table('messages')
+    >>> item = table.new_item(
+            hash_key='LOLCat Forum',
+            range_key='Check this out!'
+        )
+    >>> item['decimal_type'] = decimal.Decimal('1.12345678912345')
+    >>> item.put()
+    >>> print table.get_item('LOLCat Forum', 'Check this out!')
+    {u'forum_name': 'LOLCat Forum', u'decimal_type': Decimal('1.12345678912345'),
+     u'subject': 'Check this out!'}
+
+You can enable the use of ``decimal.Decimal`` either by calling the
+``use_decimals`` method, or by passing the
+:py:class:`Dynamizer <boto.dynamodb.types.Dynamizer>` class as
+the ``dynamizer`` param::
+
+    >>> from boto.dynamodb.types import Dynamizer
+    >>> conn = boto.dynamodb.connect_to_region(
+    ...     'us-west-2', dynamizer=Dynamizer)
+
+This mechanism can also be used if you want to customize the encoding/decoding
+process of DynamoDB types.
+
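+For instance, a subclass that tweaks how values are encoded might look roughly
+like this (``UpperCaseDynamizer`` is purely illustrative and not part of
+boto)::
+
+    >>> from boto.dynamodb.types import Dynamizer
+    >>> class UpperCaseDynamizer(Dynamizer):
+    ...     """Store every string attribute upper-cased."""
+    ...     def encode(self, attr):
+    ...         if isinstance(attr, basestring):
+    ...             attr = attr.upper()
+    ...         return super(UpperCaseDynamizer, self).encode(attr)
+    ...
+    >>> conn = boto.dynamodb.connect_to_region(
+    ...     'us-west-2', dynamizer=UpperCaseDynamizer)
+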
+
+Deleting Items
+--------------
+
+To delete items, use the
+:py:meth:`Item.delete <boto.dynamodb.item.Item.delete>` method::
+
+    >>> table = conn.get_table('messages')
+    >>> item = table.get_item(
+            hash_key='LOLCat Forum',
+            range_key='Check this out!'
+        )
+    >>> item.delete()
+
+
+Deleting Tables
+---------------
+
+.. WARNING::
+  Deleting a table will also **permanently** delete all of its contents without prompt. Use carefully.
+
+There are two easy ways to delete a table. Through your top-level
+:py:class:`Layer2 <boto.dynamodb.layer2.Layer2>` object::
+
+    >>> conn.delete_table(table)
+
+Or by getting the table, then using
+:py:meth:`Table.delete <boto.dynamodb.table.Table.delete>`::
+
+    >>> table = conn.get_table('messages')
+    >>> table.delete()
+
+
+.. _Data Model: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/DataModel.html
+.. _Provisioned Throughput: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html
diff --git a/docs/source/ec2_tut.rst b/docs/source/ec2_tut.rst
index f8614db..d9ffe38 100644
--- a/docs/source/ec2_tut.rst
+++ b/docs/source/ec2_tut.rst
@@ -12,23 +12,19 @@
 ---------------------
 
 The first step in accessing EC2 is to create a connection to the service.
-There are two ways to do this in boto.  The first is::
+The recommended way of doing this in boto is::
 
-    >>> from boto.ec2.connection import EC2Connection
-    >>> conn = EC2Connection('<AWS_ACCESS_KEY_ID>', '<AWS_SECRET_ACCESS_KEY>')
+    >>> import boto.ec2
+    >>> conn = boto.ec2.connect_to_region("us-west-2",
+    ...    aws_access_key_id='<aws access key>',
+    ...    aws_secret_access_key='<aws secret key>')
 
-At this point the variable conn will point to an EC2Connection object.  In
-this example, the AWS access key and AWS secret key are passed in to the
-method explicitely.  Alternatively, you can set the boto config environment variables
-and then call the constructor without any arguments, like this::
+At this point the variable ``conn`` will point to an EC2Connection object.  In
+this example, the AWS access key and AWS secret key are passed in to the method
+explicitly.  Alternatively, you can set the boto config environment variables
+and then simply specify which region you want as follows::
 
-    >>> conn = EC2Connection()
-
-There is also a shortcut function in the boto package, called connect_ec2
-that may provide a slightly easier means of creating a connection::
-
-    >>> import boto
-    >>> conn = boto.connect_ec2()
+    >>> conn = boto.ec2.connect_to_region("us-west-2")
 
 In either case, conn will point to an EC2Connection object which we will
 use throughout the remainder of this tutorial.
@@ -41,7 +37,7 @@
 instance as follows::
 
     >>> conn.run_instances('<ami-image-id>')
-    
+
 This will launch an instance in the specified region with the default parameters.
 You will not be able to SSH into this machine, as it doesn't have a security
 group set. See :doc:`security_groups` for details on creating one.
@@ -88,3 +84,95 @@
 Please use with care since once you request termination for an instance there
 is no turning back.
 
+Checking What Instances Are Running
+-----------------------------------
+You can also get information on your currently running instances::
+
+    >>> reservations = conn.get_all_instances()
+    >>> reservations
+    [Reservation:r-00000000]
+
+A reservation corresponds to a command to start instances. You can see what
+instances are associated with a reservation::
+
+    >>> instances = reservations[0].instances
+    >>> instances
+    [Instance:i-00000000]
+
+An instance object allows you to get more meta-data about the instance::
+
+    >>> inst = instances[0]
+    >>> inst.instance_type
+    u'c1.xlarge'
+    >>> inst.placement
+    u'us-west-2'
+
+In this case, we can see that our instance is a c1.xlarge instance in the
+`us-west-2` availability zone.
+
+=================================
+Using Elastic Block Storage (EBS)
+=================================
+
+
+EBS Basics
+----------
+
+EBS can be used by EC2 instances for permanent storage. Note that EBS volumes
+must be in the same availability zone as the EC2 instance you wish to attach
+them to.
+
+To actually create a volume you will need to specify a few details. The
+following example will create a 50GB EBS volume in one of the `us-west-2`
+availability zones::
+
+   >>> vol = conn.create_volume(50, "us-west-2")
+   >>> vol
+   Volume:vol-00000000
+
+You can check that the volume is now ready and available::
+
+   >>> curr_vol = conn.get_all_volumes([vol.id])[0]
+   >>> curr_vol.status
+   u'available'
+   >>> curr_vol.zone
+   u'us-west-2'
+
+We can now attach this volume to the EC2 instance we created earlier, making it
+available as a new device::
+
+   >>> conn.attach_volume(vol.id, inst.id, "/dev/sdx")
+   u'attaching'
+
+You will now have a new volume attached to your instance. Note that with some
+Linux kernels, `/dev/sdx` may get translated to `/dev/xvdx`. This device can
+now be used as a normal block device within Linux.
+
+Working With Snapshots
+----------------------
+
+Snapshots allow you to make point-in-time snapshots of an EBS volume for future
+recovery. Snapshots allow you to create incremental backups, and can also be
+used to instantiate multiple new volumes. Snapshots can also be used to move
+EBS volumes across availability zones or to make backups to S3.
+
+Creating a snapshot is easy::
+
+   >>> snapshot = conn.create_snapshot(vol.id, 'My snapshot')
+   >>> snapshot
+   Snapshot:snap-00000000
+
+Once you have a snapshot, you can create a new volume from it. Volumes are
+created lazily from snapshots, which means you can start using such a volume
+straight away::
+
+   >>> new_vol = snapshot.create_volume('us-west-2')
+   >>> conn.attach_volume(new_vol.id, inst.id, "/dev/sdy")
+   u'attaching'
+
+If you no longer need a snapshot, you can also easily delete it::
+
+   >>> conn.delete_snapshot(snapshot.id)
+   True
+
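+Similarly, when a volume is no longer needed you can detach it from the
+instance and delete it with the corresponding connection methods (make sure
+nothing on the instance is still using the device first)::
+
+   >>> conn.detach_volume(vol.id, inst.id)
+   >>> conn.delete_volume(vol.id)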
+
diff --git a/docs/source/elb_tut.rst b/docs/source/elb_tut.rst
index d560b2c..4d5661c 100644
--- a/docs/source/elb_tut.rst
+++ b/docs/source/elb_tut.rst
@@ -43,48 +43,27 @@
 
 The first step in accessing ELB is to create a connection to the service.
 
->>> import boto
->>> conn = boto.connect_elb(
-        aws_access_key_id='YOUR-KEY-ID-HERE',
-        aws_secret_access_key='YOUR-SECRET-HERE'
-    )
-
-
-A Note About Regions and Endpoints
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Like EC2, the ELB service has a different endpoint for each region. By default
-the US East endpoint is used. To choose a specific region, instantiate the
-ELBConnection object with that region's information.
+the US East endpoint is used. To choose a specific region, use the
+``connect_to_region`` function::
 
->>> from boto.regioninfo import RegionInfo
->>> reg = RegionInfo(
-        name='eu-west-1',
-        endpoint='elasticloadbalancing.eu-west-1.amazonaws.com'
-    )
->>> conn = boto.connect_elb(
-        aws_access_key_id='YOUR-KEY-ID-HERE',
-        aws_secret_access_key='YOUR-SECRET-HERE',
-        region=reg
-    )
-
-Another way to connect to an alternative region is like this:
-
->>> import boto.ec2.elb
->>> elb = boto.ec2.elb.connect_to_region('eu-west-1')
+    >>> import boto.ec2.elb
+    >>> elb = boto.ec2.elb.connect_to_region('us-west-2')
 
 Here's yet another way to discover what regions are available and then
-connect to one:
+connect to one::
 
->>> import boto.ec2.elb
->>> regions = boto.ec2.elb.regions()
->>> regions
-[RegionInfo:us-east-1,
- RegionInfo:ap-northeast-1,
- RegionInfo:us-west-1,
- RegionInfo:ap-southeast-1,
- RegionInfo:eu-west-1]
->>> elb = regions[-1].connect()
+    >>> import boto.ec2.elb
+    >>> regions = boto.ec2.elb.regions()
+    >>> regions
+    [RegionInfo:us-east-1,
+     RegionInfo:ap-northeast-1,
+     RegionInfo:us-west-1,
+     RegionInfo:us-west-2,
+     RegionInfo:ap-southeast-1,
+     RegionInfo:eu-west-1]
+    >>> elb = regions[-1].connect()
 
 Alternatively, edit your boto.cfg with the default ELB endpoint to use::
 
@@ -194,9 +173,9 @@
 and TCP. We want the load balancer to span the availability zones
 *us-east-1a* and *us-east-1b*:
 
->>> regions = ['us-east-1a', 'us-east-1b']
+>>> zones = ['us-east-1a', 'us-east-1b']
 >>> ports = [(80, 8080, 'http'), (443, 8443, 'tcp')]
->>> lb = conn.create_load_balancer('my-lb', regions, ports)
+>>> lb = conn.create_load_balancer('my-lb', zones, ports)
 >>> # This is from the previous section.
 >>> lb.configure_health_check(hc)
 
@@ -233,7 +212,7 @@
 
 To remove instances:
 
->>> lb.degregister_instances(instance_ids)
+>>> lb.deregister_instances(instance_ids)
 
 Modifying Availability Zones for a Load Balancer
 ------------------------------------------------
diff --git a/docs/source/emr_tut.rst b/docs/source/emr_tut.rst
index 996781e..c42d188 100644
--- a/docs/source/emr_tut.rst
+++ b/docs/source/emr_tut.rst
@@ -27,18 +27,18 @@
 
 >>> conn = EmrConnection()
 
-There is also a shortcut function in the boto package called connect_emr
-that may provide a slightly easier means of creating a connection:
+There is also a shortcut function in boto
+that makes it easy to create EMR connections:
 
->>> import boto
->>> conn = boto.connect_emr()
+>>> import boto.emr
+>>> conn = boto.emr.connect_to_region('us-west-2')
 
 In either case, conn points to an EmrConnection object which we will use
 throughout the remainder of this tutorial.
 
 Creating Streaming JobFlow Steps
 --------------------------------
-Upon creating a connection to Elastic Mapreduce you will next 
+Upon creating a connection to Elastic Mapreduce you will next
 want to create one or more jobflow steps.  There are two types of steps, streaming
 and custom jar, both of which have a class in the boto Elastic Mapreduce implementation.
 
@@ -76,8 +76,8 @@
 -----------------
 Once you have created one or more jobflow steps, you will next want to create and run a jobflow.  Creating a jobflow that executes either of the steps we created above can be accomplished by:
 
->>> import boto
->>> conn = boto.connect_emr()
+>>> import boto.emr
+>>> conn = boto.emr.connect_to_region('us-west-2')
 >>> jobid = conn.run_jobflow(name='My jobflow', 
 ...                          log_uri='s3://<my log uri>/jobflow_logs', 
 ...                          steps=[step])
@@ -102,7 +102,6 @@
 --------------------
 By default when all the steps of a jobflow have finished or failed the jobflow terminates.  However, if you set the keep_alive parameter to True or just want to halt the execution of a jobflow early you can terminate a jobflow by:
 
->>> import boto
->>> conn = boto.connect_emr()
+>>> import boto.emr
+>>> conn = boto.emr.connect_to_region('us-west-2')
 >>> conn.terminate_jobflow('<jobflow id>') 
-
diff --git a/docs/source/extensions/githublinks/__init__.py b/docs/source/extensions/githublinks/__init__.py
new file mode 100644
index 0000000..9641a83
--- /dev/null
+++ b/docs/source/extensions/githublinks/__init__.py
@@ -0,0 +1,55 @@
+"""Add github roles to sphinx docs.
+
+Based entirely on Doug Hellmann's bitbucket version, but
+adapted for Github.
+(https://bitbucket.org/dhellmann/sphinxcontrib-bitbucket/)
+
+"""
+from urlparse import urljoin
+
+from docutils import nodes, utils
+from docutils.parsers.rst.roles import set_classes
+
+
+def make_node(rawtext, app, type_, slug, options):
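+    """Build a docutils reference node for ``slug`` under the project URL."""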
+    base_url = app.config.github_project_url
+    if base_url is None:
+        raise ValueError(
+            "Configuration value for 'github_project_url' is not set.")
+    relative = '%s/%s' % (type_, slug)
+    full_ref = urljoin(base_url, relative)
+    set_classes(options)
+    if type_ == 'issues':
+        type_ = 'issue'
+    node = nodes.reference(rawtext, type_ + ' ' + utils.unescape(slug),
+                           refuri=full_ref, **options)
+    return node
+
+
+def github_sha(name, rawtext, text, lineno, inliner,
+                 options={}, content=[]):
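+    """Role handler for ``:sha:`` -- links to a commit in the Github project."""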
+    app = inliner.document.settings.env.app
+    node = make_node(rawtext, app, 'commit', text, options)
+    return [node], []
+
+
+def github_issue(name, rawtext, text, lineno, inliner,
+                 options={}, content=[]):
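+    """Role handler for ``:issue:`` -- links to an issue in the Github project."""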
+    try:
+        issue = int(text)
+    except ValueError:
+        msg = inliner.reporter.error(
+            "Invalid Github Issue '%s', must be an integer" % text,
+            line=lineno)
+        problem = inliner.problematic(rawtext, rawtext, msg)
+        return [problem], [msg]
+    app = inliner.document.settings.env.app
+    node = make_node(rawtext, app, 'issues', str(issue), options)
+    return [node], []
+
+
+def setup(app):
+    app.info('Adding github link roles')
+    app.add_role('sha', github_sha)
+    app.add_role('issue', github_issue)
+    app.add_config_value('github_project_url', None, 'env')
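+
+
+# Example configuration (e.g. in docs/source/conf.py) -- shown for
+# illustration only; the exact extension path and project URL are assumptions:
+#
+#   extensions = ['githublinks']
+#   github_project_url = 'https://github.com/boto/boto/'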
diff --git a/docs/source/getting_started.rst b/docs/source/getting_started.rst
new file mode 100644
index 0000000..ab8e306
--- /dev/null
+++ b/docs/source/getting_started.rst
@@ -0,0 +1,177 @@
+.. _getting-started:
+
+=========================
+Getting Started with Boto
+=========================
+
+This tutorial will walk you through installing and configuring ``boto``, as
+well as how to use it to make API calls.
+
+This tutorial assumes you are familiar with Python & that you have registered
+for an `Amazon Web Services`_ account. You'll need to retrieve your
+``Access Key ID`` and ``Secret Access Key`` from the web-based console.
+
+.. _`Amazon Web Services`: https://aws.amazon.com/
+
+
+Installing Boto
+---------------
+
+You can use ``pip`` to install the latest released version of ``boto``::
+
+    pip install boto
+
+If you want to install ``boto`` from source::
+
+    git clone git://github.com/boto/boto.git
+    cd boto
+    python setup.py install
+
+
+Using Virtual Environments
+--------------------------
+
+Another common way to install ``boto`` is to use a ``virtualenv``, which
+provides isolated environments. First, install the ``virtualenv`` Python
+package::
+
+    pip install virtualenv
+
+Next, create a virtual environment by using the ``virtualenv`` command and
+specifying where you want the virtualenv to be created (you can specify
+any directory you like, though this example allows for compatibility with
+``virtualenvwrapper``)::
+
+    mkdir ~/.virtualenvs
+    virtualenv ~/.virtualenvs/boto
+
+You can now activate the virtual environment::
+
+    source ~/.virtualenvs/boto/bin/activate
+
+Now, any usage of ``python`` or ``pip`` (within the current shell) will default
+to the new, isolated version within your virtualenv.
+
+You can now install ``boto`` into this virtual environment::
+
+    pip install boto
+
+When you are done using ``boto``, you can deactivate your virtual environment::
+
+    deactivate
+
+If you are creating a lot of virtual environments, `virtualenvwrapper`_
+is an excellent tool that lets you easily manage your virtual environments.
+
+.. _`virtualenvwrapper`: http://virtualenvwrapper.readthedocs.org/en/latest/
+
+
+Configuring Boto Credentials
+----------------------------
+
+You have a few options for configuring ``boto`` (see :doc:`boto_config_tut`).
+For this tutorial, we'll be using a configuration file. First, create a
+``~/.boto`` file with these contents::
+
+    [Credentials]
+    aws_access_key_id = YOURACCESSKEY
+    aws_secret_access_key = YOURSECRETKEY
+
+``boto`` supports a number of configuration values. For more information,
+see :doc:`boto_config_tut`. The above file, however, is all we need for now.
+You're now ready to use ``boto``.
+
+
+Making Connections
+------------------
+
+``boto`` provides a number of convenience functions to simplify connecting to a
+service. For example, to work with S3, you can run::
+
+    >>> import boto
+    >>> s3 = boto.connect_s3()
+
+If you want to connect to a different region, you can import the service module
+and use the ``connect_to_region`` function. For example, to create an EC2
+client in the 'us-west-2' region, you'd run the following::
+
+    >>> import boto.ec2
+    >>> ec2 = boto.ec2.connect_to_region('us-west-2')
+
+
+Troubleshooting Connections
+---------------------------
+
+When calling the various ``connect_*`` functions, you might run into an error
+like this::
+
+    >>> import boto
+    >>> s3 = boto.connect_s3()
+    Traceback (most recent call last):
+      File "<stdin>", line 1, in <module>
+      File "boto/__init__.py", line 121, in connect_s3
+        return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
+      File "boto/s3/connection.py", line 171, in __init__
+        validate_certs=validate_certs)
+      File "boto/connection.py", line 548, in __init__
+        host, config, self.provider, self._required_auth_capability())
+      File "boto/auth.py", line 668, in get_auth_handler
+        'Check your credentials' % (len(names), str(names)))
+    boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials
+
+This is because ``boto`` cannot find credentials to use. Verify that you have
+created a ``~/.boto`` file as shown above. You can also turn on debug logging
+to verify where your credentials are coming from::
+
+    >>> import boto
+    >>> boto.set_stream_logger('boto')
+    >>> s3 = boto.connect_s3()
+    2012-12-10 17:15:03,799 boto [DEBUG]:Using access key found in config file.
+    2012-12-10 17:15:03,799 boto [DEBUG]:Using secret key found in config file.
+
+
+Interacting with AWS Services
+-----------------------------
+
+Once you have a client for the specific service you want, there are methods on
+that object that will invoke API operations for that service. The following
+code demonstrates how to create a bucket and put an object in that bucket::
+
+    >>> import boto
+    >>> import time
+    >>> s3 = boto.connect_s3()
+
+    # Create a new bucket. Buckets must have a globally unique name (not just
+    # unique to your account).
+    >>> bucket = s3.create_bucket('boto-demo-%s' % int(time.time()))
+
+    # Create a new key/value pair.
+    >>> key = bucket.new_key('mykey')
+    >>> key.set_contents_from_string("Hello World!")
+
+    # Sleep to ensure the data is eventually there.
+    >>> time.sleep(2)
+
+    # Retrieve the contents of ``mykey``.
+    >>> print key.get_contents_as_string()
+    'Hello World!'
+
+    # Delete the key.
+    >>> key.delete()
+    # Delete the bucket.
+    >>> bucket.delete()
+
+Each service supports a different set of commands. You'll want to refer to the
+other guides & API references in this documentation, as well as to the
+`official AWS API`_ documentation.
+
+.. _`official AWS API`: https://aws.amazon.com/documentation/
+
+Next Steps
+----------
+
+For many of the services that ``boto`` supports, there are tutorials as
+well as detailed API documentation. If you are interested in a specific
+service, the tutorial for the service is a good starting point. For instance,
+if you'd like more information on S3, check out the :ref:`S3 Tutorial <s3_tut>`
+and the :doc:`S3 API reference <ref/s3>`.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 1a7e930..252f14a 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -9,6 +9,13 @@
 
 .. _Amazon Web Services: http://aws.amazon.com/
 
+Getting Started
+---------------
+
+If you've never used ``boto`` before, you should read the
+:doc:`Getting Started with Boto <getting_started>` guide to get familiar
+with ``boto`` & its usage.
+
 Currently Supported Services
 ----------------------------
 
@@ -17,6 +24,8 @@
   * :doc:`Elastic Compute Cloud (EC2) <ec2_tut>` -- (:doc:`API Reference <ref/ec2>`)
   * :doc:`Elastic MapReduce (EMR) <emr_tut>` -- (:doc:`API Reference <ref/emr>`)
   * :doc:`Auto Scaling <autoscale_tut>` -- (:doc:`API Reference <ref/autoscale>`)
+  * Data Pipeline -- (:doc:`API Reference <ref/datapipeline>`)
+  * Elastic Transcoder -- (:doc:`API Reference <ref/elastictranscoder>`)
 
 * **Content Delivery**
 
@@ -25,16 +34,21 @@
 * **Database**
 
   * :doc:`SimpleDB <simpledb_tut>` -- (:doc:`API Reference <ref/sdb>`)
+  * :doc:`DynamoDB2 <dynamodb2_tut>` -- (:doc:`API Reference <ref/dynamodb2>`) -- (:doc:`Migration Guide from v1 <migrations/dynamodb_v1_to_v2>`)
   * :doc:`DynamoDB <dynamodb_tut>` -- (:doc:`API Reference <ref/dynamodb>`)
-  * Relational Data Services (RDS) -- (:doc:`API Reference <ref/rds>`)
+  * :doc:`Relational Data Services (RDS) <rds_tut>` -- (:doc:`API Reference <ref/rds>`)
+  * ElastiCache -- (:doc:`API Reference <ref/elasticache>`)
+  * Redshift -- (:doc:`API Reference <ref/redshift>`)
 
 * **Deployment and Management**
 
   * CloudFormation -- (:doc:`API Reference <ref/cloudformation>`)
+  * Elastic Beanstalk -- (:doc:`API Reference <ref/beanstalk>`)
 
 * **Identity & Access**
 
   * Identity and Access Management (IAM) -- (:doc:`API Reference <ref/iam>`)
+  * Security Token Service (STS) -- (:doc:`API Reference <ref/sts>`)
 
 * **Application Services**
 
@@ -68,6 +82,11 @@
 
   * Mechanical Turk -- (:doc:`API Reference <ref/mturk>`)
 
+* **Other**
+
+  * Marketplace Web Services -- (:doc:`API Reference <ref/mws>`)
+  * :doc:`Support <support_tut>` -- (:doc:`API Reference <ref/support>`)
+
 Additional Resources
 --------------------
 
@@ -85,9 +104,24 @@
 .. _IRC channel: http://webchat.freenode.net/?channels=boto
 .. _Follow Mitch on Twitter: http://twitter.com/garnaat
 
+
+Release Notes
+-------------
+
+.. toctree::
+   :titlesonly:
+
+   releasenotes/v2.9.5
+   releasenotes/v2.9.4
+   releasenotes/v2.9.3
+   releasenotes/v2.9.2
+   releasenotes/v2.9.1
+
+
 .. toctree::
    :hidden:
 
+   getting_started
    ec2_tut
    security_groups
    ref/ec2
@@ -102,9 +136,11 @@
    ref/sdb_db
    dynamodb_tut
    ref/dynamodb
+   rds_tut
    ref/rds
    ref/cloudformation
    ref/iam
+   ref/mws
    sqs_tut
    ref/sqs
    ref/sns
@@ -126,6 +162,16 @@
    boto_config_tut
    ref/index
    documentation
+   contributing
+   ref/datapipeline
+   ref/elasticache
+   ref/elastictranscoder
+   ref/redshift
+   ref/dynamodb2
+   support_tut
+   ref/support
+   dynamodb2_tut
+   migrations/dynamodb_v1_to_v2
 
 
 Indices and tables
@@ -134,4 +180,3 @@
 * :ref:`genindex`
 * :ref:`modindex`
 * :ref:`search`
-
diff --git a/docs/source/migrations/dynamodb_v1_to_v2.rst b/docs/source/migrations/dynamodb_v1_to_v2.rst
new file mode 100644
index 0000000..d945e17
--- /dev/null
+++ b/docs/source/migrations/dynamodb_v1_to_v2.rst
@@ -0,0 +1,366 @@
+.. _dynamodb_v1_to_v2:
+
+=========================================
+Migrating from DynamoDB v1 to DynamoDB v2
+=========================================
+
+For the v2 release of AWS' DynamoDB_, the high-level API for interacting via
+``boto`` was rewritten. Since there were several new features added in v2,
+people using the v1 API may wish to transition their code to the new API.
+This guide covers the high-level APIs.
+
+.. _DynamoDB: http://aws.amazon.com/dynamodb/
+
+
+Creating New Tables
+===================
+
+DynamoDB v1::
+
+    >>> import boto.dynamodb
+    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
+    >>> message_table_schema = conn.create_schema(
+    ...     hash_key_name='forum_name',
+    ...     hash_key_proto_value=str,
+    ...     range_key_name='subject',
+    ...     range_key_proto_value=str
+    ... )
+    >>> table = conn.create_table(
+    ...     name='messages',
+    ...     schema=message_table_schema,
+    ...     read_units=10,
+    ...     write_units=10
+    ... )
+
+DynamoDB v2::
+
+    >>> from boto.dynamodb2.fields import HashKey
+    >>> from boto.dynamodb2.fields import RangeKey
+    >>> from boto.dynamodb2.table import Table
+
+    >>> table = Table.create('messages', schema=[
+    ...     HashKey('forum_name'),
+    ...     RangeKey('subject'),
+    ... ], throughput={
+    ...     'read': 10,
+    ...     'write': 10,
+    ... })
+
+
+Using an Existing Table
+=======================
+
+DynamoDB v1::
+
+    >>> import boto.dynamodb
+    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
+    # With API calls.
+    >>> table = conn.get_table('messages')
+
+    # Without API calls.
+    >>> message_table_schema = conn.create_schema(
+    ...     hash_key_name='forum_name',
+    ...     hash_key_proto_value=str,
+    ...     range_key_name='subject',
+    ...     range_key_proto_value=str
+    ... )
+    >>> table = conn.table_from_schema(
+    ...     name='messages',
+    ...     schema=message_table_schema)
+
+
+DynamoDB v2::
+
+    >>> from boto.dynamodb2.table import Table
+    # With API calls.
+    >>> table = Table('messages')
+
+    # Without API calls.
+    >>> from boto.dynamodb2.fields import HashKey
+    >>> from boto.dynamodb2.fields import RangeKey
+    >>> from boto.dynamodb2.table import Table
+    >>> table = Table('messages', schema=[
+    ...     HashKey('forum_name'),
+    ...     RangeKey('subject'),
+    ... ])
+
+
+Updating Throughput
+===================
+
+DynamoDB v1::
+
+    >>> import boto.dynamodb
+    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
+    >>> table = conn.get_table('messages')
+    >>> conn.update_throughput(table, read_units=5, write_units=15)
+
+DynamoDB v2::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> table = Table('messages')
+    >>> table.update(throughput={
+    ...     'read': 5,
+    ...     'write': 15,
+    ... })
+
+
+Deleting a Table
+================
+
+DynamoDB v1::
+
+    >>> import boto.dynamodb
+    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
+    >>> table = conn.get_table('messages')
+    >>> conn.delete_table(table)
+
+DynamoDB v2::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> table = Table('messages')
+    >>> table.delete()
+
+
+Creating an Item
+================
+
+DynamoDB v1::
+
+    >>> import boto.dynamodb
+    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
+    >>> table = conn.get_table('messages')
+    >>> item_data = {
+    ...     'Body': 'http://url_to_lolcat.gif',
+    ...     'SentBy': 'User A',
+    ...     'ReceivedTime': '12/9/2011 11:36:03 PM',
+    ... }
+    >>> item = table.new_item(
+    ...     # Our hash key is 'forum_name'
+    ...     hash_key='LOLCat Forum',
+    ...     # Our range key is 'subject'
+    ...     range_key='Check this out!',
+    ...     # This has the rest of the item's attributes
+    ...     attrs=item_data
+    ... )
+
+DynamoDB v2::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> table = Table('messages')
+    >>> item = table.put_item(data={
+    ...     'forum_name': 'LOLCat Forum',
+    ...     'subject': 'Check this out!',
+    ...     'Body': 'http://url_to_lolcat.gif',
+    ...     'SentBy': 'User A',
+    ...     'ReceivedTime': '12/9/2011 11:36:03 PM',
+    ... })
+
+
+Getting an Existing Item
+========================
+
+DynamoDB v1::
+
+    >>> table = conn.get_table('messages')
+    >>> item = table.get_item(
+    ...     hash_key='LOLCat Forum',
+    ...     range_key='Check this out!'
+    ... )
+
+DynamoDB v2::
+
+    >>> table = Table('messages')
+    >>> item = table.get_item(
+    ...     forum_name='LOLCat Forum',
+    ...     subject='Check this out!'
+    ... )
+
+
+Updating an Item
+================
+
+DynamoDB v1::
+
+    >>> item['a_new_key'] = 'testing'
+    >>> del item['a_new_key']
+    >>> item.put()
+
+DynamoDB v2::
+
+    >>> item['a_new_key'] = 'testing'
+    >>> del item['a_new_key']
+
+    # Conditional save, only if data hasn't changed.
+    >>> item.save()
+
+    # Forced full overwrite.
+    >>> item.save(overwrite=True)
+
+    # Partial update (only changed fields).
+    >>> item.partial_save()
+
+
+Deleting an Item
+================
+
+DynamoDB v1::
+
+    >>> item.delete()
+
+DynamoDB v2::
+
+    >>> item.delete()
+
+
+Querying
+========
+
+DynamoDB v1::
+
+    >>> import boto.dynamodb
+    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
+    >>> table = conn.get_table('messages')
+    >>> from boto.dynamodb.condition import BEGINS_WITH
+    >>> items = table.query('Amazon DynamoDB',
+    ...                     range_key_condition=BEGINS_WITH('DynamoDB'),
+    ...                     request_limit=1, max_results=1)
+    >>> for item in items:
+    ...     print item['Body']
+
+DynamoDB v2::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> table = Table('messages')
+    >>> items = table.query(
+    ...     forum_name__eq='Amazon DynamoDB',
+    ...     subject__beginswith='DynamoDB',
+    ...     limit=1
+    ... )
+    >>> for item in items:
+    ...     print item['Body']
+
+
+Scans
+=====
+
+DynamoDB v1::
+
+    >>> import boto.dynamodb
+    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
+    >>> table = conn.get_table('messages')
+
+    # All items.
+    >>> items = table.scan()
+
+    # With a filter.
+    >>> from boto.dynamodb.condition import GT
+    >>> items = table.scan(scan_filter={'Replies': GT(0)})
+
+DynamoDB v2::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> table = Table('messages')
+
+    # All items.
+    >>> items = table.scan()
+
+    # With a filter.
+    >>> items = table.scan(replies__gt=0)
+
+
+Batch Gets
+==========
+
+DynamoDB v1::
+
+    >>> import boto.dynamodb
+    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
+    >>> table = conn.get_table('messages')
+    >>> from boto.dynamodb.batch import BatchList
+    >>> the_batch = BatchList(conn)
+    >>> the_batch.add_batch(table, keys=[
+    ...     ('LOLCat Forum', 'Check this out!'),
+    ...     ('LOLCat Forum', 'I can haz docs?'),
+    ...     ('LOLCat Forum', 'Maru'),
+    ... ])
+    >>> results = conn.batch_get_item(the_batch)
+
+    # (Largely) Raw dictionaries back from DynamoDB.
+    >>> for item_dict in results['Responses'][table.name]['Items']:
+    ...     print item_dict['Body']
+
+DynamoDB v2::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> table = Table('messages')
+    >>> results = table.batch_get(keys=[
+    ...     {'forum_name': 'LOLCat Forum', 'subject': 'Check this out!'},
+    ...     {'forum_name': 'LOLCat Forum', 'subject': 'I can haz docs?'},
+    ...     {'forum_name': 'LOLCat Forum', 'subject': 'Maru'},
+    ... ])
+
+    # Lazy requests across pages, if paginated.
+    >>> for res in results:
+    ...     # You get back actual ``Item`` instances.
+    ...     print res['Body']
+
+
+Batch Writes
+============
+
+DynamoDB v1::
+
+    >>> import boto.dynamodb
+    >>> conn = boto.dynamodb.connect_to_region('us-west-2')
+    >>> table = conn.get_table('messages')
+    >>> from boto.dynamodb.batch import BatchWriteList
+    >>> from boto.dynamodb.item import Item
+
+    # You must manually manage this so that your total ``puts/deletes`` don't
+    # exceed 25.
+    >>> the_batch = BatchWriteList(conn)
+    >>> the_batch.add_batch(table, puts=[
+    ...     Item(table, 'Corgi Fanciers', 'Sploots!', {
+    ...         'Body': 'Post your favorite corgi-on-the-floor shots!',
+    ...         'SentBy': 'User B',
+    ...         'ReceivedTime': '2013/05/02 10:56:45 AM',
+    ...     }),
+    ...     Item(table, 'Corgi Fanciers', 'Maximum FRAPS', {
+    ...         'Body': 'http://internetvideosite/watch?v=1247869',
+    ...         'SentBy': 'User C',
+    ...         'ReceivedTime': '2013/05/01 09:15:25 PM',
+    ...     }),
+    ... ], deletes=[
+    ...     ('LOLCat Forum', 'Off-topic post'),
+    ...     ('LOLCat Forum', 'They be stealin mah bukket!'),
+    ... ])
+    >>> conn.batch_write_item(the_batch)
+
+DynamoDB v2::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> table = Table('messages')
+
+    # Uses a context manager, which also automatically handles batch sizes.
+    >>> with table.batch_write() as batch:
+    ...     batch.delete_item(
+    ...         forum_name='LOLCat Forum',
+    ...         subject='Off-topic post'
+    ...     )
+    ...     batch.put_item(data={
+    ...         'forum_name': 'Corgi Fanciers',
+    ...         'subject': 'Sploots!',
+    ...         'Body': 'Post your favorite corgi-on-the-floor shots!',
+    ...         'SentBy': 'User B',
+    ...         'ReceivedTime': '2013/05/02 10:56:45 AM',
+    ...     })
+    ...     batch.put_item(data={
+    ...         'forum_name': 'Corgi Fanciers',
+    ...         'subject': 'Maximum FRAPS',
+    ...         'Body': 'http://internetvideosite/watch?v=1247869',
+    ...         'SentBy': 'User C',
+    ...         'ReceivedTime': '2013/05/01 09:15:25 PM',
+    ...     })
+    ...     batch.delete_item(
+    ...         forum_name='LOLCat Forum',
+    ...         subject='They be stealin mah bukket!'
+    ...     )
diff --git a/docs/source/rds_tut.rst b/docs/source/rds_tut.rst
new file mode 100644
index 0000000..6955cbe
--- /dev/null
+++ b/docs/source/rds_tut.rst
@@ -0,0 +1,108 @@
+.. _rds_tut:
+
+=======================================
+An Introduction to boto's RDS interface
+=======================================
+
+This tutorial focuses on the boto interface to the Relational Database Service
+from Amazon Web Services.  This tutorial assumes that you have boto already
+downloaded and installed, and that you wish to set up a MySQL instance in RDS.
+
+Creating a Connection
+---------------------
+The first step in accessing RDS is to create a connection to the service.
+The recommended method of doing this is as follows::
+
+    >>> import boto.rds
+    >>> conn = boto.rds.connect_to_region(
+    ...     "us-west-2",
+    ...     aws_access_key_id='<aws access key>',
+    ...     aws_secret_access_key='<aws secret key>')
+
+At this point the variable ``conn`` will point to an RDSConnection object in
+the US-WEST-2 region. Bear in mind that, just like any other AWS service, RDS
+is region-specific. In this example, the AWS access key and AWS secret key are
+passed in to the method explicitly. Alternatively, you can set the environment
+variables:
+
+* ``AWS_ACCESS_KEY_ID`` - Your AWS Access Key ID
+* ``AWS_SECRET_ACCESS_KEY`` - Your AWS Secret Access Key
+
+and then simply call::
+
+    >>> import boto.rds
+    >>> conn = boto.rds.connect_to_region("us-west-2")
+
+In either case, conn will point to an RDSConnection object which we will
+use throughout the remainder of this tutorial.
+
+Starting an RDS Instance
+------------------------
+
+Creating a DB instance is easy. You can do so as follows::
+
+   >>> db = conn.create_dbinstance("db-master-1", 10, 'db.m1.small', 'root', 'hunter2')
+
+This example would create a DB instance identified as ``db-master-1`` with
+10GB of storage. This instance would run on the ``db.m1.small`` instance
+class, with ``root`` as the master user name and ``hunter2`` as the password.
+
+To check on the status of your RDS instance, you will have to query the RDS connection again::
+
+    >>> instances = conn.get_all_dbinstances("db-master-1")
+    >>> instances
+    [DBInstance:db-master-1]
+    >>> db = instances[0]
+    >>> db.status
+    u'available'
+    >>> db.endpoint
+    (u'db-master-1.aaaaaaaaaa.us-west-2.rds.amazonaws.com', 3306)
+
+Creating a Security Group
+-------------------------
+
+Before you can actually connect to this RDS service, you must first
+create a security group. You can add a CIDR range or an :py:class:`EC2 security
+group <boto.ec2.securitygroup.SecurityGroup>` to your :py:class:`DB security
+group <boto.rds.dbsecuritygroup.DBSecurityGroup>`::
+
+    >>> sg = conn.create_dbsecurity_group('web_servers', 'Web front-ends')
+    >>> sg.authorize(cidr_ip='10.3.2.45/32')
+    True
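+
+If your application servers run in EC2, you can authorize their EC2 security
+group instead of a CIDR range (here ``web_sg`` stands in for an existing
+:py:class:`~boto.ec2.securitygroup.SecurityGroup` object you have already
+looked up)::
+
+    >>> sg.authorize(ec2_group=web_sg)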
+
+You can then associate this security group with your RDS instance::
+
+    >>> db.modify(security_groups=[sg])
+
+
+Connecting to your New Database
+-------------------------------
+
+Once you have reached this step, you can connect to your RDS instance as you
+would with any other MySQL instance::
+
+    >>> db.endpoint
+    (u'db-master-1.aaaaaaaaaa.us-west-2.rds.amazonaws.com', 3306)
+
+    % mysql -h db-master-1.aaaaaaaaaa.us-west-2.rds.amazonaws.com -u root -phunter2
+    mysql>
+
+
+Making a backup
+---------------
+
+You can also create snapshots of your database very easily::
+
+    >>> db.snapshot('db-master-1-2013-02-05')
+    DBSnapshot:db-master-1-2013-02-05
+
+
+Once this snapshot is complete, you can create a new database instance from
+it::
+
+    >>> db2 = conn.restore_dbinstance_from_dbsnapshot(
+    ...    'db-master-1-2013-02-05',
+    ...    'db-restored-1',
+    ...    'db.m1.small',
+    ...    'us-west-2')
+
diff --git a/docs/source/ref/beanstalk.rst b/docs/source/ref/beanstalk.rst
new file mode 100644
index 0000000..e65a468
--- /dev/null
+++ b/docs/source/ref/beanstalk.rst
@@ -0,0 +1,26 @@
+.. _ref-beanstalk:
+
+=================
+Elastic Beanstalk
+=================
+
+boto.beanstalk
+--------------
+
+.. automodule:: boto.beanstalk
+   :members:
+   :undoc-members:
+
+boto.beanstalk.layer1
+---------------------
+
+.. automodule:: boto.beanstalk.layer1
+   :members:
+   :undoc-members:
+
+boto.beanstalk.response
+-----------------------
+
+.. automodule:: boto.beanstalk.response
+   :members:
+   :undoc-members:
diff --git a/docs/source/ref/cloudsearch.rst b/docs/source/ref/cloudsearch.rst
index 14671ee..1610200 100644
--- a/docs/source/ref/cloudsearch.rst
+++ b/docs/source/ref/cloudsearch.rst
@@ -7,7 +7,7 @@
 boto.cloudsearch
 ----------------
 
-.. automodule:: boto.swf
+.. automodule:: boto.cloudsearch
    :members:   
    :undoc-members:
 
diff --git a/docs/source/ref/datapipeline.rst b/docs/source/ref/datapipeline.rst
new file mode 100644
index 0000000..316147c
--- /dev/null
+++ b/docs/source/ref/datapipeline.rst
@@ -0,0 +1,26 @@
+.. _ref-datapipeline:
+
+=============
+Data Pipeline
+=============
+
+boto.datapipeline
+-----------------
+
+.. automodule:: boto.datapipeline
+   :members:
+   :undoc-members:
+
+boto.datapipeline.layer1
+------------------------
+
+.. automodule:: boto.datapipeline.layer1
+   :members:
+   :undoc-members:
+
+boto.datapipeline.exceptions
+----------------------------
+
+.. automodule:: boto.datapipeline.exceptions
+   :members:
+   :undoc-members:
diff --git a/docs/source/ref/dynamodb.rst b/docs/source/ref/dynamodb.rst
index 560556e..00a1375 100644
--- a/docs/source/ref/dynamodb.rst
+++ b/docs/source/ref/dynamodb.rst
@@ -8,7 +8,7 @@
 -------------
 
 .. automodule:: boto.dynamodb
-   :members:   
+   :members:
    :undoc-members:
 
 boto.dynamodb.layer1
@@ -53,4 +53,9 @@
    :members:
    :undoc-members:
 
+boto.dynamodb.types
+-------------------
 
+.. automodule:: boto.dynamodb.types
+   :members:
+   :undoc-members:
diff --git a/docs/source/ref/dynamodb2.rst b/docs/source/ref/dynamodb2.rst
new file mode 100644
index 0000000..97db6b0
--- /dev/null
+++ b/docs/source/ref/dynamodb2.rst
@@ -0,0 +1,61 @@
+.. _ref-dynamodb2:
+
+=========
+DynamoDB2
+=========
+
+High-Level API
+==============
+
+boto.dynamodb2.fields
+---------------------
+
+.. automodule:: boto.dynamodb2.fields
+   :members:
+   :undoc-members:
+
+boto.dynamodb2.items
+--------------------
+
+.. automodule:: boto.dynamodb2.items
+   :members:
+   :undoc-members:
+
+boto.dynamodb2.results
+----------------------
+
+.. automodule:: boto.dynamodb2.results
+   :members:
+   :undoc-members:
+
+boto.dynamodb2.table
+--------------------
+
+.. automodule:: boto.dynamodb2.table
+   :members:
+   :undoc-members:
+
+
+Low-Level API
+=============
+
+boto.dynamodb2
+--------------
+
+.. automodule:: boto.dynamodb2
+   :members:
+   :undoc-members:
+
+boto.dynamodb2.layer1
+---------------------
+
+.. automodule:: boto.dynamodb2.layer1
+   :members:
+   :undoc-members:
+
+boto.dynamodb2.exceptions
+-------------------------
+
+.. automodule:: boto.dynamodb2.exceptions
+   :members:
+   :undoc-members:
diff --git a/docs/source/ref/ec2.rst b/docs/source/ref/ec2.rst
index 0d5ac0e..3d06fa8 100644
--- a/docs/source/ref/ec2.rst
+++ b/docs/source/ref/ec2.rst
@@ -8,14 +8,14 @@
 --------
 
 .. automodule:: boto.ec2
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.address
 ----------------
 
 .. automodule:: boto.ec2.address
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.autoscale
@@ -23,11 +23,18 @@
 
 See the :doc:`Auto Scaling Reference <autoscale>`.
 
+boto.ec2.blockdevicemapping
+---------------------------
+
+.. automodule:: boto.ec2.blockdevicemapping
+   :members:
+   :undoc-members:
+
 boto.ec2.buyreservation
 -----------------------
 
 .. automodule:: boto.ec2.buyreservation
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.cloudwatch
@@ -39,95 +46,150 @@
 -------------------
 
 .. automodule:: boto.ec2.connection
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.ec2object
 ------------------
 
 .. automodule:: boto.ec2.ec2object
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.elb
--------------------
+------------
 
 See the :doc:`ELB Reference <elb>`.
 
+boto.ec2.group
+--------------
+
+.. automodule:: boto.ec2.group
+   :members:
+   :undoc-members:
+
 boto.ec2.image
 --------------
 
 .. automodule:: boto.ec2.image
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.instance
 -----------------
 
 .. automodule:: boto.ec2.instance
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.instanceinfo
 ---------------------
 
 .. automodule:: boto.ec2.instanceinfo
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.instancestatus
---------------------------
+-----------------------
 
 .. automodule:: boto.ec2.instancestatus
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.keypair
 ----------------
 
 .. automodule:: boto.ec2.keypair
-   :members:   
+   :members:
+   :undoc-members:
+
+boto.ec2.launchspecification
+----------------------------
+
+.. automodule:: boto.ec2.launchspecification
+   :members:
+   :undoc-members:
+
+boto.ec2.networkinterface
+-------------------------
+
+.. automodule:: boto.ec2.networkinterface
+   :members:
+   :undoc-members:
+
+boto.ec2.placementgroup
+-----------------------
+
+.. automodule:: boto.ec2.placementgroup
+   :members:
    :undoc-members:
 
 boto.ec2.regioninfo
 -------------------
 
 .. automodule:: boto.ec2.regioninfo
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.reservedinstance
 -------------------------
 
 .. automodule:: boto.ec2.reservedinstance
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.securitygroup
 ----------------------
 
 .. automodule:: boto.ec2.securitygroup
-   :members:   
+   :members:
    :undoc-members:
 
 boto.ec2.snapshot
 -----------------
 
 .. automodule:: boto.ec2.snapshot
-   :members:   
+   :members:
+   :undoc-members:
+
+boto.ec2.spotinstancerequest
+----------------------------
+
+.. automodule:: boto.ec2.spotinstancerequest
+   :members:
+   :undoc-members:
+
+boto.ec2.tag
+------------
+
+.. automodule:: boto.ec2.tag
+   :members:
+   :undoc-members:
+
+boto.ec2.vmtype
+---------------
+
+.. automodule:: boto.ec2.vmtype
+   :members:
    :undoc-members:
 
 boto.ec2.volume
 ---------------
 
 .. automodule:: boto.ec2.volume
-   :members:   
+   :members:
+   :undoc-members:
+
+boto.ec2.volumestatus
+---------------------
+
+.. automodule:: boto.ec2.volumestatus
+   :members:
    :undoc-members:
 
 boto.ec2.zone
 -------------
 
 .. automodule:: boto.ec2.zone
-   :members:   
+   :members:
    :undoc-members:
-
diff --git a/docs/source/ref/elasticache.rst b/docs/source/ref/elasticache.rst
new file mode 100644
index 0000000..9d08a17
--- /dev/null
+++ b/docs/source/ref/elasticache.rst
@@ -0,0 +1,19 @@
+.. ref-elasticache
+
+==================
+Amazon ElastiCache
+==================
+
+boto.elasticache
+----------------
+
+.. automodule:: boto.elasticache
+   :members:
+   :undoc-members:
+
+boto.elasticache.layer1
+-----------------------
+
+.. automodule:: boto.elasticache.layer1
+   :members:
+   :undoc-members:
diff --git a/docs/source/ref/elastictranscoder.rst b/docs/source/ref/elastictranscoder.rst
new file mode 100644
index 0000000..b59eeac
--- /dev/null
+++ b/docs/source/ref/elastictranscoder.rst
@@ -0,0 +1,26 @@
+.. _ref-elastictranscoder:
+
+==================
+Elastic Transcoder
+==================
+
+boto.elastictranscoder
+----------------------
+
+.. automodule:: boto.elastictranscoder
+   :members:
+   :undoc-members:
+
+boto.elastictranscoder.layer1
+-----------------------------
+
+.. automodule:: boto.elastictranscoder.layer1
+   :members:
+   :undoc-members:
+
+boto.elastictranscoder.exceptions
+---------------------------------
+
+.. automodule:: boto.elastictranscoder.exceptions
+   :members:
+   :undoc-members:
diff --git a/docs/source/ref/glacier.rst b/docs/source/ref/glacier.rst
index 6f5ccbb..94edd53 100644
--- a/docs/source/ref/glacier.rst
+++ b/docs/source/ref/glacier.rst
@@ -1,7 +1,7 @@
 .. ref-glacier
 
 =======
-Glaicer
+Glacier
 =======
 
 boto.glacier
@@ -12,7 +12,7 @@
    :undoc-members:
 
 boto.glacier.layer1
-------------------
+-------------------
 
 .. automodule:: boto.glacier.layer1
    :members:
@@ -46,6 +46,13 @@
    :members:
    :undoc-members:
 
+boto.glacier.concurrent
+-----------------------
+
+.. automodule:: boto.glacier.concurrent
+   :members:
+   :undoc-members:
+
 boto.glacier.exceptions
 -----------------------
 
diff --git a/docs/source/ref/gs.rst b/docs/source/ref/gs.rst
index 6f24a19..e411dee 100644
--- a/docs/source/ref/gs.rst
+++ b/docs/source/ref/gs.rst
@@ -8,14 +8,27 @@
 -----------
 
 .. automodule:: boto.gs.acl
-   :members:   
+   :members:
+   :inherited-members:
    :undoc-members:
 
 boto.gs.bucket
 --------------
 
 .. automodule:: boto.gs.bucket
-   :members:   
+   :members:
+   :inherited-members:
+   :undoc-members:
+   :exclude-members: BucketPaymentBody, LoggingGroup, MFADeleteRE, VersionRE,
+                     VersioningBody, WebsiteBody, WebsiteErrorFragment,
+                     WebsiteMainPageFragment, startElement, endElement
+
+boto.gs.bucketlistresultset
+---------------------------
+
+.. automodule:: boto.gs.bucketlistresultset
+   :members:
+   :inherited-members:
    :undoc-members:
 
 boto.gs.connection
@@ -23,26 +36,37 @@
 
 .. automodule:: boto.gs.connection
    :members:
+   :inherited-members:
+   :undoc-members:
+
+boto.gs.cors
+------------
+
+.. automodule:: boto.gs.cors
+   :members:
    :undoc-members:
 
 boto.gs.key
 -----------
 
 .. automodule:: boto.gs.key
-   :members:   
+   :members:
+   :inherited-members:
    :undoc-members:
 
 boto.gs.user
 ------------
 
 .. automodule:: boto.gs.user
-   :members:   
+   :members:
+   :inherited-members:
    :undoc-members:
 
 boto.gs.resumable_upload_handler
 --------------------------------
 
 .. automodule:: boto.gs.resumable_upload_handler
-   :members:   
+   :members:
+   :inherited-members:
    :undoc-members:
 
diff --git a/docs/source/ref/index.rst b/docs/source/ref/index.rst
index b13fc06..3def15d 100644
--- a/docs/source/ref/index.rst
+++ b/docs/source/ref/index.rst
@@ -8,6 +8,7 @@
    :maxdepth: 4
 
    boto
+   beanstalk
    cloudformation
    cloudfront
    cloudsearch
@@ -23,8 +24,10 @@
    iam
    manage
    mturk
+   mws
    pyami
    rds
+   redshift
    route53
    s3
    sdb
diff --git a/docs/source/ref/mturk.rst b/docs/source/ref/mturk.rst
index 1c8429b..b116d37 100644
--- a/docs/source/ref/mturk.rst
+++ b/docs/source/ref/mturk.rst
@@ -18,6 +18,13 @@
    :members:   
    :undoc-members:
 
+boto.mturk.layoutparam
+----------------------
+
+.. automodule:: boto.mturk.layoutparam
+   :members:   
+   :undoc-members:
+
 boto.mturk.notification
 -----------------------
 
diff --git a/docs/source/ref/mws.rst b/docs/source/ref/mws.rst
new file mode 100644
index 0000000..df5cc22
--- /dev/null
+++ b/docs/source/ref/mws.rst
@@ -0,0 +1,33 @@
+.. ref-mws
+
+===
+mws
+===
+
+boto.mws
+--------
+
+.. automodule:: boto.mws
+   :members:
+   :undoc-members:
+
+boto.mws.connection
+-------------------
+
+.. automodule:: boto.mws.connection
+   :members:
+   :undoc-members:
+
+boto.mws.exception
+-------------------
+
+.. automodule:: boto.mws.exception
+   :members:
+   :undoc-members:
+
+boto.mws.response
+-------------------
+
+.. automodule:: boto.mws.response
+   :members:
+   :undoc-members:
diff --git a/docs/source/ref/redshift.rst b/docs/source/ref/redshift.rst
new file mode 100644
index 0000000..b3d8463
--- /dev/null
+++ b/docs/source/ref/redshift.rst
@@ -0,0 +1,26 @@
+.. _ref-redshift:
+
+========
+Redshift
+========
+
+boto.redshift
+-------------
+
+.. automodule:: boto.redshift
+   :members:
+   :undoc-members:
+
+boto.redshift.layer1
+--------------------
+
+.. automodule:: boto.redshift.layer1
+   :members:
+   :undoc-members:
+
+boto.redshift.exceptions
+------------------------
+
+.. automodule:: boto.redshift.exceptions
+   :members:
+   :undoc-members:
diff --git a/docs/source/ref/s3.rst b/docs/source/ref/s3.rst
index 1082c08..ce5c925 100644
--- a/docs/source/ref/s3.rst
+++ b/docs/source/ref/s3.rst
@@ -68,7 +68,7 @@
    :undoc-members:
 
 boto.s3.multidelete
------------------
+-------------------
 
 .. automodule:: boto.s3.multidelete
    :members:
@@ -96,7 +96,7 @@
    :undoc-members:
 
 boto.s3.tagging
---------------
+---------------
 
 .. automodule:: boto.s3.tagging
    :members:
diff --git a/docs/source/ref/support.rst b/docs/source/ref/support.rst
new file mode 100644
index 0000000..d63d809
--- /dev/null
+++ b/docs/source/ref/support.rst
@@ -0,0 +1,26 @@
+.. _ref-support:
+
+=======
+Support
+=======
+
+boto.support
+------------
+
+.. automodule:: boto.support
+   :members:
+   :undoc-members:
+
+boto.support.layer1
+-------------------
+
+.. automodule:: boto.support.layer1
+   :members:
+   :undoc-members:
+
+boto.support.exceptions
+-----------------------
+
+.. automodule:: boto.support.exceptions
+   :members:
+   :undoc-members:
diff --git a/docs/source/releasenotes/v2.9.1.rst b/docs/source/releasenotes/v2.9.1.rst
new file mode 100644
index 0000000..488730e
--- /dev/null
+++ b/docs/source/releasenotes/v2.9.1.rst
@@ -0,0 +1,48 @@
+boto v2.9.1
+===========
+
+:date: 2013/04/30
+
+Primarily a bugfix release, this release also includes support for the new
+AWS Support API.
+
+
+Features
+--------
+
+* AWS Support API - A client was added to support the new AWS Support API. It
+  gives programmatic access to Support cases opened with AWS. A short example
+  might look like::
+
+    >>> from boto.support.layer1 import SupportConnection
+    >>> conn = SupportConnection()
+    >>> new_case = conn.create_case(
+    ...     subject='Description of the issue',
+    ...     service_code='amazon-cloudsearch',
+    ...     category_code='performance',
+    ...     communication_body="We're seeing some latency from one of our...",
+    ...     severity_code='low'
+    ... )
+    >>> new_case['caseId']
+    u'case-...'
+
+  The :ref:`Support Tutorial <support_tut>` has more information on how to use
+  the new API. (:sha:`8c0451`)
+
+
+Bugfixes
+--------
+
+* The reintroduction of ``ResumableUploadHandler.get_upload_id`` that was
+  accidentally removed in a previous commit. (:sha:`758322`)
+* Added ``OrdinaryCallingFormat`` to support Google Storage's certificate
+  verification. (:sha:`4ca83b`)
+* Added the ``eu-west-1`` region for Redshift. (:sha:`e98b95`)
+* Added support for overriding the port used by any connection in ``boto``.
+  (:sha:`08e893`)
+* Added retry/checksumming support to the DynamoDB v2 client. (:sha:`969ae2`)
+* Several documentation improvements/fixes:
+
+    * Incorrect docs on EC2's ``import_key_pair``. (:sha:`6ada7d`)
+    * Clearer docs on the DynamoDB  ``count`` parameter. (:sha:`dfa456`)
+    * Fixed a typo in the ``autoscale_tut``. (:sha:`6df1ae`)
diff --git a/docs/source/releasenotes/v2.9.2.rst b/docs/source/releasenotes/v2.9.2.rst
new file mode 100644
index 0000000..0e9994a
--- /dev/null
+++ b/docs/source/releasenotes/v2.9.2.rst
@@ -0,0 +1,18 @@
+boto v2.9.2
+===========
+
+:date: 2013/04/30
+
+A hotfix release that adds the ``boto.support`` into ``setup.py``.
+
+
+Features
+--------
+
+* None.
+
+
+Bugfixes
+--------
+
+* Fixed the missing ``boto.support`` in ``setup.py``. (:sha:`9ac196`)
diff --git a/docs/source/releasenotes/v2.9.3.rst b/docs/source/releasenotes/v2.9.3.rst
new file mode 100644
index 0000000..1835862
--- /dev/null
+++ b/docs/source/releasenotes/v2.9.3.rst
@@ -0,0 +1,53 @@
+boto v2.9.3
+===========
+
+:date: 2013/05/15
+
+This release adds ELB support to Opsworks, optimized EBS support in EC2
+AutoScale, Parallel Scan support to DynamoDB v2, a higher-level interface to
+DynamoDB v2 and API updates to DataPipeline.
+
+
+Features
+--------
+
+* ELB support in Opsworks - You can now attach & describe the Elastic Load
+  Balancers within the Opsworks client. (:sha:`ecda87`)
+* Optimized EBS support in EC2 AutoScale - You can now specify whether an
+  AutoScale instance should be optimized for EBS I/O. (:sha:`f8acaa`)
+* Parallel Scan support in DynamoDB v2 - If you have extra read capacity &
+  a large amount of data, you can scan over the records in parallel by
+  telling DynamoDB to split the table into segments, then spinning up
+  threads/processes to each run over their own segment. (:sha:`db7f7b` & :sha:`7ed73c`)
+* Higher-level interface to DynamoDB v2 - A more convenient API for using
+  DynamoDB v2. The :ref:`DynamoDB v2 Tutorial <dynamodb2_tut>` has more
+  information on how to use the new API. (:sha:`0f7c8b`)
+
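+As a quick sketch of the new higher-level interface described above, basic
+usage might look something like the following (the ``users`` table and its
+``username`` hash key are made-up examples; see the tutorial for the full
+API)::
+
+    >>> from boto.dynamodb2.table import Table
+    >>> users = Table('users')
+    >>> users.put_item(data={
+    ...     'username': 'johndoe',
+    ...     'first_name': 'John',
+    ... })
+    True
+    >>> johndoe = users.get_item(username='johndoe')
+    >>> johndoe['first_name']
+    'John'
+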
+
+Backward-Incompatible Changes
+-----------------------------
+
+* API Update for DataPipeline - The ``error_code`` (integer) argument to
+  ``set_task_status`` changed to ``error_id`` (string). Many documentation
+  updates were also added. (:sha:`a78572`)
+
+
+Bugfixes
+--------
+
+* Bumped the AWS Support API version. (:sha:`0323f4`)
+* Fixed the S3 ``ResumableDownloadHandler`` so that it no longer tries to use
+  a hashing algorithm when used outside of GCS. (:sha:`29b046`)
+* Fixed a bug where Sig V4 URIs were improperly canonicalized. (:sha:`5269d8`)
+* Fixed a bug where Sig V4 ports were not included. (:sha:`cfaba3`)
+* Fixed a bug in CloudWatch's ``build_put_params`` that would overwrite
+  existing/necessary variables. (:sha:`550e00`)
+* Several documentation improvements/fixes:
+
+    * Added docs for RDS ``modify/modify_dbinstance``. (:sha:`777d73`)
+    * Fixed a typo in the ``README.rst``. (:sha:`181e0f`)
+    * Documentation fallout from the previous release. (:sha:`14a111`)
+    * Fixed a typo in the EC2 ``Image.run`` docs. (:sha:`5edd6a`)
+    * Added/improved docs for EC2 ``Image.run``. (:sha:`773ce5`)
+    * Added a CONTRIBUTING doc. (:sha:`cecbe8`)
+    * Fixed S3 ``create_bucket`` docs to specify "European Union". (:sha:`ddddfd`)
diff --git a/docs/source/releasenotes/v2.9.4.rst b/docs/source/releasenotes/v2.9.4.rst
new file mode 100644
index 0000000..675afd4
--- /dev/null
+++ b/docs/source/releasenotes/v2.9.4.rst
@@ -0,0 +1,30 @@
+boto v2.9.4
+===========
+
+:date: 2013/05/20
+
+This release adds updated Elastic Transcoder support & fixes several bugs
+from recent releases & API updates.
+
+
+Features
+--------
+
+* Updated Elastic Transcoder support - It now supports HLS, WebM, MPEG2-TS & a
+  host of `other features`_. (:sha:`89196a`)
+
+  .. _`other features`: http://aws.typepad.com/aws/2013/05/new-features-for-the-amazon-elastic-transcoder.html
+
+
+Bugfixes
+--------
+
+* Fixed a bug in the canonicalization of URLs on Windows. (:sha:`09ef8c`)
+* Fixed glacier part size bug (:issue:`1478`, :sha:`9e04171`)
+* Fixed a bug in the bucket regex for S3 involving capital letters.
+  (:sha:`950031`)
+* Fixed a bug where timestamps from Cloudformation would fail to be parsed.
+  (:sha:`b40542`)
+* Several documentation improvements/fixes:
+
+    * Added autodocs for many of the EC2 apis. (:sha:`79f939`)
diff --git a/docs/source/releasenotes/v2.9.5.rst b/docs/source/releasenotes/v2.9.5.rst
new file mode 100644
index 0000000..5df46bd
--- /dev/null
+++ b/docs/source/releasenotes/v2.9.5.rst
@@ -0,0 +1,32 @@
+boto v2.9.5
+===========
+
+:date: 2013/05/28
+
+This release adds support for `web identity federation`_ within the Security
+Token Service (STS) & fixes several bugs.
+
+.. _`web identity federation`: http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingWIF.html
+
+Features
+--------
+
+* Added support for web identity federation - You can now delegate token access
+  via either an OAuth 2.0 or OpenID provider. (:sha:`9bd0a3`)
+
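+A very rough sketch of the federation flow described above (the role ARN,
+session name and token below are placeholders, and the keyword names are
+assumed to mirror the underlying ``AssumeRoleWithWebIdentity`` STS action, so
+double-check them against the API docs)::
+
+    >>> from boto.sts import STSConnection
+    >>> conn = STSConnection()
+    >>> assumed = conn.assume_role_with_web_identity(
+    ...     role_arn='arn:aws:iam::123456789012:role/WebIdentityRole',
+    ...     role_session_name='web-session',
+    ...     web_identity_token='<token from your OAuth 2.0 / OpenID provider>'
+    ... )
+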
+
+Bugfixes
+--------
+
+* Altered the S3 key buffer to be a configurable value. (:issue:`1506`,
+  :sha:`8e3e36`)
+* Added Sphinx extension for better release notes. (:issue:`1511`,
+  :sha:`e2e32d` & :sha:`3d998b`)
+* Fixed a bug where DynamoDB v2 would only ever connect to the default endpoint.
+  (:issue:`1508`, :sha:`139912`)
+* Fixed an iteration/empty results bug & a ``between`` bug in DynamoDB v2.
+  (:issue:`1512`, :sha:`d109b6`)
+* Fixed an issue with ``EbsOptimized`` in EC2 Autoscale. (:issue:`1513`,
+  :sha:`424c41`)
+* Fixed a missing instance variable bug in DynamoDB v2. (:issue:`1516`,
+  :sha:`6fa8bf`)
diff --git a/docs/source/s3_tut.rst b/docs/source/s3_tut.rst
index 81b97e4..fc75e10 100644
--- a/docs/source/s3_tut.rst
+++ b/docs/source/s3_tut.rst
@@ -20,18 +20,18 @@
 this example, the AWS access key and AWS secret key are passed in to the
 method explicitly.  Alternatively, you can set the environment variables:
 
-AWS_ACCESS_KEY_ID - Your AWS Access Key ID
-AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key
+* `AWS_ACCESS_KEY_ID` - Your AWS Access Key ID
+* `AWS_SECRET_ACCESS_KEY` - Your AWS Secret Access Key
 
 and then call the constructor without any arguments, like this:
 
 >>> conn = S3Connection()
 
 There is also a shortcut function in the boto package, called connect_s3
-that may provide a slightly easier means of creating a connection:
+that may provide a slightly easier means of creating a connection::
 
->>> import boto
->>> conn = boto.connect_s3()
+    >>> import boto
+    >>> conn = boto.connect_s3()
 
 In either case, conn will point to an S3Connection object which we will
 use throughout the remainder of this tutorial.
@@ -44,14 +44,14 @@
 in S3.  A bucket can hold an unlimited amount of data so you could potentially
 have just one bucket in S3 for all of your information.  Or, you could create
 separate buckets for different types of data.  You can figure all of that out
-later, first let's just create a bucket.  That can be accomplished like this:
+later, first let's just create a bucket.  That can be accomplished like this::
 
->>> bucket = conn.create_bucket('mybucket')
-Traceback (most recent call last):
-  File "<stdin>", line 1, in ?
-  File "boto/connection.py", line 285, in create_bucket
-    raise S3CreateError(response.status, response.reason)
-boto.exception.S3CreateError: S3Error[409]: Conflict
+    >>> bucket = conn.create_bucket('mybucket')
+    Traceback (most recent call last):
+      File "<stdin>", line 1, in ?
+      File "boto/connection.py", line 285, in create_bucket
+        raise S3CreateError(response.status, response.reason)
+    boto.exception.S3CreateError: S3Error[409]: Conflict
 
 Whoa.  What happened there?  Well, the thing you have to know about
 buckets is that they are kind of like domain names.  It's one flat name
@@ -72,21 +72,26 @@
 The example above assumes that you want to create a bucket in the
 standard US region.  However, it is possible to create buckets in
 other locations.  To do so, first import the Location object from the
-boto.s3.connection module, like this:
+boto.s3.connection module, like this::
 
->>> from boto.s3.connection import Location
->>> dir(Location)
-['DEFAULT', 'EU', 'USWest', 'APSoutheast', '__doc__', '__module__']
->>>
+    >>> from boto.s3.connection import Location
+    >>> print '\n'.join(i for i in dir(Location) if i[0].isupper())
+    APNortheast
+    APSoutheast
+    APSoutheast2
+    DEFAULT
+    EU
+    SAEast
+    USWest
+    USWest2
 
-As you can see, the Location object defines three possible locations;
-DEFAULT, EU, USWest, and APSoutheast.  By default, the location is the
-empty string which is interpreted as the US Classic Region, the
-original S3 region.  However, by specifying another location at the
-time the bucket is created, you can instruct S3 to create the bucket
-in that location.  For example:
+As you can see, the Location object defines a number of possible locations.  By
+default, the location is the empty string which is interpreted as the US
+Classic Region, the original S3 region.  However, by specifying another
+location at the time the bucket is created, you can instruct S3 to create the
+bucket in that location.  For example::
 
->>> conn.create_bucket('mybucket', location=Location.EU)
+    >>> conn.create_bucket('mybucket', location=Location.EU)
 
 will create the bucket in the EU region (assuming the name is available).
 
@@ -99,34 +104,36 @@
 within your bucket.
 
 The Key object is used in boto to keep track of data stored in S3.  To store
-new data in S3, start by creating a new Key object:
+new data in S3, start by creating a new Key object::
 
->>> from boto.s3.key import Key
->>> k = Key(bucket)
->>> k.key = 'foobar'
->>> k.set_contents_from_string('This is a test of S3')
+    >>> from boto.s3.key import Key
+    >>> k = Key(bucket)
+    >>> k.key = 'foobar'
+    >>> k.set_contents_from_string('This is a test of S3')
 
 The net effect of these statements is to create a new object in S3 with a
 key of "foobar" and a value of "This is a test of S3".  To validate that
-this worked, quit out of the interpreter and start it up again.  Then:
+this worked, quit out of the interpreter and start it up again.  Then::
 
->>> import boto
->>> c = boto.connect_s3()
->>> b = c.create_bucket('mybucket') # substitute your bucket name here
->>> from boto.s3.key import Key
->>> k = Key(b)
->>> k.key = 'foobar'
->>> k.get_contents_as_string()
-'This is a test of S3'
+    >>> import boto
+    >>> c = boto.connect_s3()
+    >>> b = c.create_bucket('mybucket') # substitute your bucket name here
+    >>> from boto.s3.key import Key
+    >>> k = Key(b)
+    >>> k.key = 'foobar'
+    >>> k.get_contents_as_string()
+    'This is a test of S3'
 
 So, we can definitely store and retrieve strings.  A more interesting
 example may be to store the contents of a local file in S3 and then retrieve
 the contents to another local file.
 
->>> k = Key(b)
->>> k.key = 'myfile'
->>> k.set_contents_from_filename('foo.jpg')
->>> k.get_contents_to_filename('bar.jpg')
+::
+
+    >>> k = Key(b)
+    >>> k.key = 'myfile'
+    >>> k.set_contents_from_filename('foo.jpg')
+    >>> k.get_contents_to_filename('bar.jpg')
 
 There are a couple of things to note about this.  When you send data to
 S3 from a file or filename, boto will attempt to determine the correct
@@ -136,24 +143,77 @@
 to and from S3 so you should be able to send and receive large files without
 any problem.
 
+Accessing A Bucket
+------------------
+
+Once a bucket exists, you can access it by getting the bucket. For example::
+
+    >>> mybucket = conn.get_bucket('mybucket') # Substitute in your bucket name
+    >>> mybucket.list()
+    <listing of keys in the bucket>
+
+By default, this method tries to validate the bucket's existence. You can
+override this behavior by passing ``validate=False``.::
+
+    >>> nonexistent = conn.get_bucket('i-dont-exist-at-all', validate=False)
+
+If the bucket does not exist, an ``S3ResponseError`` will commonly be thrown. If
+you'd rather not deal with any exceptions, you can use the ``lookup`` method.::
+
+    >>> nonexistent = conn.lookup('i-dont-exist-at-all')
+    >>> if nonexistent is None:
+    ...     print "No such bucket!"
+    ...
+    No such bucket!
+
+Deleting A Bucket
+-----------------
+
+Removing a bucket can be done using the ``delete_bucket`` method. For example::
+
+    >>> conn.delete_bucket('mybucket') # Substitute in your bucket name
+
+The bucket must be empty of keys or this call will fail & an exception will be
+raised. You can remove a non-empty bucket by doing something like::
+
+    >>> full_bucket = conn.get_bucket('bucket-to-delete')
+    # It's full of keys. Delete them all.
+    >>> for key in full_bucket.list():
+    ...     key.delete()
+    ...
+    # The bucket is empty now. Delete it.
+    >>> conn.delete_bucket('bucket-to-delete')
+
+.. warning::
+
+    This method can cause data loss! Be very careful when using it.
+
+    Additionally, be aware that using the above method for removing all keys
+    and deleting the bucket involves a request for each key. As such, it's not
+    particularly fast & is very chatty.
+
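+If the chattiness is a concern, a less chatty option is S3's Multi-Object
+Delete, exposed via ``Bucket.delete_keys``. As a rough sketch (this assumes
+the bucket holds no more keys than a single Multi-Object Delete request
+allows; otherwise batch the key names yourself)::
+
+    >>> full_bucket = conn.get_bucket('bucket-to-delete')
+    >>> full_bucket.delete_keys([key.name for key in full_bucket.list()])
+    >>> conn.delete_bucket('bucket-to-delete')
+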
 Listing All Available Buckets
 -----------------------------
 In addition to accessing specific buckets via the create_bucket method
 you can also get a list of all available buckets that you have created.
 
->>> rs = conn.get_all_buckets()
+::
+
+    >>> rs = conn.get_all_buckets()
 
 This returns a ResultSet object (see the SQS Tutorial for more info on
 ResultSet objects).  The ResultSet can be used as a sequence or list type
 object to retrieve Bucket objects.
 
->>> len(rs)
-11
->>> for b in rs:
-... print b.name
-...
-<listing of available buckets>
->>> b = rs[0]
+::
+
+    >>> len(rs)
+    11
+    >>> for b in rs:
+    ...     print b.name
+    ...
+    <listing of available buckets>
+    >>> b = rs[0]
 
 Setting / Getting the Access Control List for Buckets and Keys
 --------------------------------------------------------------
@@ -168,6 +228,7 @@
 
 2. Use a "canned" access control policy.  There are four canned policies
    defined:
+
    a. private: Owner gets FULL_CONTROL.  No one else has any access rights.
    b. public-read: Owners gets FULL_CONTROL and the anonymous principal is granted READ access.
    c. public-read-write: Owner gets FULL_CONTROL and the anonymous principal is granted READ and WRITE access.
@@ -194,17 +255,19 @@
 get_acl object.  This method parses the AccessControlPolicy response sent
 by S3 and creates a set of Python objects that represent the ACL.
 
->>> acp = b.get_acl()
->>> acp
-<boto.acl.Policy instance at 0x2e6940>
->>> acp.acl
-<boto.acl.ACL instance at 0x2e69e0>
->>> acp.acl.grants
-[<boto.acl.Grant instance at 0x2e6a08>]
->>> for grant in acp.acl.grants:
-...   print grant.permission, grant.display_name, grant.email_address, grant.id
-...
-FULL_CONTROL <boto.user.User instance at 0x2e6a30>
+::
+
+    >>> acp = b.get_acl()
+    >>> acp
+    <boto.acl.Policy instance at 0x2e6940>
+    >>> acp.acl
+    <boto.acl.ACL instance at 0x2e69e0>
+    >>> acp.acl.grants
+    [<boto.acl.Grant instance at 0x2e6a08>]
+    >>> for grant in acp.acl.grants:
+    ...   print grant.permission, grant.display_name, grant.email_address, grant.id
+    ...
+    FULL_CONTROL <boto.user.User instance at 0x2e6a30>
 
 The Python objects representing the ACL can be found in the acl.py module
 of boto.
@@ -212,10 +275,10 @@
 Both the Bucket object and the Key object also provide shortcut
 methods to simplify the process of granting individuals specific
 access.  For example, if you want to grant an individual user READ
-access to a particular object in S3 you could do the following:
+access to a particular object in S3 you could do the following::
 
->>> key = b.lookup('mykeytoshare')
->>> key.add_email_grant('READ', 'foo@bar.com')
+    >>> key = b.lookup('mykeytoshare')
+    >>> key.add_email_grant('READ', 'foo@bar.com')
 
 The email address provided should be the one associated with the users
 AWS account.  There is a similar method called add_user_grant that accepts the
@@ -226,23 +289,23 @@
 S3 allows arbitrary user metadata to be assigned to objects within a bucket.
 To take advantage of this S3 feature, you should use the set_metadata and
 get_metadata methods of the Key object to set and retrieve metadata associated
-with an S3 object.  For example:
+with an S3 object.  For example::
 
->>> k = Key(b)
->>> k.key = 'has_metadata'
->>> k.set_metadata('meta1', 'This is the first metadata value')
->>> k.set_metadata('meta2', 'This is the second metadata value')
->>> k.set_contents_from_filename('foo.txt')
+    >>> k = Key(b)
+    >>> k.key = 'has_metadata'
+    >>> k.set_metadata('meta1', 'This is the first metadata value')
+    >>> k.set_metadata('meta2', 'This is the second metadata value')
+    >>> k.set_contents_from_filename('foo.txt')
 
 This code associates two metadata key/value pairs with the Key k.  To retrieve
-those values later:
+those values later::
 
->>> k = b.get_key('has_metadata)
->>> k.get_metadata('meta1')
-'This is the first metadata value'
->>> k.get_metadata('meta2')
-'This is the second metadata value'
->>>
+    >>> k = b.get_key('has_metadata')
+    >>> k.get_metadata('meta1')
+    'This is the first metadata value'
+    >>> k.get_metadata('meta2')
+    'This is the second metadata value'
+    >>>
 
 Setting/Getting/Deleting CORS Configuration on a Bucket
 -------------------------------------------------------
@@ -253,12 +316,12 @@
 rich client-side web applications with Amazon S3 and selectively allow
 cross-origin access to your Amazon S3 resources.
 
-To create a CORS configuration and associate it with a bucket:
+To create a CORS configuration and associate it with a bucket::
 
->>> from boto.s3.cors import CORSConfiguration
->>> cors_cfg = CORSConfiguration()
->>> cors_cfg.add_rule(['PUT', 'POST', 'DELETE'], 'https://www.example.com', allowed_header='*', max_age_seconds=3000, expose_header='x-amz-server-side-encryption')
->>> cors_cfg.add_rule('GET', '*')
+    >>> from boto.s3.cors import CORSConfiguration
+    >>> cors_cfg = CORSConfiguration()
+    >>> cors_cfg.add_rule(['PUT', 'POST', 'DELETE'], 'https://www.example.com', allowed_header='*', max_age_seconds=3000, expose_header='x-amz-server-side-encryption')
+    >>> cors_cfg.add_rule('GET', '*')
 
 The above code creates a CORS configuration object with two rules.
 
@@ -269,17 +332,119 @@
   return any requested headers.
 * The second rule allows cross-origin GET requests from all origins.
 
-To associate this configuration with a bucket:
+To associate this configuration with a bucket::
 
->>> import boto
->>> c = boto.connect_s3()
->>> bucket = c.lookup('mybucket')
->>> bucket.set_cors(cors_cfg)
+    >>> import boto
+    >>> c = boto.connect_s3()
+    >>> bucket = c.lookup('mybucket')
+    >>> bucket.set_cors(cors_cfg)
 
-To retrieve the CORS configuration associated with a bucket:
+To retrieve the CORS configuration associated with a bucket::
 
->>> cors_cfg = bucket.get_cors()
+    >>> cors_cfg = bucket.get_cors()
 
-And, finally, to delete all CORS configurations from a bucket:
+And, finally, to delete all CORS configurations from a bucket::
 
->>> bucket.delete_cors()
+    >>> bucket.delete_cors()
+
+Transitioning Objects to Glacier
+--------------------------------
+
+You can configure objects in S3 to transition to Glacier after a period of
+time.  This is done using lifecycle policies.  A lifecycle policy can also
+specify that an object should be deleted after a period of time.  Lifecycle
+configurations are assigned to buckets and require these parameters:
+
+* The object prefix that identifies the objects you are targeting.
+* The action you want S3 to perform on the identified objects.
+* The date (or time period) when you want S3 to perform these actions.
+
+For example, given a bucket ``s3-glacier-boto-demo``, we can first retrieve the
+bucket::
+
+    >>> import boto
+    >>> c = boto.connect_s3()
+    >>> bucket = c.get_bucket('s3-glacier-boto-demo')
+
+Then we can create a lifecycle object.  In our example, we want all objects
+under ``logs/*`` to transition to Glacier 30 days after the object is created.
+
+::
+
+    >>> from boto.s3.lifecycle import Lifecycle, Transition, Rule
+    >>> to_glacier = Transition(days=30, storage_class='GLACIER')
+    >>> rule = Rule('ruleid', 'logs/', 'Enabled', transition=to_glacier)
+    >>> lifecycle = Lifecycle()
+    >>> lifecycle.append(rule)
+
+.. note::
+
+  For API docs for the lifecycle objects, see :py:mod:`boto.s3.lifecycle`
+
+We can now configure the bucket with this lifecycle policy::
+
+    >>> bucket.configure_lifecycle(lifecycle)
+    True
+
+You can also retrieve the current lifecycle policy for the bucket::
+
+    >>> current = bucket.get_lifecycle_config()
+    >>> print current[0].transition
+    <Transition: in: 30 days, GLACIER>
+
+When an object transitions to Glacier, the storage class will be
+updated.  This can be seen when you **list** the objects in a bucket::
+
+    >>> for key in bucket.list():
+    ...   print key, key.storage_class
+    ...
+    <Key: s3-glacier-boto-demo,logs/testlog1.log> GLACIER
+
+You can also use the prefix argument to the ``bucket.list`` method::
+
+    >>> print list(bucket.list(prefix='logs/testlog1.log'))[0].storage_class
+    u'GLACIER'
+
+
+Restoring Objects from Glacier
+------------------------------
+
+Once an object has been transitioned to Glacier, you can restore the object
+back to S3.  To do so, you can use the :py:meth:`boto.s3.key.Key.restore`
+method of the key object.
+The ``restore`` method takes an integer that specifies the number of days
+to keep the object in S3.
+
+::
+
+    >>> import boto
+    >>> c = boto.connect_s3()
+    >>> bucket = c.get_bucket('s3-glacier-boto-demo')
+    >>> key = bucket.get_key('logs/testlog1.log')
+    >>> key.restore(days=5)
+
+It takes about 4 hours for a restore operation to make a copy of the archive
+available for you to access.  While the object is being restored, the
+``ongoing_restore`` attribute will be set to ``True``::
+
+
+    >>> key = bucket.get_key('logs/testlog1.log')
+    >>> print key.ongoing_restore
+    True
+
+When the restore is finished, this value will be ``False`` and the expiry
+date of the object will no longer be ``None``::
+
+    >>> key = bucket.get_key('logs/testlog1.log')
+    >>> print key.ongoing_restore
+    False
+    >>> print key.expiry_date
+    "Fri, 21 Dec 2012 00:00:00 GMT"
+
+
+.. note:: If there is no restore operation either in progress or completed,
+  the ``ongoing_restore`` attribute will be ``None``.
+
+Once the object is restored you can then download the contents::
+
+    >>> key.get_contents_to_filename('testlog1.log')
diff --git a/docs/source/ses_tut.rst b/docs/source/ses_tut.rst
index c71e886..d19a4e3 100644
--- a/docs/source/ses_tut.rst
+++ b/docs/source/ses_tut.rst
@@ -15,18 +15,19 @@
 The first step in accessing SES is to create a connection to the service.
 To do so, the most straight forward way is the following::
 
-    >>> import boto
-    >>> conn = boto.connect_ses(
+    >>> import boto.ses
+    >>> conn = boto.ses.connect_to_region(
+            'us-west-2',
             aws_access_key_id='<YOUR_AWS_KEY_ID>',
             aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')
     >>> conn
-    SESConnection:email.us-east-1.amazonaws.com
+    SESConnection:email.us-west-2.amazonaws.com
 
 Bear in mind that if you have your credentials in boto config in your home
 directory, the two keyword arguments in the call above are not needed. More
 details on configuration can be found in :doc:`boto_config_tut`.
 
-The :py:func:`boto.connect_ses` functions returns a
+The :py:func:`boto.ses.connect_to_region` function returns a
 :py:class:`boto.ses.connection.SESConnection` instance, which is the boto API
 for working with SES.
 
@@ -168,4 +169,4 @@
                 ]
             }
         }
-    }
\ No newline at end of file
+    }
diff --git a/docs/source/simpledb_tut.rst b/docs/source/simpledb_tut.rst
index 3960726..98cabfe 100644
--- a/docs/source/simpledb_tut.rst
+++ b/docs/source/simpledb_tut.rst
@@ -13,8 +13,11 @@
 The first step in accessing SimpleDB is to create a connection to the service.
 To do so, the most straight forward way is the following::
 
-    >>> import boto
-    >>> conn = boto.connect_sdb(aws_access_key_id='<YOUR_AWS_KEY_ID>',aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')
+    >>> import boto.sdb
+    >>> conn = boto.sdb.connect_to_region(
+    ...     'us-west-2',
+    ...     aws_access_key_id='<YOUR_AWS_KEY_ID>',
+    ...     aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')
     >>> conn
     SDBConnection:sdb.amazonaws.com
     >>>
diff --git a/docs/source/sqs_tut.rst b/docs/source/sqs_tut.rst
index 742800f..d4d69c9 100644
--- a/docs/source/sqs_tut.rst
+++ b/docs/source/sqs_tut.rst
@@ -11,29 +11,27 @@
 Creating a Connection
 ---------------------
 The first step in accessing SQS is to create a connection to the service.
-There are two ways to do this in boto.  The first is::
+The recommended method of doing this is as follows::
 
-    >>> from boto.sqs.connection import SQSConnection
-    >>> conn = SQSConnection('<aws access key>', '<aws secret key>')
+    >>> import boto.sqs
+    >>> conn = boto.sqs.connect_to_region(
+    ...     "us-west-2",
+    ...     aws_access_key_id='<aws access key>',
+    ...     aws_secret_access_key='<aws secret key>')
 
-At this point the variable conn will point to an SQSConnection object. Bear in mind that
-just as any other AWS service SQS is region-specfic. Also important to note is that by default,
-if no region is provided, it'll connect to the US-EAST-1 region. In
-this example, the AWS access key and AWS secret key are passed in to the
-method explicitely.  Alternatively, you can set the environment variables:
+At this point the variable conn will point to an SQSConnection object in the
+US-WEST-2 region. Bear in mind that just as any other AWS service, SQS is
+region-specific. In this example, the AWS access key and AWS secret key are
+passed in to the method explicitly. Alternatively, you can set the environment
+variables:
 
-AWS_ACCESS_KEY_ID - Your AWS Access Key ID
-AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key
+* ``AWS_ACCESS_KEY_ID`` - Your AWS Access Key ID
+* ``AWS_SECRET_ACCESS_KEY`` - Your AWS Secret Access Key
 
-and then call the constructor without any arguments, like this::
+and then simply call::
 
-    >>> conn = SQSConnection()
-
-There is also a shortcut function in the boto package, called connect_sqs
-that may provide a slightly easier means of creating a connection::
-
-    >>> import boto
-    >>> conn = boto.connect_sqs()
+    >>> import boto.sqs
+    >>> conn = boto.sqs.connect_to_region("us-west-2")
 
 In either case, conn will point to an SQSConnection object which we will
 use throughout the remainder of this tutorial.
@@ -88,7 +86,7 @@
 
 Getting a Queue (by name)
 -------------------------
-If you wish to explicitly retrieve an existing queue and the name of the queue is known, 
+If you wish to explicitly retrieve an existing queue and the name of the queue is known,
 you can retrieve the queue as follows::
 
     >>> my_queue = conn.get_queue('myqueue')
@@ -209,7 +207,7 @@
 
 Deleting Messages and Queues
 ----------------------------
-As stated above, messages are never deleted by the queue unless explicitly told to do so. 
+As stated above, messages are never deleted by the queue unless explicitly told to do so.
 To remove a message from a queue:
 
 >>> q.delete_message(m)
@@ -219,7 +217,7 @@
 
 >>> conn.delete_queue(q)
 
-However, and this is a good safe guard, this won't succeed unless the queue is empty.
+This will delete the queue, even if there are still messages within the queue.
 
 Additional Information
 ----------------------
diff --git a/docs/source/support_tut.rst b/docs/source/support_tut.rst
new file mode 100644
index 0000000..8dbc4fc
--- /dev/null
+++ b/docs/source/support_tut.rst
@@ -0,0 +1,154 @@
+.. _support_tut:
+
+===========================================
+An Introduction to boto's Support interface
+===========================================
+
+This tutorial focuses on the boto interface to Amazon Web Services Support,
+allowing you to programmatically interact with cases created with Support.
+This tutorial assumes that you have already downloaded and installed ``boto``.
+
+Creating a Connection
+---------------------
+
+The first step in accessing Support is to create a connection
+to the service.  There are two ways to do this in boto.  The first is:
+
+>>> from boto.support.connection import SupportConnection
+>>> conn = SupportConnection('<aws access key>', '<aws secret key>')
+
+At this point the variable ``conn`` will point to a ``SupportConnection``
+object. In this example, the AWS access key and AWS secret key are passed in to
+the method explicitly. Alternatively, you can set the environment variables:
+
+**AWS_ACCESS_KEY_ID**
+    Your AWS Access Key ID
+
+**AWS_SECRET_ACCESS_KEY**
+    Your AWS Secret Access Key
+
+and then call the constructor without any arguments, like this:
+
+>>> conn = SupportConnection()
+
+There is also a shortcut function in boto
+that makes it easy to create Support connections:
+
+>>> import boto.support
+>>> conn = boto.support.connect_to_region('us-west-2')
+
+In either case, ``conn`` points to a ``SupportConnection`` object which we will
+use throughout the remainder of this tutorial.
+
+
+Describing Existing Cases
+-------------------------
+
+If you have existing cases or want to fetch cases in the future, you'll
+use the ``SupportConnection.describe_cases`` method. For example::
+
+    >>> cases = conn.describe_cases()
+    >>> len(cases['cases'])
+    1
+    >>> cases['cases'][0]['title']
+    'A test case.'
+    >>> cases['cases'][0]['caseId']
+    'case-...'
+
+You can also fetch a set of cases (or single case) by providing a
+``case_id_list`` parameter::
+
+    >>> cases = conn.describe_cases(case_id_list=['case-1'])
+    >>> len(cases['cases'])
+    1
+    >>> cases['cases'][0]['title']
+    'A test case.'
+    >>> cases['cases'][0]['caseId']
+    'case-...'
+
+
+Describing Service Codes
+------------------------
+
+In order to create a new case, you'll need to fetch the service (& category)
+codes available to you. Fetching them is a simple call to::
+
+    >>> services = conn.describe_services()
+    >>> services['services'][0]['code']
+    'amazon-cloudsearch'
+
+If you only care about certain services, you can pass a list of service codes::
+
+    >>> service_details = conn.describe_services(service_code_list=[
+    ...     'amazon-cloudsearch',
+    ...     'amazon-dynamodb',
+    ... ])
+
+
+Describing Severity Levels
+--------------------------
+
+In order to create a new case, you'll also need to fetch the severity levels
+available to you. Fetching them looks like::
+
+    >>> severities = conn.describe_severity_levels()
+    >>> severities['severityLevels'][0]['code']
+    'low'
+
+
+Creating a Case
+---------------
+
+Upon creating a connection to Support, you can now work with existing Support
+cases, create new cases or resolve them. We'll start with creating a new case::
+
+    >>> new_case = conn.create_case(
+    ...     subject='This is a test case.',
+    ...     service_code='',
+    ...     category_code='',
+    ...     communication_body="",
+    ...     severity_code='low'
+    ... )
+    >>> new_case['caseId']
+    'case-...'
+
+For the ``service_code/category_code`` parameters, you'll need to do a
+``SupportConnection.describe_services`` call, then select the appropriate
+service code (& appropriate category code within that service) from the
+response.
+
+For the ``severity_code`` parameter, you'll need to do a
+``SupportConnection.describe_severity_levels`` call, then select the appropriate
+severity code from the response.
+
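+Putting those calls together, creating a case might look something like this
+sketch (it assumes the response shape of the AWS Support API, i.e. each
+service carries a ``categories`` list)::
+
+    >>> service = conn.describe_services()['services'][0]
+    >>> category = service['categories'][0]
+    >>> severity = conn.describe_severity_levels()['severityLevels'][0]
+    >>> new_case = conn.create_case(
+    ...     subject='Description of the issue',
+    ...     service_code=service['code'],
+    ...     category_code=category['code'],
+    ...     communication_body="Details about what we're seeing...",
+    ...     severity_code=severity['code']
+    ... )
+    >>> new_case['caseId']
+    'case-...'
+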
+
+Adding to a Case
+----------------
+
+Since the purpose of a support case involves back-and-forth communication,
+you can add additional communication to the case as well. Providing a response
+might look like::
+
+    >>> result = conn.add_communication_to_case(
+    ...     communication_body="This is a followup. It's working now.",
+    ...     case_id='case-...'
+    ... )
+
+
+Fetching all Communications for a Case
+--------------------------------------
+
+Getting all communications for a given case looks like::
+
+    >>> communications = conn.describe_communications('case-...')
+
+
+Resolving a Case
+----------------
+
+Once a case is finished, you should mark it as resolved to close it out.
+Resolving a case looks like::
+
+    >>> closed = conn.resolve_case(case_id='case-...')
+    >>> closed['result']
+    True
diff --git a/docs/source/vpc_tut.rst b/docs/source/vpc_tut.rst
index ce26ead..1244c4e 100644
--- a/docs/source/vpc_tut.rst
+++ b/docs/source/vpc_tut.rst
@@ -97,4 +97,13 @@
 --------------------------------------------------
 
 >>> ec2.connection.release_address(None, 'eipalloc-35cf685d')
->>>
\ No newline at end of file
+>>>
+
+To Get All VPN Connections
+--------------------------
+
+>>> vpns = c.get_all_vpn_connections()
+>>> vpns[0].id
+u'vpn-12ef67bv'
+>>> tunnels = vpns[0].tunnels
+>>> tunnels
+[VpnTunnel: 177.12.34.56, VpnTunnel: 177.12.34.57]
diff --git a/requirements.txt b/requirements.txt
index b2776cb..4d6572c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,9 +1,12 @@
-mock==0.8.0
-nose==1.1.2
-M2Crypto==0.21.1
-requests==0.13.1
+mock==1.0.1
+nose==1.2.1
+# If you upgrade to ``requests>=1.2.1``, please update
+# ``boto/cloudsearch/document.py``.
+requests>=1.1.0
+rsa==3.1.1
 tox==1.4
 Sphinx==1.1.3
 simplejson==2.5.2
 argparse==1.2.1
 unittest2==0.5.1
+httpretty==0.5.5
diff --git a/setup.py b/setup.py
index 662c5e1..50e12f6 100644
--- a/setup.py
+++ b/setup.py
@@ -23,6 +23,8 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+from __future__ import with_statement
+
 try:
     from setuptools import setup
     extra = dict(test_suite="tests.test.suite", include_package_data=True)
@@ -39,18 +41,23 @@
     print >> sys.stderr, error
     sys.exit(1)
 
+def readme():
+    with open("README.rst") as f:
+        return f.read()
+
 setup(name = "boto",
       version = __version__,
       description = "Amazon Web Services Library",
-      long_description = open("README.rst").read(),
+      long_description = readme(),
       author = "Mitch Garnaat",
       author_email = "mitch@garnaat.com",
       scripts = ["bin/sdbadmin", "bin/elbadmin", "bin/cfadmin",
                  "bin/s3put", "bin/fetch_file", "bin/launch_instance",
                  "bin/list_instances", "bin/taskadmin", "bin/kill_instance",
                  "bin/bundle_image", "bin/pyami_sendmail", "bin/lss3",
-                 "bin/cq", "bin/route53", "bin/s3multiput", "bin/cwutil",
-                 "bin/instance_events", "bin/asadmin", "bin/glacier"],
+                 "bin/cq", "bin/route53", "bin/cwutil", "bin/instance_events",
+                 "bin/asadmin", "bin/glacier", "bin/mturk",
+                 "bin/dynamodb_dump", "bin/dynamodb_load"],
       url = "https://github.com/boto/boto/",
       packages = ["boto", "boto.sqs", "boto.s3", "boto.gs", "boto.file",
                   "boto.ec2", "boto.ec2.cloudwatch", "boto.ec2.autoscale",
@@ -64,7 +71,10 @@
                   "boto.fps", "boto.emr", "boto.emr", "boto.sns",
                   "boto.ecs", "boto.iam", "boto.route53", "boto.ses",
                   "boto.cloudformation", "boto.sts", "boto.dynamodb",
-                  "boto.swf", "boto.mws", "boto.cloudsearch", "boto.glacier"],
+                  "boto.swf", "boto.mws", "boto.cloudsearch", "boto.glacier",
+                  "boto.beanstalk", "boto.datapipeline", "boto.elasticache",
+                  "boto.elastictranscoder", "boto.opsworks", "boto.redshift",
+                  "boto.dynamodb2", "boto.support"],
       package_data = {"boto.cacerts": ["cacerts.txt"]},
       license = "MIT",
       platforms = "Posix; MacOS X; Windows",
diff --git a/tests/integration/cloudformation/test_connection.py b/tests/integration/cloudformation/test_connection.py
new file mode 100644
index 0000000..9152aa1
--- /dev/null
+++ b/tests/integration/cloudformation/test_connection.py
@@ -0,0 +1,110 @@
+#!/usr/bin/env python
+import time
+import json
+
+from tests.unit import  unittest
+from boto.cloudformation.connection import CloudFormationConnection
+
+
+BASIC_EC2_TEMPLATE = {
+    "AWSTemplateFormatVersion": "2010-09-09",
+    "Description": "AWS CloudFormation Sample Template EC2InstanceSample",
+    "Parameters": {
+    },
+    "Mappings": {
+        "RegionMap": {
+            "us-east-1": {
+                "AMI": "ami-7f418316"
+            }
+        }
+    },
+    "Resources": {
+        "Ec2Instance": {
+            "Type": "AWS::EC2::Instance",
+            "Properties": {
+                "ImageId": {
+                    "Fn::FindInMap": [
+                        "RegionMap",
+                        {
+                            "Ref": "AWS::Region"
+                        },
+                        "AMI"
+                    ]
+                },
+                "UserData": {
+                    "Fn::Base64": "a" * 15000
+                }
+            }
+        }
+    },
+    "Outputs": {
+        "InstanceId": {
+            "Description": "InstanceId of the newly created EC2 instance",
+            "Value": {
+                "Ref": "Ec2Instance"
+            }
+        },
+        "AZ": {
+            "Description": "Availability Zone of the newly created EC2 instance",
+            "Value": {
+                "Fn::GetAtt": [
+                    "Ec2Instance",
+                    "AvailabilityZone"
+                ]
+            }
+        },
+        "PublicIP": {
+            "Description": "Public IP address of the newly created EC2 instance",
+            "Value": {
+                "Fn::GetAtt": [
+                    "Ec2Instance",
+                    "PublicIp"
+                ]
+            }
+        },
+        "PrivateIP": {
+            "Description": "Private IP address of the newly created EC2 instance",
+            "Value": {
+                "Fn::GetAtt": [
+                    "Ec2Instance",
+                    "PrivateIp"
+                ]
+            }
+        },
+        "PublicDNS": {
+            "Description": "Public DNSName of the newly created EC2 instance",
+            "Value": {
+                "Fn::GetAtt": [
+                    "Ec2Instance",
+                    "PublicDnsName"
+                ]
+            }
+        },
+        "PrivateDNS": {
+            "Description": "Private DNSName of the newly created EC2 instance",
+            "Value": {
+                "Fn::GetAtt": [
+                    "Ec2Instance",
+                    "PrivateDnsName"
+                ]
+            }
+        }
+    }
+}
+
+
+class TestCloudformationConnection(unittest.TestCase):
+    def setUp(self):
+        self.connection = CloudFormationConnection()
+        self.stack_name = 'testcfnstack' + str(int(time.time()))
+
+    def test_large_template_stack_size(self):
+        # See https://github.com/boto/boto/issues/1037
+        body = self.connection.create_stack(
+            self.stack_name,
+            template_body=json.dumps(BASIC_EC2_TEMPLATE))
+        self.addCleanup(self.connection.delete_stack, self.stack_name)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/integration/datapipeline/test_layer1.py b/tests/integration/datapipeline/test_layer1.py
new file mode 100644
index 0000000..6634770
--- /dev/null
+++ b/tests/integration/datapipeline/test_layer1.py
@@ -0,0 +1,122 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import time
+from tests.unit import unittest
+
+from boto.datapipeline import layer1
+
+
+class TestDataPipeline(unittest.TestCase):
+    datapipeline = True
+
+    def setUp(self):
+        self.connection = layer1.DataPipelineConnection()
+        self.sample_pipeline_objects = [
+            {'fields': [
+                {'key': 'workerGroup', 'stringValue': 'MyworkerGroup'}],
+             'id': 'Default',
+             'name': 'Default'},
+            {'fields': [
+                {'key': 'startDateTime', 'stringValue': '2012-09-25T17:00:00'},
+                {'key': 'type', 'stringValue': 'Schedule'},
+                {'key': 'period', 'stringValue': '1 hour'},
+                {'key': 'endDateTime', 'stringValue': '2012-09-25T18:00:00'}],
+             'id': 'Schedule',
+             'name': 'Schedule'},
+            {'fields': [
+                {'key': 'type', 'stringValue': 'ShellCommandActivity'},
+                {'key': 'command', 'stringValue': 'echo hello'},
+                {'key': 'parent', 'refValue': 'Default'},
+                {'key': 'schedule', 'refValue': 'Schedule'}],
+             'id': 'SayHello',
+             'name': 'SayHello'}
+        ]
+        self.connection.auth_service_name = 'datapipeline'
+
+    def create_pipeline(self, name, unique_id, description=None):
+        response = self.connection.create_pipeline(name, unique_id,
+                                                   description)
+        pipeline_id = response['pipelineId']
+        self.addCleanup(self.connection.delete_pipeline, pipeline_id)
+        return pipeline_id
+
+    def get_pipeline_state(self, pipeline_id):
+        response = self.connection.describe_pipelines([pipeline_id])
+        for attr in response['pipelineDescriptionList'][0]['fields']:
+            if attr['key'] == '@pipelineState':
+                return attr['stringValue']
+
+    def test_can_create_and_delete_a_pipeline(self):
+        response = self.connection.create_pipeline('name', 'unique_id',
+                                                   'description')
+        self.connection.delete_pipeline(response['pipelineId'])
+
+    def test_validate_pipeline(self):
+        pipeline_id = self.create_pipeline('name2', 'unique_id2')
+
+        self.connection.validate_pipeline_definition(
+            self.sample_pipeline_objects, pipeline_id)
+
+    def test_put_pipeline_definition(self):
+        pipeline_id = self.create_pipeline('name3', 'unique_id3')
+        self.connection.put_pipeline_definition(self.sample_pipeline_objects,
+                                                pipeline_id)
+
+        # We should now be able to get the pipeline definition and see
+        # that it matches what we put.
+        response = self.connection.get_pipeline_definition(pipeline_id)
+        objects = response['pipelineObjects']
+        self.assertEqual(len(objects), 3)
+        self.assertEqual(objects[0]['id'], 'Default')
+        self.assertEqual(objects[0]['name'], 'Default')
+        self.assertEqual(objects[0]['fields'],
+                         [{'key': 'workerGroup', 'stringValue': 'MyworkerGroup'}])
+
+    def test_activate_pipeline(self):
+        pipeline_id = self.create_pipeline('name4', 'unique_id4')
+        self.connection.put_pipeline_definition(self.sample_pipeline_objects,
+                                                pipeline_id)
+        self.connection.activate_pipeline(pipeline_id)
+
+        attempts = 0
+        state = self.get_pipeline_state(pipeline_id)
+        while state != 'SCHEDULED' and attempts < 10:
+            time.sleep(10)
+            attempts += 1
+            state = self.get_pipeline_state(pipeline_id)
+        if state != 'SCHEDULED':
+            self.fail("Pipeline did not become scheduled "
+                      "after 10 attempts.")
+        objects = self.connection.describe_objects(['Default'], pipeline_id)
+        field = objects['pipelineObjects'][0]['fields'][0]
+        self.assertDictEqual(field, {'stringValue': 'COMPONENT', 'key': '@sphere'})
+
+    def test_list_pipelines(self):
+        pipeline_id = self.create_pipeline('name5', 'unique_id5')
+        pipeline_id_list = [p['id'] for p in
+                            self.connection.list_pipelines()['pipelineIdList']]
+        self.assertTrue(pipeline_id in pipeline_id_list)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/integration/dynamodb/test_layer2.py b/tests/integration/dynamodb/test_layer2.py
index a87ade2..a57c4a9 100644
--- a/tests/integration/dynamodb/test_layer2.py
+++ b/tests/integration/dynamodb/test_layer2.py
@@ -23,10 +23,11 @@
 """
 Tests for Layer2 of Amazon DynamoDB
 """
-
 import unittest
 import time
 import uuid
+from decimal import Decimal
+
 from boto.dynamodb.exceptions import DynamoDBKeyNotFoundError
 from boto.dynamodb.exceptions import DynamoDBConditionalCheckFailedError
 from boto.dynamodb.layer2 import Layer2
@@ -43,6 +44,16 @@
         self.hash_key_proto_value = ''
         self.range_key_name = 'subject'
         self.range_key_proto_value = ''
+        self.table_name = 'sample_data_%s' % int(time.time())
+
+    def create_sample_table(self):
+        schema = self.dynamodb.create_schema(
+            self.hash_key_name, self.hash_key_proto_value,
+            self.range_key_name,
+            self.range_key_proto_value)
+        table = self.create_table(self.table_name, schema, 5, 5)
+        table.refresh(wait_for_active=True)
+        return table
 
     def create_table(self, table_name, schema, read_units, write_units):
         result = self.dynamodb.create_table(table_name, schema, read_units, write_units)
@@ -229,7 +240,7 @@
             'Answered': 0,
             'Tags': set(['largeobject', 'multipart upload']),
             'LastPostDateTime': '12/9/2011 11:36:03 PM'
-            }
+        }
         item3 = table.new_item(item3_key, item3_range, item3_attrs)
         item3.put()
 
@@ -238,20 +249,20 @@
         table2_item1_attrs = {
             'DateTimePosted': '25/1/2011 12:34:56 PM',
             'Text': 'I think boto rocks and so does DynamoDB'
-            }
+        }
         table2_item1 = table2.new_item(table2_item1_key,
                                        attrs=table2_item1_attrs)
         table2_item1.put()
 
         # Try a few queries
-        items = table.query('Amazon DynamoDB', BEGINS_WITH('DynamoDB'))
+        items = table.query('Amazon DynamoDB', range_key_condition=BEGINS_WITH('DynamoDB'))
         n = 0
         for item in items:
             n += 1
         assert n == 2
         assert items.consumed_units > 0
 
-        items = table.query('Amazon DynamoDB', BEGINS_WITH('DynamoDB'),
+        items = table.query('Amazon DynamoDB', range_key_condition=BEGINS_WITH('DynamoDB'),
                             request_limit=1, max_results=1)
         n = 0
         for item in items:
@@ -267,7 +278,7 @@
         assert n == 3
         assert items.consumed_units > 0
 
-        items = table.scan({'Replies': GT(0)})
+        items = table.scan(scan_filter={'Replies': GT(0)})
         n = 0
         for item in items:
             n += 1
@@ -299,8 +310,8 @@
         item4 = table.get_item(item3_key, item3_range, consistent_read=True)
         assert item4['IntAttr'] == integer_value
         assert item4['FloatAttr'] == float_value
-        assert item4['TrueBoolean'] == True
-        assert item4['FalseBoolean'] == False
+        assert bool(item4['TrueBoolean']) is True
+        assert bool(item4['FalseBoolean']) is False
         # The values will not necessarily be in the same order as when
         # we wrote them to the DB.
         for i in item4['IntSetAttr']:
@@ -336,7 +347,7 @@
             'Answered': 0,
             'Tags': set(['largeobject', 'multipart upload']),
             'LastPostDateTime': '12/9/2011 11:36:03 PM'
-            }
+        }
         item5_key = 'Amazon S3'
         item5_range = 'S3 Thread 3'
         item5_attrs = {
@@ -347,7 +358,7 @@
             'Answered': 0,
             'Tags': set(['largeobject', 'multipart upload']),
             'LastPostDateTime': '12/9/2011 11:36:03 PM'
-            }
+        }
         item4 = table.new_item(item4_key, item4_range, item4_attrs)
         item5 = table.new_item(item5_key, item5_range, item5_attrs)
         batch_list = c.new_batch_write_list()
@@ -355,21 +366,31 @@
         response = batch_list.submit()
         # should really check for unprocessed items
 
+        # Do some generator gymnastics
+        results = table.scan(scan_filter={'Tags': CONTAINS('table')})
+        assert results.scanned_count == 5
+        results = table.scan(request_limit=2, max_results=5)
+        assert results.count == 2
+        for item in results:
+            if results.count == 2:
+                assert results.remaining == 4
+                results.remaining -= 2
+                results.next_response()
+            else:
+                assert results.count == 4
+                assert results.remaining in (0, 1)
+        assert results.count == 4
+        results = table.scan(request_limit=6, max_results=4)
+        assert len(list(results)) == 4
+        assert results.count == 4
+
         batch_list = c.new_batch_write_list()
         batch_list.add_batch(table, deletes=[(item4_key, item4_range),
                                              (item5_key, item5_range)])
         response = batch_list.submit()
 
-
         # Try queries
-        results = table.query('Amazon DynamoDB', BEGINS_WITH('DynamoDB'))
-        n = 0
-        for item in results:
-            n += 1
-        assert n == 2
-
-        # Try scans
-        results = table.scan({'Tags': CONTAINS('table')})
+        results = table.query('Amazon DynamoDB', range_key_condition=BEGINS_WITH('DynamoDB'))
         n = 0
         for item in results:
             n += 1
@@ -380,7 +401,7 @@
         item1.delete(expected_value=expected)
 
         self.assertFalse(table.has_item(item1_key, range_key=item1_range,
-                                       consistent_read=True))
+                                        consistent_read=True))
         # Now delete the remaining items
         ret_vals = item2.delete(return_values='ALL_OLD')
         # some additional checks here would be useful
@@ -414,7 +435,7 @@
             'BinarySequence': set([Binary('\x01\x02'), Binary('\x03\x04')]),
             'Tags': set(['largeobject', 'multipart upload']),
             'LastPostDateTime': '12/9/2011 11:36:03 PM'
-            }
+        }
         item1 = table.new_item(item1_key, item1_range, item1_attrs)
         item1.put()
 
@@ -428,3 +449,36 @@
         self.assertEqual(retrieved['BinaryData'], bytes('\x01\x02\x03\x04'))
         self.assertEqual(retrieved['BinarySequence'],
                          set([Binary('\x01\x02'), Binary('\x03\x04')]))
+
+    def test_put_decimal_attrs(self):
+        self.dynamodb.use_decimals()
+        table = self.create_sample_table()
+        item = table.new_item('foo', 'bar')
+        item['decimalvalue'] = Decimal('1.12345678912345')
+        item.put()
+        retrieved = table.get_item('foo', 'bar')
+        self.assertEqual(retrieved['decimalvalue'], Decimal('1.12345678912345'))
+
+    def test_lossy_float_conversion(self):
+        table = self.create_sample_table()
+        item = table.new_item('foo', 'bar')
+        item['floatvalue'] = 1.12345678912345
+        item.put()
+        retrieved = table.get_item('foo', 'bar')['floatvalue']
+        # Notice how this is not equal to the original value.
+        self.assertNotEqual(1.12345678912345, retrieved)
+        # Instead, it's truncated:
+        self.assertEqual(1.12345678912, retrieved)
+
+    def test_large_integers(self):
+        # It's not just floating point numbers; large integers
+        # can trigger rounding issues.
+        self.dynamodb.use_decimals()
+        table = self.create_sample_table()
+        item = table.new_item('foo', 'bar')
+        item['decimalvalue'] = Decimal('129271300103398600')
+        item.put()
+        retrieved = table.get_item('foo', 'bar')
+        self.assertEqual(retrieved['decimalvalue'], Decimal('129271300103398600'))
+        # Also comparable directly to an int.
+        self.assertEqual(retrieved['decimalvalue'], 129271300103398600)
diff --git a/tests/integration/dynamodb/test_table.py b/tests/integration/dynamodb/test_table.py
new file mode 100644
index 0000000..c407b36
--- /dev/null
+++ b/tests/integration/dynamodb/test_table.py
@@ -0,0 +1,84 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import time
+from tests.unit import unittest
+
+from boto.dynamodb.layer2 import Layer2
+from boto.dynamodb.table import Table
+from boto.dynamodb.schema import Schema
+
+
+class TestDynamoDBTable(unittest.TestCase):
+    dynamodb = True
+
+    def setUp(self):
+        self.dynamodb = Layer2()
+        self.schema = Schema.create(('foo', 'N'), ('bar', 'S'))
+        self.table_name = 'testtable%s' % int(time.time())
+
+    def create_table(self, table_name, schema, read_units, write_units):
+        result = self.dynamodb.create_table(table_name, schema, read_units, write_units)
+        self.addCleanup(self.dynamodb.delete_table, result)
+        return result
+
+    def assertAllEqual(self, *items):
+        first = items[0]
+        for item in items[1:]:
+            self.assertEqual(first, item)
+
+    def test_table_retrieval_parity(self):
+        created_table = self.dynamodb.create_table(
+            self.table_name, self.schema, 1, 1)
+        created_table.refresh(wait_for_active=True)
+
+        retrieved_table = self.dynamodb.get_table(self.table_name)
+
+        constructed_table = self.dynamodb.table_from_schema(self.table_name,
+                                                            self.schema)
+
+        # All three tables should have the same name
+        # and schema attributes.
+        self.assertAllEqual(created_table.name,
+                            retrieved_table.name,
+                            constructed_table.name)
+
+        self.assertAllEqual(created_table.schema,
+                            retrieved_table.schema,
+                            constructed_table.schema)
+
+        # However for create_time, status, read/write units,
+        # only the created/retrieved table will have equal
+        # values.
+        self.assertEqual(created_table.create_time,
+                         retrieved_table.create_time)
+        self.assertEqual(created_table.status,
+                         retrieved_table.status)
+        self.assertEqual(created_table.read_units,
+                         retrieved_table.read_units)
+        self.assertEqual(created_table.write_units,
+                         retrieved_table.write_units)
+
+        # The constructed table will have values of None.
+        self.assertIsNone(constructed_table.create_time)
+        self.assertIsNone(constructed_table.status)
+        self.assertIsNone(constructed_table.read_units)
+        self.assertIsNone(constructed_table.write_units)
diff --git a/tests/integration/dynamodb2/__init__.py b/tests/integration/dynamodb2/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/integration/dynamodb2/__init__.py
diff --git a/tests/integration/dynamodb2/test_cert_verification.py b/tests/integration/dynamodb2/test_cert_verification.py
new file mode 100644
index 0000000..3901c57
--- /dev/null
+++ b/tests/integration/dynamodb2/test_cert_verification.py
@@ -0,0 +1,40 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Check that all of the certs on all service endpoints validate.
+"""
+
+import unittest
+import boto.dynamodb2
+
+
+class CertVerificationTest(unittest.TestCase):
+
+    dynamodb2 = True
+    ssl = True
+
+    def test_certs(self):
+        for region in boto.dynamodb2.regions():
+            c = region.connect()
+            c.list_tables()
diff --git a/tests/integration/dynamodb2/test_highlevel.py b/tests/integration/dynamodb2/test_highlevel.py
new file mode 100644
index 0000000..a02046b
--- /dev/null
+++ b/tests/integration/dynamodb2/test_highlevel.py
@@ -0,0 +1,266 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Tests for DynamoDB v2 high-level abstractions.
+"""
+import time
+
+from tests.unit import unittest
+from boto.dynamodb2 import exceptions
+from boto.dynamodb2.fields import HashKey, RangeKey, KeysOnlyIndex
+from boto.dynamodb2.table import Table
+from boto.dynamodb2.types import NUMBER
+
+
+class DynamoDBv2Test(unittest.TestCase):
+    dynamodb = True
+
+    def test_integration(self):
+        # Test creating a full table with all options specified.
+        users = Table.create('users', schema=[
+            HashKey('username'),
+            RangeKey('friend_count', data_type=NUMBER)
+        ], throughput={
+            'read': 5,
+            'write': 5,
+        }, indexes={
+            KeysOnlyIndex('LastNameIndex', parts=[
+                HashKey('username'),
+                RangeKey('last_name')
+            ]),
+        })
+        self.addCleanup(users.delete)
+
+        self.assertEqual(len(users.schema), 2)
+        self.assertEqual(users.throughput['read'], 5)
+
+        # Wait for it.
+        time.sleep(60)
+
+        # Make sure things line up if we're introspecting the table.
+        users_hit_api = Table('users')
+        users_hit_api.describe()
+        self.assertEqual(len(users.schema), len(users_hit_api.schema))
+        self.assertEqual(users.throughput, users_hit_api.throughput)
+        self.assertEqual(len(users.indexes), len(users_hit_api.indexes))
+
+        # Test putting some items individually.
+        users.put_item(data={
+            'username': 'johndoe',
+            'first_name': 'John',
+            'last_name': 'Doe',
+            'friend_count': 4
+        })
+
+        users.put_item(data={
+            'username': 'alice',
+            'first_name': 'Alice',
+            'last_name': 'Expert',
+            'friend_count': 2
+        })
+
+        time.sleep(5)
+
+        # Test batch writing.
+        with users.batch_write() as batch:
+            batch.put_item({
+                'username': 'jane',
+                'first_name': 'Jane',
+                'last_name': 'Doe',
+                'friend_count': 3
+            })
+            batch.delete_item(username='alice', friend_count=2)
+            batch.put_item({
+                'username': 'bob',
+                'first_name': 'Bob',
+                'last_name': 'Smith',
+                'friend_count': 1
+            })
+
+        time.sleep(5)
+
+        # Test getting an item & updating it.
+        # This is the "safe" variant (only write if there have been no
+        # changes).
+        jane = users.get_item(username='jane', friend_count=3)
+        self.assertEqual(jane['first_name'], 'Jane')
+        jane['last_name'] = 'Doh'
+        self.assertTrue(jane.save())
+
+        # Test strongly consistent getting of an item.
+        # Additionally, test the overwrite behavior.
+        client_1_jane = users.get_item(
+            username='jane',
+            friend_count=3,
+            consistent=True
+        )
+        self.assertEqual(client_1_jane['first_name'], 'Jane')
+        client_2_jane = users.get_item(
+            username='jane',
+            friend_count=3,
+            consistent=True
+        )
+        self.assertEqual(client_2_jane['first_name'], 'Jane')
+
+        # Write & assert the ``first_name`` is gone, then...
+        del client_1_jane['first_name']
+        self.assertTrue(client_1_jane.save())
+        check_name = users.get_item(
+            username='jane',
+            friend_count=3,
+            consistent=True
+        )
+        self.assertEqual(check_name['first_name'], None)
+
+        # ...overwrite the data with what's in memory.
+        client_2_jane['first_name'] = 'Joan'
+        # Now a write that fails due to default expectations...
+        self.assertRaises(exceptions.JSONResponseError, client_2_jane.save)
+        # ... so we force an overwrite.
+        self.assertTrue(client_2_jane.save(overwrite=True))
+        check_name_again = users.get_item(
+            username='jane',
+            friend_count=3,
+            consistent=True
+        )
+        self.assertEqual(check_name_again['first_name'], 'Joan')
+
+        # Reset it.
+        jane.mark_dirty()
+        self.assertTrue(jane.save(overwrite=True))
+
+        # Test the partial update behavior.
+        client_3_jane = users.get_item(
+            username='jane',
+            friend_count=3,
+            consistent=True
+        )
+        client_4_jane = users.get_item(
+            username='jane',
+            friend_count=3,
+            consistent=True
+        )
+        client_3_jane['favorite_band'] = 'Feed Me'
+        # No ``overwrite`` needed due to new data.
+        self.assertTrue(client_3_jane.save())
+        # Expectations are only checked on the ``first_name``, so what wouldn't
+        # have succeeded by default does succeed here.
+        client_4_jane['first_name'] = 'Jacqueline'
+        self.assertTrue(client_4_jane.partial_save())
+        partial_jane = users.get_item(
+            username='jane',
+            friend_count=3,
+            consistent=True
+        )
+        self.assertEqual(partial_jane['favorite_band'], 'Feed Me')
+        self.assertEqual(partial_jane['first_name'], 'Jacqueline')
+
+        # Reset it.
+        jane.mark_dirty()
+        self.assertTrue(jane.save(overwrite=True))
+
+        # Test the eventually consistent query.
+        results = users.query(
+            username__eq='johndoe',
+            last_name__eq='Doe',
+            index='LastNameIndex',
+            reverse=True
+        )
+
+        for res in results:
+            self.assertTrue(res['username'] in ['johndoe',])
+
+        # Test the strongly consistent query.
+        c_results = users.query(
+            username__eq='johndoe',
+            last_name__eq='Doe',
+            index='LastNameIndex',
+            reverse=True,
+            consistent=True
+        )
+
+        for res in c_results:
+            self.assertTrue(res['username'] in ['johndoe',])
+
+        # Test scans without filters.
+        all_users = users.scan(limit=7)
+        self.assertEqual(all_users.next()['username'], 'bob')
+        self.assertEqual(all_users.next()['username'], 'jane')
+        self.assertEqual(all_users.next()['username'], 'johndoe')
+
+        # Test scans with a filter.
+        filtered_users = users.scan(limit=2, username__beginswith='j')
+        self.assertEqual(filtered_users.next()['username'], 'jane')
+        self.assertEqual(filtered_users.next()['username'], 'johndoe')
+
+        # Test deleting a single item.
+        johndoe = users.get_item(username='johndoe', friend_count=4)
+        johndoe.delete()
+
+        # Test the eventually consistent batch get.
+        results = users.batch_get(keys=[
+            {'username': 'bob', 'friend_count': 1},
+            {'username': 'jane', 'friend_count': 3}
+        ])
+        batch_users = []
+
+        for res in results:
+            batch_users.append(res)
+            self.assertTrue(res['first_name'] in ['Bob', 'Jane'])
+
+        self.assertEqual(len(batch_users), 2)
+
+        # Test the strongly consistent batch get.
+        c_results = users.batch_get(keys=[
+            {'username': 'bob', 'friend_count': 1},
+            {'username': 'jane', 'friend_count': 3}
+        ], consistent=True)
+        c_batch_users = []
+
+        for res in c_results:
+            c_batch_users.append(res)
+            self.assertTrue(res['first_name'] in ['Bob', 'Jane'])
+
+        self.assertEqual(len(c_batch_users), 2)
+
+        # Test count, but only loosely, because of eventual-consistency lag.
+        self.assertTrue(users.count() > -1)
+
+        # Test without LSIs (describe calls shouldn't fail).
+        admins = Table.create('admins', schema=[
+            HashKey('username')
+        ])
+        self.addCleanup(admins.delete)
+        time.sleep(60)
+        admins.describe()
+        self.assertEqual(admins.throughput['read'], 5)
+        self.assertEqual(admins.indexes, [])
+
+        # A single query term should fail on a table with *ONLY* a HashKey.
+        self.assertRaises(
+            exceptions.QueryError,
+            admins.query,
+            username__eq='johndoe'
+        )
+        # But it shouldn't break on more complex tables.
+        res = users.query(username__eq='johndoe')
diff --git a/tests/integration/dynamodb2/test_layer1.py b/tests/integration/dynamodb2/test_layer1.py
new file mode 100644
index 0000000..0a0beef
--- /dev/null
+++ b/tests/integration/dynamodb2/test_layer1.py
@@ -0,0 +1,324 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Tests for Layer1 of DynamoDB v2
+"""
+import time
+
+from tests.unit import unittest
+from boto.dynamodb2 import exceptions
+from boto.dynamodb2.layer1 import DynamoDBConnection
+
+
+class DynamoDBv2Layer1Test(unittest.TestCase):
+    dynamodb = True
+
+    def setUp(self):
+        self.dynamodb = DynamoDBConnection()
+        self.table_name = 'test-%d' % int(time.time())
+        self.hash_key_name = 'username'
+        self.hash_key_type = 'S'
+        self.range_key_name = 'date_joined'
+        self.range_key_type = 'N'
+        self.read_units = 5
+        self.write_units = 5
+        self.attributes = [
+            {
+                'AttributeName': self.hash_key_name,
+                'AttributeType': self.hash_key_type,
+            },
+            {
+                'AttributeName': self.range_key_name,
+                'AttributeType': self.range_key_type,
+            }
+        ]
+        self.schema = [
+            {
+                'AttributeName': self.hash_key_name,
+                'KeyType': 'HASH',
+            },
+            {
+                'AttributeName': self.range_key_name,
+                'KeyType': 'RANGE',
+            },
+        ]
+        self.provisioned_throughput = {
+            'ReadCapacityUnits': self.read_units,
+            'WriteCapacityUnits': self.write_units,
+        }
+        self.lsi = [
+            {
+                'IndexName': 'MostRecentIndex',
+                'KeySchema': [
+                    {
+                        'AttributeName': self.hash_key_name,
+                        'KeyType': 'HASH',
+                    },
+                    {
+                        'AttributeName': self.range_key_name,
+                        'KeyType': 'RANGE',
+                    },
+                ],
+                'Projection': {
+                    'ProjectionType': 'KEYS_ONLY',
+                }
+            }
+        ]
+
+    def create_table(self, table_name, attributes, schema,
+                     provisioned_throughput, lsi=None, wait=True):
+        # Note: create_table takes the attributes before the table name,
+        # a slightly less intuitive ordering than this helper's parameters.
+        result = self.dynamodb.create_table(
+            attributes,
+            table_name,
+            schema,
+            provisioned_throughput,
+            local_secondary_indexes=lsi
+        )
+        self.addCleanup(self.dynamodb.delete_table, table_name)
+        if wait:
+            while True:
+                description = self.dynamodb.describe_table(table_name)
+                if description['Table']['TableStatus'].lower() == 'active':
+                    return result
+                else:
+                    time.sleep(5)
+        else:
+            return result
+
+    def test_integrated(self):
+        result = self.create_table(
+            self.table_name,
+            self.attributes,
+            self.schema,
+            self.provisioned_throughput,
+            self.lsi
+        )
+        self.assertEqual(
+            result['TableDescription']['TableName'],
+            self.table_name
+        )
+
+        description = self.dynamodb.describe_table(self.table_name)
+        self.assertEqual(description['Table']['ItemCount'], 0)
+
+        # Create some records.
+        record_1_data = {
+            'username': {'S': 'johndoe'},
+            'first_name': {'S': 'John'},
+            'last_name': {'S': 'Doe'},
+            'date_joined': {'N': '1366056668'},
+            'friend_count': {'N': '3'},
+            'friends': {'SS': ['alice', 'bob', 'jane']},
+        }
+        r1_result = self.dynamodb.put_item(self.table_name, record_1_data)
+
+        # Get the data.
+        record_1 = self.dynamodb.get_item(self.table_name, key={
+            'username': {'S': 'johndoe'},
+            'date_joined': {'N': '1366056668'},
+        }, consistent_read=True)
+        self.assertEqual(record_1['Item']['username']['S'], 'johndoe')
+        self.assertEqual(record_1['Item']['first_name']['S'], 'John')
+        self.assertEqual(record_1['Item']['friends']['SS'], [
+            'alice', 'bob', 'jane'
+        ])
+
+        # Now in a batch.
+        self.dynamodb.batch_write_item({
+            self.table_name: [
+                {
+                    'PutRequest': {
+                        'Item': {
+                            'username': {'S': 'jane'},
+                            'first_name': {'S': 'Jane'},
+                            'last_name': {'S': 'Doe'},
+                            'date_joined': {'N': '1366056789'},
+                            'friend_count': {'N': '1'},
+                            'friends': {'SS': ['johndoe']},
+                        },
+                    },
+                },
+            ]
+        })
+
+        # Now a query.
+        lsi_results = self.dynamodb.query(
+            self.table_name,
+            index_name='MostRecentIndex',
+            key_conditions={
+                'username': {
+                    'AttributeValueList': [
+                        {'S': 'johndoe'},
+                    ],
+                    'ComparisonOperator': 'EQ',
+                },
+            },
+            consistent_read=True
+        )
+        self.assertEqual(lsi_results['Count'], 1)
+
+        results = self.dynamodb.query(self.table_name, key_conditions={
+            'username': {
+                'AttributeValueList': [
+                    {'S': 'jane'},
+                ],
+                'ComparisonOperator': 'EQ',
+            },
+            'date_joined': {
+                'AttributeValueList': [
+                    {'N': '1366050000'}
+                ],
+                'ComparisonOperator': 'GT',
+            }
+        }, consistent_read=True)
+        self.assertEqual(results['Count'], 1)
+
+        # Now a scan.
+        results = self.dynamodb.scan(self.table_name)
+        self.assertEqual(results['Count'], 2)
+        s_items = sorted([res['username']['S'] for res in results['Items']])
+        self.assertEqual(s_items, ['jane', 'johndoe'])
+
+        self.dynamodb.delete_item(self.table_name, key={
+            'username': {'S': 'johndoe'},
+            'date_joined': {'N': '1366056668'},
+        })
+
+        results = self.dynamodb.scan(self.table_name)
+        self.assertEqual(results['Count'], 1)
+
+        # Parallel scan (minus client-side threading).
+        self.dynamodb.batch_write_item({
+            self.table_name: [
+                {
+                    'PutRequest': {
+                        'Item': {
+                            'username': {'S': 'johndoe'},
+                            'first_name': {'S': 'Johann'},
+                            'last_name': {'S': 'Does'},
+                            'date_joined': {'N': '1366058000'},
+                            'friend_count': {'N': '1'},
+                            'friends': {'SS': ['jane']},
+                        },
+                    },
+                },
+                {
+                    'PutRequest': {
+                        'Item': {
+                            'username': {'S': 'alice'},
+                            'first_name': {'S': 'Alice'},
+                            'last_name': {'S': 'Expert'},
+                            'date_joined': {'N': '1366056800'},
+                            'friend_count': {'N': '2'},
+                            'friends': {'SS': ['johndoe', 'jane']},
+                        },
+                    },
+                },
+            ]
+        })
+        time.sleep(20)
+        results = self.dynamodb.scan(self.table_name, segment=0, total_segments=2)
+        self.assertTrue(results['Count'] in [1, 2])
+        results = self.dynamodb.scan(self.table_name, segment=1, total_segments=2)
+        self.assertTrue(results['Count'] in [1, 2])
+
+    def test_without_range_key(self):
+        result = self.create_table(
+            self.table_name,
+            [
+                {
+                    'AttributeName': self.hash_key_name,
+                    'AttributeType': self.hash_key_type,
+                },
+            ],
+            [
+                {
+                    'AttributeName': self.hash_key_name,
+                    'KeyType': 'HASH',
+                },
+            ],
+            self.provisioned_throughput
+        )
+        self.assertEqual(
+            result['TableDescription']['TableName'],
+            self.table_name
+        )
+
+        description = self.dynamodb.describe_table(self.table_name)
+        self.assertEqual(description['Table']['ItemCount'], 0)
+
+        # Create some records.
+        record_1_data = {
+            'username': {'S': 'johndoe'},
+            'first_name': {'S': 'John'},
+            'last_name': {'S': 'Doe'},
+            'date_joined': {'N': '1366056668'},
+            'friend_count': {'N': '3'},
+            'friends': {'SS': ['alice', 'bob', 'jane']},
+        }
+        r1_result = self.dynamodb.put_item(self.table_name, record_1_data)
+
+        # Now try a range-less get.
+        johndoe = self.dynamodb.get_item(self.table_name, key={
+            'username': {'S': 'johndoe'},
+        }, consistent_read=True)
+        self.assertEqual(johndoe['Item']['username']['S'], 'johndoe')
+        self.assertEqual(johndoe['Item']['first_name']['S'], 'John')
+        self.assertEqual(johndoe['Item']['friends']['SS'], [
+            'alice', 'bob', 'jane'
+        ])
+
+    def test_throughput_exceeded_regression(self):
+        tiny_tablename = 'TinyThroughput'
+        tiny = self.create_table(
+            tiny_tablename,
+            self.attributes,
+            self.schema,
+            {
+                'ReadCapacityUnits': 1,
+                'WriteCapacityUnits': 1,
+            }
+        )
+
+        self.dynamodb.put_item(tiny_tablename, {
+            'username': {'S': 'johndoe'},
+            'first_name': {'S': 'John'},
+            'last_name': {'S': 'Doe'},
+            'date_joined': {'N': '1366056668'},
+        })
+        self.dynamodb.put_item(tiny_tablename, {
+            'username': {'S': 'jane'},
+            'first_name': {'S': 'Jane'},
+            'last_name': {'S': 'Doe'},
+            'date_joined': {'N': '1366056669'},
+        })
+        self.dynamodb.put_item(tiny_tablename, {
+            'username': {'S': 'alice'},
+            'first_name': {'S': 'Alice'},
+            'last_name': {'S': 'Expert'},
+            'date_joined': {'N': '1366057000'},
+        })
+        time.sleep(20)
+
+        for i in range(100):
+            # This would cause an exception due to a non-existent instance variable.
+            self.dynamodb.scan(tiny_tablename)
diff --git a/tests/integration/ec2/autoscale/test_connection.py b/tests/integration/ec2/autoscale/test_connection.py
index cf8d99a..094adb1 100644
--- a/tests/integration/ec2/autoscale/test_connection.py
+++ b/tests/integration/ec2/autoscale/test_connection.py
@@ -165,3 +165,18 @@
         assert not found
 
         print '--- tests completed ---'
+
+    def test_ebs_optimized_regression(self):
+        c = AutoScaleConnection()
+        time_string = '%d' % int(time.time())
+        lc_name = 'lc-%s' % time_string
+        lc = LaunchConfiguration(
+            name=lc_name,
+            image_id='ami-2272864b',
+            instance_type='t1.micro',
+            ebs_optimized=True
+        )
+        # This failed due to the difference between native Python ``True/False``
+        # & the expected string variants.
+        c.create_launch_configuration(lc)
+        self.addCleanup(c.delete_launch_configuration, lc_name)
diff --git a/tests/integration/ec2/cloudwatch/test_connection.py b/tests/integration/ec2/cloudwatch/test_connection.py
index 922c17b..7e990b8 100644
--- a/tests/integration/ec2/cloudwatch/test_connection.py
+++ b/tests/integration/ec2/cloudwatch/test_connection.py
@@ -34,6 +34,7 @@
 # HTTP response body for CloudWatchConnection.describe_alarms
 DESCRIBE_ALARMS_BODY = """<DescribeAlarmsResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/">
   <DescribeAlarmsResult>
+    <NextToken>mynexttoken</NextToken>
     <MetricAlarms>
       <member>
         <StateUpdatedTimestamp>2011-11-18T23:43:59.111Z</StateUpdatedTimestamp>
@@ -264,6 +265,7 @@
 
         c.make_request = make_request
         alarms = c.describe_alarms()
+        self.assertEquals(alarms.next_token, 'mynexttoken')
         self.assertEquals(alarms[0].name, 'FancyAlarm')
         self.assertEquals(alarms[0].comparison, '<')
         self.assertEquals(alarms[0].dimensions, {u'Job': [u'ANiceCronJob']})
diff --git a/tests/integration/ec2/elb/test_connection.py b/tests/integration/ec2/elb/test_connection.py
index 2d574d9..618d0ce 100644
--- a/tests/integration/ec2/elb/test_connection.py
+++ b/tests/integration/ec2/elb/test_connection.py
@@ -30,15 +30,23 @@
 class ELBConnectionTest(unittest.TestCase):
     ec2 = True
 
+    def setUp(self):
+        """Creates a named load balancer that can be safely
+        deleted at the end of each test"""
+        self.conn = ELBConnection()
+        self.name = 'elb-boto-unit-test'
+        self.availability_zones = ['us-east-1a']
+        self.listeners = [(80, 8000, 'HTTP')]
+        self.balancer = self.conn.create_load_balancer(self.name, self.availability_zones, self.listeners)
+
     def tearDown(self):
-        """ Deletes all load balancers after every test. """
-        for lb in ELBConnection().get_all_load_balancers():
-            lb.delete()
+        """ Deletes the test load balancer after every test.
+        It does not delete EVERY load balancer in your account"""
+        self.balancer.delete()
 
     def test_build_list_params(self):
-        c = ELBConnection()
         params = {}
-        c.build_list_params(
+        self.conn.build_list_params(
             params, ['thing1', 'thing2', 'thing3'], 'ThingName%d')
         expected_params = {
             'ThingName1': 'thing1',
@@ -52,76 +60,60 @@
     # balancer.dns_name, along the lines of the existing EC2 unit tests.
 
     def test_create_load_balancer(self):
-        c = ELBConnection()
-        name = 'elb-boto-unit-test'
-        availability_zones = ['us-east-1a']
-        listeners = [(80, 8000, 'HTTP')]
-        balancer = c.create_load_balancer(name, availability_zones, listeners)
-        self.assertEqual(balancer.name, name)
-        self.assertEqual(balancer.availability_zones, availability_zones)
-        self.assertEqual(balancer.listeners, listeners)
+        self.assertEqual(self.balancer.name, self.name)
+        self.assertEqual(self.balancer.availability_zones,\
+            self.availability_zones)
+        self.assertEqual(self.balancer.listeners, self.listeners)
 
-        balancers = c.get_all_load_balancers()
-        self.assertEqual([lb.name for lb in balancers], [name])
+        balancers = self.conn.get_all_load_balancers()
+        self.assertEqual([lb.name for lb in balancers], [self.name])
 
     def test_create_load_balancer_listeners(self):
-        c = ELBConnection()
-        name = 'elb-boto-unit-test'
-        availability_zones = ['us-east-1a']
-        listeners = [(80, 8000, 'HTTP')]
-        balancer = c.create_load_balancer(name, availability_zones, listeners)
-
         more_listeners = [(443, 8001, 'HTTP')]
-        c.create_load_balancer_listeners(name, more_listeners)
-        balancers = c.get_all_load_balancers()
-        self.assertEqual([lb.name for lb in balancers], [name])
+        self.conn.create_load_balancer_listeners(self.name, more_listeners)
+        balancers = self.conn.get_all_load_balancers()
+        self.assertEqual([lb.name for lb in balancers], [self.name])
         self.assertEqual(
             sorted(l.get_tuple() for l in balancers[0].listeners),
-            sorted(listeners + more_listeners)
+            sorted(self.listeners + more_listeners)
             )
 
     def test_delete_load_balancer_listeners(self):
-        c = ELBConnection()
-        name = 'elb-boto-unit-test'
-        availability_zones = ['us-east-1a']
-        listeners = [(80, 8000, 'HTTP'), (443, 8001, 'HTTP')]
-        balancer = c.create_load_balancer(name, availability_zones, listeners)
+        mod_listeners = [(80, 8000, 'HTTP'), (443, 8001, 'HTTP')]
+        mod_name = self.name + "_mod"
+        self.mod_balancer = self.conn.create_load_balancer(mod_name,\
+            self.availability_zones, mod_listeners)
 
-        balancers = c.get_all_load_balancers()
-        self.assertEqual([lb.name for lb in balancers], [name])
+        mod_balancers = self.conn.get_all_load_balancers(load_balancer_names=[mod_name])
+        self.assertEqual([lb.name for lb in mod_balancers], [mod_name])
         self.assertEqual(
-            sorted([l.get_tuple() for l in balancers[0].listeners]),
-            sorted(listeners))
+            sorted([l.get_tuple() for l in mod_balancers[0].listeners]),
+            sorted(mod_listeners))
 
-        c.delete_load_balancer_listeners(name, [443])
-        balancers = c.get_all_load_balancers()
-        self.assertEqual([lb.name for lb in balancers], [name])
-        self.assertEqual([l.get_tuple() for l in balancers[0].listeners],
-                         listeners[:1])
+        self.conn.delete_load_balancer_listeners(self.mod_balancer.name, [443])
+        mod_balancers = self.conn.get_all_load_balancers(load_balancer_names=[mod_name])
+        self.assertEqual([lb.name for lb in mod_balancers], [mod_name])
+        self.assertEqual([l.get_tuple() for l in mod_balancers[0].listeners],
+                         mod_listeners[:1])
+        self.mod_balancer.delete()
 
     def test_create_load_balancer_listeners_with_policies(self):
-        c = ELBConnection()
-        name = 'elb-boto-unit-test-policy'
-        availability_zones = ['us-east-1a']
-        listeners = [(80, 8000, 'HTTP')]
-        balancer = c.create_load_balancer(name, availability_zones, listeners)
-
         more_listeners = [(443, 8001, 'HTTP')]
-        c.create_load_balancer_listeners(name, more_listeners)
+        self.conn.create_load_balancer_listeners(self.name, more_listeners)
 
         lb_policy_name = 'lb-policy'
-        c.create_lb_cookie_stickiness_policy(1000, name, lb_policy_name)
-        c.set_lb_policies_of_listener(name, listeners[0][0], lb_policy_name)
+        self.conn.create_lb_cookie_stickiness_policy(1000, self.name, lb_policy_name)
+        self.conn.set_lb_policies_of_listener(self.name, self.listeners[0][0], lb_policy_name)
 
         app_policy_name = 'app-policy'
-        c.create_app_cookie_stickiness_policy('appcookie', name, app_policy_name)
-        c.set_lb_policies_of_listener(name, more_listeners[0][0], app_policy_name)
+        self.conn.create_app_cookie_stickiness_policy('appcookie', self.name, app_policy_name)
+        self.conn.set_lb_policies_of_listener(self.name, more_listeners[0][0], app_policy_name)
 
-        balancers = c.get_all_load_balancers()
-        self.assertEqual([lb.name for lb in balancers], [name])
+        balancers = self.conn.get_all_load_balancers(load_balancer_names=[self.name])
+        self.assertEqual([lb.name for lb in balancers], [self.name])
         self.assertEqual(
             sorted(l.get_tuple() for l in balancers[0].listeners),
-            sorted(listeners + more_listeners)
+            sorted(self.listeners + more_listeners)
             )
         # Policy names should be checked here once they are supported
         # in the Listener object.
diff --git a/tests/integration/ec2/test_cert_verification.py b/tests/integration/ec2/test_cert_verification.py
index 6b1c574..d2428fa 100644
--- a/tests/integration/ec2/test_cert_verification.py
+++ b/tests/integration/ec2/test_cert_verification.py
@@ -26,7 +26,7 @@
 """
 
 import unittest
-import boto.rds
+import boto.ec2
 
 
 class CertVerificationTest(unittest.TestCase):
@@ -35,6 +35,6 @@
     ssl = True
 
     def test_certs(self):
-        for region in boto.rds.regions():
+        for region in boto.ec2.regions():
             c = region.connect()
-            c.get_all_dbinstances()
+            c.get_all_instances()
diff --git a/tests/integration/ec2/vpc/__init__.py b/tests/integration/ec2/vpc/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/integration/ec2/vpc/__init__.py
diff --git a/tests/integration/ec2/vpc/test_connection.py b/tests/integration/ec2/vpc/test_connection.py
new file mode 100644
index 0000000..59c0734
--- /dev/null
+++ b/tests/integration/ec2/vpc/test_connection.py
@@ -0,0 +1,95 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import unittest
+import time
+
+import boto
+from boto.ec2.networkinterface import NetworkInterfaceCollection
+from boto.ec2.networkinterface import NetworkInterfaceSpecification
+from boto.ec2.networkinterface import PrivateIPAddress
+
+
+class TestVPCConnection(unittest.TestCase):
+    def setUp(self):
+        self.api = boto.connect_vpc()
+        vpc = self.api.create_vpc('10.0.0.0/16')
+        self.addCleanup(self.api.delete_vpc, vpc.id)
+
+        self.subnet = self.api.create_subnet(vpc.id, '10.0.0.0/24')
+        self.addCleanup(self.api.delete_subnet, self.subnet.id)
+
+    def terminate_instance(self, instance):
+        instance.terminate()
+        for i in xrange(300):
+            instance.update()
+            if instance.state == 'terminated':
+                # Give it a little more time to settle.
+                time.sleep(10)
+                return
+            else:
+                time.sleep(10)
+
+    def test_multi_ip_create(self):
+        interface = NetworkInterfaceSpecification(
+            device_index=0, subnet_id=self.subnet.id,
+            private_ip_address='10.0.0.21',
+            description="This is a test interface using boto.",
+            delete_on_termination=True, private_ip_addresses=[
+                PrivateIPAddress(private_ip_address='10.0.0.22',
+                                 primary=False),
+                PrivateIPAddress(private_ip_address='10.0.0.23',
+                                 primary=False),
+                PrivateIPAddress(private_ip_address='10.0.0.24',
+                                 primary=False)])
+        interfaces = NetworkInterfaceCollection(interface)
+
+        reservation = self.api.run_instances(image_id='ami-a0cd60c9', instance_type='m1.small',
+                                             network_interfaces=interfaces)
+        # Give it a few seconds to start up.
+        time.sleep(10)
+        instance = reservation.instances[0]
+        self.addCleanup(self.terminate_instance, instance)
+        retrieved = self.api.get_all_instances(instance_ids=[instance.id])
+        self.assertEqual(len(retrieved), 1)
+        retrieved_instances = retrieved[0].instances
+        self.assertEqual(len(retrieved_instances), 1)
+        retrieved_instance = retrieved_instances[0]
+
+        self.assertEqual(len(retrieved_instance.interfaces), 1)
+        interface = retrieved_instance.interfaces[0]
+
+        private_ip_addresses = interface.private_ip_addresses
+        self.assertEqual(len(private_ip_addresses), 4)
+        self.assertEqual(private_ip_addresses[0].private_ip_address,
+                         '10.0.0.21')
+        self.assertEqual(private_ip_addresses[0].primary, True)
+        self.assertEqual(private_ip_addresses[1].private_ip_address,
+                         '10.0.0.22')
+        self.assertEqual(private_ip_addresses[2].private_ip_address,
+                         '10.0.0.23')
+        self.assertEqual(private_ip_addresses[3].private_ip_address,
+                         '10.0.0.24')
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/integration/elasticache/__init__.py b/tests/integration/elasticache/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/integration/elasticache/__init__.py
diff --git a/tests/integration/elasticache/test_layer1.py b/tests/integration/elasticache/test_layer1.py
new file mode 100644
index 0000000..f6552c4
--- /dev/null
+++ b/tests/integration/elasticache/test_layer1.py
@@ -0,0 +1,67 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import time
+from tests.unit import unittest
+
+from boto.elasticache import layer1
+from boto.exception import BotoServerError
+
+
+class TestElastiCacheConnection(unittest.TestCase):
+    def setUp(self):
+        self.elasticache = layer1.ElastiCacheConnection()
+
+    def wait_until_cluster_available(self, cluster_id):
+        timeout = time.time() + 600
+        while time.time() < timeout:
+            response = self.elasticache.describe_cache_clusters(cluster_id)
+            status = response['DescribeCacheClustersResponse']\
+                    ['DescribeCacheClustersResult']\
+                    ['CacheClusters'][0]['CacheClusterStatus']
+            if status == 'available':
+                break
+            time.sleep(5)
+        else:
+            self.fail('Timeout waiting for cache cluster %r '
+                      'to become available.' % cluster_id)
+
+    def test_create_delete_cache_cluster(self):
+        cluster_id = 'cluster-id2'
+        self.elasticache.create_cache_cluster(
+            cluster_id, 1, 'cache.t1.micro', 'memcached')
+        self.wait_until_cluster_available(cluster_id)
+
+        self.elasticache.delete_cache_cluster(cluster_id)
+        timeout = time.time() + 600
+        while time.time() < timeout:
+            try:
+                self.elasticache.describe_cache_clusters(cluster_id)
+            except BotoServerError:
+                break
+            time.sleep(5)
+        else:
+            self.fail('Timeout waiting for cache cluster %s '
+                      'to be deleted.' % cluster_id)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/integration/elastictranscoder/__init__.py b/tests/integration/elastictranscoder/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/integration/elastictranscoder/__init__.py
diff --git a/tests/integration/elastictranscoder/test_cert_verification.py b/tests/integration/elastictranscoder/test_cert_verification.py
new file mode 100644
index 0000000..adf2e8f
--- /dev/null
+++ b/tests/integration/elastictranscoder/test_cert_verification.py
@@ -0,0 +1,35 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from tests.unit import unittest
+import boto.elastictranscoder
+
+
+class CertVerificationTest(unittest.TestCase):
+
+    elastictranscoder = True
+    ssl = True
+
+    def test_certs(self):
+        for region in boto.elastictranscoder.regions():
+            c = region.connect()
+            c.list_pipelines()
diff --git a/tests/integration/elastictranscoder/test_layer1.py b/tests/integration/elastictranscoder/test_layer1.py
new file mode 100644
index 0000000..fa2f840
--- /dev/null
+++ b/tests/integration/elastictranscoder/test_layer1.py
@@ -0,0 +1,115 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import unittest
+import time
+
+from boto.elastictranscoder.layer1 import ElasticTranscoderConnection
+from boto.elastictranscoder.exceptions import ValidationException
+import boto.s3
+import boto.sns
+import boto.iam
+import boto.sns
+
+
+class TestETSLayer1PipelineManagement(unittest.TestCase):
+    def setUp(self):
+        self.api = ElasticTranscoderConnection()
+        self.s3 = boto.connect_s3()
+        self.sns = boto.connect_sns()
+        self.iam = boto.connect_iam()
+        self.sns = boto.connect_sns()
+        self.timestamp = str(int(time.time()))
+        self.input_bucket = 'boto-pipeline-%s' % self.timestamp
+        self.output_bucket = 'boto-pipeline-out-%s' % self.timestamp
+        self.role_name = 'boto-ets-role-%s' % self.timestamp
+        self.pipeline_name = 'boto-pipeline-%s' % self.timestamp
+        self.s3.create_bucket(self.input_bucket)
+        self.s3.create_bucket(self.output_bucket)
+        self.addCleanup(self.s3.delete_bucket, self.input_bucket)
+        self.addCleanup(self.s3.delete_bucket, self.output_bucket)
+        self.role = self.iam.create_role(self.role_name)
+        self.role_arn = self.role['create_role_response']['create_role_result']\
+                                 ['role']['arn']
+        self.addCleanup(self.iam.delete_role, self.role_name)
+
+    def create_pipeline(self):
+        pipeline = self.api.create_pipeline(
+            self.pipeline_name, self.input_bucket,
+            self.output_bucket, self.role_arn,
+            {'Progressing': '', 'Completed': '', 'Warning': '', 'Error': ''})
+        pipeline_id = pipeline['Pipeline']['Id']
+
+        self.addCleanup(self.api.delete_pipeline, pipeline_id)
+        return pipeline_id
+
+    def test_create_delete_pipeline(self):
+        pipeline = self.api.create_pipeline(
+            self.pipeline_name, self.input_bucket,
+            self.output_bucket, self.role_arn,
+            {'Progressing': '', 'Completed': '', 'Warning': '', 'Error': ''})
+        pipeline_id = pipeline['Pipeline']['Id']
+
+        self.api.delete_pipeline(pipeline_id)
+
+    def test_can_retrieve_pipeline_information(self):
+        pipeline_id = self.create_pipeline()
+
+        # The pipeline shows up in list_pipelines
+        pipelines = self.api.list_pipelines()['Pipelines']
+        pipeline_names = [p['Name'] for p in pipelines]
+        self.assertIn(self.pipeline_name, pipeline_names)
+
+        # The pipeline shows up in read_pipeline
+        response = self.api.read_pipeline(pipeline_id)
+        self.assertEqual(response['Pipeline']['Id'], pipeline_id)
+
+    def test_update_pipeline(self):
+        pipeline_id = self.create_pipeline()
+        self.api.update_pipeline_status(pipeline_id, 'Paused')
+
+        response = self.api.read_pipeline(pipeline_id)
+        self.assertEqual(response['Pipeline']['Status'], 'Paused')
+
+    def test_update_pipeline_notification(self):
+        pipeline_id = self.create_pipeline()
+        response = self.sns.create_topic('pipeline-errors')
+        topic_arn = response['CreateTopicResponse']['CreateTopicResult']\
+                            ['TopicArn']
+        self.addCleanup(self.sns.delete_topic, topic_arn)
+
+        self.api.update_pipeline_notifications(
+            pipeline_id,
+            {'Progressing': '', 'Completed': '',
+             'Warning': '', 'Error': topic_arn})
+
+        response = self.api.read_pipeline(pipeline_id)
+        self.assertEqual(response['Pipeline']['Notifications']['Error'],
+                         topic_arn)
+
+    def test_list_jobs_by_pipeline(self):
+        pipeline_id = self.create_pipeline()
+        response = self.api.list_jobs_by_pipeline(pipeline_id)
+        self.assertEqual(response['Jobs'], [])
+
+    def test_proper_error_when_pipeline_does_not_exist(self):
+        with self.assertRaises(ValidationException):
+            self.api.read_pipeline('badpipelineid')
diff --git a/tests/integration/glacier/test_layer1.py b/tests/integration/glacier/test_layer1.py
new file mode 100644
index 0000000..effb562
--- /dev/null
+++ b/tests/integration/glacier/test_layer1.py
@@ -0,0 +1,44 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from tests.unit import unittest
+
+from boto.glacier.layer1 import Layer1
+
+
+class TestGlacierLayer1(unittest.TestCase):
+    glacier = True
+
+    def delete_vault(self, vault_name):
+        pass
+
+    def test_initiate_multipart_upload(self):
+        # Create a vault, initiate a multipart upload,
+        # then cancel it.
+        glacier = Layer1()
+        glacier.create_vault('l1testvault')
+        self.addCleanup(glacier.delete_vault, 'l1testvault')
+        upload_id = glacier.initiate_multipart_upload('l1testvault', 4*1024*1024,
+                                                      'double  spaces  here')['UploadId']
+        self.addCleanup(glacier.abort_multipart_upload, 'l1testvault', upload_id)
+        response = glacier.list_multipart_uploads('l1testvault')['UploadsList']
+        self.assertEqual(len(response), 1)
+        self.assertEqual(response[0]['MultipartUploadId'], upload_id)
diff --git a/tests/integration/gs/__init__.py b/tests/integration/gs/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/integration/gs/__init__.py
diff --git a/tests/integration/gs/cb_test_harness.py b/tests/integration/gs/cb_test_harness.py
new file mode 100644
index 0000000..195b5eb
--- /dev/null
+++ b/tests/integration/gs/cb_test_harness.py
@@ -0,0 +1,71 @@
+# Copyright 2010 Google Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Test harness that allows us to raise exceptions, change file content,
+and record the byte transfer callback sequence, to test various resumable
+upload and download cases. The 'call' method of this harness can be passed
+as the 'cb' parameter to boto.s3.Key.send_file() and boto.s3.Key.get_file(),
+allowing testing of various file upload/download conditions.
+"""
+
+import socket
+
+
+class CallbackTestHarness(object):
+
+    def __init__(self, fail_after_n_bytes=0, num_times_to_fail=1,
+                 exception=socket.error('mock socket error', 0),
+                 fp_to_change=None, fp_change_pos=None):
+        self.fail_after_n_bytes = fail_after_n_bytes
+        self.num_times_to_fail = num_times_to_fail
+        self.exception = exception
+        # If fp_to_change and fp_change_pos are specified, 3 bytes will be
+        # written at that position just before the first exception is thrown.
+        self.fp_to_change = fp_to_change
+        self.fp_change_pos = fp_change_pos
+        self.num_failures = 0
+        self.transferred_seq_before_first_failure = []
+        self.transferred_seq_after_first_failure = []
+
+    def call(self, total_bytes_transferred, unused_total_size):
+        """
+        To use this test harness, pass the 'call' method of the instantiated
+        object as the cb param to the set_contents_from_file() or
+        get_contents_to_file() call.
+        """
+        # Record transfer sequence to allow verification.
+        if self.num_failures:
+            self.transferred_seq_after_first_failure.append(
+                total_bytes_transferred)
+        else:
+            self.transferred_seq_before_first_failure.append(
+                total_bytes_transferred)
+        if (total_bytes_transferred >= self.fail_after_n_bytes and
+            self.num_failures < self.num_times_to_fail):
+            self.num_failures += 1
+            if self.fp_to_change and self.fp_change_pos is not None:
+                cur_pos = self.fp_to_change.tell()
+                self.fp_to_change.seek(self.fp_change_pos)
+                self.fp_to_change.write('abc')
+                self.fp_to_change.seek(cur_pos)
+            self.called = True
+            raise self.exception
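+
+
+# A minimal usage sketch (illustrative only; the key object, destination file
+# and byte count are placeholders): simulate one socket error after 100 bytes
+# and let a resumable handler retry the transfer.
+#
+#   harness = CallbackTestHarness(fail_after_n_bytes=100)
+#   key.get_contents_to_file(dst_fp, cb=harness.call,
+#                            res_download_handler=ResumableDownloadHandler())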
diff --git a/tests/integration/gs/test_basic.py b/tests/integration/gs/test_basic.py
new file mode 100644
index 0000000..9ac60b9
--- /dev/null
+++ b/tests/integration/gs/test_basic.py
@@ -0,0 +1,379 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2010, Eucalyptus Systems, Inc.
+# Copyright (c) 2011, Nexenta Systems, Inc.
+# Copyright (c) 2012, Google, Inc.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Some integration tests for the GSConnection
+"""
+
+import os
+import re
+import StringIO
+import xml.sax
+
+from boto import handler
+from boto import storage_uri
+from boto.gs.acl import ACL
+from boto.gs.cors import Cors
+from tests.integration.gs.testcase import GSTestCase
+
+
+CORS_EMPTY = '<CorsConfig></CorsConfig>'
+CORS_DOC = ('<CorsConfig><Cors><Origins><Origin>origin1.example.com'
+            '</Origin><Origin>origin2.example.com</Origin></Origins>'
+            '<Methods><Method>GET</Method><Method>PUT</Method>'
+            '<Method>POST</Method></Methods><ResponseHeaders>'
+            '<ResponseHeader>foo</ResponseHeader>'
+            '<ResponseHeader>bar</ResponseHeader></ResponseHeaders>'
+            '</Cors></CorsConfig>')
+
+# Regexp for matching project-private default object ACL.
+PROJECT_PRIVATE_RE = ('\s*<AccessControlList>\s*<Entries>\s*<Entry>'
+  '\s*<Scope type="GroupById"><ID>[0-9a-fA-F]+</ID></Scope>'
+  '\s*<Permission>FULL_CONTROL</Permission>\s*</Entry>\s*<Entry>'
+  '\s*<Scope type="GroupById"><ID>[0-9a-fA-F]+</ID></Scope>'
+  '\s*<Permission>FULL_CONTROL</Permission>\s*</Entry>\s*<Entry>'
+  '\s*<Scope type="GroupById"><ID>[0-9a-fA-F]+</ID></Scope>'
+  '\s*<Permission>READ</Permission></Entry>\s*</Entries>'
+  '\s*</AccessControlList>\s*')
+
+
+class GSBasicTest(GSTestCase):
+    """Tests some basic GCS functionality."""
+
+    def test_read_write(self):
+        """Tests basic read/write to keys."""
+        bucket = self._MakeBucket()
+        bucket_name = bucket.name
+        # now try a get_bucket call and see if it's really there
+        bucket = self._GetConnection().get_bucket(bucket_name)
+        key_name = 'foobar'
+        k = bucket.new_key(key_name)
+        s1 = 'This is a test of file upload and download'
+        k.set_contents_from_string(s1)
+        tmpdir = self._MakeTempDir()
+        fpath = os.path.join(tmpdir, key_name)
+        fp = open(fpath, 'wb')
+        # now get the contents from gcs to a local file
+        k.get_contents_to_file(fp)
+        fp.close()
+        fp = open(fpath)
+        # check to make sure content read from gcs is identical to original
+        self.assertEqual(s1, fp.read())
+        fp.close()
+        # check to make sure set_contents_from_file is working
+        sfp = StringIO.StringIO('foo')
+        k.set_contents_from_file(sfp)
+        self.assertEqual(k.get_contents_as_string(), 'foo')
+        sfp2 = StringIO.StringIO('foo2')
+        k.set_contents_from_file(sfp2)
+        self.assertEqual(k.get_contents_as_string(), 'foo2')
+
+    def test_get_all_keys(self):
+        """Tests get_all_keys."""
+        phony_mimetype = 'application/x-boto-test'
+        headers = {'Content-Type': phony_mimetype}
+        tmpdir = self._MakeTempDir()
+        fpath = os.path.join(tmpdir, 'foobar1')
+        fpath2 = os.path.join(tmpdir, 'foobar')
+        with open(fpath2, 'w') as f:
+            f.write('test-data')
+        bucket = self._MakeBucket()
+
+        # First load some data for the first one, overriding content type.
+        k = bucket.new_key('foobar')
+        s1 = 'test-contents'
+        s2 = 'test-contents2'
+        k.name = 'foo/bar'
+        k.set_contents_from_string(s1, headers)
+        k.name = 'foo/bas'
+        k.set_contents_from_filename(fpath2)
+        k.name = 'foo/bat'
+        k.set_contents_from_string(s1)
+        k.name = 'fie/bar'
+        k.set_contents_from_string(s1)
+        k.name = 'fie/bas'
+        k.set_contents_from_string(s1)
+        k.name = 'fie/bat'
+        k.set_contents_from_string(s1)
+        # try resetting the contents to another value
+        md5 = k.md5
+        k.set_contents_from_string(s2)
+        self.assertNotEqual(k.md5, md5)
+
+        fp2 = open(fpath2, 'rb')
+        k.md5 = None
+        k.base64md5 = None
+        k.set_contents_from_stream(fp2)
+        fp = open(fpath, 'wb')
+        k.get_contents_to_file(fp)
+        fp.close()
+        fp2.seek(0, 0)
+        fp = open(fpath, 'rb')
+        self.assertEqual(fp2.read(), fp.read())
+        fp.close()
+        fp2.close()
+        all = bucket.get_all_keys()
+        self.assertEqual(len(all), 6)
+        rs = bucket.get_all_keys(prefix='foo')
+        self.assertEqual(len(rs), 3)
+        rs = bucket.get_all_keys(prefix='', delimiter='/')
+        self.assertEqual(len(rs), 2)
+        rs = bucket.get_all_keys(maxkeys=5)
+        self.assertEqual(len(rs), 5)
+
+    def test_bucket_lookup(self):
+        """Test the bucket lookup method."""
+        bucket = self._MakeBucket()
+        k = bucket.new_key('foo/bar')
+        phony_mimetype = 'application/x-boto-test'
+        headers = {'Content-Type': phony_mimetype}
+        k.set_contents_from_string('testdata', headers)
+
+        k = bucket.lookup('foo/bar')
+        self.assertIsInstance(k, bucket.key_class)
+        self.assertEqual(k.content_type, phony_mimetype)
+        k = bucket.lookup('notthere')
+        self.assertIsNone(k)
+
+    def test_metadata(self):
+        """Test key metadata operations."""
+        bucket = self._MakeBucket()
+        k = self._MakeKey(bucket=bucket)
+        key_name = k.name
+        s1 = 'This is a test of file upload and download'
+
+        mdkey1 = 'meta1'
+        mdval1 = 'This is the first metadata value'
+        k.set_metadata(mdkey1, mdval1)
+        mdkey2 = 'meta2'
+        mdval2 = 'This is the second metadata value'
+        k.set_metadata(mdkey2, mdval2)
+
+        # Test unicode character.
+        mdval3 = u'föö'
+        mdkey3 = 'meta3'
+        k.set_metadata(mdkey3, mdval3)
+        k.set_contents_from_string(s1)
+
+        k = bucket.lookup(key_name)
+        self.assertEqual(k.get_metadata(mdkey1), mdval1)
+        self.assertEqual(k.get_metadata(mdkey2), mdval2)
+        self.assertEqual(k.get_metadata(mdkey3), mdval3)
+        k = bucket.new_key(key_name)
+        k.get_contents_as_string()
+        self.assertEqual(k.get_metadata(mdkey1), mdval1)
+        self.assertEqual(k.get_metadata(mdkey2), mdval2)
+        self.assertEqual(k.get_metadata(mdkey3), mdval3)
+
+    def test_list_iterator(self):
+        """Test list and iterator."""
+        bucket = self._MakeBucket()
+        num_iter = len([k for k in bucket.list()])
+        rs = bucket.get_all_keys()
+        num_keys = len(rs)
+        self.assertEqual(num_iter, num_keys)
+
+    def test_acl(self):
+        """Test bucket and key ACLs."""
+        bucket = self._MakeBucket()
+
+        # try some acl stuff
+        bucket.set_acl('public-read')
+        acl = bucket.get_acl()
+        self.assertEqual(len(acl.entries.entry_list), 2)
+        bucket.set_acl('private')
+        acl = bucket.get_acl()
+        self.assertEqual(len(acl.entries.entry_list), 1)
+        k = self._MakeKey(bucket=bucket)
+        k.set_acl('public-read')
+        acl = k.get_acl()
+        self.assertEqual(len(acl.entries.entry_list), 2)
+        k.set_acl('private')
+        acl = k.get_acl()
+        self.assertEqual(len(acl.entries.entry_list), 1)
+
+        # Test case-insensitivity of XML ACL parsing.
+        acl_xml = (
+            '<ACCESSControlList><EntrIes><Entry>'    +
+            '<Scope type="AllUsers"></Scope><Permission>READ</Permission>' +
+            '</Entry></EntrIes></ACCESSControlList>')
+        acl = ACL()
+        h = handler.XmlHandler(acl, bucket)
+        xml.sax.parseString(acl_xml, h)
+        bucket.set_acl(acl)
+        self.assertEqual(len(acl.entries.entry_list), 1)
+        aclstr = k.get_xml_acl()
+        self.assertGreater(aclstr.count('/Entry', 1), 0)
+
+    def test_logging(self):
+        """Test set/get raw logging subresource."""
+        bucket = self._MakeBucket()
+        empty_logging_str = "<?xml version='1.0' encoding='UTF-8'?><Logging/>"
+        logging_str = (
+            "<?xml version='1.0' encoding='UTF-8'?><Logging>"
+            "<LogBucket>log-bucket</LogBucket>" +
+            "<LogObjectPrefix>example</LogObjectPrefix>" +
+            "</Logging>")
+        bucket.set_subresource('logging', logging_str)
+        self.assertEqual(bucket.get_subresource('logging'), logging_str)
+        # try disable/enable logging
+        bucket.disable_logging()
+        self.assertEqual(bucket.get_subresource('logging'), empty_logging_str)
+        bucket.enable_logging('log-bucket', 'example')
+        self.assertEqual(bucket.get_subresource('logging'), logging_str)
+
+    def test_copy_key(self):
+        """Test copying a key from one bucket to another."""
+        # create two new, empty buckets
+        bucket1 = self._MakeBucket()
+        bucket2 = self._MakeBucket()
+        bucket_name_1 = bucket1.name
+        bucket_name_2 = bucket2.name
+        # verify buckets got created
+        bucket1 = self._GetConnection().get_bucket(bucket_name_1)
+        bucket2 = self._GetConnection().get_bucket(bucket_name_2)
+        # create a key in bucket1 and give it some content
+        key_name = 'foobar'
+        k1 = bucket1.new_key(key_name)
+        self.assertIsInstance(k1, bucket1.key_class)
+        k1.name = key_name
+        s = 'This is a test.'
+        k1.set_contents_from_string(s)
+        # copy the new key from bucket1 to bucket2
+        k1.copy(bucket_name_2, key_name)
+        # now copy the contents from bucket2 to a local file
+        k2 = bucket2.lookup(key_name)
+        self.assertIsInstance(k2, bucket2.key_class)
+        tmpdir = self._MakeTempDir()
+        fpath = os.path.join(tmpdir, 'foobar')
+        fp = open(fpath, 'wb')
+        k2.get_contents_to_file(fp)
+        fp.close()
+        fp = open(fpath)
+        # check to make sure content read is identical to original
+        self.assertEqual(s, fp.read())
+        fp.close()
+        # delete keys
+        bucket1.delete_key(k1)
+        bucket2.delete_key(k2)
+
+    def test_default_object_acls(self):
+        """Test default object acls."""
+        # create a new bucket
+        bucket = self._MakeBucket()
+        # get default acl and make sure it's project-private
+        acl = bucket.get_def_acl()
+        self.assertIsNotNone(re.search(PROJECT_PRIVATE_RE, acl.to_xml()))
+        # set default acl to a canned acl and verify it gets set
+        bucket.set_def_acl('public-read')
+        acl = bucket.get_def_acl()
+        # save public-read acl for later test
+        public_read_acl = acl
+        self.assertEqual(acl.to_xml(), ('<AccessControlList><Entries><Entry>'
+          '<Scope type="AllUsers"></Scope><Permission>READ</Permission>'
+          '</Entry></Entries></AccessControlList>'))
+        # back to private acl
+        bucket.set_def_acl('private')
+        acl = bucket.get_def_acl()
+        self.assertEqual(acl.to_xml(),
+                         '<AccessControlList></AccessControlList>')
+        # set default acl to an xml acl and verify it gets set
+        bucket.set_def_acl(public_read_acl)
+        acl = bucket.get_def_acl()
+        self.assertEqual(acl.to_xml(), ('<AccessControlList><Entries><Entry>'
+          '<Scope type="AllUsers"></Scope><Permission>READ</Permission>'
+          '</Entry></Entries></AccessControlList>'))
+        # back to private acl
+        bucket.set_def_acl('private')
+        acl = bucket.get_def_acl()
+        self.assertEqual(acl.to_xml(),
+                         '<AccessControlList></AccessControlList>')
+
+    def test_default_object_acls_storage_uri(self):
+        """Test default object acls using storage_uri."""
+        # create a new bucket
+        bucket = self._MakeBucket()
+        bucket_name = bucket.name
+        uri = storage_uri('gs://' + bucket_name)
+        # get default acl and make sure it's project-private
+        acl = uri.get_def_acl()
+        self.assertIsNotNone(re.search(PROJECT_PRIVATE_RE, acl.to_xml()))
+        # set default acl to a canned acl and verify it gets set
+        uri.set_def_acl('public-read')
+        acl = uri.get_def_acl()
+        # save public-read acl for later test
+        public_read_acl = acl
+        self.assertEqual(acl.to_xml(), ('<AccessControlList><Entries><Entry>'
+          '<Scope type="AllUsers"></Scope><Permission>READ</Permission>'
+          '</Entry></Entries></AccessControlList>'))
+        # back to private acl
+        uri.set_def_acl('private')
+        acl = uri.get_def_acl()
+        self.assertEqual(acl.to_xml(),
+                         '<AccessControlList></AccessControlList>')
+        # set default acl to an xml acl and verify it gets set
+        uri.set_def_acl(public_read_acl)
+        acl = uri.get_def_acl()
+        self.assertEqual(acl.to_xml(), ('<AccessControlList><Entries><Entry>'
+          '<Scope type="AllUsers"></Scope><Permission>READ</Permission>'
+          '</Entry></Entries></AccessControlList>'))
+        # back to private acl
+        uri.set_def_acl('private')
+        acl = uri.get_def_acl()
+        self.assertEqual(acl.to_xml(),
+                         '<AccessControlList></AccessControlList>')
+
+    def test_cors_xml_bucket(self):
+        """Test setting and getting of CORS XML documents on Bucket."""
+        # create a new bucket
+        bucket = self._MakeBucket()
+        bucket_name = bucket.name
+        # now call get_bucket to see if it's really there
+        bucket = self._GetConnection().get_bucket(bucket_name)
+        # get new bucket cors and make sure it's empty
+        cors = re.sub(r'\s', '', bucket.get_cors().to_xml())
+        self.assertEqual(cors, CORS_EMPTY)
+        # set cors document on new bucket
+        bucket.set_cors(CORS_DOC)
+        cors = re.sub(r'\s', '', bucket.get_cors().to_xml())
+        self.assertEqual(cors, CORS_DOC)
+
+    def test_cors_xml_storage_uri(self):
+        """Test setting and getting of CORS XML documents with storage_uri."""
+        # create a new bucket
+        bucket = self._MakeBucket()
+        bucket_name = bucket.name
+        uri = storage_uri('gs://' + bucket_name)
+        # get new bucket cors and make sure it's empty
+        cors = re.sub(r'\s', '', uri.get_cors().to_xml())
+        self.assertEqual(cors, CORS_EMPTY)
+        # set cors document on new bucket
+        cors_obj = Cors()
+        h = handler.XmlHandler(cors_obj, None)
+        xml.sax.parseString(CORS_DOC, h)
+        uri.set_cors(cors_obj)
+        cors = re.sub(r'\s', '', uri.get_cors().to_xml())
+        self.assertEqual(cors, CORS_DOC)
diff --git a/tests/integration/gs/test_generation_conditionals.py b/tests/integration/gs/test_generation_conditionals.py
new file mode 100644
index 0000000..a35c466
--- /dev/null
+++ b/tests/integration/gs/test_generation_conditionals.py
@@ -0,0 +1,399 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2013, Google, Inc.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""Integration tests for GS versioning support."""
+
+import StringIO
+import os
+import tempfile
+from xml import sax
+
+from boto import handler
+from boto.exception import GSResponseError
+from boto.gs.acl import ACL
+from tests.integration.gs.testcase import GSTestCase
+
+
+# HTTP Error returned when a generation precondition fails.
+VERSION_MISMATCH = "412"
+
+
+class GSGenerationConditionalsTest(GSTestCase):
+
+    def testConditionalSetContentsFromFile(self):
+        b = self._MakeBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        fp = StringIO.StringIO(s1)
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_contents_from_file(fp, if_generation=999)
+
+        fp = StringIO.StringIO(s1)
+        k.set_contents_from_file(fp, if_generation=0)
+        g1 = k.generation
+
+        s2 = "test2"
+        fp = StringIO.StringIO(s2)
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_contents_from_file(fp, if_generation=int(g1)+1)
+
+        fp = StringIO.StringIO(s2)
+        k.set_contents_from_file(fp, if_generation=g1)
+        self.assertEqual(k.get_contents_as_string(), s2)
+
+    def testConditionalSetContentsFromString(self):
+        b = self._MakeBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_contents_from_string(s1, if_generation=999)
+
+        k.set_contents_from_string(s1, if_generation=0)
+        g1 = k.generation
+
+        s2 = "test2"
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_contents_from_string(s2, if_generation=int(g1)+1)
+
+        k.set_contents_from_string(s2, if_generation=g1)
+        self.assertEqual(k.get_contents_as_string(), s2)
+
+    def testConditionalSetContentsFromFilename(self):
+        s1 = "test1"
+        s2 = "test2"
+        f1 = tempfile.NamedTemporaryFile(prefix="boto-gs-test", delete=False)
+        f2 = tempfile.NamedTemporaryFile(prefix="boto-gs-test", delete=False)
+        fname1 = f1.name
+        fname2 = f2.name
+        f1.write(s1)
+        f1.close()
+        f2.write(s2)
+        f2.close()
+
+        try:
+            b = self._MakeBucket()
+            k = b.new_key("foo")
+
+            with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+                k.set_contents_from_filename(fname1, if_generation=999)
+
+            k.set_contents_from_filename(fname1, if_generation=0)
+            g1 = k.generation
+
+            with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+                k.set_contents_from_filename(fname2, if_generation=int(g1)+1)
+
+            k.set_contents_from_filename(fname2, if_generation=g1)
+            self.assertEqual(k.get_contents_as_string(), s2)
+        finally:
+            os.remove(fname1)
+            os.remove(fname2)
+
+    def testBucketConditionalSetAcl(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        k.set_contents_from_string(s1)
+
+        g1 = k.generation
+        mg1 = k.metageneration
+        self.assertEqual(str(mg1), "1")
+        b.set_acl("public-read", key_name="foo")
+
+        k = b.get_key("foo")
+        g2 = k.generation
+        mg2 = k.metageneration
+
+        self.assertEqual(g2, g1)
+        self.assertGreater(mg2, mg1)
+
+        with self.assertRaisesRegexp(ValueError, ("Received if_metageneration "
+                                                  "argument with no "
+                                                  "if_generation argument")):
+            b.set_acl("bucket-owner-full-control", key_name="foo",
+                      if_metageneration=123)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            b.set_acl("bucket-owner-full-control", key_name="foo",
+                      if_generation=int(g2) + 1)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            b.set_acl("bucket-owner-full-control", key_name="foo",
+                      if_generation=g2, if_metageneration=int(mg2) + 1)
+
+        b.set_acl("bucket-owner-full-control", key_name="foo", if_generation=g2)
+
+        k = b.get_key("foo")
+        g3 = k.generation
+        mg3 = k.metageneration
+        self.assertEqual(g3, g2)
+        self.assertGreater(mg3, mg2)
+
+        b.set_acl("public-read", key_name="foo", if_generation=g3,
+                  if_metageneration=mg3)
+
+    def testConditionalSetContentsFromStream(self):
+        b = self._MakeBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        fp = StringIO.StringIO(s1)
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_contents_from_stream(fp, if_generation=999)
+
+        fp = StringIO.StringIO(s1)
+        k.set_contents_from_stream(fp, if_generation=0)
+        g1 = k.generation
+
+        k = b.get_key("foo")
+        s2 = "test2"
+        fp = StringIO.StringIO(s2)
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_contents_from_stream(fp, if_generation=int(g1)+1)
+
+        fp = StringIO.StringIO(s2)
+        k.set_contents_from_stream(fp, if_generation=g1)
+        self.assertEqual(k.get_contents_as_string(), s2)
+
+    def testBucketConditionalSetCannedAcl(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        k.set_contents_from_string(s1)
+
+        g1 = k.generation
+        mg1 = k.metageneration
+        self.assertEqual(str(mg1), "1")
+        b.set_canned_acl("public-read", key_name="foo")
+
+        k = b.get_key("foo")
+        g2 = k.generation
+        mg2 = k.metageneration
+
+        self.assertEqual(g2, g1)
+        self.assertGreater(mg2, mg1)
+
+        with self.assertRaisesRegexp(ValueError, ("Received if_metageneration "
+                                                  "argument with no "
+                                                  "if_generation argument")):
+            b.set_canned_acl("bucket-owner-full-control", key_name="foo",
+                             if_metageneration=123)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            b.set_canned_acl("bucket-owner-full-control", key_name="foo",
+                             if_generation=int(g2) + 1)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            b.set_canned_acl("bucket-owner-full-control", key_name="foo",
+                             if_generation=g2, if_metageneration=int(mg2) + 1)
+
+        b.set_canned_acl("bucket-owner-full-control", key_name="foo",
+                         if_generation=g2)
+
+        k = b.get_key("foo")
+        g3 = k.generation
+        mg3 = k.metageneration
+        self.assertEqual(g3, g2)
+        self.assertGreater(mg3, mg2)
+
+        b.set_canned_acl("public-read", key_name="foo", if_generation=g3,
+                         if_metageneration=mg3)
+
+    def testBucketConditionalSetXmlAcl(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        k.set_contents_from_string(s1)
+
+        g1 = k.generation
+        mg1 = k.metageneration
+        self.assertEqual(str(mg1), "1")
+
+        acl_xml = (
+            '<ACCESSControlList><EntrIes><Entry>'    +
+            '<Scope type="AllUsers"></Scope><Permission>READ</Permission>' +
+            '</Entry></EntrIes></ACCESSControlList>')
+        acl = ACL()
+        h = handler.XmlHandler(acl, b)
+        sax.parseString(acl_xml, h)
+        acl = acl.to_xml()
+
+        b.set_xml_acl(acl, key_name="foo")
+
+        k = b.get_key("foo")
+        g2 = k.generation
+        mg2 = k.metageneration
+
+        self.assertEqual(g2, g1)
+        self.assertGreater(mg2, mg1)
+
+        with self.assertRaisesRegexp(ValueError, ("Received if_metageneration "
+                                                  "argument with no "
+                                                  "if_generation argument")):
+            b.set_xml_acl(acl, key_name="foo", if_metageneration=123)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            b.set_xml_acl(acl, key_name="foo", if_generation=int(g2) + 1)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            b.set_xml_acl(acl, key_name="foo", if_generation=g2,
+                          if_metageneration=int(mg2) + 1)
+
+        b.set_xml_acl(acl, key_name="foo", if_generation=g2)
+
+        k = b.get_key("foo")
+        g3 = k.generation
+        mg3 = k.metageneration
+        self.assertEqual(g3, g2)
+        self.assertGreater(mg3, mg2)
+
+        b.set_xml_acl(acl, key_name="foo", if_generation=g3,
+                      if_metageneration=mg3)
+
+    def testObjectConditionalSetAcl(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        k.set_contents_from_string("test1")
+
+        g1 = k.generation
+        mg1 = k.metageneration
+        self.assertEqual(str(mg1), "1")
+        k.set_acl("public-read")
+
+        k = b.get_key("foo")
+        g2 = k.generation
+        mg2 = k.metageneration
+
+        self.assertEqual(g2, g1)
+        self.assertGreater(mg2, mg1)
+
+        with self.assertRaisesRegexp(ValueError, ("Received if_metageneration "
+                                                  "argument with no "
+                                                  "if_generation argument")):
+            k.set_acl("bucket-owner-full-control", if_metageneration=123)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_acl("bucket-owner-full-control", if_generation=int(g2) + 1)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_acl("bucket-owner-full-control", if_generation=g2,
+                      if_metageneration=int(mg2) + 1)
+
+        k.set_acl("bucket-owner-full-control", if_generation=g2)
+
+        k = b.get_key("foo")
+        g3 = k.generation
+        mg3 = k.metageneration
+        self.assertEqual(g3, g2)
+        self.assertGreater(mg3, mg2)
+
+        k.set_acl("public-read", if_generation=g3, if_metageneration=mg3)
+
+    def testObjectConditionalSetCannedAcl(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        k.set_contents_from_string("test1")
+
+        g1 = k.generation
+        mg1 = k.metageneration
+        self.assertEqual(str(mg1), "1")
+        k.set_canned_acl("public-read")
+
+        k = b.get_key("foo")
+        g2 = k.generation
+        mg2 = k.metageneration
+
+        self.assertEqual(g2, g1)
+        self.assertGreater(mg2, mg1)
+
+        with self.assertRaisesRegexp(ValueError, ("Received if_metageneration "
+                                                  "argument with no "
+                                                  "if_generation argument")):
+            k.set_canned_acl("bucket-owner-full-control",
+                             if_metageneration=123)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_canned_acl("bucket-owner-full-control",
+                             if_generation=int(g2) + 1)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_canned_acl("bucket-owner-full-control", if_generation=g2,
+                             if_metageneration=int(mg2) + 1)
+
+        k.set_canned_acl("bucket-owner-full-control", if_generation=g2)
+
+        k = b.get_key("foo")
+        g3 = k.generation
+        mg3 = k.metageneration
+        self.assertEqual(g3, g2)
+        self.assertGreater(mg3, mg2)
+
+        k.set_canned_acl("public-read", if_generation=g3, if_metageneration=mg3)
+
+    def testObjectConditionalSetXmlAcl(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        k.set_contents_from_string(s1)
+
+        g1 = k.generation
+        mg1 = k.metageneration
+        self.assertEqual(str(mg1), "1")
+
+        acl_xml = (
+            '<ACCESSControlList><EntrIes><Entry>'    +
+            '<Scope type="AllUsers"></Scope><Permission>READ</Permission>' +
+            '</Entry></EntrIes></ACCESSControlList>')
+        acl = ACL()
+        h = handler.XmlHandler(acl, b)
+        sax.parseString(acl_xml, h)
+        acl = acl.to_xml()
+
+        k.set_xml_acl(acl)
+
+        k = b.get_key("foo")
+        g2 = k.generation
+        mg2 = k.metageneration
+
+        self.assertEqual(g2, g1)
+        self.assertGreater(mg2, mg1)
+
+        with self.assertRaisesRegexp(ValueError, ("Received if_metageneration "
+                                                  "argument with no "
+                                                  "if_generation argument")):
+            k.set_xml_acl(acl, if_metageneration=123)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_xml_acl(acl, if_generation=int(g2) + 1)
+
+        with self.assertRaisesRegexp(GSResponseError, VERSION_MISMATCH):
+            k.set_xml_acl(acl, if_generation=g2, if_metageneration=int(mg2) + 1)
+
+        k.set_xml_acl(acl, if_generation=g2)
+
+        k = b.get_key("foo")
+        g3 = k.generation
+        mg3 = k.metageneration
+        self.assertEqual(g3, g2)
+        self.assertGreater(mg3, mg2)
+
+        k.set_xml_acl(acl, if_generation=g3, if_metageneration=mg3)
diff --git a/tests/integration/gs/test_resumable_downloads.py b/tests/integration/gs/test_resumable_downloads.py
new file mode 100644
index 0000000..ba5d983
--- /dev/null
+++ b/tests/integration/gs/test_resumable_downloads.py
@@ -0,0 +1,354 @@
+# Copyright 2010 Google Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Tests of resumable downloads.
+"""
+
+import errno
+import os
+import re
+
+import boto
+from boto.s3.resumable_download_handler import get_cur_file_size
+from boto.s3.resumable_download_handler import ResumableDownloadHandler
+from boto.exception import ResumableTransferDisposition
+from boto.exception import ResumableDownloadException
+from cb_test_harness import CallbackTestHarness
+from tests.integration.gs.testcase import GSTestCase
+
+
+SMALL_KEY_SIZE = 2 * 1024 # 2 KB.
+LARGE_KEY_SIZE = 500 * 1024 # 500 KB.
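+
+# Typical invocation exercised by these tests (illustrative only; the key,
+# destination file object and tracker path are placeholders): the tracker
+# file lets a later process resume a download that aborted part-way through.
+#
+#   handler = ResumableDownloadHandler(tracker_file_name='/tmp/tracker',
+#                                      num_retries=2)
+#   key.get_contents_to_file(dst_fp, res_download_handler=handler)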
+
+
+class ResumableDownloadTests(GSTestCase):
+    """Resumable download test suite."""
+
+    def make_small_key(self):
+        small_src_key_as_string = os.urandom(SMALL_KEY_SIZE)
+        small_src_key = self._MakeKey(data=small_src_key_as_string)
+        return small_src_key_as_string, small_src_key
+
+    def make_tracker_file(self, tmpdir=None):
+        if not tmpdir:
+            tmpdir = self._MakeTempDir()
+        tracker_file = os.path.join(tmpdir, 'tracker')
+        return tracker_file
+
+    def make_dst_fp(self, tmpdir=None):
+        if not tmpdir:
+            tmpdir = self._MakeTempDir()
+        dst_file = os.path.join(tmpdir, 'dstfile')
+        return open(dst_file, 'w')
+
+    def test_non_resumable_download(self):
+        """
+        Tests that non-resumable downloads work
+        """
+        dst_fp = self.make_dst_fp()
+        small_src_key_as_string, small_src_key = self.make_small_key()
+        small_src_key.get_contents_to_file(dst_fp)
+        self.assertEqual(SMALL_KEY_SIZE,
+                         get_cur_file_size(dst_fp))
+        self.assertEqual(small_src_key_as_string,
+                         small_src_key.get_contents_as_string())
+
+    def test_download_without_persistent_tracker(self):
+        """
+        Tests a single resumable download, with no tracker persistence
+        """
+        res_download_handler = ResumableDownloadHandler()
+        dst_fp = self.make_dst_fp()
+        small_src_key_as_string, small_src_key = self.make_small_key()
+        small_src_key.get_contents_to_file(
+            dst_fp, res_download_handler=res_download_handler)
+        self.assertEqual(SMALL_KEY_SIZE,
+                         get_cur_file_size(dst_fp))
+        self.assertEqual(small_src_key_as_string,
+                         small_src_key.get_contents_as_string())
+
+    def test_failed_download_with_persistent_tracker(self):
+        """
+        Tests that failed resumable download leaves a correct tracker file
+        """
+        harness = CallbackTestHarness()
+        tmpdir = self._MakeTempDir()
+        tracker_file_name = self.make_tracker_file(tmpdir)
+        dst_fp = self.make_dst_fp(tmpdir)
+        res_download_handler = ResumableDownloadHandler(
+            tracker_file_name=tracker_file_name, num_retries=0)
+        small_src_key_as_string, small_src_key = self.make_small_key()
+        try:
+            small_src_key.get_contents_to_file(
+                dst_fp, cb=harness.call,
+                res_download_handler=res_download_handler)
+            self.fail('Did not get expected ResumableDownloadException')
+        except ResumableDownloadException, e:
+            # We'll get a ResumableDownloadException at this point because
+            # of CallbackTestHarness (above). Check that the tracker file was
+            # created correctly.
+            self.assertEqual(e.disposition,
+                             ResumableTransferDisposition.ABORT_CUR_PROCESS)
+            self.assertTrue(os.path.exists(tracker_file_name))
+            f = open(tracker_file_name)
+            etag_line = f.readline()
+            f.close()
+            self.assertEqual(etag_line.rstrip('\n'),
+                             small_src_key.etag.strip('"\''))
+
+    def test_retryable_exception_recovery(self):
+        """
+        Tests handling of a retryable exception
+        """
+        # Test one of the RETRYABLE_EXCEPTIONS.
+        exception = ResumableDownloadHandler.RETRYABLE_EXCEPTIONS[0]
+        harness = CallbackTestHarness(exception=exception)
+        res_download_handler = ResumableDownloadHandler(num_retries=1)
+        dst_fp = self.make_dst_fp()
+        small_src_key_as_string, small_src_key = self.make_small_key()
+        small_src_key.get_contents_to_file(
+            dst_fp, cb=harness.call,
+            res_download_handler=res_download_handler)
+        # Ensure downloaded object has correct content.
+        self.assertEqual(SMALL_KEY_SIZE,
+                         get_cur_file_size(dst_fp))
+        self.assertEqual(small_src_key_as_string,
+                         small_src_key.get_contents_as_string())
+
+    def test_broken_pipe_recovery(self):
+        """
+        Tests handling of a Broken Pipe (which interacts with an httplib bug)
+        """
+        exception = IOError(errno.EPIPE, "Broken pipe")
+        harness = CallbackTestHarness(exception=exception)
+        res_download_handler = ResumableDownloadHandler(num_retries=1)
+        dst_fp = self.make_dst_fp()
+        small_src_key_as_string, small_src_key = self.make_small_key()
+        small_src_key.get_contents_to_file(
+            dst_fp, cb=harness.call,
+            res_download_handler=res_download_handler)
+        # Ensure downloaded object has correct content.
+        self.assertEqual(SMALL_KEY_SIZE,
+                         get_cur_file_size(dst_fp))
+        self.assertEqual(small_src_key_as_string,
+                         small_src_key.get_contents_as_string())
+
+    def test_non_retryable_exception_handling(self):
+        """
+        Tests resumable download that fails with a non-retryable exception
+        """
+        harness = CallbackTestHarness(
+            exception=OSError(errno.EACCES, 'Permission denied'))
+        res_download_handler = ResumableDownloadHandler(num_retries=1)
+        dst_fp = self.make_dst_fp()
+        small_src_key_as_string, small_src_key = self.make_small_key()
+        try:
+            small_src_key.get_contents_to_file(
+                dst_fp, cb=harness.call,
+                res_download_handler=res_download_handler)
+            self.fail('Did not get expected OSError')
+        except OSError, e:
+            # Ensure the error was re-raised.
+            self.assertEqual(e.errno, 13)
+
+    def test_failed_and_restarted_download_with_persistent_tracker(self):
+        """
+        Tests resumable download that fails once and then completes,
+        with tracker file
+        """
+        harness = CallbackTestHarness()
+        tmpdir = self._MakeTempDir()
+        tracker_file_name = self.make_tracker_file(tmpdir)
+        dst_fp = self.make_dst_fp(tmpdir)
+        small_src_key_as_string, small_src_key = self.make_small_key()
+        res_download_handler = ResumableDownloadHandler(
+            tracker_file_name=tracker_file_name, num_retries=1)
+        small_src_key.get_contents_to_file(
+            dst_fp, cb=harness.call,
+            res_download_handler=res_download_handler)
+        # Ensure downloaded object has correct content.
+        self.assertEqual(SMALL_KEY_SIZE,
+                         get_cur_file_size(dst_fp))
+        self.assertEqual(small_src_key_as_string,
+                         small_src_key.get_contents_as_string())
+        # Ensure tracker file deleted.
+        self.assertFalse(os.path.exists(tracker_file_name))
+
+    def test_multiple_in_process_failures_then_succeed(self):
+        """
+        Tests resumable download that fails twice in one process, then completes
+        """
+        res_download_handler = ResumableDownloadHandler(num_retries=3)
+        dst_fp = self.make_dst_fp()
+        small_src_key_as_string, small_src_key = self.make_small_key()
+        small_src_key.get_contents_to_file(
+            dst_fp, res_download_handler=res_download_handler)
+        # Ensure downloaded object has correct content.
+        self.assertEqual(SMALL_KEY_SIZE,
+                         get_cur_file_size(dst_fp))
+        self.assertEqual(small_src_key_as_string,
+                         small_src_key.get_contents_as_string())
+
+    def test_multiple_in_process_failures_then_succeed_with_tracker_file(self):
+        """
+        Tests resumable download that fails completely in one process,
+        then when restarted completes, using a tracker file
+        """
+        # Set up test harness that causes more failures than a single
+        # ResumableDownloadHandler instance will handle, writing enough data
+        # before the first failure that some of it survives that process run.
+        harness = CallbackTestHarness(
+            fail_after_n_bytes=LARGE_KEY_SIZE/2, num_times_to_fail=2)
+        larger_src_key_as_string = os.urandom(LARGE_KEY_SIZE)
+        larger_src_key = self._MakeKey(data=larger_src_key_as_string)
+        tmpdir = self._MakeTempDir()
+        tracker_file_name = self.make_tracker_file(tmpdir)
+        dst_fp = self.make_dst_fp(tmpdir)
+        res_download_handler = ResumableDownloadHandler(
+            tracker_file_name=tracker_file_name, num_retries=0)
+        try:
+            larger_src_key.get_contents_to_file(
+                dst_fp, cb=harness.call,
+                res_download_handler=res_download_handler)
+            self.fail('Did not get expected ResumableDownloadException')
+        except ResumableDownloadException, e:
+            self.assertEqual(e.disposition,
+                             ResumableTransferDisposition.ABORT_CUR_PROCESS)
+            # Ensure a tracker file survived.
+            self.assertTrue(os.path.exists(tracker_file_name))
+        # Try it one more time; this time should succeed.
+        larger_src_key.get_contents_to_file(
+            dst_fp, cb=harness.call,
+            res_download_handler=res_download_handler)
+        self.assertEqual(LARGE_KEY_SIZE,
+                         get_cur_file_size(dst_fp))
+        self.assertEqual(larger_src_key_as_string,
+                         larger_src_key.get_contents_as_string())
+        self.assertFalse(os.path.exists(tracker_file_name))
+        # Ensure some of the file was downloaded both before and after failure.
+        self.assertTrue(
+            len(harness.transferred_seq_before_first_failure) > 1 and
+            len(harness.transferred_seq_after_first_failure) > 1)
+
+    def test_download_with_initial_partial_download_before_failure(self):
+        """
+        Tests resumable download that successfully downloads some content
+        before it fails, then restarts and completes
+        """
+        # Set up harness to fail the download after several hundred KB so the
+        # destination file holds partial content before we retry.
+        harness = CallbackTestHarness(
+            fail_after_n_bytes=LARGE_KEY_SIZE/2)
+        larger_src_key_as_string = os.urandom(LARGE_KEY_SIZE)
+        larger_src_key = self._MakeKey(data=larger_src_key_as_string)
+        res_download_handler = ResumableDownloadHandler(num_retries=1)
+        dst_fp = self.make_dst_fp()
+        larger_src_key.get_contents_to_file(
+            dst_fp, cb=harness.call,
+            res_download_handler=res_download_handler)
+        # Ensure downloaded object has correct content.
+        self.assertEqual(LARGE_KEY_SIZE,
+                         get_cur_file_size(dst_fp))
+        self.assertEqual(larger_src_key_as_string,
+                         larger_src_key.get_contents_as_string())
+        # Ensure some of the file was downloaded both before and after failure.
+        self.assertTrue(
+            len(harness.transferred_seq_before_first_failure) > 1 and
+            len(harness.transferred_seq_after_first_failure) > 1)
+
+    def test_zero_length_object_download(self):
+        """
+        Tests downloading a zero-length object (exercises boundary conditions).
+        """
+        res_download_handler = ResumableDownloadHandler()
+        dst_fp = self.make_dst_fp()
+        k = self._MakeKey()
+        k.get_contents_to_file(dst_fp,
+                               res_download_handler=res_download_handler)
+        self.assertEqual(0, get_cur_file_size(dst_fp))
+
+    def test_download_with_invalid_tracker_etag(self):
+        """
+        Tests resumable download with a tracker file containing an invalid etag
+        """
+        tmp_dir = self._MakeTempDir()
+        dst_fp = self.make_dst_fp(tmp_dir)
+        small_src_key_as_string, small_src_key = self.make_small_key()
+        invalid_etag_tracker_file_name = os.path.join(tmp_dir,
+                                                      'invalid_etag_tracker')
+        f = open(invalid_etag_tracker_file_name, 'w')
+        f.write('3.14159\n')
+        f.close()
+        res_download_handler = ResumableDownloadHandler(
+            tracker_file_name=invalid_etag_tracker_file_name)
+        # An error should be printed about the invalid tracker file, but the
+        # download should then complete successfully.
+        small_src_key.get_contents_to_file(
+            dst_fp, res_download_handler=res_download_handler)
+        self.assertEqual(SMALL_KEY_SIZE, get_cur_file_size(dst_fp))
+        self.assertEqual(small_src_key_as_string,
+                         small_src_key.get_contents_as_string())
+
+    def test_download_with_inconsistent_etag_in_tracker(self):
+        """
+        Tests resumable download with an inconsistent etag in tracker file
+        """
+        tmp_dir = self._MakeTempDir()
+        dst_fp = self.make_dst_fp(tmp_dir)
+        small_src_key_as_string, small_src_key = self.make_small_key()
+        inconsistent_etag_tracker_file_name = os.path.join(tmp_dir,
+            'inconsistent_etag_tracker')
+        f = open(inconsistent_etag_tracker_file_name, 'w')
+        good_etag = small_src_key.etag.strip('"\'')
+        new_val_as_list = []
+        for c in reversed(good_etag):
+            new_val_as_list.append(c)
+        f.write('%s\n' % ''.join(new_val_as_list))
+        f.close()
+        res_download_handler = ResumableDownloadHandler(
+            tracker_file_name=inconsistent_etag_tracker_file_name)
+        # An error should be printed about the inconsistent etag in the
+        # tracker file, but the download should then complete successfully.
+        small_src_key.get_contents_to_file(
+            dst_fp, res_download_handler=res_download_handler)
+        self.assertEqual(SMALL_KEY_SIZE,
+                         get_cur_file_size(dst_fp))
+        self.assertEqual(small_src_key_as_string,
+                         small_src_key.get_contents_as_string())
+
+    def test_download_with_unwritable_tracker_file(self):
+        """
+        Tests resumable download with an unwritable tracker file
+        """
+        # Make dir where tracker_file lives temporarily unwritable.
+        tmp_dir = self._MakeTempDir()
+        tracker_file_name = os.path.join(tmp_dir, 'tracker')
+        save_mod = os.stat(tmp_dir).st_mode
+        try:
+            os.chmod(tmp_dir, 0)
+            res_download_handler = ResumableDownloadHandler(
+                tracker_file_name=tracker_file_name)
+            self.fail('Did not get expected ResumableDownloadException')
+        except ResumableDownloadException, e:
+            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            self.assertNotEqual(
+                e.message.find('Couldn\'t write URI tracker file'), -1)
+        finally:
+            # Restore original protection of dir where tracker_file lives.
+            os.chmod(tmp_dir, save_mod)
diff --git a/tests/integration/gs/test_resumable_uploads.py b/tests/integration/gs/test_resumable_uploads.py
new file mode 100644
index 0000000..7c60145
--- /dev/null
+++ b/tests/integration/gs/test_resumable_uploads.py
@@ -0,0 +1,523 @@
+# Copyright 2010 Google Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Tests of Google Cloud Storage resumable uploads.
+"""
+
+import StringIO
+import errno
+import random
+import os
+import time
+
+import boto
+from boto import storage_uri
+from boto.gs.resumable_upload_handler import ResumableUploadHandler
+from boto.exception import InvalidUriError
+from boto.exception import ResumableTransferDisposition
+from boto.exception import ResumableUploadException
+from cb_test_harness import CallbackTestHarness
+from tests.integration.gs.testcase import GSTestCase
+
+
+SMALL_KEY_SIZE = 2 * 1024 # 2 KB.
+LARGE_KEY_SIZE = 500 * 1024 # 500 KB.
+LARGEST_KEY_SIZE = 1024 * 1024 # 1 MB.
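+
+# Typical invocation exercised by these tests (illustrative only; the key,
+# source file object and tracker path are placeholders): the tracker file
+# records the upload URI so a later process can resume an interrupted upload.
+#
+#   handler = ResumableUploadHandler(tracker_file_name='/tmp/tracker',
+#                                    num_retries=2)
+#   key.set_contents_from_file(src_fp, res_upload_handler=handler)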
+
+
+class ResumableUploadTests(GSTestCase):
+    """Resumable upload test suite."""
+
+    def build_input_file(self, size):
+        buf = []
+        # I manually construct the random data here instead of calling
+        # os.urandom() because I want to constrain the range of data (in
+        # this case to '0'..'9') so the test code can easily overwrite part
+        # of the StringIO file with known-to-be-different values.
+        for i in range(size):
+            buf.append(str(random.randint(0, 9)))
+        file_as_string = ''.join(buf)
+        return (file_as_string, StringIO.StringIO(file_as_string))
+
+    def make_small_file(self):
+        return self.build_input_file(SMALL_KEY_SIZE)
+
+    def make_large_file(self):
+        return self.build_input_file(LARGE_KEY_SIZE)
+
+    def make_tracker_file(self, tmpdir=None):
+        if not tmpdir:
+            tmpdir = self._MakeTempDir()
+        tracker_file = os.path.join(tmpdir, 'tracker')
+        return tracker_file
+
+    def test_non_resumable_upload(self):
+        """
+        Tests that non-resumable uploads work
+        """
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        # Seek to the end in case it's the first test.
+        small_src_file.seek(0, os.SEEK_END)
+        dst_key = self._MakeKey(set_contents=False)
+        try:
+            dst_key.set_contents_from_file(small_src_file)
+            self.fail("should fail because the file pointer needs rewinding")
+        except AttributeError:
+            pass
+        # Now try calling with a proper rewind.
+        dst_key.set_contents_from_file(small_src_file, rewind=True)
+        self.assertEqual(SMALL_KEY_SIZE, dst_key.size)
+        self.assertEqual(small_src_file_as_string,
+                         dst_key.get_contents_as_string())
+
+    def test_upload_without_persistent_tracker(self):
+        """
+        Tests a single resumable upload, with no tracker URI persistence
+        """
+        res_upload_handler = ResumableUploadHandler()
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        small_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        dst_key.set_contents_from_file(
+            small_src_file, res_upload_handler=res_upload_handler)
+        self.assertEqual(SMALL_KEY_SIZE, dst_key.size)
+        self.assertEqual(small_src_file_as_string,
+                         dst_key.get_contents_as_string())
+
+    def test_failed_upload_with_persistent_tracker(self):
+        """
+        Tests that failed resumable upload leaves a correct tracker URI file
+        """
+        harness = CallbackTestHarness()
+        tracker_file_name = self.make_tracker_file()
+        res_upload_handler = ResumableUploadHandler(
+            tracker_file_name=tracker_file_name, num_retries=0)
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        small_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        try:
+            dst_key.set_contents_from_file(
+                small_src_file, cb=harness.call,
+                res_upload_handler=res_upload_handler)
+            self.fail('Did not get expected ResumableUploadException')
+        except ResumableUploadException, e:
+            # We'll get a ResumableUploadException at this point because
+            # of CallbackTestHarness (above). Check that the tracker file was
+            # created correctly.
+            self.assertEqual(e.disposition,
+                             ResumableTransferDisposition.ABORT_CUR_PROCESS)
+            self.assertTrue(os.path.exists(tracker_file_name))
+            f = open(tracker_file_name)
+            uri_from_file = f.readline().strip()
+            f.close()
+            self.assertEqual(uri_from_file,
+                             res_upload_handler.get_tracker_uri())
+
+    def test_retryable_exception_recovery(self):
+        """
+        Tests handling of a retryable exception
+        """
+        # Test one of the RETRYABLE_EXCEPTIONS.
+        exception = ResumableUploadHandler.RETRYABLE_EXCEPTIONS[0]
+        harness = CallbackTestHarness(exception=exception)
+        res_upload_handler = ResumableUploadHandler(num_retries=1)
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        small_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        dst_key.set_contents_from_file(
+            small_src_file, cb=harness.call,
+            res_upload_handler=res_upload_handler)
+        # Ensure uploaded object has correct content.
+        self.assertEqual(SMALL_KEY_SIZE, dst_key.size)
+        self.assertEqual(small_src_file_as_string,
+                         dst_key.get_contents_as_string())
+
+    def test_broken_pipe_recovery(self):
+        """
+        Tests handling of a Broken Pipe (which interacts with an httplib bug)
+        """
+        exception = IOError(errno.EPIPE, "Broken pipe")
+        harness = CallbackTestHarness(exception=exception)
+        res_upload_handler = ResumableUploadHandler(num_retries=1)
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        small_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        dst_key.set_contents_from_file(
+            small_src_file, cb=harness.call,
+            res_upload_handler=res_upload_handler)
+        # Ensure uploaded object has correct content.
+        self.assertEqual(SMALL_KEY_SIZE, dst_key.size)
+        self.assertEqual(small_src_file_as_string,
+                         dst_key.get_contents_as_string())
+
+    def test_non_retryable_exception_handling(self):
+        """
+        Tests a resumable upload that fails with a non-retryable exception
+        """
+        harness = CallbackTestHarness(
+            exception=OSError(errno.EACCES, 'Permission denied'))
+        res_upload_handler = ResumableUploadHandler(num_retries=1)
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        small_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        try:
+            dst_key.set_contents_from_file(
+                small_src_file, cb=harness.call,
+                res_upload_handler=res_upload_handler)
+            self.fail('Did not get expected OSError')
+        except OSError, e:
+            # Ensure the error was re-raised.
+            self.assertEqual(e.errno, 13)
+
+    def test_failed_and_restarted_upload_with_persistent_tracker(self):
+        """
+        Tests resumable upload that fails once and then completes, with tracker
+        file
+        """
+        harness = CallbackTestHarness()
+        tracker_file_name = self.make_tracker_file()
+        res_upload_handler = ResumableUploadHandler(
+            tracker_file_name=tracker_file_name, num_retries=1)
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        small_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        dst_key.set_contents_from_file(
+            small_src_file, cb=harness.call,
+            res_upload_handler=res_upload_handler)
+        # Ensure uploaded object has correct content.
+        self.assertEqual(SMALL_KEY_SIZE, dst_key.size)
+        self.assertEqual(small_src_file_as_string,
+                         dst_key.get_contents_as_string())
+        # Ensure tracker file deleted.
+        self.assertFalse(os.path.exists(tracker_file_name))
+
+    def test_multiple_in_process_failures_then_succeed(self):
+        """
+        Tests resumable upload that fails twice in one process, then completes
+        """
+        res_upload_handler = ResumableUploadHandler(num_retries=3)
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        small_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        dst_key.set_contents_from_file(
+            small_src_file, res_upload_handler=res_upload_handler)
+        # Ensure uploaded object has correct content.
+        self.assertEqual(SMALL_KEY_SIZE, dst_key.size)
+        self.assertEqual(small_src_file_as_string,
+                         dst_key.get_contents_as_string())
+
+    def test_multiple_in_process_failures_then_succeed_with_tracker_file(self):
+        """
+        Tests resumable upload that fails completely in one process,
+        then when restarted completes, using a tracker file
+        """
+        # Set up test harness that causes more failures than a single
+        # ResumableUploadHandler instance will handle, writing enough data
+        # before the first failure that some of it survives that process run.
+        harness = CallbackTestHarness(
+            fail_after_n_bytes=LARGE_KEY_SIZE/2, num_times_to_fail=2)
+        tracker_file_name = self.make_tracker_file()
+        res_upload_handler = ResumableUploadHandler(
+            tracker_file_name=tracker_file_name, num_retries=1)
+        larger_src_file_as_string, larger_src_file = self.make_large_file()
+        larger_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        try:
+            dst_key.set_contents_from_file(
+                larger_src_file, cb=harness.call,
+                res_upload_handler=res_upload_handler)
+            self.fail('Did not get expected ResumableUploadException')
+        except ResumableUploadException, e:
+            self.assertEqual(e.disposition,
+                             ResumableTransferDisposition.ABORT_CUR_PROCESS)
+            # Ensure a tracker file survived.
+            self.assertTrue(os.path.exists(tracker_file_name))
+        # Try it one more time; this time should succeed.
+        larger_src_file.seek(0)
+        dst_key.set_contents_from_file(
+            larger_src_file, cb=harness.call,
+            res_upload_handler=res_upload_handler)
+        self.assertEqual(LARGE_KEY_SIZE, dst_key.size)
+        self.assertEqual(larger_src_file_as_string,
+                         dst_key.get_contents_as_string())
+        self.assertFalse(os.path.exists(tracker_file_name))
+        # Ensure some of the file was uploaded both before and after failure.
+        self.assertTrue(len(harness.transferred_seq_before_first_failure) > 1
+                        and
+                        len(harness.transferred_seq_after_first_failure) > 1)
+
+    def test_upload_with_initial_partial_upload_before_failure(self):
+        """
+        Tests resumable upload that successfully uploads some content
+        before it fails, then restarts and completes
+        """
+        # Set up harness to fail upload after several hundred KB so upload
+        # server will have saved something before we retry.
+        harness = CallbackTestHarness(
+            fail_after_n_bytes=LARGE_KEY_SIZE/2)
+        res_upload_handler = ResumableUploadHandler(num_retries=1)
+        larger_src_file_as_string, larger_src_file = self.make_large_file()
+        larger_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        dst_key.set_contents_from_file(
+            larger_src_file, cb=harness.call,
+            res_upload_handler=res_upload_handler)
+        # Ensure uploaded object has correct content.
+        self.assertEqual(LARGE_KEY_SIZE, dst_key.size)
+        self.assertEqual(larger_src_file_as_string,
+                         dst_key.get_contents_as_string())
+        # Ensure some of the file was uploaded both before and after failure.
+        self.assertTrue(len(harness.transferred_seq_before_first_failure) > 1
+                        and
+                        len(harness.transferred_seq_after_first_failure) > 1)
+
+    def test_empty_file_upload(self):
+        """
+        Tests uploading an empty file (exercises boundary conditions).
+        """
+        res_upload_handler = ResumableUploadHandler()
+        empty_src_file = StringIO.StringIO('')
+        empty_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        dst_key.set_contents_from_file(
+            empty_src_file, res_upload_handler=res_upload_handler)
+        self.assertEqual(0, dst_key.size)
+
+    def test_upload_retains_metadata(self):
+        """
+        Tests that resumable upload correctly sets passed metadata
+        """
+        res_upload_handler = ResumableUploadHandler()
+        headers = {'Content-Type' : 'text/plain', 'Content-Encoding' : 'gzip',
+                   'x-goog-meta-abc' : 'my meta', 'x-goog-acl' : 'public-read'}
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        small_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        dst_key.set_contents_from_file(
+            small_src_file, headers=headers,
+            res_upload_handler=res_upload_handler)
+        self.assertEqual(SMALL_KEY_SIZE, dst_key.size)
+        self.assertEqual(small_src_file_as_string,
+                         dst_key.get_contents_as_string())
+        dst_key.open_read()
+        self.assertEqual('text/plain', dst_key.content_type)
+        self.assertEqual('gzip', dst_key.content_encoding)
+        self.assertTrue('abc' in dst_key.metadata)
+        self.assertEqual('my meta', str(dst_key.metadata['abc']))
+        acl = dst_key.get_acl()
+        for entry in acl.entries.entry_list:
+            if str(entry.scope) == '<AllUsers>':
+                self.assertEqual('READ', str(entry.permission))
+                return
+        self.fail('No <AllUsers> scope found')
+
+    def test_upload_with_file_size_change_between_starts(self):
+        """
+        Tests resumable upload on a file that changes sizes between initial
+        upload start and restart
+        """
+        harness = CallbackTestHarness(
+            fail_after_n_bytes=LARGE_KEY_SIZE/2)
+        tracker_file_name = self.make_tracker_file()
+        # Set up the first process' ResumableUploadHandler not to do any
+        # retries (the initial upload request will establish the expected
+        # size with the upload server).
+        res_upload_handler = ResumableUploadHandler(
+            tracker_file_name=tracker_file_name, num_retries=0)
+        larger_src_file_as_string, larger_src_file = self.make_large_file()
+        larger_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        try:
+            dst_key.set_contents_from_file(
+                larger_src_file, cb=harness.call,
+                res_upload_handler=res_upload_handler)
+            self.fail('Did not get expected ResumableUploadException')
+        except ResumableUploadException, e:
+            # First abort (from harness-forced failure) should be
+            # ABORT_CUR_PROCESS.
+            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT_CUR_PROCESS)
+            # Ensure a tracker file survived.
+            self.assertTrue(os.path.exists(tracker_file_name))
+        # Try it again, this time with a different-size source file.
+        # Wait 1 second between retry attempts, to give the upload server a
+        # chance to save state so it can respond to the changed file size
+        # with a 500 response in the next attempt.
+        time.sleep(1)
+        try:
+            largest_src_file = self.build_input_file(LARGEST_KEY_SIZE)[1]
+            largest_src_file.seek(0)
+            dst_key.set_contents_from_file(
+                largest_src_file, res_upload_handler=res_upload_handler)
+            self.fail('Did not get expected ResumableUploadException')
+        except ResumableUploadException, e:
+            # This abort should be a hard abort (file size changing during
+            # transfer).
+            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            self.assertNotEqual(e.message.find('file size changed'), -1, e.message)
+
+    def test_upload_with_file_size_change_during_upload(self):
+        """
+        Tests resumable upload on a file that changes sizes while upload
+        in progress
+        """
+        # Create a file we can change during the upload.
+        test_file_size = 500 * 1024  # 500 KB.
+        test_file = self.build_input_file(test_file_size)[1]
+        harness = CallbackTestHarness(fp_to_change=test_file,
+                                      fp_change_pos=test_file_size)
+        res_upload_handler = ResumableUploadHandler(num_retries=1)
+        dst_key = self._MakeKey(set_contents=False)
+        try:
+            dst_key.set_contents_from_file(
+                test_file, cb=harness.call,
+                res_upload_handler=res_upload_handler)
+            self.fail('Did not get expected ResumableUploadException')
+        except ResumableUploadException, e:
+            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            self.assertNotEqual(
+                e.message.find('File changed during upload'), -1)
+
+    def test_upload_with_file_content_change_during_upload(self):
+        """
+        Tests resumable upload on a file that changes one byte of content
+        (so, size stays the same) while upload in progress
+        """
+        test_file_size = 500 * 1024  # 500 KB.
+        test_file = self.build_input_file(test_file_size)[1]
+        harness = CallbackTestHarness(fail_after_n_bytes=test_file_size/2,
+                                      fp_to_change=test_file,
+                                      # Write to byte 1, as the CallbackTestHarness writes
+                                      # 3 bytes. This will result in the data on the server
+                                      # being different from the local file.
+                                      fp_change_pos=1)
+        res_upload_handler = ResumableUploadHandler(num_retries=1)
+        dst_key = self._MakeKey(set_contents=False)
+        bucket_uri = storage_uri('gs://' + dst_key.bucket.name)
+        dst_key_uri = bucket_uri.clone_replace_name(dst_key.name)
+        try:
+            dst_key.set_contents_from_file(
+                test_file, cb=harness.call,
+                res_upload_handler=res_upload_handler)
+            self.fail('Did not get expected ResumableUploadException')
+        except ResumableUploadException, e:
+            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            # Ensure the file size didn't change.
+            test_file.seek(0, os.SEEK_END)
+            self.assertEqual(test_file_size, test_file.tell())
+            self.assertNotEqual(
+                e.message.find('md5 signature doesn\'t match etag'), -1)
+            # Ensure the bad data wasn't left around.
+            try:
+                dst_key_uri.get_key()
+                self.fail('Did not get expected InvalidUriError')
+            except InvalidUriError, e:
+                pass
+
+    def test_upload_with_content_length_header_set(self):
+        """
+        Tests resumable upload on a file when the user supplies a
+        Content-Length header. This is used by gsutil, for example,
+        to set the content length when gzipping a file.
+        """
+        res_upload_handler = ResumableUploadHandler()
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        small_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        try:
+            dst_key.set_contents_from_file(
+                small_src_file, res_upload_handler=res_upload_handler,
+                headers={'Content-Length' : SMALL_KEY_SIZE})
+            self.fail('Did not get expected ResumableUploadException')
+        except ResumableUploadException, e:
+            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            self.assertNotEqual(
+                e.message.find('Attempt to specify Content-Length header'), -1)
+
+    def test_upload_with_syntactically_invalid_tracker_uri(self):
+        """
+        Tests resumable upload with a syntactically invalid tracker URI
+        """
+        tmp_dir = self._MakeTempDir()
+        syntactically_invalid_tracker_file_name = os.path.join(tmp_dir,
+            'synt_invalid_uri_tracker')
+        with open(syntactically_invalid_tracker_file_name, 'w') as f:
+            f.write('ftp://example.com')
+        res_upload_handler = ResumableUploadHandler(
+            tracker_file_name=syntactically_invalid_tracker_file_name)
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        # An error should be printed about the invalid URI, but then the
+        # upload should complete successfully.
+        small_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        dst_key.set_contents_from_file(
+            small_src_file, res_upload_handler=res_upload_handler)
+        self.assertEqual(SMALL_KEY_SIZE, dst_key.size)
+        self.assertEqual(small_src_file_as_string,
+                         dst_key.get_contents_as_string())
+
+    def test_upload_with_invalid_upload_id_in_tracker_file(self):
+        """
+        Tests resumable upload with invalid upload ID
+        """
+        invalid_upload_id = ('http://pub.storage.googleapis.com/?upload_id='
+            'AyzB2Uo74W4EYxyi5dp_-r68jz8rtbvshsv4TX7srJVkJ57CxTY5Dw2')
+        tmpdir = self._MakeTempDir()
+        invalid_upload_id_tracker_file_name = os.path.join(tmpdir,
+            'invalid_upload_id_tracker')
+        with open(invalid_upload_id_tracker_file_name, 'w') as f:
+            f.write(invalid_upload_id)
+
+        res_upload_handler = ResumableUploadHandler(
+            tracker_file_name=invalid_upload_id_tracker_file_name)
+        small_src_file_as_string, small_src_file = self.make_small_file()
+        # An error should occur, but then the tracker URI should be
+        # regenerated and the upload should succeed.
+        small_src_file.seek(0)
+        dst_key = self._MakeKey(set_contents=False)
+        dst_key.set_contents_from_file(
+            small_src_file, res_upload_handler=res_upload_handler)
+        self.assertEqual(SMALL_KEY_SIZE, dst_key.size)
+        self.assertEqual(small_src_file_as_string,
+                         dst_key.get_contents_as_string())
+        self.assertNotEqual(invalid_upload_id,
+                            res_upload_handler.get_tracker_uri())
+
+    def test_upload_with_unwritable_tracker_file(self):
+        """
+        Tests resumable upload with an unwritable tracker file
+        """
+        # Make dir where tracker_file lives temporarily unwritable.
+        tmp_dir = self._MakeTempDir()
+        tracker_file_name = self.make_tracker_file(tmp_dir)
+        save_mod = os.stat(tmp_dir).st_mode
+        try:
+            os.chmod(tmp_dir, 0)
+            res_upload_handler = ResumableUploadHandler(
+                tracker_file_name=tracker_file_name)
+        except ResumableUploadException, e:
+            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            self.assertNotEqual(
+                e.message.find('Couldn\'t write URI tracker file'), -1)
+        finally:
+            # Restore original protection of dir where tracker_file lives.
+            os.chmod(tmp_dir, save_mod)
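The upload-side analogue used throughout the tests above is ResumableUploadHandler, which persists a tracker URI so a later process can resume where an earlier one failed. A minimal sketch, assuming a hypothetical bucket, object name and local file::

    import boto
    from boto.gs.resumable_upload_handler import ResumableUploadHandler

    # Hypothetical names and paths; illustrative only.
    handler = ResumableUploadHandler(
        tracker_file_name='/tmp/upload.tracker', num_retries=5)
    dst_uri = boto.storage_uri('gs://my-bucket/my-object', 'gs')
    dst_key = dst_uri.new_key()
    with open('/tmp/local-file.bin', 'rb') as fp:
        # On transient failures the handler retries in-process; on a hard
        # failure it leaves the tracker file behind for a later run to reuse.
        dst_key.set_contents_from_file(fp, res_upload_handler=handler)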
diff --git a/tests/integration/gs/test_storage_uri.py b/tests/integration/gs/test_storage_uri.py
new file mode 100644
index 0000000..a8ed3b6
--- /dev/null
+++ b/tests/integration/gs/test_storage_uri.py
@@ -0,0 +1,161 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2013, Google, Inc.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""Integration tests for StorageUri interface."""
+
+import binascii
+import re
+import StringIO
+
+from boto import storage_uri
+from boto.exception import BotoClientError
+from boto.gs.acl import SupportedPermissions as perms
+from tests.integration.gs.testcase import GSTestCase
+
+
+class GSStorageUriTest(GSTestCase):
+
+    def testHasVersion(self):
+        uri = storage_uri("gs://bucket/obj")
+        self.assertFalse(uri.has_version())
+        uri.version_id = "versionid"
+        self.assertTrue(uri.has_version())
+
+        uri = storage_uri("gs://bucket/obj")
+        # Generation triggers versioning.
+        uri.generation = 12345
+        self.assertTrue(uri.has_version())
+        uri.generation = None
+        self.assertFalse(uri.has_version())
+
+        # Zero-generation counts as a version.
+        uri = storage_uri("gs://bucket/obj")
+        uri.generation = 0
+        self.assertTrue(uri.has_version())
+
+    def testCloneReplaceKey(self):
+        b = self._MakeBucket()
+        k = b.new_key("obj")
+        k.set_contents_from_string("stringdata")
+
+        orig_uri = storage_uri("gs://%s/" % b.name)
+
+        uri = orig_uri.clone_replace_key(k)
+        self.assertTrue(uri.has_version())
+        self.assertRegexpMatches(str(uri.generation), r"[0-9]+")
+
+    def testSetAclXml(self):
+        """Ensures that calls to the set_xml_acl functions succeed."""
+        b = self._MakeBucket()
+        k = b.new_key("obj")
+        k.set_contents_from_string("stringdata")
+        bucket_uri = storage_uri("gs://%s/" % b.name)
+
+        # Get a valid ACL for an object.
+        bucket_uri.object_name = "obj"
+        bucket_acl = bucket_uri.get_acl()
+        bucket_uri.object_name = None
+
+        # Add a permission to the ACL.
+        all_users_read_permission = ("<Entry><Scope type='AllUsers'/>"
+                                     "<Permission>READ</Permission></Entry>")
+        acl_string = re.sub(r"</Entries>",
+                           all_users_read_permission + "</Entries>",
+                           bucket_acl.to_xml())
+
+        # Test-generated owner IDs are not currently valid for buckets
+        acl_no_owner_string = re.sub(r"<Owner>.*</Owner>", "", acl_string)
+
+        # Set ACL on an object.
+        bucket_uri.set_xml_acl(acl_string, "obj")
+        # Set ACL on a bucket.
+        bucket_uri.set_xml_acl(acl_no_owner_string)
+        # Set the default ACL for a bucket.
+        bucket_uri.set_def_xml_acl(acl_no_owner_string)
+
+        # Verify all the ACLs were successfully applied.
+        new_obj_acl_string = k.get_acl().to_xml()
+        new_bucket_acl_string = bucket_uri.get_acl().to_xml()
+        new_bucket_def_acl_string = bucket_uri.get_def_acl().to_xml()
+        self.assertRegexpMatches(new_obj_acl_string, r"AllUsers")
+        self.assertRegexpMatches(new_bucket_acl_string, r"AllUsers")
+        self.assertRegexpMatches(new_bucket_def_acl_string, r"AllUsers")
+
+    def testPropertiesUpdated(self):
+        b = self._MakeBucket()
+        bucket_uri = storage_uri("gs://%s" % b.name)
+        key_uri = bucket_uri.clone_replace_name("obj")
+        key_uri.set_contents_from_string("data1")
+
+        self.assertRegexpMatches(str(key_uri.generation), r"[0-9]+")
+        k = b.get_key("obj")
+        self.assertEqual(k.generation, key_uri.generation)
+        self.assertEquals(k.get_contents_as_string(), "data1")
+
+        key_uri.set_contents_from_stream(StringIO.StringIO("data2"))
+        self.assertRegexpMatches(str(key_uri.generation), r"[0-9]+")
+        self.assertGreater(key_uri.generation, k.generation)
+        k = b.get_key("obj")
+        self.assertEqual(k.generation, key_uri.generation)
+        self.assertEquals(k.get_contents_as_string(), "data2")
+
+        key_uri.set_contents_from_file(StringIO.StringIO("data3"))
+        self.assertRegexpMatches(str(key_uri.generation), r"[0-9]+")
+        self.assertGreater(key_uri.generation, k.generation)
+        k = b.get_key("obj")
+        self.assertEqual(k.generation, key_uri.generation)
+        self.assertEquals(k.get_contents_as_string(), "data3")
+
+    def testCompose(self):
+        data1 = 'hello '
+        data2 = 'world!'
+        expected_crc = 1238062967
+
+        b = self._MakeBucket()
+        bucket_uri = storage_uri("gs://%s" % b.name)
+        key_uri1 = bucket_uri.clone_replace_name("component1")
+        key_uri1.set_contents_from_string(data1)
+        key_uri2 = bucket_uri.clone_replace_name("component2")
+        key_uri2.set_contents_from_string(data2)
+
+        # Simple compose.
+        key_uri_composite = bucket_uri.clone_replace_name("composite")
+        components = [key_uri1, key_uri2]
+        key_uri_composite.compose(components, content_type='text/plain')
+        self.assertEquals(key_uri_composite.get_contents_as_string(),
+                          data1 + data2)
+        composite_key = key_uri_composite.get_key()
+        cloud_crc32c = binascii.hexlify(
+            composite_key.cloud_hashes['crc32c'])
+        self.assertEquals(cloud_crc32c, hex(expected_crc)[2:])
+        self.assertEquals(composite_key.content_type, 'text/plain')
+
+        # Compose disallowed between buckets.
+        key_uri1.bucket_name += '2'
+        try:
+            key_uri_composite.compose(components)
+            self.fail('Composing between buckets didn\'t fail as expected.')
+        except BotoClientError as err:
+            self.assertEquals(
+                err.reason, 'GCS does not support inter-bucket composing')
+
diff --git a/tests/integration/gs/test_versioning.py b/tests/integration/gs/test_versioning.py
new file mode 100644
index 0000000..6d1aedd
--- /dev/null
+++ b/tests/integration/gs/test_versioning.py
@@ -0,0 +1,267 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2012, Google, Inc.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""Integration tests for GS versioning support."""
+
+from xml import sax
+
+from boto import handler
+from boto.gs import acl
+from tests.integration.gs.testcase import GSTestCase
+
+
+class GSVersioningTest(GSTestCase):
+
+    def testVersioningToggle(self):
+        b = self._MakeBucket()
+        self.assertFalse(b.get_versioning_status())
+        b.configure_versioning(True)
+        self.assertTrue(b.get_versioning_status())
+        b.configure_versioning(False)
+        self.assertFalse(b.get_versioning_status())
+
+    def testDeleteVersionedKey(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        k.set_contents_from_string(s1)
+
+        k = b.get_key("foo")
+        g1 = k.generation
+
+        s2 = "test2"
+        k.set_contents_from_string(s2)
+        k = b.get_key("foo")
+        g2 = k.generation
+
+        versions = list(b.list_versions())
+        self.assertEqual(len(versions), 2)
+        self.assertEqual(versions[0].name, "foo")
+        self.assertEqual(versions[1].name, "foo")
+        generations = [k.generation for k in versions]
+        self.assertIn(g1, generations)
+        self.assertIn(g2, generations)
+
+        # Delete "current" version and make sure that version is no longer
+        # visible from a basic GET call.
+        b.delete_key("foo", generation=None)
+        self.assertIsNone(b.get_key("foo"))
+
+        # Both old versions should still be there when listed using the versions
+        # query parameter.
+        versions = list(b.list_versions())
+        self.assertEqual(len(versions), 2)
+        self.assertEqual(versions[0].name, "foo")
+        self.assertEqual(versions[1].name, "foo")
+        generations = [k.generation for k in versions]
+        self.assertIn(g1, generations)
+        self.assertIn(g2, generations)
+
+        # Delete generation 2 and make sure it's gone.
+        b.delete_key("foo", generation=g2)
+        versions = list(b.list_versions())
+        self.assertEqual(len(versions), 1)
+        self.assertEqual(versions[0].name, "foo")
+        self.assertEqual(versions[0].generation, g1)
+
+        # Delete generation 1 and make sure it's gone.
+        b.delete_key("foo", generation=g1)
+        versions = list(b.list_versions())
+        self.assertEqual(len(versions), 0)
+
+    def testGetVersionedKey(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        k.set_contents_from_string(s1)
+
+        k = b.get_key("foo")
+        g1 = k.generation
+        o1 = k.get_contents_as_string()
+        self.assertEqual(o1, s1)
+
+        s2 = "test2"
+        k.set_contents_from_string(s2)
+        k = b.get_key("foo")
+        g2 = k.generation
+        self.assertNotEqual(g2, g1)
+        o2 = k.get_contents_as_string()
+        self.assertEqual(o2, s2)
+
+        k = b.get_key("foo", generation=g1)
+        self.assertEqual(k.get_contents_as_string(), s1)
+        k = b.get_key("foo", generation=g2)
+        self.assertEqual(k.get_contents_as_string(), s2)
+
+    def testVersionedBucketCannedAcl(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        k.set_contents_from_string(s1)
+
+        k = b.get_key("foo")
+        g1 = k.generation
+
+        s2 = "test2"
+        k.set_contents_from_string(s2)
+        k = b.get_key("foo")
+        g2 = k.generation
+
+        acl1g1 = b.get_acl("foo", generation=g1)
+        acl1g2 = b.get_acl("foo", generation=g2)
+        owner1g1 = acl1g1.owner.id
+        owner1g2 = acl1g2.owner.id
+        self.assertEqual(owner1g1, owner1g2)
+        entries1g1 = acl1g1.entries.entry_list
+        entries1g2 = acl1g2.entries.entry_list
+        self.assertEqual(len(entries1g1), len(entries1g2))
+
+        b.set_acl("public-read", key_name="foo", generation=g1)
+
+        acl2g1 = b.get_acl("foo", generation=g1)
+        acl2g2 = b.get_acl("foo", generation=g2)
+        entries2g1 = acl2g1.entries.entry_list
+        entries2g2 = acl2g2.entries.entry_list
+        self.assertEqual(len(entries2g2), len(entries1g2))
+        public_read_entries1 = [e for e in entries2g1 if e.permission == "READ"
+                                and e.scope.type == acl.ALL_USERS]
+        public_read_entries2 = [e for e in entries2g2 if e.permission == "READ"
+                                and e.scope.type == acl.ALL_USERS]
+        self.assertEqual(len(public_read_entries1), 1)
+        self.assertEqual(len(public_read_entries2), 0)
+
+    def testVersionedBucketXmlAcl(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        k.set_contents_from_string(s1)
+
+        k = b.get_key("foo")
+        g1 = k.generation
+
+        s2 = "test2"
+        k.set_contents_from_string(s2)
+        k = b.get_key("foo")
+        g2 = k.generation
+
+        acl1g1 = b.get_acl("foo", generation=g1)
+        acl1g2 = b.get_acl("foo", generation=g2)
+        owner1g1 = acl1g1.owner.id
+        owner1g2 = acl1g2.owner.id
+        self.assertEqual(owner1g1, owner1g2)
+        entries1g1 = acl1g1.entries.entry_list
+        entries1g2 = acl1g2.entries.entry_list
+        self.assertEqual(len(entries1g1), len(entries1g2))
+
+        acl_xml = (
+            '<ACCESSControlList><EntrIes><Entry>'    +
+            '<Scope type="AllUsers"></Scope><Permission>READ</Permission>' +
+            '</Entry></EntrIes></ACCESSControlList>')
+        aclo = acl.ACL()
+        h = handler.XmlHandler(aclo, b)
+        sax.parseString(acl_xml, h)
+
+        b.set_acl(aclo, key_name="foo", generation=g1)
+
+        acl2g1 = b.get_acl("foo", generation=g1)
+        acl2g2 = b.get_acl("foo", generation=g2)
+        entries2g1 = acl2g1.entries.entry_list
+        entries2g2 = acl2g2.entries.entry_list
+        self.assertEqual(len(entries2g2), len(entries1g2))
+        public_read_entries1 = [e for e in entries2g1 if e.permission == "READ"
+                                and e.scope.type == acl.ALL_USERS]
+        public_read_entries2 = [e for e in entries2g2 if e.permission == "READ"
+                                and e.scope.type == acl.ALL_USERS]
+        self.assertEqual(len(public_read_entries1), 1)
+        self.assertEqual(len(public_read_entries2), 0)
+
+    def testVersionedObjectCannedAcl(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        k.set_contents_from_string(s1)
+
+        k = b.get_key("foo")
+        g1 = k.generation
+
+        s2 = "test2"
+        k.set_contents_from_string(s2)
+        k = b.get_key("foo")
+        g2 = k.generation
+
+        acl1g1 = b.get_acl("foo", generation=g1)
+        acl1g2 = b.get_acl("foo", generation=g2)
+        owner1g1 = acl1g1.owner.id
+        owner1g2 = acl1g2.owner.id
+        self.assertEqual(owner1g1, owner1g2)
+        entries1g1 = acl1g1.entries.entry_list
+        entries1g2 = acl1g2.entries.entry_list
+        self.assertEqual(len(entries1g1), len(entries1g2))
+
+        b.set_acl("public-read", key_name="foo", generation=g1)
+
+        acl2g1 = b.get_acl("foo", generation=g1)
+        acl2g2 = b.get_acl("foo", generation=g2)
+        entries2g1 = acl2g1.entries.entry_list
+        entries2g2 = acl2g2.entries.entry_list
+        self.assertEqual(len(entries2g2), len(entries1g2))
+        public_read_entries1 = [e for e in entries2g1 if e.permission == "READ"
+                                and e.scope.type == acl.ALL_USERS]
+        public_read_entries2 = [e for e in entries2g2 if e.permission == "READ"
+                                and e.scope.type == acl.ALL_USERS]
+        self.assertEqual(len(public_read_entries1), 1)
+        self.assertEqual(len(public_read_entries2), 0)
+
+    def testCopyVersionedKey(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        s1 = "test1"
+        k.set_contents_from_string(s1)
+
+        k = b.get_key("foo")
+        g1 = k.generation
+
+        s2 = "test2"
+        k.set_contents_from_string(s2)
+
+        b2 = self._MakeVersionedBucket()
+        b2.copy_key("foo2", b.name, "foo", src_generation=g1)
+
+        k2 = b2.get_key("foo2")
+        s3 = k2.get_contents_as_string()
+        self.assertEqual(s3, s1)
+
+    def testKeyGenerationUpdatesOnSet(self):
+        b = self._MakeVersionedBucket()
+        k = b.new_key("foo")
+        self.assertIsNone(k.generation)
+        k.set_contents_from_string("test1")
+        g1 = k.generation
+        self.assertRegexpMatches(g1, r'[0-9]+')
+        self.assertEqual(k.metageneration, '1')
+        k.set_contents_from_string("test2")
+        g2 = k.generation
+        self.assertNotEqual(g1, g2)
+        self.assertRegexpMatches(g2, r'[0-9]+')
+        self.assertGreater(int(g2), int(g1))
+        self.assertEqual(k.metageneration, '1')
diff --git a/tests/integration/gs/testcase.py b/tests/integration/gs/testcase.py
new file mode 100644
index 0000000..a6c1e08
--- /dev/null
+++ b/tests/integration/gs/testcase.py
@@ -0,0 +1,116 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2013, Google, Inc.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""Base TestCase class for gs integration tests."""
+
+import shutil
+import tempfile
+import time
+
+from boto.exception import GSResponseError
+from boto.gs.connection import GSConnection
+from tests.integration.gs.util import has_google_credentials
+from tests.integration.gs.util import retry
+from tests.unit import unittest
+
+@unittest.skipUnless(has_google_credentials(),
+                     "Google credentials are required to run the Google "
+                     "Cloud Storage tests.  Update your boto.cfg to run "
+                     "these tests.")
+class GSTestCase(unittest.TestCase):
+    gs = True
+
+    def setUp(self):
+        self._conn = GSConnection()
+        self._buckets = []
+        self._tempdirs = []
+
+    # Retry with an exponential backoff if a server error is received. This
+    # ensures that we try *really* hard to clean up after ourselves.
+    @retry(GSResponseError)
+    def tearDown(self):
+        while self._tempdirs:
+            tmpdir = self._tempdirs.pop()
+            shutil.rmtree(tmpdir, ignore_errors=True)
+
+        while self._buckets:
+            b = self._buckets[-1]
+            bucket = self._conn.get_bucket(b)
+            while len(list(bucket.list_versions())) > 0:
+                for k in bucket.list_versions():
+                    bucket.delete_key(k.name, generation=k.generation)
+            bucket.delete()
+            self._buckets.pop()
+
+    def _GetConnection(self):
+        """Returns the GSConnection object used to connect to GCS."""
+        return self._conn
+
+    def _MakeTempName(self):
+        """Creates and returns a temporary name for testing that is likely to be
+        unique."""
+        return "boto-gs-test-%s" % repr(time.time()).replace(".", "-")
+
+    def _MakeBucketName(self):
+        """Creates and returns a temporary bucket name for testing that is
+        likely to be unique."""
+        b = self._MakeTempName()
+        self._buckets.append(b)
+        return b
+
+    def _MakeBucket(self):
+        """Creates and returns temporary bucket for testing. After the test, the
+        contents of the bucket and the bucket itself will be deleted."""
+        b = self._conn.create_bucket(self._MakeBucketName())
+        return b
+
+    def _MakeKey(self, data='', bucket=None, set_contents=True):
+        """Creates and returns a Key with provided data. If no bucket is given,
+        a temporary bucket is created."""
+        if data and not set_contents:
+            # Non-empty data is only meaningful when set_contents is True.
+            raise ValueError('MakeKey called with a non-empty data parameter '
+                             'but set_contents was set to False.')
+        if not bucket:
+            bucket = self._MakeBucket()
+        key_name = self._MakeTempName()
+        k = bucket.new_key(key_name)
+        if set_contents:
+            k.set_contents_from_string(data)
+        return k
+
+    def _MakeVersionedBucket(self):
+        """Creates and returns temporary versioned bucket for testing. After the
+        test, the contents of the bucket and the bucket itself will be
+        deleted."""
+        b = self._MakeBucket()
+        b.configure_versioning(True)
+        return b
+
+    def _MakeTempDir(self):
+        """Creates and returns a temporary directory on disk. After the test,
+        the contents of the directory and the directory itself will be
+        deleted."""
+        tmpdir = tempfile.mkdtemp(prefix=self._MakeTempName())
+        self._tempdirs.append(tmpdir)
+        return tmpdir
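New GS integration tests are expected to allocate buckets, keys and temp dirs through these helpers rather than directly, since tearDown only cleans up what was created through them. A minimal sketch of a test built on the base class, with a hypothetical test name::

    from tests.integration.gs.testcase import GSTestCase

    class MyFeatureTest(GSTestCase):

        def test_roundtrip(self):
            # Bucket and key are created through the helpers, so tearDown
            # deletes them (and all their versions) automatically.
            bucket = self._MakeBucket()
            key = self._MakeKey(data='hello', bucket=bucket)
            self.assertEqual('hello', key.get_contents_as_string())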
diff --git a/tests/integration/gs/util.py b/tests/integration/gs/util.py
new file mode 100644
index 0000000..5c99ac0
--- /dev/null
+++ b/tests/integration/gs/util.py
@@ -0,0 +1,85 @@
+# Copyright (c) 2012, Google, Inc.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+import time
+
+from boto.provider import Provider
+
+
+_HAS_GOOGLE_CREDENTIALS = None
+
+
+def has_google_credentials():
+    global _HAS_GOOGLE_CREDENTIALS
+    if _HAS_GOOGLE_CREDENTIALS is None:
+        provider = Provider('google')
+        if provider.access_key is None or provider.secret_key is None:
+            _HAS_GOOGLE_CREDENTIALS = False
+        else:
+            _HAS_GOOGLE_CREDENTIALS = True
+    return _HAS_GOOGLE_CREDENTIALS
+
+
+def retry(ExceptionToCheck, tries=4, delay=3, backoff=2, logger=None):
+    """Retry calling the decorated function using an exponential backoff.
+
+    Taken from:
+      https://github.com/saltycrane/retry-decorator
+    Licensed under BSD:
+      https://github.com/saltycrane/retry-decorator/blob/master/LICENSE
+
+    :param ExceptionToCheck: the exception to check. may be a tuple of
+        exceptions to check
+    :type ExceptionToCheck: Exception or tuple
+    :param tries: number of times to try (not retry) before giving up
+    :type tries: int
+    :param delay: initial delay between retries in seconds
+    :type delay: int
+    :param backoff: backoff multiplier e.g. value of 2 will double the delay
+        each retry
+    :type backoff: int
+    :param logger: logger to use. If None, print
+    :type logger: logging.Logger instance
+    """
+    def deco_retry(f):
+        def f_retry(*args, **kwargs):
+            mtries, mdelay = tries, delay
+            while mtries > 1:
+                try:
+                    return f(*args, **kwargs)
+                except ExceptionToCheck, e:
+                    msg = "%s, Retrying in %d seconds..." % (str(e), mdelay)
+                    if logger:
+                        logger.warning(msg)
+                    else:
+                        print msg
+                    time.sleep(mdelay)
+                    mtries -= 1
+                    mdelay *= backoff
+            # Final attempt; any exception here propagates to the caller.
+            return f(*args, **kwargs)
+        return f_retry  # true decorator
+    return deco_retry
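This retry decorator is what GSTestCase.tearDown relies on (via @retry(GSResponseError)) to survive transient server errors during cleanup. A small, self-contained usage sketch, assuming this module's retry is importable and using a made-up flaky function::

    import random

    @retry(ValueError, tries=4, delay=1, backoff=2)
    def flaky():
        # Fails roughly half the time; the decorator sleeps 1s, 2s, then 4s
        # between attempts before letting the last failure propagate.
        if random.random() < 0.5:
            raise ValueError('transient failure')
        return 'ok'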
diff --git a/tests/integration/mws/test.py b/tests/integration/mws/test.py
index 1d75379..4818258 100644
--- a/tests/integration/mws/test.py
+++ b/tests/integration/mws/test.py
@@ -1,5 +1,8 @@
 #!/usr/bin/env python
-from tests.unit import unittest
+try:
+    from tests.unit import unittest
+except ImportError:
+    import unittest
 import sys
 import os
 import os.path
diff --git a/tests/integration/opsworks/__init__.py b/tests/integration/opsworks/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/integration/opsworks/__init__.py
diff --git a/tests/integration/opsworks/test_layer1.py b/tests/integration/opsworks/test_layer1.py
new file mode 100644
index 0000000..a9887cd
--- /dev/null
+++ b/tests/integration/opsworks/test_layer1.py
@@ -0,0 +1,40 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import unittest
+import time
+
+from boto.opsworks.layer1 import OpsWorksConnection
+from boto.opsworks.exceptions import ValidationException
+
+
+class TestOpsWorksConnection(unittest.TestCase):
+    def setUp(self):
+        self.api = OpsWorksConnection()
+
+    def test_describe_stacks(self):
+        response = self.api.describe_stacks()
+        self.assertIn('Stacks', response)
+
+    def test_validation_errors(self):
+        with self.assertRaises(ValidationException):
+            self.api.create_stack('testbotostack', 'us-east-1',
+                                  'badarn', 'badarn2')
diff --git a/tests/integration/redshift/__init__.py b/tests/integration/redshift/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/integration/redshift/__init__.py
diff --git a/tests/integration/redshift/test_cert_verification.py b/tests/integration/redshift/test_cert_verification.py
new file mode 100644
index 0000000..27fd16d
--- /dev/null
+++ b/tests/integration/redshift/test_cert_verification.py
@@ -0,0 +1,35 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from tests.unit import unittest
+import boto.redshift
+
+
+class CertVerificationTest(unittest.TestCase):
+
+    redshift = True
+    ssl = True
+
+    def test_certs(self):
+        for region in boto.redshift.regions():
+            c = region.connect()
+            c.describe_cluster_versions()
diff --git a/tests/integration/redshift/test_layer1.py b/tests/integration/redshift/test_layer1.py
new file mode 100644
index 0000000..490618e
--- /dev/null
+++ b/tests/integration/redshift/test_layer1.py
@@ -0,0 +1,134 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import unittest
+import time
+
+from nose.plugins.attrib import attr
+
+from boto.redshift.layer1 import RedshiftConnection
+from boto.redshift.exceptions import ClusterNotFoundFault
+from boto.redshift.exceptions import ResizeNotFoundFault
+
+
+class TestRedshiftLayer1Management(unittest.TestCase):
+    redshift = True
+
+    def setUp(self):
+        self.api = RedshiftConnection()
+        self.cluster_prefix = 'boto-redshift-cluster-%s'
+        self.node_type = 'dw.hs1.xlarge'
+        self.master_username = 'mrtest'
+        self.master_password = 'P4ssword'
+        self.db_name = 'simon'
+        # Redshift was taking ~20 minutes to bring clusters up in testing.
+        self.wait_time = 60 * 20
+
+    def cluster_id(self):
+        # This needs to be unique per test method.
+        return self.cluster_prefix % str(int(time.time()))
+
+    def create_cluster(self):
+        cluster_id = self.cluster_id()
+        self.api.create_cluster(
+            cluster_id, self.node_type,
+            self.master_username, self.master_password,
+            db_name=self.db_name, number_of_nodes=3
+        )
+
+        # Wait for it to come up.
+        time.sleep(self.wait_time)
+
+        self.addCleanup(self.delete_cluster_the_slow_way, cluster_id)
+        return cluster_id
+
+    def delete_cluster_the_slow_way(self, cluster_id):
+        # Because there might be other operations in progress. :(
+        time.sleep(self.wait_time)
+
+        self.api.delete_cluster(cluster_id, skip_final_cluster_snapshot=True)
+
+    @attr('notdefault')
+    def test_create_delete_cluster(self):
+        cluster_id = self.cluster_id()
+        self.api.create_cluster(
+            cluster_id, self.node_type,
+            self.master_username, self.master_password,
+            db_name=self.db_name, number_of_nodes=3
+        )
+
+        # Wait for it to come up.
+        time.sleep(self.wait_time)
+
+        self.api.delete_cluster(cluster_id, skip_final_cluster_snapshot=True)
+
+    @attr('notdefault')
+    def test_as_much_as_possible_before_teardown(self):
+        # Per @garnaat, for the sake of suite time, we'll test as much as we
+        # can before we tear down.
+
+        # Test a non-existent cluster ID.
+        with self.assertRaises(ClusterNotFoundFault):
+            self.api.describe_clusters('badpipelineid')
+
+        # Now create the cluster & move on.
+        cluster_id = self.create_cluster()
+
+        # Test never resized.
+        with self.assertRaises(ResizeNotFoundFault):
+            self.api.describe_resize(cluster_id)
+
+        # The cluster shows up in describe_clusters
+        clusters = self.api.describe_clusters()['DescribeClustersResponse']\
+                                               ['DescribeClustersResult']\
+                                               ['Clusters']
+        cluster_ids = [c['ClusterIdentifier'] for c in clusters]
+        self.assertIn(cluster_id, cluster_ids)
+
+        # The cluster shows up in describe_clusters w/ id
+        response = self.api.describe_clusters(cluster_id)
+        self.assertEqual(response['DescribeClustersResponse']\
+                         ['DescribeClustersResult']['Clusters'][0]\
+                         ['ClusterIdentifier'], cluster_id)
+
+        snapshot_id = "snap-%s" % cluster_id
+
+        # Test creating a snapshot.
+        response = self.api.create_cluster_snapshot(snapshot_id, cluster_id)
+        self.assertEqual(response['CreateClusterSnapshotResponse']\
+                         ['CreateClusterSnapshotResult']['Snapshot']\
+                         ['SnapshotIdentifier'], snapshot_id)
+        self.assertEqual(response['CreateClusterSnapshotResponse']\
+                         ['CreateClusterSnapshotResult']['Snapshot']\
+                         ['Status'], 'creating')
+        self.addCleanup(self.api.delete_cluster_snapshot, snapshot_id)
+
+        # More waiting. :(
+        time.sleep(self.wait_time)
+
+        # Describe the snapshots.
+        response = self.api.describe_cluster_snapshots(
+            cluster_identifier=cluster_id
+        )
+        snap = response['DescribeClusterSnapshotsResponse']\
+                       ['DescribeClusterSnapshotsResult']['Snapshots'][-1]
+        self.assertEqual(snap['SnapshotType'], 'manual')
+        self.assertEqual(snap['DBName'], self.db_name)
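+        # A possible refinement (a sketch only, not exercised above): rather
+        # than sleeping for a fixed wait_time, the cluster status could be
+        # polled until it reports 'available', along the lines of:
+        #
+        #   def cluster_status(self, cluster_id):
+        #       response = self.api.describe_clusters(cluster_id)
+        #       cluster = response['DescribeClustersResponse']\
+        #                         ['DescribeClustersResult']['Clusters'][0]
+        #       return cluster['ClusterStatus']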
diff --git a/tests/integration/route53/test_zone.py b/tests/integration/route53/test_zone.py
new file mode 100644
index 0000000..1c4d6be
--- /dev/null
+++ b/tests/integration/route53/test_zone.py
@@ -0,0 +1,132 @@
+# Copyright (c) 2011 Blue Pines Technologies LLC, Brad Carleton
+# www.bluepines.org
+# Copyright (c) 2012 42 Lines Inc., Jim Browne
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+import unittest
+from boto.route53.connection import Route53Connection
+from boto.exception import TooManyRecordsException
+
+
+class TestRoute53Zone(unittest.TestCase):
+    @classmethod
+    def setUpClass(self):
+        route53 = Route53Connection()
+        zone = route53.get_zone('example.com')
+        if zone is not None:
+            zone.delete()
+        self.zone = route53.create_zone('example.com')
+
+    def test_nameservers(self):
+        self.zone.get_nameservers()
+
+    def test_a(self):
+        self.zone.add_a('example.com', '102.11.23.1', 80)
+        record = self.zone.get_a('example.com')
+        self.assertEquals(record.name, u'example.com.')
+        self.assertEquals(record.resource_records, [u'102.11.23.1'])
+        self.assertEquals(record.ttl, u'80')
+        self.zone.update_a('example.com', '186.143.32.2', '800')
+        record = self.zone.get_a('example.com')
+        self.assertEquals(record.name, u'example.com.')
+        self.assertEquals(record.resource_records, [u'186.143.32.2'])
+        self.assertEquals(record.ttl, u'800')
+
+    def test_cname(self):
+        self.zone.add_cname('www.example.com', 'webserver.example.com', 200)
+        record = self.zone.get_cname('www.example.com')
+        self.assertEquals(record.name, u'www.example.com.')
+        self.assertEquals(record.resource_records, [u'webserver.example.com.'])
+        self.assertEquals(record.ttl, u'200')
+        self.zone.update_cname('www.example.com', 'web.example.com', 45)
+        record = self.zone.get_cname('www.example.com')
+        self.assertEquals(record.name, u'www.example.com.')
+        self.assertEquals(record.resource_records, [u'web.example.com.'])
+        self.assertEquals(record.ttl, u'45')
+
+    def test_mx(self):
+        self.zone.add_mx('example.com',
+                         ['10 mx1.example.com', '20 mx2.example.com'],
+                         1000)
+        record = self.zone.get_mx('example.com')
+        self.assertEquals(set(record.resource_records),
+                          set([u'10 mx1.example.com.',
+                               u'20 mx2.example.com.']))
+        self.assertEquals(record.ttl, u'1000')
+        self.zone.update_mx('example.com',
+                            ['10 mail1.example.com', '20 mail2.example.com'],
+                            50)
+        record = self.zone.get_mx('example.com')
+        self.assertEquals(set(record.resource_records),
+                          set([u'10 mail1.example.com.',
+                               u'20 mail2.example.com.']))
+        self.assertEquals(record.ttl, u'50')
+
+    def test_get_records(self):
+        self.zone.get_records()
+
+    def test_get_nameservers(self):
+        self.zone.get_nameservers()
+
+    def test_get_zones(self):
+        route53 = Route53Connection()
+        route53.get_zones()
+
+    def test_identifiers_wrrs(self):
+        self.zone.add_a('wrr.example.com', '1.2.3.4',
+                        identifier=('foo', '20'))
+        self.zone.add_a('wrr.example.com', '5.6.7.8',
+                        identifier=('bar', '10'))
+        wrrs = self.zone.find_records('wrr.example.com', 'A', all=True)
+        self.assertEquals(len(wrrs), 2)
+        self.zone.delete_a('wrr.example.com', all=True)
+
+    def test_identifiers_lbrs(self):
+        self.zone.add_a('lbr.example.com', '4.3.2.1',
+                        identifier=('baz', 'us-east-1'))
+        self.zone.add_a('lbr.example.com', '8.7.6.5',
+                        identifier=('bam', 'us-west-1'))
+        lbrs = self.zone.find_records('lbr.example.com', 'A', all=True)
+        self.assertEquals(len(lbrs), 2)
+        self.zone.delete_a('lbr.example.com',
+                           identifier=('bam', 'us-west-1'))
+        self.zone.delete_a('lbr.example.com',
+                           identifier=('baz', 'us-east-1'))
+
+    def test_toomany_exception(self):
+        self.zone.add_a('exception.example.com', '4.3.2.1',
+                        identifier=('baz', 'us-east-1'))
+        self.zone.add_a('exception.example.com', '8.7.6.5',
+                        identifier=('bam', 'us-west-1'))
+        with self.assertRaises(TooManyRecordsException):
+            lbrs = self.zone.get_a('exception.example.com')
+        self.zone.delete_a('exception.example.com', all=True)
+
+    @classmethod
+    def tearDownClass(self):
+        self.zone.delete_a('example.com')
+        self.zone.delete_cname('www.example.com')
+        self.zone.delete_mx('example.com')
+        self.zone.delete()
+
+if __name__ == '__main__':
+    unittest.main(verbosity=3)
diff --git a/tests/integration/s3/mock_storage_service.py b/tests/integration/s3/mock_storage_service.py
index f08af79..507695b 100644
--- a/tests/integration/s3/mock_storage_service.py
+++ b/tests/integration/s3/mock_storage_service.py
@@ -29,6 +29,7 @@
 import copy
 import boto
 import base64
+import re
 
 from boto.utils import compute_md5
 from boto.s3.prefix import Prefix
@@ -64,6 +65,7 @@
         self.data = None
         self.etag = None
         self.size = None
+        self.closed = True
         self.content_encoding = None
         self.content_language = None
         self.content_type = None
@@ -104,9 +106,32 @@
         if 'Content-Language' in headers:
             self.content_language = headers['Content-Language']
 
-    def open_read(self, headers=NOT_IMPL, query_args=NOT_IMPL,
+    # Simplistic partial implementation for headers: Just supports range GETs
+    # of flavor 'Range: bytes=xyz-'.
+    def open_read(self, headers=None, query_args=NOT_IMPL,
                   override_num_retries=NOT_IMPL):
-        pass
+        if self.closed:
+            self.read_pos = 0
+        self.closed = False
+        if headers and 'Range' in headers:
+            match = re.match('bytes=([0-9]+)-$', headers['Range'])
+            if match:
+                self.read_pos = int(match.group(1))
+
+    def close(self, fast=NOT_IMPL):
+        self.closed = True
+
+    def read(self, size=0):
+        self.open_read()
+        if size == 0:
+            data = self.data[self.read_pos:]
+            self.read_pos = self.size
+        else:
+            data = self.data[self.read_pos:self.read_pos+size]
+            self.read_pos += size
+        if not data:
+            self.close()
+        return data
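+    # Illustrative use of the partial Range support above (a sketch; these
+    # mocks are only meant to cover what the tests need):
+    #
+    #   key.open_read(headers={'Range': 'bytes=10-'})
+    #   tail = key.read()   # returns self.data[10:]
+    #
+    # Any Range form other than 'bytes=N-' is ignored by open_read().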
 
     def set_contents_from_file(self, fp, headers=None, replace=NOT_IMPL,
                                cb=NOT_IMPL, num_cb=NOT_IMPL,
@@ -117,6 +142,19 @@
         self.size = len(self.data)
         self._handle_headers(headers)
 
+    def set_contents_from_stream(self, fp, headers=None, replace=NOT_IMPL,
+                               cb=NOT_IMPL, num_cb=NOT_IMPL, policy=NOT_IMPL,
+                               reduced_redundancy=NOT_IMPL, query_args=NOT_IMPL,
+                               size=NOT_IMPL):
+        self.data = ''
+        chunk = fp.read(self.BufferSize)
+        while chunk:
+            self.data += chunk
+            chunk = fp.read(self.BufferSize)
+        self.set_etag()
+        self.size = len(self.data)
+        self._handle_headers(headers)
+
     def set_contents_from_string(self, s, headers=NOT_IMPL, replace=NOT_IMPL,
                                  cb=NOT_IMPL, num_cb=NOT_IMPL, policy=NOT_IMPL,
                                  md5=NOT_IMPL, reduced_redundancy=NOT_IMPL):
@@ -133,13 +171,20 @@
         self.set_contents_from_file(fp, headers, replace, cb, num_cb,
                                     policy, md5, res_upload_handler)
         fp.close()
-    
+
     def copy(self, dst_bucket_name, dst_key, metadata=NOT_IMPL,
              reduced_redundancy=NOT_IMPL, preserve_acl=NOT_IMPL):
         dst_bucket = self.bucket.connection.get_bucket(dst_bucket_name)
         return dst_bucket.copy_key(dst_key, self.bucket.name,
                                    self.name, metadata)
 
+    @property
+    def provider(self):
+        provider = None
+        if self.bucket and self.bucket.connection:
+            provider = self.bucket.connection.provider
+        return provider
+
     def set_etag(self):
         """
         Set etag attribute by generating hex MD5 checksum on current 
@@ -193,7 +238,7 @@
                  storage_class=NOT_IMPL, preserve_acl=NOT_IMPL,
                  encrypt_key=NOT_IMPL, headers=NOT_IMPL, query_args=NOT_IMPL):
         new_key = self.new_key(key_name=new_key_name)
-        src_key = mock_connection.get_bucket(
+        src_key = self.connection.get_bucket(
             src_bucket_name).get_key(src_key_name)
         new_key.data = copy.copy(src_key.data)
         new_key.size = len(new_key.data)
@@ -205,6 +250,12 @@
     def enable_logging(self, target_bucket_prefix):
         self.logging = True
 
+    def get_logging_config(self):
+        return {"Logging": {}}
+
+    def get_versioning_status(self, headers=NOT_IMPL):
+        return False
+
     def get_acl(self, key_name='', headers=NOT_IMPL, version_id=NOT_IMPL):
         if key_name:
             # Return ACL for the key.
@@ -294,6 +345,15 @@
         self.subresources[subresource] = value
 
 
+class MockProvider(object):
+
+    def __init__(self, provider):
+        self.provider = provider
+
+    def get_provider_name(self):
+        return self.provider
+
+
 class MockConnection(object):
 
     def __init__(self, aws_access_key_id=NOT_IMPL,
@@ -303,12 +363,13 @@
                  host=NOT_IMPL, debug=NOT_IMPL,
                  https_connection_factory=NOT_IMPL,
                  calling_format=NOT_IMPL,
-                 path=NOT_IMPL, provider=NOT_IMPL,
+                 path=NOT_IMPL, provider='s3',
                  bucket_class=NOT_IMPL):
         self.buckets = {}
+        self.provider = MockProvider(provider)
 
     def create_bucket(self, bucket_name, headers=NOT_IMPL, location=NOT_IMPL,
-                      policy=NOT_IMPL):
+                      policy=NOT_IMPL, storage_class=NOT_IMPL):
         if bucket_name in self.buckets:
             raise boto.exception.StorageCreateError(
                 409, 'BucketAlreadyOwnedByYou',
@@ -343,10 +404,12 @@
     delim = '/'
 
     def __init__(self, scheme, bucket_name=None, object_name=None,
-                 debug=NOT_IMPL, suppress_consec_slashes=NOT_IMPL):
+                 debug=NOT_IMPL, suppress_consec_slashes=NOT_IMPL,
+                 version_id=None, generation=None, is_latest=False):
         self.scheme = scheme
         self.bucket_name = bucket_name
         self.object_name = object_name
+        self.suppress_consec_slashes = suppress_consec_slashes
         if self.bucket_name and self.object_name:
             self.uri = ('%s://%s/%s' % (self.scheme, self.bucket_name,
                                         self.object_name))
@@ -355,6 +418,15 @@
         else:
             self.uri = ('%s://' % self.scheme)
 
+        self.version_id = version_id
+        self.generation = generation and int(generation)
+        self.is_version_specific = (bool(self.generation)
+                                    or bool(self.version_id))
+        self.is_latest = is_latest
+        if bucket_name and object_name:
+            self.versionless_uri = '%s://%s/%s' % (scheme, bucket_name,
+                                                   object_name)
+
     def __repr__(self):
         """Returns string representation of URI."""
         return self.uri
@@ -366,18 +438,36 @@
         return boto.provider.Provider('aws').canned_acls
 
     def clone_replace_name(self, new_name):
-        return MockBucketStorageUri(self.scheme, self.bucket_name, new_name)
+        return self.__class__(self.scheme, self.bucket_name, new_name)
+
+    def clone_replace_key(self, key):
+        return self.__class__(
+                key.provider.get_provider_name(),
+                bucket_name=key.bucket.name,
+                object_name=key.name,
+                suppress_consec_slashes=self.suppress_consec_slashes,
+                version_id=getattr(key, 'version_id', None),
+                generation=getattr(key, 'generation', None),
+                is_latest=getattr(key, 'is_latest', None))
 
     def connect(self, access_key_id=NOT_IMPL, secret_access_key=NOT_IMPL):
         return mock_connection
 
     def create_bucket(self, headers=NOT_IMPL, location=NOT_IMPL,
-                      policy=NOT_IMPL):
+                      policy=NOT_IMPL, storage_class=NOT_IMPL):
         return self.connect().create_bucket(self.bucket_name)
 
     def delete_bucket(self, headers=NOT_IMPL):
         return self.connect().delete_bucket(self.bucket_name)
 
+    def get_versioning_config(self, headers=NOT_IMPL):
+        return self.get_bucket().get_versioning_status(headers)
+
+    def has_version(self):
+        return (issubclass(type(self), MockBucketStorageUri)
+                and ((self.version_id is not None)
+                     or (self.generation is not None)))
+
     def delete_key(self, validate=NOT_IMPL, headers=NOT_IMPL,
                    version_id=NOT_IMPL, mfa_token=NOT_IMPL):
         self.get_bucket().delete_key(self.object_name)
@@ -390,6 +480,10 @@
                        headers=NOT_IMPL, version_id=NOT_IMPL):
         self.get_bucket().enable_logging(target_bucket)
 
+    def get_logging_config(self, validate=NOT_IMPL, headers=NOT_IMPL,
+                           version_id=NOT_IMPL):
+        return self.get_bucket().get_logging_config()
+
     def equals(self, uri):
         return self.uri == uri.uri
 
@@ -410,7 +504,8 @@
     def get_all_keys(self, validate=NOT_IMPL, headers=NOT_IMPL):
         return self.get_bucket().get_all_keys(self)
 
-    def list_bucket(self, prefix='', delimiter='', headers=NOT_IMPL):
+    def list_bucket(self, prefix='', delimiter='', headers=NOT_IMPL,
+                    all_versions=NOT_IMPL):
         return self.get_bucket().list(prefix=prefix, delimiter=delimiter)
 
     def get_bucket(self, validate=NOT_IMPL, headers=NOT_IMPL):
@@ -469,7 +564,7 @@
     def copy_key(self, src_bucket_name, src_key_name, metadata=NOT_IMPL,
                  src_version_id=NOT_IMPL, storage_class=NOT_IMPL,
                  preserve_acl=NOT_IMPL, encrypt_key=NOT_IMPL, headers=NOT_IMPL,
-                 query_args=NOT_IMPL):
+                 query_args=NOT_IMPL, src_generation=NOT_IMPL):
         dst_bucket = self.get_bucket()
         return dst_bucket.copy_key(new_key_name=self.object_name,
                                    src_bucket_name=src_bucket_name,
diff --git a/tests/integration/s3/test_bucket.py b/tests/integration/s3/test_bucket.py
index 2611be0..6d29525 100644
--- a/tests/integration/s3/test_bucket.py
+++ b/tests/integration/s3/test_bucket.py
@@ -17,7 +17,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -32,8 +32,13 @@
 from boto.exception import S3ResponseError
 from boto.s3.connection import S3Connection
 from boto.s3.bucketlogging import BucketLogging
+from boto.s3.lifecycle import Lifecycle
+from boto.s3.lifecycle import Transition
+from boto.s3.lifecycle import Rule
 from boto.s3.acl import Grant
 from boto.s3.tagging import Tags, TagSet
+from boto.s3.lifecycle import Expiration
+from boto.s3.website import RedirectLocation
 
 
 class S3BucketTest (unittest.TestCase):
@@ -84,9 +89,9 @@
     def test_logging(self):
         # use self.bucket as the target bucket so that teardown
         # will delete any log files that make it into the bucket
-        # automatically and all we have to do is delete the 
+        # automatically and all we have to do is delete the
         # source bucket.
-        sb_name = "src-" + self.bucket_name 
+        sb_name = "src-" + self.bucket_name
         sb = self.conn.create_bucket(sb_name)
         # grant log write perms to target bucket using canned-acl
         self.bucket.set_acl("log-delivery-write")
@@ -148,3 +153,111 @@
         self.assertEqual(response[0][0].value, 'avalue')
         self.assertEqual(response[0][1].key, 'anotherkey')
         self.assertEqual(response[0][1].value, 'anothervalue')
+
+    def test_website_configuration(self):
+        response = self.bucket.configure_website('index.html')
+        self.assertTrue(response)
+        config = self.bucket.get_website_configuration()
+        self.assertEqual(config, {'WebsiteConfiguration':
+                                  {'IndexDocument': {'Suffix': 'index.html'}}})
+        config2, xml = self.bucket.get_website_configuration_with_xml()
+        self.assertEqual(config, config2)
+        self.assertTrue('<Suffix>index.html</Suffix>' in xml, xml)
+
+    def test_website_redirect_all_requests(self):
+        response = self.bucket.configure_website(
+            redirect_all_requests_to=RedirectLocation('example.com'))
+        config = self.bucket.get_website_configuration()
+        self.assertEqual(config, {
+            'WebsiteConfiguration': {
+                'RedirectAllRequestsTo': {
+                    'HostName': 'example.com'}}})
+
+        # Can configure the protocol as well.
+        response = self.bucket.configure_website(
+            redirect_all_requests_to=RedirectLocation('example.com', 'https'))
+        config = self.bucket.get_website_configuration()
+        self.assertEqual(config, {
+            'WebsiteConfiguration': {'RedirectAllRequestsTo': {
+                'HostName': 'example.com',
+                'Protocol': 'https',
+            }}}
+        )
+
+    def test_lifecycle(self):
+        lifecycle = Lifecycle()
+        lifecycle.add_rule('myid', '', 'Enabled', 30)
+        self.assertTrue(self.bucket.configure_lifecycle(lifecycle))
+        response = self.bucket.get_lifecycle_config()
+        self.assertEqual(len(response), 1)
+        actual_lifecycle = response[0]
+        self.assertEqual(actual_lifecycle.id, 'myid')
+        self.assertEqual(actual_lifecycle.prefix, '')
+        self.assertEqual(actual_lifecycle.status, 'Enabled')
+        self.assertEqual(actual_lifecycle.transition, None)
+
+    def test_lifecycle_with_glacier_transition(self):
+        lifecycle = Lifecycle()
+        transition = Transition(days=30, storage_class='GLACIER')
+        rule = Rule('myid', prefix='', status='Enabled', expiration=None,
+                    transition=transition)
+        lifecycle.append(rule)
+        self.assertTrue(self.bucket.configure_lifecycle(lifecycle))
+        response = self.bucket.get_lifecycle_config()
+        transition = response[0].transition
+        self.assertEqual(transition.days, 30)
+        self.assertEqual(transition.storage_class, 'GLACIER')
+        self.assertEqual(transition.date, None)
+
+    def test_lifecycle_multi(self):
+        date = '2022-10-12T00:00:00.000Z'
+        sc = 'GLACIER'
+        lifecycle = Lifecycle()
+        lifecycle.add_rule("1", "1/", "Enabled", 1)
+        lifecycle.add_rule("2", "2/", "Enabled", Expiration(days=2))
+        lifecycle.add_rule("3", "3/", "Enabled", Expiration(date=date))
+        lifecycle.add_rule("4", "4/", "Enabled", None,
+            Transition(days=4, storage_class=sc))
+        lifecycle.add_rule("5", "5/", "Enabled", None,
+            Transition(date=date, storage_class=sc))
+        # set the lifecycle
+        self.bucket.configure_lifecycle(lifecycle)
+        # read the lifecycle back
+        readlifecycle = self.bucket.get_lifecycle_config()
+        for rule in readlifecycle:
+            if rule.id == "1":
+                self.assertEqual(rule.prefix, "1/")
+                self.assertEqual(rule.expiration.days, 1)
+            elif rule.id == "2":
+                self.assertEqual(rule.prefix, "2/")
+                self.assertEqual(rule.expiration.days, 2)
+            elif rule.id == "3":
+                self.assertEqual(rule.prefix, "3/")
+                self.assertEqual(rule.expiration.date, date)
+            elif rule.id == "4":
+                self.assertEqual(rule.prefix, "4/")
+                self.assertEqual(rule.transition.days, 4)
+                self.assertEqual(rule.transition.storage_class, sc)
+            elif rule.id == "5":
+                self.assertEqual(rule.prefix, "5/")
+                self.assertEqual(rule.transition.date, date)
+                self.assertEqual(rule.transition.storage_class, sc)
+            else:
+                self.fail("unexpected id %s" % rule.id)
+
+    def test_lifecycle_jp(self):
+        # test lifecycle with Japanese prefix
+        name = "Japanese files"
+        prefix = u"日本語/"
+        days = 30
+        lifecycle = Lifecycle()
+        lifecycle.add_rule(name, prefix, "Enabled", days)
+        # set the lifecycle
+        self.bucket.configure_lifecycle(lifecycle)
+        # read the lifecycle back
+        readlifecycle = self.bucket.get_lifecycle_config()
+        for rule in readlifecycle:
+            self.assertEqual(rule.id, name)
+            self.assertEqual(rule.expiration.days, days)
+            # Note: boto appears to send the prefix correctly, but AWS seems
+            # to mishandle the multibyte prefix, so this check is disabled:
+            # self.assertEqual(rule.prefix, prefix)
diff --git a/tests/integration/s3/test_connection.py b/tests/integration/s3/test_connection.py
index b673303..5d7473e 100644
--- a/tests/integration/s3/test_connection.py
+++ b/tests/integration/s3/test_connection.py
@@ -99,7 +99,8 @@
         k.name = 'foo/bar'
         k.set_contents_from_string(s1, headers)
         k.name = 'foo/bas'
-        k.set_contents_from_filename('foobar')
+        size = k.set_contents_from_filename('foobar')
+        assert size == 42
         k.name = 'foo/bat'
         k.set_contents_from_string(s1)
         k.name = 'fie/bar'
diff --git a/tests/integration/s3/test_key.py b/tests/integration/s3/test_key.py
index 6aecb22..f329e06 100644
--- a/tests/integration/s3/test_key.py
+++ b/tests/integration/s3/test_key.py
@@ -1,5 +1,3 @@
-# -*- coding: utf-8 -*-
-
 # Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
 # All rights reserved.
 #
@@ -17,7 +15,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -26,14 +24,15 @@
 Some unit tests for S3 Key
 """
 
-import unittest
+from tests.unit import unittest
 import time
 import StringIO
 from boto.s3.connection import S3Connection
 from boto.s3.key import Key
 from boto.exception import S3ResponseError
 
-class S3KeyTest (unittest.TestCase):
+
+class S3KeyTest(unittest.TestCase):
     s3 = True
 
     def setUp(self):
@@ -75,12 +74,12 @@
         kn = self.bucket.new_key("k")
         ks = kn.get_contents_as_string()
         self.assertEqual(ks, "")
-        
+
     def test_set_contents_as_file(self):
         content="01234567890123456789"
         sfp = StringIO.StringIO(content)
 
-        # fp is set at 0 for just opened (for read) files. 
+        # fp is set at 0 for just opened (for read) files.
         # set_contents should write full content to key.
         k = self.bucket.new_key("k")
         k.set_contents_from_file(sfp)
@@ -114,7 +113,7 @@
         content="01234567890123456789"
         sfp = StringIO.StringIO(content)
 
-        # fp is set at 0 for just opened (for read) files. 
+        # fp is set at 0 for just opened (for read) files.
         # set_contents should write full content to key.
         k = self.bucket.new_key("k")
         good_md5 = k.compute_md5(sfp)
@@ -153,7 +152,7 @@
         k.set_contents_from_file(sfp)
         kn = self.bucket.new_key("k")
         s = kn.get_contents_as_string()
-        self.assertEqual(kn.md5, k.md5)       
+        self.assertEqual(kn.md5, k.md5)
         self.assertEqual(s, content)
 
     def test_file_callback(self):
@@ -316,7 +315,7 @@
         # no more than 10 times
         # last time always 20 bytes
         sfp.seek(0)
-        self.my_cb_cnt = 0 
+        self.my_cb_cnt = 0
         self.my_cb_last = None
         k = self.bucket.new_key("k")
         k.BufferSize = 2
@@ -335,7 +334,7 @@
         # no more than 1000 times
         # last time always 20 bytes
         sfp.seek(0)
-        self.my_cb_cnt = 0 
+        self.my_cb_cnt = 0
         self.my_cb_last = None
         k = self.bucket.new_key("k")
         k.BufferSize = 2
@@ -350,3 +349,37 @@
         self.assertTrue(self.my_cb_cnt <= 1000)
         self.assertEqual(self.my_cb_last, 20)
         self.assertEqual(s, content)
+
+    def test_website_redirects(self):
+        self.bucket.configure_website('index.html')
+        key = self.bucket.new_key('redirect-key')
+        self.assertTrue(key.set_redirect('http://www.amazon.com/'))
+        self.assertEqual(key.get_redirect(), 'http://www.amazon.com/')
+
+        self.assertTrue(key.set_redirect('http://aws.amazon.com/'))
+        self.assertEqual(key.get_redirect(), 'http://aws.amazon.com/')
+
+    def test_website_redirect_none_configured(self):
+        key = self.bucket.new_key('redirect-key')
+        key.set_contents_from_string('')
+        self.assertEqual(key.get_redirect(), None)
+
+    def test_website_redirect_with_bad_value(self):
+        self.bucket.configure_website('index.html')
+        key = self.bucket.new_key('redirect-key')
+        with self.assertRaises(key.provider.storage_response_error):
+            # Must start with a / or http
+            key.set_redirect('ftp://ftp.example.org')
+        with self.assertRaises(key.provider.storage_response_error):
+            # Must start with a / or http
+            key.set_redirect('')
+
+    def test_setting_date(self):
+        key = self.bucket.new_key('test_date')
+        # This should actually set x-amz-meta-date & not fail miserably.
+        key.set_metadata('date', '20130524T155935Z')
+        key.set_contents_from_string('Some text here.')
+
+        check = self.bucket.get_key('test_date')
+        self.assertEqual(check.get_metadata('date'), u'20130524T155935Z')
+        self.assertTrue('x-amz-meta-date' in check._get_remote_metadata())
diff --git a/tests/integration/sns/test_sns_sqs_subscription.py b/tests/integration/sns/test_sns_sqs_subscription.py
new file mode 100644
index 0000000..0cb8b36
--- /dev/null
+++ b/tests/integration/sns/test_sns_sqs_subscription.py
@@ -0,0 +1,101 @@
+# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2010, Eucalyptus Systems, Inc.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Unit tests for subscribing SQS queues to SNS topics.
+"""
+
+import hashlib
+import time
+import json
+
+from tests.unit import unittest
+
+from boto.sqs.connection import SQSConnection
+from boto.sns.connection import SNSConnection
+
+class SNSSubscribeSQSTest(unittest.TestCase):
+
+    sqs = True
+    sns = True
+
+    def setUp(self):
+        self.sqsc = SQSConnection()
+        self.snsc = SNSConnection()
+
+    def get_policy_statements(self, queue):
+        attrs = queue.get_attributes('Policy')
+        policy = json.loads(attrs.get('Policy', "{}"))
+        return policy.get('Statement', [])
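+    # When a subscription exists, the queue's Policy attribute is a JSON
+    # policy document shaped roughly like (a sketch; values illustrative):
+    #
+    #   {"Version": "2008-10-17",
+    #    "Statement": [{"Sid": "<md5 of topic_arn + queue_arn>",
+    #                   "Effect": "Allow", ...}]}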
+
+    def test_correct_sid(self):
+        now = time.time()
+        topic_name = queue_name = "test_correct_sid%d" % (now)
+
+        timeout = 60
+        queue = self.sqsc.create_queue(queue_name, timeout)
+        self.addCleanup(self.sqsc.delete_queue, queue, True)
+        queue_arn = queue.arn
+
+        topic = self.snsc.create_topic(topic_name)
+        topic_arn = topic['CreateTopicResponse']['CreateTopicResult']\
+                ['TopicArn']
+        self.addCleanup(self.snsc.delete_topic, topic_arn)
+
+        expected_sid = hashlib.md5(topic_arn + queue_arn).hexdigest()
+        resp = self.snsc.subscribe_sqs_queue(topic_arn, queue)
+
+        found_expected_sid = False
+        statements = self.get_policy_statements(queue)
+        for statement in statements:
+            if statement['Sid'] == expected_sid:
+                found_expected_sid = True
+                break
+        self.assertTrue(found_expected_sid)
+
+    def test_idempotent_subscribe(self):
+        now = time.time()
+        topic_name = queue_name = "test_idempotent_subscribe%d" % (now)
+
+        timeout = 60
+        queue = self.sqsc.create_queue(queue_name, timeout)
+        self.addCleanup(self.sqsc.delete_queue, queue, True)
+        initial_statements = self.get_policy_statements(queue)
+        queue_arn = queue.arn
+
+        topic = self.snsc.create_topic(topic_name)
+        topic_arn = topic['CreateTopicResponse']['CreateTopicResult']\
+                ['TopicArn']
+        self.addCleanup(self.snsc.delete_topic, topic_arn)
+
+        resp = self.snsc.subscribe_sqs_queue(topic_arn, queue)
+        time.sleep(3)
+        first_subscribe_statements = self.get_policy_statements(queue)
+        self.assertEqual(len(first_subscribe_statements),
+                len(initial_statements) + 1)
+
+        resp2 = self.snsc.subscribe_sqs_queue(topic_arn, queue)
+        time.sleep(3)
+        second_subscribe_statements = self.get_policy_statements(queue)
+        self.assertEqual(len(second_subscribe_statements),
+                len(first_subscribe_statements))
diff --git a/tests/integration/sqs/test_connection.py b/tests/integration/sqs/test_connection.py
index 4851be9..9b2ab59 100644
--- a/tests/integration/sqs/test_connection.py
+++ b/tests/integration/sqs/test_connection.py
@@ -25,9 +25,12 @@
 Some unit tests for the SQSConnection
 """
 
-import unittest
 import time
+from threading import Timer
+from tests.unit import unittest
+
 from boto.sqs.connection import SQSConnection
+from boto.sqs.message import Message
 from boto.sqs.message import MHMessage
 from boto.exception import SQSError
 
@@ -54,17 +57,18 @@
         # now create one that should work and should be unique (i.e. a new one)
         queue_name = 'test%d' % int(time.time())
         timeout = 60
-        queue = c.create_queue(queue_name, timeout)
+        queue_1 = c.create_queue(queue_name, timeout)
+        self.addCleanup(c.delete_queue, queue_1, True)
         time.sleep(60)
         rs = c.get_all_queues()
         i = 0
         for q in rs:
             i += 1
         assert i == num_queues + 1
-        assert queue.count_slow() == 0
+        assert queue_1.count_slow() == 0
 
         # check the visibility timeout
-        t = queue.get_timeout()
+        t = queue_1.get_timeout()
         assert t == timeout, '%d != %d' % (t, timeout)
 
         # now try to get queue attributes
@@ -80,73 +84,158 @@
 
         # now change the visibility timeout
         timeout = 45
-        queue.set_timeout(timeout)
+        queue_1.set_timeout(timeout)
         time.sleep(60)
-        t = queue.get_timeout()
+        t = queue_1.get_timeout()
         assert t == timeout, '%d != %d' % (t, timeout)
 
         # now add a message
         message_body = 'This is a test\n'
-        message = queue.new_message(message_body)
-        queue.write(message)
+        message = queue_1.new_message(message_body)
+        queue_1.write(message)
         time.sleep(60)
-        assert queue.count_slow() == 1
+        assert queue_1.count_slow() == 1
         time.sleep(90)
 
         # now read the message from the queue with a 10 second timeout
-        message = queue.read(visibility_timeout=10)
+        message = queue_1.read(visibility_timeout=10)
         assert message
         assert message.get_body() == message_body
 
         # now immediately try another read, shouldn't find anything
-        message = queue.read()
+        message = queue_1.read()
         assert message == None
 
         # now wait 30 seconds and try again
         time.sleep(30)
-        message = queue.read()
+        message = queue_1.read()
         assert message
 
         # now delete the message
-        queue.delete_message(message)
+        queue_1.delete_message(message)
         time.sleep(30)
-        assert queue.count_slow() == 0
+        assert queue_1.count_slow() == 0
 
         # try a batch write
         num_msgs = 10
         msgs = [(i, 'This is message %d' % i, 0) for i in range(num_msgs)]
-        queue.write_batch(msgs)
+        queue_1.write_batch(msgs)
 
         # try to delete all of the messages using batch delete
         deleted = 0
         while deleted < num_msgs:
             time.sleep(5)
-            msgs = queue.get_messages(num_msgs)
+            msgs = queue_1.get_messages(num_msgs)
             if msgs:
-                br = queue.delete_message_batch(msgs)
+                br = queue_1.delete_message_batch(msgs)
                 deleted += len(br.results)
 
         # create another queue so we can test force deletion
         # we will also test MHMessage with this queue
         queue_name = 'test%d' % int(time.time())
         timeout = 60
-        queue = c.create_queue(queue_name, timeout)
-        queue.set_message_class(MHMessage)
+        queue_2 = c.create_queue(queue_name, timeout)
+        self.addCleanup(c.delete_queue, queue_2, True)
+        queue_2.set_message_class(MHMessage)
         time.sleep(30)
 
         # now add a couple of messages
-        message = queue.new_message()
+        message = queue_2.new_message()
         message['foo'] = 'bar'
-        queue.write(message)
+        queue_2.write(message)
         message_body = {'fie': 'baz', 'foo': 'bar'}
-        message = queue.new_message(body=message_body)
-        queue.write(message)
+        message = queue_2.new_message(body=message_body)
+        queue_2.write(message)
         time.sleep(30)
 
-        m = queue.read()
+        m = queue_2.read()
         assert m['foo'] == 'bar'
 
-        # now delete that queue and messages
-        c.delete_queue(queue, True)
-
         print '--- tests completed ---'
+
+    def test_sqs_timeout(self):
+        c = SQSConnection()
+        queue_name = 'test_sqs_timeout_%s' % int(time.time())
+        queue = c.create_queue(queue_name)
+        self.addCleanup(c.delete_queue, queue, True)
+        start = time.time()
+        poll_seconds = 2
+        response = queue.read(visibility_timeout=None,
+                              wait_time_seconds=poll_seconds)
+        total_time = time.time() - start
+        self.assertTrue(total_time > poll_seconds,
+                        "SQS queue did not block for at least %s seconds: %s" %
+                        (poll_seconds, total_time))
+        self.assertIsNone(response)
+
+        # Now that there's an element in the queue, we should not block for 2
+        # seconds.
+        c.send_message(queue, 'test message')
+        start = time.time()
+        poll_seconds = 2
+        message = c.receive_message(
+            queue, number_messages=1,
+            visibility_timeout=None, attributes=None,
+            wait_time_seconds=poll_seconds)[0]
+        total_time = time.time() - start
+        self.assertTrue(total_time < poll_seconds,
+                        "SQS queue blocked longer than %s seconds: %s" %
+                        (poll_seconds, total_time))
+        self.assertEqual(message.get_body(), 'test message')
+
+        attrs = c.get_queue_attributes(queue, 'ReceiveMessageWaitTimeSeconds')
+        self.assertEqual(attrs['ReceiveMessageWaitTimeSeconds'], '0')
+
+    def test_sqs_longpoll(self):
+        c = SQSConnection()
+        queue_name = 'test_sqs_longpoll_%s' % int(time.time())
+        queue = c.create_queue(queue_name)
+        self.addCleanup(c.delete_queue, queue, True)
+        messages = []
+
+        # The basic idea is to spawn a timer thread that will put something
+        # on the queue in 5 seconds and verify that our long polling client
+        # sees the message after waiting for approximately that long.
+        def send_message():
+            messages.append(
+                queue.write(queue.new_message('this is a test message')))
+
+        t = Timer(5.0, send_message)
+        t.start()
+        self.addCleanup(t.join)
+
+        start = time.time()
+        response = queue.read(wait_time_seconds=10)
+        end = time.time()
+
+        t.join()
+        self.assertEqual(response.id, messages[0].id)
+        self.assertEqual(response.get_body(), messages[0].get_body())
+        # The timer thread should send the message in 5 seconds, so
+        # we're giving +- .5 seconds for the total time the queue
+        # was blocked on the read call.
+        self.assertTrue(4.5 <= (end - start) <= 5.5)
+
+    def test_queue_deletion_affects_full_queues(self):
+        conn = SQSConnection()
+        initial_count = len(conn.get_all_queues())
+
+        empty = conn.create_queue('empty%d' % int(time.time()))
+        full = conn.create_queue('full%d' % int(time.time()))
+        time.sleep(60)
+        # Make sure they're both around.
+        self.assertEqual(len(conn.get_all_queues()), initial_count + 2)
+
+        # Put a message in the full queue.
+        m1 = Message()
+        m1.set_body('This is a test message.')
+        full.write(m1)
+        self.assertEqual(full.count(), 1)
+
+        self.assertTrue(conn.delete_queue(empty))
+        # Here's the regression for the docs. SQS will delete a queue with
+        # messages in it, no ``force_deletion`` needed.
+        self.assertTrue(conn.delete_queue(full))
+        # Wait long enough for SQS to finally remove the queues.
+        time.sleep(90)
+        self.assertEqual(len(conn.get_all_queues()), initial_count)
diff --git a/tests/integration/storage_uri/__init__.py b/tests/integration/storage_uri/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/integration/storage_uri/__init__.py
diff --git a/tests/integration/storage_uri/test_storage_uri.py b/tests/integration/storage_uri/test_storage_uri.py
new file mode 100644
index 0000000..55dac1a
--- /dev/null
+++ b/tests/integration/storage_uri/test_storage_uri.py
@@ -0,0 +1,63 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Some unit tests for StorageUri
+"""
+
+from tests.unit import unittest
+import time
+import boto
+from boto.s3.connection import S3Connection, Location
+
+
+class StorageUriTest(unittest.TestCase):
+    s3 = True
+
+    def nuke_bucket(self, bucket):
+        for key in bucket:
+            key.delete()
+
+        bucket.delete()
+
+    def test_storage_uri_regionless(self):
+        # First, create a bucket in a different region.
+        conn = S3Connection(
+            host='s3-us-west-2.amazonaws.com'
+        )
+        bucket_name = 'keytest-%d' % int(time.time())
+        bucket = conn.create_bucket(bucket_name, location=Location.USWest2)
+        self.addCleanup(self.nuke_bucket, bucket)
+
+        # Now use ``storage_uri`` to try to make a new key.
+        # Prior to the fix, this raised a 301 (PermanentRedirect) error.
+        suri = boto.storage_uri('s3://%s/test' % bucket_name)
+        the_key = suri.new_key()
+        the_key.key = 'Test301'
+        the_key.set_contents_from_string(
+            'This should store in a different region.'
+        )
+
+        # Check it a different way.
+        alt_conn = boto.connect_s3(host='s3-us-west-2.amazonaws.com')
+        alt_bucket = alt_conn.get_bucket(bucket_name)
+        alt_key = alt_bucket.get_key('Test301')
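+        # Minimal sanity check, assuming the write above succeeded: the key
+        # fetched via the regional endpoint should have the same body.
+        self.assertEqual(alt_key.get_contents_as_string(),
+                         'This should store in a different region.')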
diff --git a/tests/integration/sts/test_session_token.py b/tests/integration/sts/test_session_token.py
index fa33d5f..35d42ca 100644
--- a/tests/integration/sts/test_session_token.py
+++ b/tests/integration/sts/test_session_token.py
@@ -27,10 +27,12 @@
 import unittest
 import time
 import os
+from boto.exception import BotoServerError
 from boto.sts.connection import STSConnection
 from boto.sts.credentials import Credentials
 from boto.s3.connection import S3Connection
 
+
 class SessionTokenTest (unittest.TestCase):
     sts = True
 
@@ -63,3 +65,17 @@
         buckets = s3.get_all_buckets()
 
         print '--- tests completed ---'
+
+    def test_assume_role_with_web_identity(self):
+        c = STSConnection()
+
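+        # The role ARN and token below are deliberately not valid for this
+        # account, so the call is expected to be rejected with a 403.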
+        try:
+            creds = c.assume_role_with_web_identity(
+                'arn:aws:s3:::my_corporate_bucket/*',
+                'guestuser',
+                'b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9',
+                provider_id='www.amazon.com',
+            )
+        except BotoServerError as err:
+            self.assertEqual(err.status, 403)
+            self.assertTrue('Not authorized' in err.body)
diff --git a/tests/integration/support/__init__.py b/tests/integration/support/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/integration/support/__init__.py
diff --git a/tests/integration/support/test_cert_verification.py b/tests/integration/support/test_cert_verification.py
new file mode 100644
index 0000000..586cc71
--- /dev/null
+++ b/tests/integration/support/test_cert_verification.py
@@ -0,0 +1,35 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from tests.unit import unittest
+import boto.support
+
+
+class CertVerificationTest(unittest.TestCase):
+
+    support = True
+    ssl = True
+
+    def test_certs(self):
+        for region in boto.support.regions():
+            c = region.connect()
+            c.describe_services()
diff --git a/tests/integration/support/test_layer1.py b/tests/integration/support/test_layer1.py
new file mode 100644
index 0000000..6b2b65d
--- /dev/null
+++ b/tests/integration/support/test_layer1.py
@@ -0,0 +1,76 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import unittest
+import time
+
+from boto.support.layer1 import SupportConnection
+from boto.support import exceptions
+
+
+class TestSupportLayer1Management(unittest.TestCase):
+    support = True
+
+    def setUp(self):
+        self.api = SupportConnection()
+        self.wait_time = 5
+
+    def test_as_much_as_possible_before_teardown(self):
+        cases = self.api.describe_cases()
+        preexisting_count = len(cases.get('cases', []))
+
+        services = self.api.describe_services()
+        self.assertTrue('services' in services)
+        service_codes = [serv['code'] for serv in services['services']]
+        self.assertTrue('amazon-cloudsearch' in service_codes)
+
+        severity = self.api.describe_severity_levels()
+        self.assertTrue('severityLevels' in severity)
+        severity_codes = [sev['code'] for sev in severity['severityLevels']]
+        self.assertTrue('low' in severity_codes)
+
+        case_1 = self.api.create_case(
+            subject='TEST: I am a test case.',
+            service_code='amazon-cloudsearch',
+            category_code='other',
+            communication_body="This is a test problem",
+            severity_code='low',
+            language='en'
+        )
+        time.sleep(self.wait_time)
+        case_id = case_1['caseId']
+
+        new_cases = self.api.describe_cases()
+        self.assertTrue(len(new_cases['cases']) > preexisting_count)
+
+        result = self.api.add_communication_to_case(
+            communication_body="This is a test solution.",
+            case_id=case_id
+        )
+        self.assertTrue(result.get('result', False))
+        time.sleep(self.wait_time)
+
+        final_cases = self.api.describe_cases(case_id_list=[case_id])
+        comms = final_cases['cases'][0]['recentCommunications']\
+                           ['communications']
+        self.assertEqual(len(comms), 2)
+
+        close_result = self.api.resolve_case(case_id=case_id)
diff --git a/tests/test.py b/tests/test.py
index 68e7af2..d9781ec 100755
--- a/tests/test.py
+++ b/tests/test.py
@@ -29,7 +29,10 @@
 
 
 def main():
-    parser = argparse.ArgumentParser()
+    description = ("Runs boto unit and/or integration tests. "
+                   "Arguments will be passed on to nosetests. "
+                   "See nosetests --help for more information.")
+    parser = argparse.ArgumentParser(description=description)
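+    # Example (a sketch): "python tests/test.py -t s3" runs tests tagged
+    # 's3'; arguments not recognized here are passed on to nosetests.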
     parser.add_argument('-t', '--service-tests', action="append", default=[],
                         help="Run tests for a given service.  This will "
                         "run any test tagged with the specified value, "
diff --git a/tests/unit/__init__.py b/tests/unit/__init__.py
index 4e52b76..cea79eb 100644
--- a/tests/unit/__init__.py
+++ b/tests/unit/__init__.py
@@ -22,6 +22,9 @@
             https_connection_factory=self.https_connection_factory,
             aws_access_key_id='aws_access_key_id',
             aws_secret_access_key='aws_secret_access_key')
+        self.initialize_service_connection()
+
+    def initialize_service_connection(self):
         self.actual_request = None
         self.original_mexe = self.service_connection._mexe
         self.service_connection._mexe = self._mexe_spy
@@ -45,6 +48,7 @@
         response.reason = reason
 
         response.getheaders.return_value = header
+        response.msg = dict(header)
         def overwrite_header(arg, default=None):
             header_dict = dict(header)
             if header_dict.has_key(arg):
@@ -52,7 +56,7 @@
             else:
                 return default
         response.getheader.side_effect = overwrite_header
-        
+
         return response
 
     def assert_request_parameters(self, params, ignore_params_values=None):
diff --git a/tests/unit/auth/__init__.py b/tests/unit/auth/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/unit/auth/__init__.py
diff --git a/tests/unit/auth/test_sigv4.py b/tests/unit/auth/test_sigv4.py
new file mode 100644
index 0000000..b11b145
--- /dev/null
+++ b/tests/unit/auth/test_sigv4.py
@@ -0,0 +1,111 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from mock import Mock
+from tests.unit import unittest
+
+from boto.auth import HmacAuthV4Handler
+from boto.connection import HTTPRequest
+
+
+class TestSigV4Handler(unittest.TestCase):
+    def setUp(self):
+        self.provider = Mock()
+        self.provider.access_key = 'access_key'
+        self.provider.secret_key = 'secret_key'
+        self.request = HTTPRequest(
+            'POST', 'https', 'glacier.us-east-1.amazonaws.com', 443,
+            '/-/vaults/foo/archives', None, {},
+            {'x-amz-glacier-version': '2012-06-01'}, '')
+
+    def test_inner_whitespace_is_collapsed(self):
+        auth = HmacAuthV4Handler('glacier.us-east-1.amazonaws.com',
+                                 Mock(), self.provider)
+        self.request.headers['x-amz-archive-description'] = 'two  spaces'
+        headers = auth.headers_to_sign(self.request)
+        self.assertEqual(headers, {'Host': 'glacier.us-east-1.amazonaws.com',
+                                   'x-amz-archive-description': 'two  spaces',
+                                   'x-amz-glacier-version': '2012-06-01'})
+        # Note the double space in "two  spaces" is collapsed to one below.
+        self.assertEqual(auth.canonical_headers(headers),
+                         'host:glacier.us-east-1.amazonaws.com\n'
+                         'x-amz-archive-description:two spaces\n'
+                         'x-amz-glacier-version:2012-06-01')
+
+    def test_canonical_query_string(self):
+        auth = HmacAuthV4Handler('glacier.us-east-1.amazonaws.com',
+                                 Mock(), self.provider)
+        request = HTTPRequest(
+            'GET', 'https', 'glacier.us-east-1.amazonaws.com', 443,
+            '/-/vaults/foo/archives', None, {},
+            {'x-amz-glacier-version': '2012-06-01'}, '')
+        request.params['Foo.1'] = 'aaa'
+        request.params['Foo.10'] = 'zzz'
+        query_string = auth.canonical_query_string(request)
+        self.assertEqual(query_string, 'Foo.1=aaa&Foo.10=zzz')
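+        # Canonical query parameters are ordered by the byte value of the
+        # encoded name, so 'Foo.1' sorts ahead of 'Foo.10'.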
+
+    def test_canonical_uri(self):
+        auth = HmacAuthV4Handler('glacier.us-east-1.amazonaws.com',
+                                 Mock(), self.provider)
+        request = HTTPRequest(
+            'GET', 'https', 'glacier.us-east-1.amazonaws.com', 443,
+            'x/./././x .html', None, {},
+            {'x-amz-glacier-version': '2012-06-01'}, '')
+        canonical_uri = auth.canonical_uri(request)
+        # This should be both normalized & urlencoded.
+        self.assertEqual(canonical_uri, 'x/x%20.html')
+
+    def test_headers_to_sign(self):
+        auth = HmacAuthV4Handler('glacier.us-east-1.amazonaws.com',
+                                 Mock(), self.provider)
+        request = HTTPRequest(
+            'GET', 'http', 'glacier.us-east-1.amazonaws.com', 80,
+            'x/./././x .html', None, {},
+            {'x-amz-glacier-version': '2012-06-01'}, '')
+        headers = auth.headers_to_sign(request)
+        # Port 80 on plain HTTP is the default, so the Host header excludes it.
+        self.assertEqual(headers['Host'], 'glacier.us-east-1.amazonaws.com')
+
+        request = HTTPRequest(
+            'GET', 'https', 'glacier.us-east-1.amazonaws.com', 443,
+            'x/./././x .html', None, {},
+            {'x-amz-glacier-version': '2012-06-01'}, '')
+        headers = auth.headers_to_sign(request)
+        # Port 443 on HTTPS is the default, so the Host header excludes it.
+        self.assertEqual(headers['Host'], 'glacier.us-east-1.amazonaws.com')
+
+        request = HTTPRequest(
+            'GET', 'https', 'glacier.us-east-1.amazonaws.com', 8080,
+            'x/./././x .html', None, {},
+            {'x-amz-glacier-version': '2012-06-01'}, '')
+        headers = auth.headers_to_sign(request)
+        # A non-default port is included in the Host header.
+        self.assertEqual(headers['Host'], 'glacier.us-east-1.amazonaws.com:8080')
+
+    def test_region_and_service_can_be_overridden(self):
+        auth = HmacAuthV4Handler('queue.amazonaws.com',
+                                 Mock(), self.provider)
+        self.request.headers['X-Amz-Date'] = '20121121000000'
+
+        auth.region_name = 'us-west-2'
+        auth.service_name = 'sqs'
+        scope = auth.credential_scope(self.request)
+        self.assertEqual(scope, '20121121/us-west-2/sqs/aws4_request')
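
The whitespace handling these tests pin down can be illustrated outside of boto.
A minimal, hypothetical sketch of the collapse-and-sort behavior that
test_inner_whitespace_is_collapsed asserts (this is not boto's HmacAuthV4Handler,
only an illustration of the canonical header block it is expected to produce):

    import re

    def canonical_headers(headers):
        # Lower-case header names, collapse runs of inner whitespace in the
        # values, and join the result sorted by header name.
        canonical = []
        for name, value in headers.items():
            collapsed = ' '.join(re.split(r'\s+', value.strip()))
            canonical.append('%s:%s' % (name.lower().strip(), collapsed))
        return '\n'.join(sorted(canonical))

    print(canonical_headers({
        'Host': 'glacier.us-east-1.amazonaws.com',
        'x-amz-archive-description': 'two  spaces',
        'x-amz-glacier-version': '2012-06-01',
    }))
    # host:glacier.us-east-1.amazonaws.com
    # x-amz-archive-description:two spaces
    # x-amz-glacier-version:2012-06-01
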
diff --git a/tests/unit/beanstalk/test_layer1.py b/tests/unit/beanstalk/test_layer1.py
index 6df7537..2ecec0d 100644
--- a/tests/unit/beanstalk/test_layer1.py
+++ b/tests/unit/beanstalk/test_layer1.py
@@ -44,11 +44,8 @@
         self.assert_request_parameters({
             'Action': 'ListAvailableSolutionStacks',
             'ContentType': 'JSON',
-            'SignatureMethod': 'HmacSHA256',
-            'SignatureVersion': 2,
             'Version': '2010-12-01',
-            'AWSAccessKeyId': 'aws_access_key_id',
-        }, ignore_params_values=['Timestamp'])
+        })
 
 
 class TestCreateApplicationVersion(AWSMockServiceTestCase):
@@ -78,16 +75,13 @@
         self.assert_request_parameters({
             'Action': 'CreateApplicationVersion',
             'ContentType': 'JSON',
-            'SignatureMethod': 'HmacSHA256',
-            'SignatureVersion': 2,
             'Version': '2010-12-01',
             'ApplicationName': 'application1',
             'AutoCreateApplication': 'true',
             'SourceBundle.S3Bucket': 'mybucket',
             'SourceBundle.S3Key': 'mykey',
             'VersionLabel': 'version1',
-            'AWSAccessKeyId': 'aws_access_key_id',
-        }, ignore_params_values=['Timestamp'])
+        })
         self.assertEqual(app_version['ApplicationName'], 'application1')
         self.assertEqual(app_version['VersionLabel'], 'version1')
 
@@ -114,15 +108,12 @@
             'EnvironmentName': 'environment1',
             'TemplateName': '32bit Amazon Linux running Tomcat 7',
             'ContentType': 'JSON',
-            'SignatureMethod': 'HmacSHA256',
-            'SignatureVersion': 2,
             'Version': '2010-12-01',
             'VersionLabel': 'version1',
-            'AWSAccessKeyId': 'aws_access_key_id',
             'OptionSettings.member.1.Namespace': 'aws:autoscaling:launchconfiguration',
             'OptionSettings.member.1.OptionName': 'Ec2KeyName',
             'OptionSettings.member.1.Value': 'mykeypair',
             'OptionSettings.member.2.Namespace': 'aws:elasticbeanstalk:application:environment',
             'OptionSettings.member.2.OptionName': 'ENVVAR',
             'OptionSettings.member.2.Value': 'VALUE1',
-        }, ignore_params_values=['Timestamp'])
+        })
diff --git a/tests/unit/cloudformation/test_stack.py b/tests/unit/cloudformation/test_stack.py
new file mode 100644
index 0000000..54d2dc9
--- /dev/null
+++ b/tests/unit/cloudformation/test_stack.py
@@ -0,0 +1,76 @@
+#!/usr/bin/env python
+import datetime
+import xml.sax
+import unittest
+import boto.handler
+import boto.resultset
+import boto.cloudformation
+
+SAMPLE_XML = r"""
+<DescribeStacksResponse xmlns="http://cloudformation.amazonaws.com/doc/2010-05-15/">
+  <DescribeStacksResult>
+    <Stacks>
+      <member>
+        <Tags>
+          <member>
+            <Value>value0</Value>
+            <Key>key0</Key>
+          </member>
+          <member>
+            <Key>key1</Key>
+            <Value>value1</Value>
+          </member>
+        </Tags>
+        <StackId>arn:aws:cloudformation:ap-southeast-1:100:stack/Name/id</StackId>
+        <StackStatus>CREATE_COMPLETE</StackStatus>
+        <StackName>Name</StackName>
+        <StackStatusReason/>
+        <Description/>
+        <NotificationARNs>
+          <member>arn:aws:sns:ap-southeast-1:100:name</member>
+        </NotificationARNs>
+        <CreationTime>2013-01-10T05:04:56Z</CreationTime>
+        <DisableRollback>false</DisableRollback>
+        <Outputs>
+          <member>
+            <OutputValue>value0</OutputValue>
+            <Description>output0</Description>
+            <OutputKey>key0</OutputKey>
+          </member>
+          <member>
+            <OutputValue>value1</OutputValue>
+            <Description>output1</Description>
+            <OutputKey>key1</OutputKey>
+          </member>
+        </Outputs>
+      </member>
+    </Stacks>
+  </DescribeStacksResult>
+  <ResponseMetadata>
+    <RequestId>1</RequestId>
+  </ResponseMetadata>
+</DescribeStacksResponse>
+"""
+
+class TestStackParse(unittest.TestCase):
+    def test_parse_tags(self):
+        rs = boto.resultset.ResultSet([('member', boto.cloudformation.stack.Stack)])
+        h = boto.handler.XmlHandler(rs, None)
+        xml.sax.parseString(SAMPLE_XML, h)
+        tags = rs[0].tags
+        self.assertEqual(tags, {u'key0': u'value0', u'key1': u'value1'})
+
+    def test_creation_time_with_millis(self):
+        millis_xml = SAMPLE_XML.replace(
+          "<CreationTime>2013-01-10T05:04:56Z</CreationTime>",
+          "<CreationTime>2013-01-10T05:04:56.102342Z</CreationTime>"
+        )
+
+        rs = boto.resultset.ResultSet([('member', boto.cloudformation.stack.Stack)])
+        h = boto.handler.XmlHandler(rs, None)
+        xml.sax.parseString(millis_xml, h)
+        creation_time = rs[0].creation_time
+        self.assertEqual(creation_time, datetime.datetime(2013, 1, 10, 5, 4, 56, 102342))
+
+if __name__ == '__main__':
+    unittest.main()
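
CloudFormation may report CreationTime with or without fractional seconds, which
is what the two tests above exercise. A small sketch, assuming a strptime-based
fallback (illustrative only, not boto's actual timestamp parsing):

    import datetime

    def parse_creation_time(value):
        # Try the fractional-seconds format first, then the whole-second one.
        for fmt in ('%Y-%m-%dT%H:%M:%S.%fZ', '%Y-%m-%dT%H:%M:%SZ'):
            try:
                return datetime.datetime.strptime(value, fmt)
            except ValueError:
                continue
        raise ValueError('Unrecognized timestamp: %r' % value)

    assert parse_creation_time('2013-01-10T05:04:56Z') == \
        datetime.datetime(2013, 1, 10, 5, 4, 56)
    assert parse_creation_time('2013-01-10T05:04:56.102342Z') == \
        datetime.datetime(2013, 1, 10, 5, 4, 56, 102342)
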
diff --git a/tests/unit/cloudfront/test_signed_urls.py b/tests/unit/cloudfront/test_signed_urls.py
index 2957538..e1a19f0 100644
--- a/tests/unit/cloudfront/test_signed_urls.py
+++ b/tests/unit/cloudfront/test_signed_urls.py
@@ -1,10 +1,11 @@
-
+import tempfile
 import unittest
 try:
     import simplejson as json
 except ImportError:
     import json
 from textwrap import dedent
+
 from boto.cloudfront.distribution import Distribution
 
 class CloudfrontSignedUrlsTest(unittest.TestCase):
@@ -99,6 +100,38 @@
         encoded_sig = self.dist._url_base64_encode(sig)
         self.assertEqual(expected, encoded_sig)
 
+    def test_sign_canned_policy_pk_file(self):
+        """
+        Test signing the canned policy from amazon's cloudfront documentation
+        with a file object.
+        """
+        expected = ("Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDN"
+                    "v0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6td"
+                    "Nx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5j"
+                    "t9w2EOwi6sIIqrg_")
+        pk_file = tempfile.TemporaryFile()
+        pk_file.write(self.pk_str)
+        pk_file.seek(0)
+        sig = self.dist._sign_string(self.canned_policy, private_key_file=pk_file)
+        encoded_sig = self.dist._url_base64_encode(sig)
+        self.assertEqual(expected, encoded_sig)
+
+    def test_sign_canned_policy_pk_file_name(self):
+        """
+        Test signing the canned policy from amazon's cloudfront documentation
+        with a file name.
+        """
+        expected = ("Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDN"
+                    "v0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6td"
+                    "Nx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5j"
+                    "t9w2EOwi6sIIqrg_")
+        pk_file = tempfile.NamedTemporaryFile()
+        pk_file.write(self.pk_str)
+        pk_file.flush()
+        sig = self.dist._sign_string(self.canned_policy, private_key_file=pk_file.name)
+        encoded_sig = self.dist._url_base64_encode(sig)
+        self.assertEqual(expected, encoded_sig)
+
     def test_sign_canned_policy_unicode(self):
         """
         Test signing the canned policy from amazon's cloudfront documentation.
diff --git a/tests/unit/cloudsearch/__init__.py b/tests/unit/cloudsearch/__init__.py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/tests/unit/cloudsearch/__init__.py
@@ -0,0 +1 @@
+
diff --git a/tests/unit/cloudsearch/test_connection.py b/tests/unit/cloudsearch/test_connection.py
new file mode 100644
index 0000000..d2f6752
--- /dev/null
+++ b/tests/unit/cloudsearch/test_connection.py
@@ -0,0 +1,241 @@
+#!/usr/bin/env python
+
+from tests.unit import AWSMockServiceTestCase
+
+from boto.cloudsearch.domain import Domain
+from boto.cloudsearch.layer1 import Layer1
+
+import json
+
+class TestCloudSearchCreateDomain(AWSMockServiceTestCase):
+    connection_class = Layer1
+
+    def default_body(self):
+        return """
+<CreateDomainResponse xmlns="http://cloudsearch.amazonaws.com/doc/2011-02-01">
+  <CreateDomainResult>
+    <DomainStatus>
+      <SearchPartitionCount>0</SearchPartitionCount>
+      <SearchService>
+        <Arn>arn:aws:cs:us-east-1:1234567890:search/demo</Arn>
+        <Endpoint>search-demo-userdomain.us-east-1.cloudsearch.amazonaws.com</Endpoint>
+      </SearchService>
+      <NumSearchableDocs>0</NumSearchableDocs>
+      <Created>true</Created>
+      <DomainId>1234567890/demo</DomainId>
+      <Processing>false</Processing>
+      <SearchInstanceCount>0</SearchInstanceCount>
+      <DomainName>demo</DomainName>
+      <RequiresIndexDocuments>false</RequiresIndexDocuments>
+      <Deleted>false</Deleted>
+      <DocService>
+        <Arn>arn:aws:cs:us-east-1:1234567890:doc/demo</Arn>
+        <Endpoint>doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com</Endpoint>
+      </DocService>
+    </DomainStatus>
+  </CreateDomainResult>
+  <ResponseMetadata>
+    <RequestId>00000000-0000-0000-0000-000000000000</RequestId>
+  </ResponseMetadata>
+</CreateDomainResponse>
+"""
+
+    def test_create_domain(self):
+        self.set_http_response(status_code=200)
+        api_response = self.service_connection.create_domain('demo')
+
+        self.assert_request_parameters({
+            'Action': 'CreateDomain',
+            'DomainName': 'demo',
+            'AWSAccessKeyId': 'aws_access_key_id',
+            'SignatureMethod': 'HmacSHA256',
+            'SignatureVersion': 2,
+            'Version': '2011-02-01',
+        }, ignore_params_values=['Timestamp'])
+
+    def test_cloudsearch_connect_result_endpoints(self):
+        """Check that endpoints & ARNs are correctly returned from AWS"""
+
+        self.set_http_response(status_code=200)
+        api_response = self.service_connection.create_domain('demo')
+        domain = Domain(self, api_response)
+
+        self.assertEqual(domain.doc_service_arn,
+                         "arn:aws:cs:us-east-1:1234567890:doc/demo")
+        self.assertEqual(
+            domain.doc_service_endpoint,
+            "doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        self.assertEqual(domain.search_service_arn,
+                         "arn:aws:cs:us-east-1:1234567890:search/demo")
+        self.assertEqual(
+            domain.search_service_endpoint,
+            "search-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+
+    def test_cloudsearch_connect_result_statuses(self):
+        """Check that domain statuses are correctly returned from AWS"""
+        self.set_http_response(status_code=200)
+        api_response = self.service_connection.create_domain('demo')
+        domain = Domain(self, api_response)
+
+        self.assertEqual(domain.created, True)
+        self.assertEqual(domain.processing, False)
+        self.assertEqual(domain.requires_index_documents, False)
+        self.assertEqual(domain.deleted, False)
+
+    def test_cloudsearch_connect_result_details(self):
+        """Check that the domain information is correctly returned from AWS"""
+        self.set_http_response(status_code=200)
+        api_response = self.service_connection.create_domain('demo')
+        domain = Domain(self, api_response)
+
+        self.assertEqual(domain.id, "1234567890/demo")
+        self.assertEqual(domain.name, "demo")
+
+    def test_cloudsearch_documentservice_creation(self):
+        self.set_http_response(status_code=200)
+        api_response = self.service_connection.create_domain('demo')
+        domain = Domain(self, api_response)
+
+        document = domain.get_document_service()
+
+        self.assertEqual(
+            document.endpoint,
+            "doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+
+    def test_cloudsearch_searchservice_creation(self):
+        self.set_http_response(status_code=200)
+        api_response = self.service_connection.create_domain('demo')
+        domain = Domain(self, api_response)
+
+        search = domain.get_search_service()
+
+        self.assertEqual(
+            search.endpoint,
+            "search-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+
+
+class CloudSearchConnectionDeletionTest(AWSMockServiceTestCase):
+    connection_class = Layer1
+
+    def default_body(self):
+        return """
+<DeleteDomainResponse xmlns="http://cloudsearch.amazonaws.com/doc/2011-02-01">
+  <DeleteDomainResult>
+    <DomainStatus>
+      <SearchPartitionCount>0</SearchPartitionCount>
+      <SearchService>
+        <Arn>arn:aws:cs:us-east-1:1234567890:search/demo</Arn>
+        <Endpoint>search-demo-userdomain.us-east-1.cloudsearch.amazonaws.com</Endpoint>
+      </SearchService>
+      <NumSearchableDocs>0</NumSearchableDocs>
+      <Created>true</Created>
+      <DomainId>1234567890/demo</DomainId>
+      <Processing>false</Processing>
+      <SearchInstanceCount>0</SearchInstanceCount>
+      <DomainName>demo</DomainName>
+      <RequiresIndexDocuments>false</RequiresIndexDocuments>
+      <Deleted>false</Deleted>
+      <DocService>
+        <Arn>arn:aws:cs:us-east-1:1234567890:doc/demo</Arn>
+        <Endpoint>doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com</Endpoint>
+      </DocService>
+    </DomainStatus>
+  </DeleteDomainResult>
+  <ResponseMetadata>
+    <RequestId>00000000-0000-0000-0000-000000000000</RequestId>
+  </ResponseMetadata>
+</DeleteDomainResponse>
+"""
+
+    def test_cloudsearch_deletion(self):
+        """
+        Check that the correct arguments are sent to AWS when deleting a
+        cloudsearch domain.
+        """
+        self.set_http_response(status_code=200)
+        api_response = self.service_connection.delete_domain('demo')
+
+        self.assert_request_parameters({
+            'Action': 'DeleteDomain',
+            'DomainName': 'demo',
+            'AWSAccessKeyId': 'aws_access_key_id',
+            'SignatureMethod': 'HmacSHA256',
+            'SignatureVersion': 2,
+            'Version': '2011-02-01',
+        }, ignore_params_values=['Timestamp'])
+
+
+class CloudSearchConnectionIndexDocumentTest(AWSMockServiceTestCase):
+    connection_class = Layer1
+
+    def default_body(self):
+        return """
+<IndexDocumentsResponse xmlns="http://cloudsearch.amazonaws.com/doc/2011-02-01">
+  <IndexDocumentsResult>
+    <FieldNames>
+      <member>average_score</member>
+      <member>brand_id</member>
+      <member>colors</member>
+      <member>context</member>
+      <member>context_owner</member>
+      <member>created_at</member>
+      <member>creator_id</member>
+      <member>description</member>
+      <member>file_size</member>
+      <member>format</member>
+      <member>has_logo</member>
+      <member>has_messaging</member>
+      <member>height</member>
+      <member>image_id</member>
+      <member>ingested_from</member>
+      <member>is_advertising</member>
+      <member>is_photo</member>
+      <member>is_reviewed</member>
+      <member>modified_at</member>
+      <member>subject_date</member>
+      <member>tags</member>
+      <member>title</member>
+      <member>width</member>
+    </FieldNames>
+  </IndexDocumentsResult>
+  <ResponseMetadata>
+    <RequestId>eb2b2390-6bbd-11e2-ab66-93f3a90dcf2a</RequestId>
+  </ResponseMetadata>
+</IndexDocumentsResponse>
+"""
+
+    def test_cloudsearch_index_documents(self):
+        """
+        Check that the correct arguments are sent to AWS when indexing a
+        domain.
+        """
+        self.set_http_response(status_code=200)
+        api_response = self.service_connection.index_documents('demo')
+
+        self.assert_request_parameters({
+            'Action': 'IndexDocuments',
+            'DomainName': 'demo',
+            'AWSAccessKeyId': 'aws_access_key_id',
+            'SignatureMethod': 'HmacSHA256',
+            'SignatureVersion': 2,
+            'Version': '2011-02-01',
+        }, ignore_params_values=['Timestamp'])
+
+    def test_cloudsearch_index_documents_resp(self):
+        """
+        Check that the AWS response is being parsed correctly when indexing a
+        domain.
+        """
+        self.set_http_response(status_code=200)
+        api_response = self.service_connection.index_documents('demo')
+
+        self.assertEqual(api_response, ['average_score', 'brand_id', 'colors',
+                                        'context', 'context_owner',
+                                        'created_at', 'creator_id',
+                                        'description', 'file_size', 'format',
+                                        'has_logo', 'has_messaging', 'height',
+                                        'image_id', 'ingested_from',
+                                        'is_advertising', 'is_photo',
+                                        'is_reviewed', 'modified_at',
+                                        'subject_date', 'tags', 'title',
+                                        'width'])
diff --git a/tests/unit/cloudsearch/test_document.py b/tests/unit/cloudsearch/test_document.py
new file mode 100644
index 0000000..dc2cc24
--- /dev/null
+++ b/tests/unit/cloudsearch/test_document.py
@@ -0,0 +1,324 @@
+#!/usr/bin/env python
+
+from tests.unit import unittest
+from httpretty import HTTPretty
+from mock import MagicMock
+
+import urlparse
+import json
+
+from boto.cloudsearch.document import DocumentServiceConnection
+from boto.cloudsearch.document import CommitMismatchError, EncodingError, \
+        ContentTooLongError
+
+import boto
+
+class CloudSearchDocumentTest(unittest.TestCase):
+    def setUp(self):
+        HTTPretty.enable()
+        HTTPretty.register_uri(
+            HTTPretty.POST,
+            ("http://doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com/"
+             "2011-02-01/documents/batch"),
+            body=json.dumps(self.response),
+            content_type="application/json")
+
+    def tearDown(self):
+        HTTPretty.disable()
+
+class CloudSearchDocumentSingleTest(CloudSearchDocumentTest):
+
+    response = {
+        'status': 'success',
+        'adds': 1,
+        'deletes': 0,
+    }
+
+    def test_cloudsearch_add_basics(self):
+        """
+        Check that a simple add document actually sends an add document request
+        to AWS.
+        """
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        document.add("1234", 10, {"id": "1234", "title": "Title 1",
+                                  "category": ["cat_a", "cat_b", "cat_c"]})
+        document.commit()
+
+        args = json.loads(HTTPretty.last_request.body)[0]
+
+        self.assertEqual(args['lang'], 'en')
+        self.assertEqual(args['type'], 'add')
+
+    def test_cloudsearch_add_single_basic(self):
+        """
+        Check that a simple add document sends correct document metadata to
+        AWS.
+        """
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        document.add("1234", 10, {"id": "1234", "title": "Title 1",
+                                  "category": ["cat_a", "cat_b", "cat_c"]})
+        document.commit()
+
+        args = json.loads(HTTPretty.last_request.body)[0]
+
+        self.assertEqual(args['id'], '1234')
+        self.assertEqual(args['version'], 10)
+        self.assertEqual(args['type'], 'add')
+
+    def test_cloudsearch_add_single_fields(self):
+        """
+        Check that a simple add document sends the actual document to AWS.
+        """
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        document.add("1234", 10, {"id": "1234", "title": "Title 1",
+                                  "category": ["cat_a", "cat_b", "cat_c"]})
+        document.commit()
+
+        args = json.loads(HTTPretty.last_request.body)[0]
+
+        self.assertEqual(args['fields']['category'], ['cat_a', 'cat_b',
+                                                      'cat_c'])
+        self.assertEqual(args['fields']['id'], '1234')
+        self.assertEqual(args['fields']['title'], 'Title 1')
+
+    def test_cloudsearch_add_single_result(self):
+        """
+        Check that the reply from adding a single document is correctly parsed.
+        """
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        document.add("1234", 10, {"id": "1234", "title": "Title 1",
+                                  "category": ["cat_a", "cat_b", "cat_c"]})
+        doc = document.commit()
+
+        self.assertEqual(doc.status, 'success')
+        self.assertEqual(doc.adds, 1)
+        self.assertEqual(doc.deletes, 0)
+
+        self.assertEqual(doc.doc_service, document)
+
+
+class CloudSearchDocumentMultipleAddTest(CloudSearchDocumentTest):
+
+    response = {
+        'status': 'success',
+        'adds': 3,
+        'deletes': 0,
+    }
+
+    objs = {
+        '1234': {
+            'version': 10, 'fields': {"id": "1234", "title": "Title 1",
+                                      "category": ["cat_a", "cat_b",
+                                                   "cat_c"]}},
+        '1235': {
+            'version': 11, 'fields': {"id": "1235", "title": "Title 2",
+                                      "category": ["cat_b", "cat_c",
+                                                   "cat_d"]}},
+        '1236': {
+            'version': 12, 'fields': {"id": "1236", "title": "Title 3",
+                                      "category": ["cat_e", "cat_f",
+                                                   "cat_g"]}},
+        }
+
+
+    def test_cloudsearch_add_basics(self):
+        """Check that multiple documents are added correctly to AWS"""
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        for (key, obj) in self.objs.items():
+            document.add(key, obj['version'], obj['fields'])
+        document.commit()
+
+        args = json.loads(HTTPretty.last_request.body)
+
+        for arg in args:
+            self.assertTrue(arg['id'] in self.objs)
+            self.assertEqual(arg['version'], self.objs[arg['id']]['version'])
+            self.assertEqual(arg['fields']['id'],
+                             self.objs[arg['id']]['fields']['id'])
+            self.assertEqual(arg['fields']['title'],
+                             self.objs[arg['id']]['fields']['title'])
+            self.assertEqual(arg['fields']['category'],
+                             self.objs[arg['id']]['fields']['category'])
+
+    def test_cloudsearch_add_results(self):
+        """
+        Check that the result from adding multiple documents is parsed
+        correctly.
+        """
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        for (key, obj) in self.objs.items():
+            document.add(key, obj['version'], obj['fields'])
+        doc = document.commit()
+
+        self.assertEqual(doc.status, 'success')
+        self.assertEqual(doc.adds, len(self.objs))
+        self.assertEqual(doc.deletes, 0)
+
+
+class CloudSearchDocumentDelete(CloudSearchDocumentTest):
+
+    response = {
+        'status': 'success',
+        'adds': 0,
+        'deletes': 1,
+    }
+
+    def test_cloudsearch_delete(self):
+        """
+        Test that the request for a single document deletion is done properly.
+        """
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        document.delete("5", "10")
+        document.commit()
+        args = json.loads(HTTPretty.last_request.body)[0]
+
+        self.assertEqual(args['version'], '10')
+        self.assertEqual(args['type'], 'delete')
+        self.assertEqual(args['id'], '5')
+
+    def test_cloudsearch_delete_results(self):
+        """
+        Check that the result of a single document deletion is parsed properly.
+        """
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        document.delete("5", "10")
+        doc = document.commit()
+
+        self.assertEqual(doc.status, 'success')
+        self.assertEqual(doc.adds, 0)
+        self.assertEqual(doc.deletes, 1)
+
+
+class CloudSearchDocumentDeleteMultiple(CloudSearchDocumentTest):
+    response = {
+        'status': 'success',
+        'adds': 0,
+        'deletes': 2,
+    }
+
+    def test_cloudsearch_delete_multiples(self):
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        document.delete("5", "10")
+        document.delete("6", "11")
+        document.commit()
+        args = json.loads(HTTPretty.last_request.body)
+
+        self.assertEqual(len(args), 2)
+        for arg in args:
+            self.assertEqual(arg['type'], 'delete')
+
+            if arg['id'] == '5':
+                self.assertEqual(arg['version'], '10')
+            elif arg['id'] == '6':
+                self.assertEqual(arg['version'], '11')
+            else:  # A document id we never submitted should not appear.
+                self.fail("Unexpected document id: %s" % arg['id'])
+
+
+class CloudSearchSDFManipulation(CloudSearchDocumentTest):
+    response = {
+        'status': 'success',
+        'adds': 1,
+        'deletes': 0,
+    }
+
+
+    def test_cloudsearch_initial_sdf_is_blank(self):
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+
+        self.assertEqual(document.get_sdf(), '[]')
+
+    def test_cloudsearch_single_document_sdf(self):
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+
+        document.add("1234", 10, {"id": "1234", "title": "Title 1",
+                                  "category": ["cat_a", "cat_b", "cat_c"]})
+
+        self.assertNotEqual(document.get_sdf(), '[]')
+
+        document.clear_sdf()
+
+        self.assertEqual(document.get_sdf(), '[]')
+
+class CloudSearchBadSDFTesting(CloudSearchDocumentTest):
+    response = {
+        'status': 'success',
+        'adds': 1,
+        'deletes': 0,
+    }
+
+    def test_cloudsearch_erroneous_sdf(self):
+        original = boto.log.error
+        boto.log.error = MagicMock()
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+
+        document.add("1234", 10, {"id": "1234", "title": None,
+                                  "category": ["cat_a", "cat_b", "cat_c"]})
+
+        document.commit()
+        self.assertNotEqual(len(boto.log.error.call_args_list), 1)
+
+        boto.log.error = original
+
+
+class CloudSearchDocumentErrorBadUnicode(CloudSearchDocumentTest):
+    response = {
+        'status': 'error',
+        'adds': 0,
+        'deletes': 0,
+        'errors': [{'message': 'Illegal Unicode character in document'}]
+    }
+
+    def test_fake_bad_unicode(self):
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        document.add("1234", 10, {"id": "1234", "title": "Title 1",
+                                  "category": ["cat_a", "cat_b", "cat_c"]})
+        self.assertRaises(EncodingError, document.commit)
+
+
+class CloudSearchDocumentErrorDocsTooBig(CloudSearchDocumentTest):
+    response = {
+        'status': 'error',
+        'adds': 0,
+        'deletes': 0,
+        'errors': [{'message': 'The Content-Length is too long'}]
+    }
+
+    def test_fake_docs_too_big(self):
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+        document.add("1234", 10, {"id": "1234", "title": "Title 1",
+                                  "category": ["cat_a", "cat_b", "cat_c"]})
+
+        self.assertRaises(ContentTooLongError, document.commit)
+
+
+class CloudSearchDocumentErrorMismatch(CloudSearchDocumentTest):
+    response = {
+        'status': 'error',
+        'adds': 0,
+        'deletes': 0,
+        'errors': [{'message': 'Something went wrong'}]
+    }
+
+    def test_fake_failure(self):
+        document = DocumentServiceConnection(
+            endpoint="doc-demo-userdomain.us-east-1.cloudsearch.amazonaws.com")
+
+        document.add("1234", 10, {"id": "1234", "title": "Title 1",
+                                  "category": ["cat_a", "cat_b", "cat_c"]})
+
+        self.assertRaises(CommitMismatchError, document.commit)
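
The document tests above all decode HTTPretty.last_request.body, which is the
SDF (search data format) batch the service connection posts: a JSON list of
operations, each with a type, id and version, plus a lang and the fields for
adds. A rough sketch of that payload using a hypothetical helper (not boto's
DocumentServiceConnection):

    import json

    def build_sdf(adds=(), deletes=()):
        # Each add carries the document fields; deletes only need id/version.
        batch = []
        for doc_id, version, fields in adds:
            batch.append({'type': 'add', 'id': doc_id, 'version': version,
                          'lang': 'en', 'fields': fields})
        for doc_id, version in deletes:
            batch.append({'type': 'delete', 'id': doc_id, 'version': version})
        return json.dumps(batch)

    sdf = build_sdf(adds=[('1234', 10, {'id': '1234', 'title': 'Title 1'})],
                    deletes=[('5', '10')])
    assert json.loads(sdf)[0]['type'] == 'add'
    assert json.loads(sdf)[1]['type'] == 'delete'
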
diff --git a/tests/unit/cloudsearch/test_search.py b/tests/unit/cloudsearch/test_search.py
new file mode 100644
index 0000000..7cadf65
--- /dev/null
+++ b/tests/unit/cloudsearch/test_search.py
@@ -0,0 +1,357 @@
+#!/usr/bin/env python
+
+from tests.unit import unittest
+from httpretty import HTTPretty
+
+import urlparse
+import json
+
+from boto.cloudsearch.search import SearchConnection
+
+HOSTNAME = "search-demo-userdomain.us-east-1.cloudsearch.amazonaws.com"
+FULL_URL = 'http://%s/2011-02-01/search' % HOSTNAME
+
+
+class CloudSearchSearchBaseTest(unittest.TestCase):
+
+    hits = [
+        {
+            'id': '12341',
+            'title': 'Document 1',
+        },
+        {
+            'id': '12342',
+            'title': 'Document 2',
+        },
+        {
+            'id': '12343',
+            'title': 'Document 3',
+        },
+        {
+            'id': '12344',
+            'title': 'Document 4',
+        },
+        {
+            'id': '12345',
+            'title': 'Document 5',
+        },
+        {
+            'id': '12346',
+            'title': 'Document 6',
+        },
+        {
+            'id': '12347',
+            'title': 'Document 7',
+        },
+    ]
+
+
+    def get_args(self, requestline):
+        (_, request, _) = requestline.split(" ")
+        (_, request) = request.split("?", 1)
+        args = urlparse.parse_qs(request)
+        return args
+
+    def setUp(self):
+        HTTPretty.enable()
+        HTTPretty.register_uri(HTTPretty.GET, FULL_URL,
+                               body=json.dumps(self.response),
+                               content_type="text/xml")
+
+    def tearDown(self):
+        HTTPretty.disable()
+
+class CloudSearchSearchTest(CloudSearchSearchBaseTest):
+    response = {
+        'rank': '-text_relevance',
+        'match-expr':"Test",
+        'hits': {
+            'found': 30,
+            'start': 0,
+            'hit':CloudSearchSearchBaseTest.hits
+            },
+        'info': {
+            'rid':'b7c167f6c2da6d93531b9a7b314ad030b3a74803b4b7797edb905ba5a6a08',
+            'time-ms': 2,
+            'cpu-time-ms': 0
+        }
+
+    }
+
+    def test_cloudsearch_qsearch(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test')
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['q'], ["Test"])
+        self.assertEqual(args['start'], ["0"])
+        self.assertEqual(args['size'], ["10"])
+
+    def test_cloudsearch_bqsearch(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(bq="'Test'")
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['bq'], ["'Test'"])
+
+    def test_cloudsearch_search_details(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', size=50, start=20)
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['q'], ["Test"])
+        self.assertEqual(args['size'], ["50"])
+        self.assertEqual(args['start'], ["20"])
+
+    def test_cloudsearch_facet_single(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', facet=["Author"])
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['facet'], ["Author"])
+
+    def test_cloudsearch_facet_multiple(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', facet=["author", "cat"])
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['facet'], ["author,cat"])
+
+    def test_cloudsearch_facet_constraint_single(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(
+            q='Test',
+            facet_constraints={'author': "'John Smith','Mark Smith'"})
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['facet-author-constraints'],
+                         ["'John Smith','Mark Smith'"])
+
+    def test_cloudsearch_facet_constraint_multiple(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(
+            q='Test',
+            facet_constraints={'author': "'John Smith','Mark Smith'",
+                               'category': "'News','Reviews'"})
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['facet-author-constraints'],
+                         ["'John Smith','Mark Smith'"])
+        self.assertEqual(args['facet-category-constraints'],
+                         ["'News','Reviews'"])
+
+    def test_cloudsearch_facet_sort_single(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', facet_sort={'author': 'alpha'})
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['facet-author-sort'], ['alpha'])
+
+    def test_cloudsearch_facet_sort_multiple(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', facet_sort={'author': 'alpha',
+                                            'cat': 'count'})
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['facet-author-sort'], ['alpha'])
+        self.assertEqual(args['facet-cat-sort'], ['count'])
+
+    def test_cloudsearch_top_n_single(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', facet_top_n={'author': 5})
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['facet-author-top-n'], ['5'])
+
+    def test_cloudsearch_top_n_multiple(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', facet_top_n={'author': 5, 'cat': 10})
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['facet-author-top-n'], ['5'])
+        self.assertEqual(args['facet-cat-top-n'], ['10'])
+
+    def test_cloudsearch_rank_single(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', rank=["date"])
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['rank'], ['date'])
+
+    def test_cloudsearch_rank_multiple(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', rank=["date", "score"])
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['rank'], ['date,score'])
+
+    def test_cloudsearch_result_fields_single(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', return_fields=['author'])
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['return-fields'], ['author'])
+
+    def test_cloudsearch_result_fields_multiple(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', return_fields=['author', 'title'])
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['return-fields'], ['author,title'])
+
+
+    def test_cloudsearch_t_field_single(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', t={'year':'2001..2007'})
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['t-year'], ['2001..2007'])
+
+    def test_cloudsearch_t_field_multiple(self):
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        search.search(q='Test', t={'year':'2001..2007', 'score':'10..50'})
+
+        args = self.get_args(HTTPretty.last_request.raw_requestline)
+
+        self.assertEqual(args['t-year'], ['2001..2007'])
+        self.assertEqual(args['t-score'], ['10..50'])
+
+
+    def test_cloudsearch_results_meta(self):
+        """Check returned metadata is parsed correctly"""
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        results = search.search(q='Test')
+
+        # These rely on the default response which is fed into HTTPretty
+        self.assertEqual(results.rank, "-text_relevance")
+        self.assertEqual(results.match_expression, "Test")
+
+    def test_cloudsearch_results_info(self):
+        """Check num_pages_needed is calculated correctly"""
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        results = search.search(q='Test')
+
+        # This relies on the default response which is fed into HTTPretty
+        self.assertEqual(results.num_pages_needed, 3.0)
+
+    def test_cloudsearch_results_matched(self):
+        """
+        Check that information objects are passed back through the API
+        correctly.
+        """
+        search = SearchConnection(endpoint=HOSTNAME)
+        query = search.build_query(q='Test')
+
+        results = search(query)
+
+        self.assertEqual(results.search_service, search)
+        self.assertEqual(results.query, query)
+
+    def test_cloudsearch_results_hits(self):
+        """Check that documents are parsed properly from AWS"""
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        results = search.search(q='Test')
+
+        hits = map(lambda x: x['id'], results.docs)
+
+        # This relies on the default response which is fed into HTTPretty
+        self.assertEqual(
+            hits, ["12341", "12342", "12343", "12344",
+                   "12345", "12346", "12347"])
+
+    def test_cloudsearch_results_iterator(self):
+        """Check the results iterator"""
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        results = search.search(q='Test')
+        results_correct = iter(["12341", "12342", "12343", "12344",
+                                "12345", "12346", "12347"])
+        for x in results:
+            self.assertEqual(x['id'], results_correct.next())
+
+
+    def test_cloudsearch_results_internal_consistency(self):
+        """Check that len(results) matches the number of returned documents"""
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        results = search.search(q='Test')
+
+        self.assertEqual(len(results), len(results.docs))
+
+    def test_cloudsearch_search_nextpage(self):
+        """Check next page query is correct"""
+        search = SearchConnection(endpoint=HOSTNAME)
+        query1 = search.build_query(q='Test')
+        query2 = search.build_query(q='Test')
+
+        results = search(query2)
+
+        self.assertEqual(results.next_page().query.start,
+                         query1.start + query1.size)
+        self.assertEqual(query1.q, query2.q)
+
+class CloudSearchSearchFacetTest(CloudSearchSearchBaseTest):
+    response = {
+        'rank': '-text_relevance',
+        'match-expr':"Test",
+        'hits': {
+            'found': 30,
+            'start': 0,
+            'hit':CloudSearchSearchBaseTest.hits
+            },
+        'info': {
+            'rid':'b7c167f6c2da6d93531b9a7b314ad030b3a74803b4b7797edb905ba5a6a08',
+            'time-ms': 2,
+            'cpu-time-ms': 0
+        },
+        'facets': {
+            'tags': {},
+            'animals': {'constraints': [{'count': '2', 'value': 'fish'}, {'count': '1', 'value':'lions'}]},
+        }
+    }
+
+    def test_cloudsearch_search_facets(self):
+        # The default response has an empty 'tags' facet and a populated 'animals' facet.
+
+        search = SearchConnection(endpoint=HOSTNAME)
+
+        results = search.search(q='Test', facet=['tags'])
+
+        self.assertTrue('tags' not in results.facets)
+        self.assertEqual(results.facets['animals'], {u'lions': u'1', u'fish': u'2'})
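
The search tests above all assert on the same query-string mapping: list
arguments are comma-joined, and per-field dictionaries expand into
facet-<field>-constraints, -sort and -top-n parameters. A condensed,
hypothetical sketch of that mapping (not boto's SearchConnection.build_query):

    def build_search_params(q, facet=None, facet_constraints=None,
                            facet_sort=None, facet_top_n=None,
                            start=0, size=10):
        params = {'q': q, 'start': str(start), 'size': str(size)}
        if facet:
            params['facet'] = ','.join(facet)
        for name, value in (facet_constraints or {}).items():
            params['facet-%s-constraints' % name] = value
        for name, value in (facet_sort or {}).items():
            params['facet-%s-sort' % name] = value
        for name, value in (facet_top_n or {}).items():
            params['facet-%s-top-n' % name] = str(value)
        return params

    params = build_search_params('Test', facet=['author', 'cat'],
                                 facet_top_n={'author': 5})
    assert params['facet'] == 'author,cat'
    assert params['facet-author-top-n'] == '5'
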
diff --git a/tests/unit/dynamodb/__init__.py b/tests/unit/dynamodb/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/unit/dynamodb/__init__.py
diff --git a/tests/unit/dynamodb/test_batch.py b/tests/unit/dynamodb/test_batch.py
new file mode 100644
index 0000000..545aed7
--- /dev/null
+++ b/tests/unit/dynamodb/test_batch.py
@@ -0,0 +1,103 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from tests.unit import unittest
+
+from boto.dynamodb.batch import Batch
+from boto.dynamodb.table import Table
+from boto.dynamodb.layer2 import Layer2
+from boto.dynamodb.batch import BatchList
+
+
+DESCRIBE_TABLE_1 = {
+    'Table': {
+        'CreationDateTime': 1349910554.478,
+        'ItemCount': 1,
+        'KeySchema': {'HashKeyElement': {'AttributeName': u'foo',
+                                         'AttributeType': u'S'}},
+        'ProvisionedThroughput': {'ReadCapacityUnits': 10,
+                                  'WriteCapacityUnits': 10},
+        'TableName': 'testtable',
+        'TableSizeBytes': 54,
+        'TableStatus': 'ACTIVE'}
+}
+
+DESCRIBE_TABLE_2 = {
+    'Table': {
+        'CreationDateTime': 1349910554.478,
+        'ItemCount': 1,
+        'KeySchema': {'HashKeyElement': {'AttributeName': u'baz',
+                                         'AttributeType': u'S'},
+                      'RangeKeyElement': {'AttributeName': 'myrange',
+                                          'AttributeType': 'N'}},
+        'ProvisionedThroughput': {'ReadCapacityUnits': 10,
+                                  'WriteCapacityUnits': 10},
+        'TableName': 'testtable2',
+        'TableSizeBytes': 54,
+        'TableStatus': 'ACTIVE'}
+}
+
+
+class TestBatchObjects(unittest.TestCase):
+    maxDiff = None
+
+    def setUp(self):
+        self.layer2 = Layer2('access_key', 'secret_key')
+        self.table = Table(self.layer2, DESCRIBE_TABLE_1)
+        self.table2 = Table(self.layer2, DESCRIBE_TABLE_2)
+
+    def test_batch_to_dict(self):
+        b = Batch(self.table, ['k1', 'k2'], attributes_to_get=['foo'],
+                  consistent_read=True)
+        self.assertDictEqual(
+            b.to_dict(),
+            {'AttributesToGet': ['foo'],
+             'Keys': [{'HashKeyElement': {'S': 'k1'}},
+                      {'HashKeyElement': {'S': 'k2'}}],
+             'ConsistentRead': True}
+        )
+
+    def test_batch_consistent_read_defaults_to_false(self):
+        b = Batch(self.table, ['k1'])
+        self.assertDictEqual(
+            b.to_dict(),
+            {'Keys': [{'HashKeyElement': {'S': 'k1'}}],
+             'ConsistentRead': False}
+        )
+
+    def test_batch_list_consistent_read(self):
+        b = BatchList(self.layer2)
+        b.add_batch(self.table, ['k1'], ['foo'], consistent_read=True)
+        b.add_batch(self.table2, [('k2', 54)], ['bar'], consistent_read=False)
+        self.assertDictEqual(
+            b.to_dict(),
+            {'testtable': {'AttributesToGet': ['foo'],
+                           'Keys': [{'HashKeyElement': {'S': 'k1'}}],
+                           'ConsistentRead': True},
+              'testtable2': {'AttributesToGet': ['bar'],
+                             'Keys': [{'HashKeyElement': {'S': 'k2'},
+                                       'RangeKeyElement': {'N': '54'}}],
+                             'ConsistentRead': False}})
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/dynamodb/test_layer2.py b/tests/unit/dynamodb/test_layer2.py
new file mode 100644
index 0000000..ff6ba9e
--- /dev/null
+++ b/tests/unit/dynamodb/test_layer2.py
@@ -0,0 +1,119 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from tests.unit import unittest
+from mock import Mock
+
+from boto.dynamodb.layer2 import Layer2
+from boto.dynamodb.table import Table, Schema
+
+
+DESCRIBE_TABLE = {
+    "Table": {
+        "CreationDateTime": 1.353526122785E9, "ItemCount":1,
+        "KeySchema": {
+            "HashKeyElement":{"AttributeName": "foo", "AttributeType": "N"}},
+        "ProvisionedThroughput": {
+            "NumberOfDecreasesToday": 0,
+            "ReadCapacityUnits": 5,
+            "WriteCapacityUnits": 5},
+        "TableName": "footest",
+        "TableSizeBytes": 21,
+        "TableStatus": "ACTIVE"}
+}
+
+
+class TestTableConstruction(unittest.TestCase):
+    def setUp(self):
+        self.layer2 = Layer2('access_key', 'secret_key')
+        self.api = Mock()
+        self.layer2.layer1 = self.api
+
+    def test_get_table(self):
+        self.api.describe_table.return_value = DESCRIBE_TABLE
+        table = self.layer2.get_table('footest')
+        self.assertEqual(table.name, 'footest')
+        self.assertEqual(table.create_time, 1353526122.785)
+        self.assertEqual(table.status, 'ACTIVE')
+        self.assertEqual(table.item_count, 1)
+        self.assertEqual(table.size_bytes, 21)
+        self.assertEqual(table.read_units, 5)
+        self.assertEqual(table.write_units, 5)
+        self.assertEqual(table.schema, Schema.create(hash_key=('foo', 'N')))
+
+    def test_create_table_without_api_call(self):
+        table = self.layer2.table_from_schema(
+            name='footest',
+            schema=Schema.create(hash_key=('foo', 'N')))
+        self.assertEqual(table.name, 'footest')
+        self.assertEqual(table.schema, Schema.create(hash_key=('foo', 'N')))
+        # describe_table is never called.
+        self.assertEqual(self.api.describe_table.call_count, 0)
+
+    def test_create_schema_with_hash_and_range(self):
+        schema = self.layer2.create_schema('foo', int, 'bar', str)
+        self.assertEqual(schema.hash_key_name, 'foo')
+        self.assertEqual(schema.hash_key_type, 'N')
+        self.assertEqual(schema.range_key_name, 'bar')
+        self.assertEqual(schema.range_key_type, 'S')
+
+    def test_create_schema_with_hash(self):
+        schema = self.layer2.create_schema('foo', str)
+        self.assertEqual(schema.hash_key_name, 'foo')
+        self.assertEqual(schema.hash_key_type, 'S')
+        self.assertIsNone(schema.range_key_name)
+        self.assertIsNone(schema.range_key_type)
+
+
+class TestSchemaEquality(unittest.TestCase):
+    def test_schema_equal(self):
+        s1 = Schema.create(hash_key=('foo', 'N'))
+        s2 = Schema.create(hash_key=('foo', 'N'))
+        self.assertEqual(s1, s2)
+
+    def test_schema_not_equal(self):
+        s1 = Schema.create(hash_key=('foo', 'N'))
+        s2 = Schema.create(hash_key=('bar', 'N'))
+        s3 = Schema.create(hash_key=('foo', 'S'))
+        self.assertNotEqual(s1, s2)
+        self.assertNotEqual(s1, s3)
+
+    def test_equal_with_hash_and_range(self):
+        s1 = Schema.create(hash_key=('foo', 'N'), range_key=('bar', 'S'))
+        s2 = Schema.create(hash_key=('foo', 'N'), range_key=('bar', 'S'))
+        self.assertEqual(s1, s2)
+
+    def test_schema_with_hash_and_range_not_equal(self):
+        s1 = Schema.create(hash_key=('foo', 'N'), range_key=('bar', 'S'))
+        s2 = Schema.create(hash_key=('foo', 'N'), range_key=('bar', 'N'))
+        s3 = Schema.create(hash_key=('foo', 'S'), range_key=('baz', 'N'))
+        s4 = Schema.create(hash_key=('bar', 'N'), range_key=('baz', 'N'))
+        self.assertNotEqual(s1, s2)
+        self.assertNotEqual(s1, s3)
+        self.assertNotEqual(s1, s4)
+        self.assertNotEqual(s2, s4)
+        self.assertNotEqual(s3, s4)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/dynamodb/test_types.py b/tests/unit/dynamodb/test_types.py
new file mode 100644
index 0000000..99ffa6d
--- /dev/null
+++ b/tests/unit/dynamodb/test_types.py
@@ -0,0 +1,82 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from decimal import Decimal
+from tests.unit import unittest
+
+from boto.dynamodb import types
+from boto.dynamodb.exceptions import DynamoDBNumberError
+
+
+class TestDynamizer(unittest.TestCase):
+    def setUp(self):
+        pass
+
+    def test_encoding_to_dynamodb(self):
+        dynamizer = types.Dynamizer()
+        self.assertEqual(dynamizer.encode('foo'), {'S': 'foo'})
+        self.assertEqual(dynamizer.encode(54), {'N': '54'})
+        self.assertEqual(dynamizer.encode(Decimal('1.1')), {'N': '1.1'})
+        self.assertEqual(dynamizer.encode(set([1, 2, 3])),
+                         {'NS': ['1', '2', '3']})
+        self.assertEqual(dynamizer.encode(set(['foo', 'bar'])),
+                         {'SS': ['foo', 'bar']})
+        self.assertEqual(dynamizer.encode(types.Binary('\x01')),
+                         {'B': 'AQ=='})
+        self.assertEqual(dynamizer.encode(set([types.Binary('\x01')])),
+                         {'BS': ['AQ==']})
+
+    def test_decoding_to_dynamodb(self):
+        dynamizer = types.Dynamizer()
+        self.assertEqual(dynamizer.decode({'S': 'foo'}), 'foo')
+        self.assertEqual(dynamizer.decode({'N': '54'}), 54)
+        self.assertEqual(dynamizer.decode({'N': '1.1'}), Decimal('1.1'))
+        self.assertEqual(dynamizer.decode({'NS': ['1', '2', '3']}),
+                         set([1, 2, 3]))
+        self.assertEqual(dynamizer.decode({'SS': ['foo', 'bar']}),
+                         set(['foo', 'bar']))
+        self.assertEqual(dynamizer.decode({'B': 'AQ=='}), types.Binary('\x01'))
+        self.assertEqual(dynamizer.decode({'BS': ['AQ==']}),
+                         set([types.Binary('\x01')]))
+
+    def test_float_conversion_errors(self):
+        dynamizer = types.Dynamizer()
+        # When supporting decimals, certain floats will work:
+        self.assertEqual(dynamizer.encode(1.25), {'N': '1.25'})
+        # And some will generate errors, which is why it's best
+        # to just use Decimals directly:
+        with self.assertRaises(DynamoDBNumberError):
+            dynamizer.encode(1.1)
+
+    def test_lossy_float_conversions(self):
+        dynamizer = types.LossyFloatDynamizer()
+        # Just testing the differences here, specifically float conversions:
+        self.assertEqual(dynamizer.encode(1.1), {'N': '1.1'})
+        self.assertEqual(dynamizer.decode({'N': '1.1'}), 1.1)
+
+        self.assertEqual(dynamizer.encode(set([1.1])),
+                         {'NS': ['1.1']})
+        self.assertEqual(dynamizer.decode({'NS': ['1.1', '2.2', '3.3']}),
+                         set([1.1, 2.2, 3.3]))
+
+if __name__ == '__main__':
+    unittest.main()
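
test_encoding_to_dynamodb above pins down how Python values map onto DynamoDB's
wire types (S, N, SS, NS, B, BS). A toy encoder covering only the scalar and
set cases, for illustration; boto's Dynamizer also handles Decimal, Binary and
the stricter float rules exercised in test_float_conversion_errors:

    def encode(value):
        # Tag a Python value with its DynamoDB wire type.
        if isinstance(value, str):
            return {'S': value}
        if isinstance(value, (int, float)):
            return {'N': str(value)}
        if isinstance(value, (set, frozenset)):
            items = list(value)
            if all(isinstance(i, str) for i in items):
                return {'SS': items}
            return {'NS': [str(i) for i in items]}
        raise TypeError('Unsupported type: %r' % type(value))

    assert encode('foo') == {'S': 'foo'}
    assert encode(54) == {'N': '54'}
    assert sorted(encode(set([1, 2, 3]))['NS']) == ['1', '2', '3']
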
diff --git a/tests/unit/dynamodb2/__init__.py b/tests/unit/dynamodb2/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/unit/dynamodb2/__init__.py
diff --git a/tests/unit/dynamodb2/test_layer1.py b/tests/unit/dynamodb2/test_layer1.py
new file mode 100644
index 0000000..5778a72
--- /dev/null
+++ b/tests/unit/dynamodb2/test_layer1.py
@@ -0,0 +1,45 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Tests for Layer1 of DynamoDB v2
+"""
+from tests.unit import unittest
+from boto.dynamodb2.layer1 import DynamoDBConnection
+from boto.regioninfo import RegionInfo
+
+
+class DynamoDBv2Layer1UnitTest(unittest.TestCase):
+    dynamodb = True
+
+    def test_init_region(self):
+        dynamodb = DynamoDBConnection(
+            aws_access_key_id='aws_access_key_id',
+            aws_secret_access_key='aws_secret_access_key')
+        self.assertEqual(dynamodb.region.name, 'us-east-1')
+        dynamodb = DynamoDBConnection(
+            region=RegionInfo(name='us-west-2',
+                              endpoint='dynamodb.us-west-2.amazonaws.com'),
+            aws_access_key_id='aws_access_key_id',
+            aws_secret_access_key='aws_secret_access_key',
+        )
+        self.assertEqual(dynamodb.region.name, 'us-west-2')
diff --git a/tests/unit/dynamodb2/test_table.py b/tests/unit/dynamodb2/test_table.py
new file mode 100644
index 0000000..fe7e5b9
--- /dev/null
+++ b/tests/unit/dynamodb2/test_table.py
@@ -0,0 +1,2025 @@
+import mock
+import unittest
+from boto.dynamodb2 import exceptions
+from boto.dynamodb2.fields import (HashKey, RangeKey,
+                                   AllIndex, KeysOnlyIndex, IncludeIndex)
+from boto.dynamodb2.items import Item
+from boto.dynamodb2.layer1 import DynamoDBConnection
+from boto.dynamodb2.results import ResultSet, BatchGetResultSet
+from boto.dynamodb2.table import Table
+from boto.dynamodb2.types import (STRING, NUMBER,
+                                  FILTER_OPERATORS, QUERY_OPERATORS)
+
+
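+# An autospec'd stand-in for DynamoDBConnection, so Table methods can be
+# exercised without any network traffic.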
+FakeDynamoDBConnection = mock.create_autospec(DynamoDBConnection)
+
+
+class SchemaFieldsTestCase(unittest.TestCase):
+    def test_hash_key(self):
+        hash_key = HashKey('hello')
+        self.assertEqual(hash_key.name, 'hello')
+        self.assertEqual(hash_key.data_type, STRING)
+        self.assertEqual(hash_key.attr_type, 'HASH')
+
+        self.assertEqual(hash_key.definition(), {
+            'AttributeName': 'hello',
+            'AttributeType': 'S'
+        })
+        self.assertEqual(hash_key.schema(), {
+            'AttributeName': 'hello',
+            'KeyType': 'HASH'
+        })
+
+    def test_range_key(self):
+        range_key = RangeKey('hello')
+        self.assertEqual(range_key.name, 'hello')
+        self.assertEqual(range_key.data_type, STRING)
+        self.assertEqual(range_key.attr_type, 'RANGE')
+
+        self.assertEqual(range_key.definition(), {
+            'AttributeName': 'hello',
+            'AttributeType': 'S'
+        })
+        self.assertEqual(range_key.schema(), {
+            'AttributeName': 'hello',
+            'KeyType': 'RANGE'
+        })
+
+    def test_alternate_type(self):
+        alt_key = HashKey('alt', data_type=NUMBER)
+        self.assertEqual(alt_key.name, 'alt')
+        self.assertEqual(alt_key.data_type, NUMBER)
+        self.assertEqual(alt_key.attr_type, 'HASH')
+
+        self.assertEqual(alt_key.definition(), {
+            'AttributeName': 'alt',
+            'AttributeType': 'N'
+        })
+        self.assertEqual(alt_key.schema(), {
+            'AttributeName': 'alt',
+            'KeyType': 'HASH'
+        })
+
+
+class IndexFieldTestCase(unittest.TestCase):
+    def test_all_index(self):
+        all_index = AllIndex('AllKeys', parts=[
+            HashKey('username'),
+            RangeKey('date_joined')
+        ])
+        self.assertEqual(all_index.name, 'AllKeys')
+        self.assertEqual([part.attr_type for part in all_index.parts], [
+            'HASH',
+            'RANGE'
+        ])
+        self.assertEqual(all_index.projection_type, 'ALL')
+
+        self.assertEqual(all_index.definition(), [
+            {'AttributeName': 'username', 'AttributeType': 'S'},
+            {'AttributeName': 'date_joined', 'AttributeType': 'S'}
+        ])
+        self.assertEqual(all_index.schema(), {
+            'IndexName': 'AllKeys',
+            'KeySchema': [
+                {
+                    'AttributeName': 'username',
+                    'KeyType': 'HASH'
+                },
+                {
+                    'AttributeName': 'date_joined',
+                    'KeyType': 'RANGE'
+                }
+            ],
+            'Projection': {
+                'ProjectionType': 'ALL'
+            }
+        })
+
+    def test_keys_only_index(self):
+        keys_only = KeysOnlyIndex('KeysOnly', parts=[
+            HashKey('username'),
+            RangeKey('date_joined')
+        ])
+        self.assertEqual(keys_only.name, 'KeysOnly')
+        self.assertEqual([part.attr_type for part in keys_only.parts], [
+            'HASH',
+            'RANGE'
+        ])
+        self.assertEqual(keys_only.projection_type, 'KEYS_ONLY')
+
+        self.assertEqual(keys_only.definition(), [
+            {'AttributeName': 'username', 'AttributeType': 'S'},
+            {'AttributeName': 'date_joined', 'AttributeType': 'S'}
+        ])
+        self.assertEqual(keys_only.schema(), {
+            'IndexName': 'KeysOnly',
+            'KeySchema': [
+                {
+                    'AttributeName': 'username',
+                    'KeyType': 'HASH'
+                },
+                {
+                    'AttributeName': 'date_joined',
+                    'KeyType': 'RANGE'
+                }
+            ],
+            'Projection': {
+                'ProjectionType': 'KEYS_ONLY'
+            }
+        })
+
+    def test_include_index(self):
+        include_index = IncludeIndex('IncludeKeys', parts=[
+            HashKey('username'),
+            RangeKey('date_joined')
+        ], includes=[
+            'gender',
+            'friend_count'
+        ])
+        self.assertEqual(include_index.name, 'IncludeKeys')
+        self.assertEqual([part.attr_type for part in include_index.parts], [
+            'HASH',
+            'RANGE'
+        ])
+        self.assertEqual(include_index.projection_type, 'INCLUDE')
+
+        self.assertEqual(include_index.definition(), [
+            {'AttributeName': 'username', 'AttributeType': 'S'},
+            {'AttributeName': 'date_joined', 'AttributeType': 'S'}
+        ])
+        self.assertEqual(include_index.schema(), {
+            'IndexName': 'IncludeKeys',
+            'KeySchema': [
+                {
+                    'AttributeName': 'username',
+                    'KeyType': 'HASH'
+                },
+                {
+                    'AttributeName': 'date_joined',
+                    'KeyType': 'RANGE'
+                }
+            ],
+            'Projection': {
+                'ProjectionType': 'INCLUDE',
+                'NonKeyAttributes': [
+                    'gender',
+                    'friend_count',
+                ]
+            }
+        })
+
+
+class ItemTestCase(unittest.TestCase):
+    def setUp(self):
+        super(ItemTestCase, self).setUp()
+        self.table = Table('whatever', connection=FakeDynamoDBConnection())
+        self.johndoe = self.create_item({
+            'username': 'johndoe',
+            'first_name': 'John',
+            'date_joined': 12345,
+        })
+
+    def create_item(self, data):
+        return Item(self.table, data=data)
+
+    def test_initialization(self):
+        empty_item = Item(self.table)
+        self.assertEqual(empty_item.table, self.table)
+        self.assertEqual(empty_item._data, {})
+
+        full_item = Item(self.table, data={
+            'username': 'johndoe',
+            'date_joined': 12345,
+        })
+        self.assertEqual(full_item.table, self.table)
+        self.assertEqual(full_item._data, {
+            'username': 'johndoe',
+            'date_joined': 12345,
+        })
+
+    # The next couple of methods use ``sorted(...)`` so we get consistent
+    # ordering everywhere and no spurious failures.
+
+    def test_keys(self):
+        self.assertEqual(sorted(self.johndoe.keys()), [
+            'date_joined',
+            'first_name',
+            'username',
+        ])
+
+    def test_values(self):
+        self.assertEqual(sorted(self.johndoe.values()), [
+            12345,
+            'John',
+            'johndoe',
+        ])
+
+    def test_contains(self):
+        self.assertTrue('username' in self.johndoe)
+        self.assertTrue('first_name' in self.johndoe)
+        self.assertTrue('date_joined' in self.johndoe)
+        self.assertFalse('whatever' in self.johndoe)
+
+    def test_iter(self):
+        self.assertEqual(list(self.johndoe), [
+            'johndoe',
+            'John',
+            12345,
+        ])
+
+    def test_get(self):
+        self.assertEqual(self.johndoe.get('username'), 'johndoe')
+        self.assertEqual(self.johndoe.get('first_name'), 'John')
+        self.assertEqual(self.johndoe.get('date_joined'), 12345)
+
+        # Test a missing key. With no default supplied, ``None`` is returned.
+        self.assertEqual(self.johndoe.get('last_name'), None)
+        # This time with a default.
+        self.assertEqual(self.johndoe.get('last_name', True), True)
+
+    def test_items(self):
+        self.assertEqual(sorted(self.johndoe.items()), [
+            ('date_joined', 12345),
+            ('first_name', 'John'),
+            ('username', 'johndoe'),
+        ])
+
+    def test_attribute_access(self):
+        self.assertEqual(self.johndoe['username'], 'johndoe')
+        self.assertEqual(self.johndoe['first_name'], 'John')
+        self.assertEqual(self.johndoe['date_joined'], 12345)
+
+        # Test a missing key.
+        self.assertEqual(self.johndoe['last_name'], None)
+
+        # Set a key.
+        self.johndoe['last_name'] = 'Doe'
+        # Test accessing the new key.
+        self.assertEqual(self.johndoe['last_name'], 'Doe')
+
+        # Delete a key.
+        del self.johndoe['last_name']
+        # Test the now-missing-again key.
+        self.assertEqual(self.johndoe['last_name'], None)
+
+    def test_needs_save(self):
+        self.johndoe.mark_clean()
+        self.assertFalse(self.johndoe.needs_save())
+        self.johndoe['last_name'] = 'Doe'
+        self.assertTrue(self.johndoe.needs_save())
+
+    def test_mark_clean(self):
+        self.johndoe['last_name'] = 'Doe'
+        self.assertTrue(self.johndoe.needs_save())
+        self.johndoe.mark_clean()
+        self.assertFalse(self.johndoe.needs_save())
+
+    def test_load(self):
+        empty_item = Item(self.table)
+        empty_item.load({
+            'Item': {
+                'username': {'S': 'johndoe'},
+                'first_name': {'S': 'John'},
+                'last_name': {'S': 'Doe'},
+                'date_joined': {'N': '1366056668'},
+                'friend_count': {'N': '3'},
+                'friends': {'SS': ['alice', 'bob', 'jane']},
+            }
+        })
+        self.assertEqual(empty_item['username'], 'johndoe')
+        self.assertEqual(empty_item['date_joined'], 1366056668)
+        self.assertEqual(sorted(empty_item['friends']), sorted([
+            'alice',
+            'bob',
+            'jane'
+        ]))
+
+    def test_get_keys(self):
+        # Setup the data.
+        self.table.schema = [
+            HashKey('username'),
+            RangeKey('date_joined'),
+        ]
+        self.assertEqual(self.johndoe.get_keys(), {
+            'username': 'johndoe',
+            'date_joined': 12345,
+        })
+
+    def test_get_raw_keys(self):
+        # Setup the data.
+        self.table.schema = [
+            HashKey('username'),
+            RangeKey('date_joined'),
+        ]
+        self.assertEqual(self.johndoe.get_raw_keys(), {
+            'username': {'S': 'johndoe'},
+            'date_joined': {'N': '12345'},
+        })
+
+    def test_build_expects(self):
+        # Pristine.
+        self.assertEqual(self.johndoe.build_expects(), {
+            'first_name': {
+                'Exists': False,
+            },
+            'username': {
+                'Exists': False,
+            },
+            'date_joined': {
+                'Exists': False,
+            },
+        })
+
+        # Without modifications.
+        self.johndoe.mark_clean()
+        self.assertEqual(self.johndoe.build_expects(), {
+            'first_name': {
+                'Exists': True,
+                'Value': {
+                    'S': 'John',
+                },
+            },
+            'username': {
+                'Exists': True,
+                'Value': {
+                    'S': 'johndoe',
+                },
+            },
+            'date_joined': {
+                'Exists': True,
+                'Value': {
+                    'N': '12345',
+                },
+            },
+        })
+
+        # Change some data.
+        self.johndoe['first_name'] = 'Johann'
+        # Add some data.
+        self.johndoe['last_name'] = 'Doe'
+        # Delete some data.
+        del self.johndoe['date_joined']
+
+        # All fields (default).
+        self.assertEqual(self.johndoe.build_expects(), {
+            'first_name': {
+                'Exists': True,
+                'Value': {
+                    'S': 'John',
+                },
+            },
+            'last_name': {
+                'Exists': False,
+            },
+            'username': {
+                'Exists': True,
+                'Value': {
+                    'S': 'johndoe',
+                },
+            },
+            'date_joined': {
+                'Exists': True,
+                'Value': {
+                    'N': '12345',
+                },
+            },
+        })
+
+        # Only a subset of the fields.
+        self.assertEqual(self.johndoe.build_expects(fields=[
+            'first_name',
+            'last_name',
+            'date_joined',
+        ]), {
+            'first_name': {
+                'Exists': True,
+                'Value': {
+                    'S': 'John',
+                },
+            },
+            'last_name': {
+                'Exists': False,
+            },
+            'date_joined': {
+                'Exists': True,
+                'Value': {
+                    'N': '12345',
+                },
+            },
+        })
+
+    def test_prepare_full(self):
+        self.assertEqual(self.johndoe.prepare_full(), {
+            'username': {'S': 'johndoe'},
+            'first_name': {'S': 'John'},
+            'date_joined': {'N': '12345'}
+        })
+
+    def test_prepare_partial(self):
+        self.johndoe.mark_clean()
+        # Change some data.
+        self.johndoe['first_name'] = 'Johann'
+        # Add some data.
+        self.johndoe['last_name'] = 'Doe'
+        # Delete some data.
+        del self.johndoe['date_joined']
+
+        self.assertEqual(self.johndoe.prepare_partial(), {
+            'date_joined': {
+                'Action': 'DELETE',
+            },
+            'first_name': {
+                'Action': 'PUT',
+                'Value': {'S': 'Johann'},
+            },
+            'last_name': {
+                'Action': 'PUT',
+                'Value': {'S': 'Doe'},
+            },
+        })
+
+    def test_save_no_changes(self):
+        # Unchanged, no save.
+        with mock.patch.object(self.table, '_put_item', return_value=True) \
+                as mock_put_item:
+            # Pretend we loaded it via ``get_item``...
+            self.johndoe.mark_clean()
+            self.assertFalse(self.johndoe.save())
+
+        self.assertFalse(mock_put_item.called)
+
+    def test_save_with_changes(self):
+        # With changed data.
+        with mock.patch.object(self.table, '_put_item', return_value=True) \
+                as mock_put_item:
+            self.johndoe.mark_clean()
+            self.johndoe['first_name'] = 'J'
+            self.johndoe['new_attr'] = 'never_seen_before'
+            self.assertTrue(self.johndoe.save())
+            self.assertFalse(self.johndoe.needs_save())
+
+        self.assertTrue(mock_put_item.called)
+        mock_put_item.assert_called_once_with({
+            'username': {'S': 'johndoe'},
+            'first_name': {'S': 'J'},
+            'new_attr': {'S': 'never_seen_before'},
+            'date_joined': {'N': '12345'}
+        }, expects={
+            'username': {
+                'Value': {
+                    'S': 'johndoe',
+                },
+                'Exists': True,
+            },
+            'first_name': {
+                'Value': {
+                    'S': 'John',
+                },
+                'Exists': True,
+            },
+            'new_attr': {
+                'Exists': False,
+            },
+            'date_joined': {
+                'Value': {
+                    'N': '12345',
+                },
+                'Exists': True,
+            },
+        })
+
+    def test_save_with_changes_overwrite(self):
+        # With changed data.
+        with mock.patch.object(self.table, '_put_item', return_value=True) \
+                as mock_put_item:
+            self.johndoe['first_name'] = 'J'
+            self.johndoe['new_attr'] = 'never_seen_before'
+            # OVERWRITE ALL THE THINGS
+            self.assertTrue(self.johndoe.save(overwrite=True))
+            self.assertFalse(self.johndoe.needs_save())
+
+        self.assertTrue(mock_put_item.called)
+        mock_put_item.assert_called_once_with({
+            'username': {'S': 'johndoe'},
+            'first_name': {'S': 'J'},
+            'new_attr': {'S': 'never_seen_before'},
+            'date_joined': {'N': '12345'}
+        }, expects=None)
+
+    def test_partial_no_changes(self):
+        # Unchanged, no save.
+        with mock.patch.object(self.table, '_update_item', return_value=True) \
+                as mock_update_item:
+            # Pretend we loaded it via ``get_item``...
+            self.johndoe.mark_clean()
+            self.assertFalse(self.johndoe.partial_save())
+
+        self.assertFalse(mock_update_item.called)
+
+    def test_partial_with_changes(self):
+        # Setup the data.
+        self.table.schema = [
+            HashKey('username'),
+        ]
+
+        # With changed data.
+        with mock.patch.object(self.table, '_update_item', return_value=True) \
+                as mock_update_item:
+            # Pretend we loaded it via ``get_item``...
+            self.johndoe.mark_clean()
+            # Now... MODIFY!!!
+            self.johndoe['first_name'] = 'J'
+            self.johndoe['last_name'] = 'Doe'
+            del self.johndoe['date_joined']
+            self.assertTrue(self.johndoe.partial_save())
+            self.assertFalse(self.johndoe.needs_save())
+
+        self.assertTrue(mock_update_item.called)
+        mock_update_item.assert_called_once_with({
+            'username': 'johndoe',
+        }, {
+            'first_name': {
+                'Action': 'PUT',
+                'Value': {'S': 'J'},
+            },
+            'last_name': {
+                'Action': 'PUT',
+                'Value': {'S': 'Doe'},
+            },
+            'date_joined': {
+                'Action': 'DELETE',
+            }
+        }, expects={
+            'first_name': {
+                'Value': {
+                    'S': 'John',
+                },
+                'Exists': True
+            },
+            'last_name': {
+                'Exists': False
+            },
+            'date_joined': {
+                'Value': {
+                    'N': '12345',
+                },
+                'Exists': True
+            },
+        })
+
+    def test_delete(self):
+        # Setup the data.
+        self.table.schema = [
+            HashKey('username'),
+            RangeKey('date_joined'),
+        ]
+
+        with mock.patch.object(self.table, 'delete_item', return_value=True) \
+                as mock_delete_item:
+            self.johndoe.delete()
+
+        self.assertTrue(mock_delete_item.called)
+        mock_delete_item.assert_called_once_with(
+            username='johndoe',
+            date_joined=12345
+        )
+
+
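+# Simulates a paginated data source: up to five results per "page", thirteen
+# in total, with a 'last_key' returned until the data is exhausted. The
+# ``limit`` argument is accepted (ResultSet forwards it) but ignored here.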
+def fake_results(name, greeting='hello', exclusive_start_key=None, limit=None):
+    if exclusive_start_key is None:
+        exclusive_start_key = -1
+
+    end_cap = 13
+    results = []
+    start_key = exclusive_start_key + 1
+
+    for i in range(start_key, start_key + 5):
+        if i < end_cap:
+            results.append("%s %s #%s" % (greeting, name, i))
+
+    retval = {
+        'results': results,
+    }
+
+    if exclusive_start_key + 5 < end_cap:
+        retval['last_key'] = exclusive_start_key + 5
+
+    return retval
+
+
+class ResultSetTestCase(unittest.TestCase):
+    def setUp(self):
+        super(ResultSetTestCase, self).setUp()
+        self.results = ResultSet()
+        self.results.to_call(fake_results, 'john', greeting='Hello', limit=20)
+
+    def test_first_key(self):
+        self.assertEqual(self.results.first_key, 'exclusive_start_key')
+
+    def test_fetch_more(self):
+        # First "page".
+        self.results.fetch_more()
+        self.assertEqual(self.results._results, [
+            'Hello john #0',
+            'Hello john #1',
+            'Hello john #2',
+            'Hello john #3',
+            'Hello john #4',
+        ])
+
+        # Fake in a last key.
+        self.results._last_key_seen = 4
+        # Second "page".
+        self.results.fetch_more()
+        self.assertEqual(self.results._results, [
+            'Hello john #5',
+            'Hello john #6',
+            'Hello john #7',
+            'Hello john #8',
+            'Hello john #9',
+        ])
+
+        # Fake in a last key.
+        self.results._last_key_seen = 9
+        # Last "page".
+        self.results.fetch_more()
+        self.assertEqual(self.results._results, [
+            'Hello john #10',
+            'Hello john #11',
+            'Hello john #12',
+        ])
+
+        # Fake in a key outside the range.
+        self.results._last_key_seen = 15
+        # Empty "page". Nothing new gets added.
+        self.results.fetch_more()
+        self.assertEqual(self.results._results, [])
+
+        # Make sure we won't check for results in the future.
+        self.assertFalse(self.results._results_left)
+
+    def test_iteration(self):
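+        # The remaining ``limit`` forwarded to the callable shrinks as results
+        # are consumed (20 -> 15 -> 10 -> 7).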
+        # First page.
+        self.assertEqual(self.results.next(), 'Hello john #0')
+        self.assertEqual(self.results.next(), 'Hello john #1')
+        self.assertEqual(self.results.next(), 'Hello john #2')
+        self.assertEqual(self.results.next(), 'Hello john #3')
+        self.assertEqual(self.results.next(), 'Hello john #4')
+        self.assertEqual(self.results.call_kwargs['limit'], 15)
+        # Second page.
+        self.assertEqual(self.results.next(), 'Hello john #5')
+        self.assertEqual(self.results.next(), 'Hello john #6')
+        self.assertEqual(self.results.next(), 'Hello john #7')
+        self.assertEqual(self.results.next(), 'Hello john #8')
+        self.assertEqual(self.results.next(), 'Hello john #9')
+        self.assertEqual(self.results.call_kwargs['limit'], 10)
+        # Third page.
+        self.assertEqual(self.results.next(), 'Hello john #10')
+        self.assertEqual(self.results.next(), 'Hello john #11')
+        self.assertEqual(self.results.next(), 'Hello john #12')
+        self.assertRaises(StopIteration, self.results.next)
+        self.assertEqual(self.results.call_kwargs['limit'], 7)
+
+    def test_iteration_noresults(self):
+        def none(limit=10):
+            return {
+                'results': [],
+            }
+
+        results = ResultSet()
+        results.to_call(none, limit=20)
+        self.assertRaises(StopIteration, results.next)
+
+    def test_list(self):
+        self.assertEqual(list(self.results), [
+            'Hello john #0',
+            'Hello john #1',
+            'Hello john #2',
+            'Hello john #3',
+            'Hello john #4',
+            'Hello john #5',
+            'Hello john #6',
+            'Hello john #7',
+            'Hello john #8',
+            'Hello john #9',
+            'Hello john #10',
+            'Hello john #11',
+            'Hello john #12'
+        ])
+
+
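+# Simulates BatchGetItem leaving 'johndoe' unprocessed on the first pass;
+# a follow-up call for just that key then succeeds.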
+def fake_batch_results(keys):
+    results = []
+    simulate_unprocessed = True
+
+    if len(keys) and keys[0] == 'johndoe':
+        simulate_unprocessed = False
+
+    for key in keys:
+        if simulate_unprocessed and key == 'johndoe':
+            continue
+
+        results.append("hello %s" % key)
+
+    retval = {
+        'results': results,
+        'last_key': None,
+    }
+
+    if simulate_unprocessed:
+        retval['unprocessed_keys'] = ['johndoe']
+
+    return retval
+
+
+class BatchGetResultSetTestCase(unittest.TestCase):
+    def setUp(self):
+        super(BatchGetResultSetTestCase, self).setUp()
+        self.results = BatchGetResultSet(keys=[
+            'alice',
+            'bob',
+            'jane',
+            'johndoe',
+        ])
+        self.results.to_call(fake_batch_results)
+
+    def test_fetch_more(self):
+        # First "page".
+        self.results.fetch_more()
+        self.assertEqual(self.results._results, [
+            'hello alice',
+            'hello bob',
+            'hello jane',
+        ])
+        self.assertEqual(self.results._keys_left, ['johndoe'])
+
+        # Second "page".
+        self.results.fetch_more()
+        self.assertEqual(self.results._results, [
+            'hello johndoe',
+        ])
+
+        # Empty "page". Nothing new gets added.
+        self.results.fetch_more()
+        self.assertEqual(self.results._results, [])
+
+        # Make sure we won't check for results in the future.
+        self.assertFalse(self.results._results_left)
+
+    def test_iteration(self):
+        # First page.
+        self.assertEqual(self.results.next(), 'hello alice')
+        self.assertEqual(self.results.next(), 'hello bob')
+        self.assertEqual(self.results.next(), 'hello jane')
+        self.assertEqual(self.results.next(), 'hello johndoe')
+        self.assertRaises(StopIteration, self.results.next)
+
+
+class TableTestCase(unittest.TestCase):
+    def setUp(self):
+        super(TableTestCase, self).setUp()
+        self.users = Table('users', connection=FakeDynamoDBConnection())
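+        # A real, un-mocked connection built with dummy credentials; the tests
+        # below only inspect attributes on it, so no requests are ever sent.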
+        self.default_connection = DynamoDBConnection(
+            aws_access_key_id='access_key',
+            aws_secret_access_key='secret_key'
+        )
+
+    def test__introspect_schema(self):
+        raw_schema_1 = [
+            {
+                "AttributeName": "username",
+                "KeyType": "HASH"
+            },
+            {
+                "AttributeName": "date_joined",
+                "KeyType": "RANGE"
+            }
+        ]
+        schema_1 = self.users._introspect_schema(raw_schema_1)
+        self.assertEqual(len(schema_1), 2)
+        self.assertTrue(isinstance(schema_1[0], HashKey))
+        self.assertEqual(schema_1[0].name, 'username')
+        self.assertTrue(isinstance(schema_1[1], RangeKey))
+        self.assertEqual(schema_1[1].name, 'date_joined')
+
+        raw_schema_2 = [
+            {
+                "AttributeName": "username",
+                "KeyType": "BTREE"
+            },
+        ]
+        self.assertRaises(
+            exceptions.UnknownSchemaFieldError,
+            self.users._introspect_schema,
+            raw_schema_2
+        )
+
+    def test__introspect_indexes(self):
+        raw_indexes_1 = [
+            {
+                "IndexName": "MostRecentlyJoinedIndex",
+                "KeySchema": [
+                    {
+                        "AttributeName": "username",
+                        "KeyType": "HASH"
+                    },
+                    {
+                        "AttributeName": "date_joined",
+                        "KeyType": "RANGE"
+                    }
+                ],
+                "Projection": {
+                    "ProjectionType": "KEYS_ONLY"
+                }
+            },
+            {
+                "IndexName": "EverybodyIndex",
+                "KeySchema": [
+                    {
+                        "AttributeName": "username",
+                        "KeyType": "HASH"
+                    },
+                ],
+                "Projection": {
+                    "ProjectionType": "ALL"
+                }
+            },
+            {
+                "IndexName": "GenderIndex",
+                "KeySchema": [
+                    {
+                        "AttributeName": "username",
+                        "KeyType": "HASH"
+                    },
+                    {
+                        "AttributeName": "date_joined",
+                        "KeyType": "RANGE"
+                    }
+                ],
+                "Projection": {
+                    "ProjectionType": "INCLUDE",
+                    "NonKeyAttributes": [
+                        'gender',
+                    ]
+                }
+            }
+        ]
+        indexes_1 = self.users._introspect_indexes(raw_indexes_1)
+        self.assertEqual(len(indexes_1), 3)
+        self.assertTrue(isinstance(indexes_1[0], KeysOnlyIndex))
+        self.assertEqual(indexes_1[0].name, 'MostRecentlyJoinedIndex')
+        self.assertEqual(len(indexes_1[0].parts), 2)
+        self.assertTrue(isinstance(indexes_1[1], AllIndex))
+        self.assertEqual(indexes_1[1].name, 'EverybodyIndex')
+        self.assertEqual(len(indexes_1[1].parts), 1)
+        self.assertTrue(isinstance(indexes_1[2], IncludeIndex))
+        self.assertEqual(indexes_1[2].name, 'GenderIndex')
+        self.assertEqual(len(indexes_1[2].parts), 2)
+        self.assertEqual(indexes_1[2].includes_fields, ['gender'])
+
+        raw_indexes_2 = [
+            {
+                "IndexName": "MostRecentlyJoinedIndex",
+                "KeySchema": [
+                    {
+                        "AttributeName": "username",
+                        "KeyType": "HASH"
+                    },
+                    {
+                        "AttributeName": "date_joined",
+                        "KeyType": "RANGE"
+                    }
+                ],
+                "Projection": {
+                    "ProjectionType": "SOMETHING_CRAZY"
+                }
+            },
+        ]
+        self.assertRaises(
+            exceptions.UnknownIndexFieldError,
+            self.users._introspect_indexes,
+            raw_indexes_2
+        )
+
+    def test_initialization(self):
+        users = Table('users', connection=self.default_connection)
+        self.assertEqual(users.table_name, 'users')
+        self.assertTrue(isinstance(users.connection, DynamoDBConnection))
+        self.assertEqual(users.throughput['read'], 5)
+        self.assertEqual(users.throughput['write'], 5)
+        self.assertEqual(users.schema, None)
+        self.assertEqual(users.indexes, None)
+
+        groups = Table('groups', connection=FakeDynamoDBConnection())
+        self.assertEqual(groups.table_name, 'groups')
+        self.assertTrue(hasattr(groups.connection, 'assert_called_once_with'))
+
+    def test_create_simple(self):
+        conn = FakeDynamoDBConnection()
+
+        with mock.patch.object(conn, 'create_table', return_value={}) \
+                as mock_create_table:
+            retval = Table.create('users', schema=[
+                HashKey('username'),
+                RangeKey('date_joined', data_type=NUMBER)
+            ], connection=conn)
+            self.assertTrue(retval)
+
+        self.assertTrue(mock_create_table.called)
+        mock_create_table.assert_called_once_with(attribute_definitions=[
+            {
+                'AttributeName': 'username',
+                'AttributeType': 'S'
+            },
+            {
+                'AttributeName': 'date_joined',
+                'AttributeType': 'N'
+            }
+        ],
+        table_name='users',
+        key_schema=[
+            {
+                'KeyType': 'HASH',
+                'AttributeName': 'username'
+            },
+            {
+                'KeyType': 'RANGE',
+                'AttributeName': 'date_joined'
+            }
+        ],
+        provisioned_throughput={
+            'WriteCapacityUnits': 5,
+            'ReadCapacityUnits': 5
+        })
+
+    def test_create_full(self):
+        conn = FakeDynamoDBConnection()
+
+        with mock.patch.object(conn, 'create_table', return_value={}) \
+                as mock_create_table:
+            retval = Table.create('users', schema=[
+                HashKey('username'),
+                RangeKey('date_joined', data_type=NUMBER)
+            ], throughput={
+                'read': 20,
+                'write': 10,
+            }, indexes=[
+                KeysOnlyIndex('FriendCountIndex', parts=[
+                    RangeKey('friend_count')
+                ]),
+            ], connection=conn)
+            self.assertTrue(retval)
+
+        self.assertTrue(mock_create_table.called)
+        mock_create_table.assert_called_once_with(attribute_definitions=[
+            {
+                'AttributeName': 'username',
+                'AttributeType': 'S'
+            },
+            {
+                'AttributeName': 'date_joined',
+                'AttributeType': 'N'
+            },
+            {
+                'AttributeName': 'friend_count',
+                'AttributeType': 'S'
+            }
+        ],
+        key_schema=[
+            {
+                'KeyType': 'HASH',
+                'AttributeName': 'username'
+            },
+            {
+                'KeyType': 'RANGE',
+                'AttributeName': 'date_joined'
+            }
+        ],
+        table_name='users',
+        provisioned_throughput={
+            'WriteCapacityUnits': 10,
+            'ReadCapacityUnits': 20
+        },
+        local_secondary_indexes=[
+            {
+                'KeySchema': [
+                    {
+                        'KeyType': 'RANGE',
+                        'AttributeName': 'friend_count'
+                    }
+                ],
+                'IndexName': 'FriendCountIndex',
+                'Projection': {
+                    'ProjectionType': 'KEYS_ONLY'
+                }
+            }
+        ])
+
+    def test_describe(self):
+        expected = {
+            "Table": {
+                "AttributeDefinitions": [
+                    {
+                        "AttributeName": "username",
+                        "AttributeType": "S"
+                    }
+                ],
+                "ItemCount": 5,
+                "KeySchema": [
+                    {
+                        "AttributeName": "username",
+                        "KeyType": "HASH"
+                    }
+                ],
+                "LocalSecondaryIndexes": [
+                    {
+                        "IndexName": "UsernameIndex",
+                        "KeySchema": [
+                            {
+                                "AttributeName": "username",
+                                "KeyType": "HASH"
+                            }
+                        ],
+                        "Projection": {
+                            "ProjectionType": "KEYS_ONLY"
+                        }
+                    }
+                ],
+                "ProvisionedThroughput": {
+                    "ReadCapacityUnits": 20,
+                    "WriteCapacityUnits": 6
+                },
+                "TableName": "Thread",
+                "TableStatus": "ACTIVE"
+            }
+        }
+
+        with mock.patch.object(
+                self.users.connection,
+                'describe_table',
+                return_value=expected) as mock_describe:
+            self.assertEqual(self.users.throughput['read'], 5)
+            self.assertEqual(self.users.throughput['write'], 5)
+            self.assertEqual(self.users.schema, None)
+            self.assertEqual(self.users.indexes, None)
+
+            self.users.describe()
+
+            self.assertEqual(self.users.throughput['read'], 20)
+            self.assertEqual(self.users.throughput['write'], 6)
+            self.assertEqual(len(self.users.schema), 1)
+            self.assertTrue(isinstance(self.users.schema[0], HashKey))
+            self.assertEqual(len(self.users.indexes), 1)
+
+        mock_describe.assert_called_once_with('users')
+
+    def test_update(self):
+        with mock.patch.object(
+                self.users.connection,
+                'update_table',
+                return_value={}) as mock_update:
+            self.assertEqual(self.users.throughput['read'], 5)
+            self.assertEqual(self.users.throughput['write'], 5)
+            self.users.update(throughput={
+                'read': 7,
+                'write': 2,
+            })
+            self.assertEqual(self.users.throughput['read'], 7)
+            self.assertEqual(self.users.throughput['write'], 2)
+
+        mock_update.assert_called_once_with('users', {
+            'WriteCapacityUnits': 2,
+            'ReadCapacityUnits': 7
+        })
+
+    def test_delete(self):
+        with mock.patch.object(
+                self.users.connection,
+                'delete_table',
+                return_value={}) as mock_delete:
+            self.assertTrue(self.users.delete())
+
+        mock_delete.assert_called_once_with('users')
+
+    def test_get_item(self):
+        expected = {
+            'Item': {
+                'username': {'S': 'johndoe'},
+                'first_name': {'S': 'John'},
+                'last_name': {'S': 'Doe'},
+                'date_joined': {'N': '1366056668'},
+                'friend_count': {'N': '3'},
+                'friends': {'SS': ['alice', 'bob', 'jane']},
+            }
+        }
+
+        with mock.patch.object(
+                self.users.connection,
+                'get_item',
+                return_value=expected) as mock_get_item:
+            item = self.users.get_item(username='johndoe')
+            self.assertEqual(item['username'], 'johndoe')
+            self.assertEqual(item['first_name'], 'John')
+
+        mock_get_item.assert_called_once_with('users', {
+            'username': {'S': 'johndoe'}
+        }, consistent_read=False)
+
+    def test_put_item(self):
+        with mock.patch.object(
+                self.users.connection,
+                'put_item',
+                return_value={}) as mock_put_item:
+            self.users.put_item(data={
+                'username': 'johndoe',
+                'last_name': 'Doe',
+                'date_joined': 12345,
+            })
+
+        mock_put_item.assert_called_once_with('users', {
+            'username': {'S': 'johndoe'},
+            'last_name': {'S': 'Doe'},
+            'date_joined': {'N': '12345'}
+        }, expected={
+            'username': {
+                'Exists': False,
+            },
+            'last_name': {
+                'Exists': False,
+            },
+            'date_joined': {
+                'Exists': False,
+            }
+        })
+
+    def test_private_put_item(self):
+        with mock.patch.object(
+                self.users.connection,
+                'put_item',
+                return_value={}) as mock_put_item:
+            self.users._put_item({'some': 'data'})
+
+        mock_put_item.assert_called_once_with('users', {'some': 'data'})
+
+    def test_private_update_item(self):
+        with mock.patch.object(
+                self.users.connection,
+                'update_item',
+                return_value={}) as mock_update_item:
+            self.users._update_item({
+                'username': 'johndoe'
+            }, {
+                'some': 'data',
+            })
+
+        mock_update_item.assert_called_once_with('users', {
+            'username': {'S': 'johndoe'},
+        }, {
+            'some': 'data',
+        })
+
+    def test_delete_item(self):
+        with mock.patch.object(
+                self.users.connection,
+                'delete_item',
+                return_value={}) as mock_delete_item:
+            self.assertTrue(self.users.delete_item(username='johndoe', date_joined=23456))
+
+        mock_delete_item.assert_called_once_with('users', {
+            'username': {
+                'S': 'johndoe'
+            },
+            'date_joined': {
+                'N': '23456'
+            }
+        })
+
+    def test_get_key_fields_no_schema_populated(self):
+        expected = {
+            "Table": {
+                "AttributeDefinitions": [
+                    {
+                        "AttributeName": "username",
+                        "AttributeType": "S"
+                    },
+                    {
+                        "AttributeName": "date_joined",
+                        "AttributeType": "N"
+                    }
+                ],
+                "ItemCount": 5,
+                "KeySchema": [
+                    {
+                        "AttributeName": "username",
+                        "KeyType": "HASH"
+                    },
+                    {
+                        "AttributeName": "date_joined",
+                        "KeyType": "RANGE"
+                    }
+                ],
+                "LocalSecondaryIndexes": [
+                    {
+                        "IndexName": "UsernameIndex",
+                        "KeySchema": [
+                            {
+                                "AttributeName": "username",
+                                "KeyType": "HASH"
+                            }
+                        ],
+                        "Projection": {
+                            "ProjectionType": "KEYS_ONLY"
+                        }
+                    }
+                ],
+                "ProvisionedThroughput": {
+                    "ReadCapacityUnits": 20,
+                    "WriteCapacityUnits": 6
+                },
+                "TableName": "Thread",
+                "TableStatus": "ACTIVE"
+            }
+        }
+
+        with mock.patch.object(
+                self.users.connection,
+                'describe_table',
+                return_value=expected) as mock_describe:
+            self.assertEqual(self.users.schema, None)
+
+            key_fields = self.users.get_key_fields()
+            self.assertEqual(key_fields, ['username', 'date_joined'])
+
+            self.assertEqual(len(self.users.schema), 2)
+
+        mock_describe.assert_called_once_with('users')
+
+    def test_batch_write_no_writes(self):
+        with mock.patch.object(
+                self.users.connection,
+                'batch_write_item',
+                return_value={}) as mock_batch:
+            with self.users.batch_write() as batch:
+                pass
+
+        self.assertFalse(mock_batch.called)
+
+    def test_batch_write(self):
+        with mock.patch.object(
+                self.users.connection,
+                'batch_write_item',
+                return_value={}) as mock_batch:
+            with self.users.batch_write() as batch:
+                batch.put_item(data={
+                    'username': 'jane',
+                    'date_joined': 12342547
+                })
+                batch.delete_item(username='johndoe')
+                batch.put_item(data={
+                    'username': 'alice',
+                    'date_joined': 12342888
+                })
+
+        mock_batch.assert_called_once_with({
+            'users': [
+                {
+                    'PutRequest': {
+                        'Item': {
+                            'username': {'S': 'jane'},
+                            'date_joined': {'N': '12342547'}
+                        }
+                    }
+                },
+                {
+                    'PutRequest': {
+                        'Item': {
+                            'username': {'S': 'alice'},
+                            'date_joined': {'N': '12342888'}
+                        }
+                    }
+                },
+                {
+                    'DeleteRequest': {
+                        'Key': {
+                            'username': {'S': 'johndoe'},
+                        }
+                    }
+                },
+            ]
+        })
+
+    def test_batch_write_dont_swallow_exceptions(self):
+        with mock.patch.object(
+                self.users.connection,
+                'batch_write_item',
+                return_value={}) as mock_batch:
+            try:
+                with self.users.batch_write() as batch:
+                    raise Exception('OH NOES')
+            except Exception as e:
+                self.assertEqual(str(e), 'OH NOES')
+
+        self.assertFalse(mock_batch.called)
+
+    def test_batch_write_flushing(self):
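+        # batch_write buffers requests and flushes automatically once 25 are
+        # queued (the BatchWriteItem per-call limit).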
+        with mock.patch.object(
+                self.users.connection,
+                'batch_write_item',
+                return_value={}) as mock_batch:
+            with self.users.batch_write() as batch:
+                batch.put_item(data={
+                    'username': 'jane',
+                    'date_joined': 12342547
+                })
+                # Together with the put above, these still fit in a single batch.
+                batch.delete_item(username='johndoe1')
+                batch.delete_item(username='johndoe2')
+                batch.delete_item(username='johndoe3')
+                batch.delete_item(username='johndoe4')
+                batch.delete_item(username='johndoe5')
+                batch.delete_item(username='johndoe6')
+                batch.delete_item(username='johndoe7')
+                batch.delete_item(username='johndoe8')
+                batch.delete_item(username='johndoe9')
+                batch.delete_item(username='johndoe10')
+                batch.delete_item(username='johndoe11')
+                batch.delete_item(username='johndoe12')
+                batch.delete_item(username='johndoe13')
+                batch.delete_item(username='johndoe14')
+                batch.delete_item(username='johndoe15')
+                batch.delete_item(username='johndoe16')
+                batch.delete_item(username='johndoe17')
+                batch.delete_item(username='johndoe18')
+                batch.delete_item(username='johndoe19')
+                batch.delete_item(username='johndoe20')
+                batch.delete_item(username='johndoe21')
+                batch.delete_item(username='johndoe22')
+                batch.delete_item(username='johndoe23')
+
+                # We're only at 24 items. No flushing yet.
+                self.assertEqual(mock_batch.call_count, 0)
+
+                # This pushes it over the edge. A flush happens, then we start
+                # queuing objects again.
+                batch.delete_item(username='johndoe24')
+                self.assertEqual(mock_batch.call_count, 1)
+                # The one we add next stays queued, so exiting the context
+                # manager triggers a second flush.
+                batch.delete_item(username='johndoe25')
+
+        self.assertEqual(mock_batch.call_count, 2)
+
+    def test__build_filters(self):
+        filters = self.users._build_filters({
+            'username__eq': 'johndoe',
+            'date_joined__gte': 1234567,
+            'age__in': [30, 31, 32, 33],
+            'last_name__between': ['danzig', 'only'],
+            'first_name__null': False,
+            'gender__null': True,
+        }, using=FILTER_OPERATORS)
+        self.assertEqual(filters, {
+            'username': {
+                'AttributeValueList': [
+                    {
+                        'S': 'johndoe',
+                    },
+                ],
+                'ComparisonOperator': 'EQ',
+            },
+            'date_joined': {
+                'AttributeValueList': [
+                    {
+                        'N': '1234567',
+                    },
+                ],
+                'ComparisonOperator': 'GE',
+            },
+            'age': {
+                'AttributeValueList': [{'NS': ['32', '33', '30', '31']}],
+                'ComparisonOperator': 'IN',
+            },
+            'last_name': {
+                'AttributeValueList': [{'S': 'danzig'}, {'S': 'only'}],
+                'ComparisonOperator': 'BETWEEN',
+            },
+            'first_name': {
+                'ComparisonOperator': 'NOT_NULL'
+            },
+            'gender': {
+                'ComparisonOperator': 'NULL'
+            },
+        })
+
+        self.assertRaises(exceptions.UnknownFilterTypeError,
+            self.users._build_filters,
+            {
+                'darling__die': True,
+            }
+        )
+
+        q_filters = self.users._build_filters({
+            'username__eq': 'johndoe',
+            'date_joined__gte': 1234567,
+            'last_name__between': ['danzig', 'only'],
+            'gender__beginswith': 'm',
+        }, using=QUERY_OPERATORS)
+        self.assertEqual(q_filters, {
+            'username': {
+                'AttributeValueList': [
+                    {
+                        'S': 'johndoe',
+                    },
+                ],
+                'ComparisonOperator': 'EQ',
+            },
+            'date_joined': {
+                'AttributeValueList': [
+                    {
+                        'N': '1234567',
+                    },
+                ],
+                'ComparisonOperator': 'GE',
+            },
+            'last_name': {
+                'AttributeValueList': [{'S': 'danzig'}, {'S': 'only'}],
+                'ComparisonOperator': 'BETWEEN',
+            },
+            'gender': {
+                'AttributeValueList': [{'S': 'm'}],
+                'ComparisonOperator': 'BEGINS_WITH',
+            },
+        })
+
+        self.assertRaises(exceptions.UnknownFilterTypeError,
+            self.users._build_filters,
+            {
+                'darling__die': True,
+            },
+            using=QUERY_OPERATORS
+        )
+        self.assertRaises(exceptions.UnknownFilterTypeError,
+            self.users._build_filters,
+            {
+                'first_name__null': True,
+            },
+            using=QUERY_OPERATORS
+        )
+
+    def test_private_query(self):
+        expected = {
+            "ConsumedCapacity": {
+                "CapacityUnits": 0.5,
+                "TableName": "users"
+            },
+            "Count": 4,
+            "Items": [
+                {
+                    'username': {'S': 'johndoe'},
+                    'first_name': {'S': 'John'},
+                    'last_name': {'S': 'Doe'},
+                    'date_joined': {'N': '1366056668'},
+                    'friend_count': {'N': '3'},
+                    'friends': {'SS': ['alice', 'bob', 'jane']},
+                },
+                {
+                    'username': {'S': 'jane'},
+                    'first_name': {'S': 'Jane'},
+                    'last_name': {'S': 'Doe'},
+                    'date_joined': {'N': '1366057777'},
+                    'friend_count': {'N': '2'},
+                    'friends': {'SS': ['alice', 'johndoe']},
+                },
+                {
+                    'username': {'S': 'alice'},
+                    'first_name': {'S': 'Alice'},
+                    'last_name': {'S': 'Expert'},
+                    'date_joined': {'N': '1366056680'},
+                    'friend_count': {'N': '1'},
+                    'friends': {'SS': ['jane']},
+                },
+                {
+                    'username': {'S': 'bob'},
+                    'first_name': {'S': 'Bob'},
+                    'last_name': {'S': 'Smith'},
+                    'date_joined': {'N': '1366056888'},
+                    'friend_count': {'N': '1'},
+                    'friends': {'SS': ['johndoe']},
+                },
+            ],
+            "ScannedCount": 4
+        }
+
+        with mock.patch.object(
+                self.users.connection,
+                'query',
+                return_value=expected) as mock_query:
+            results = self.users._query(
+                limit=4,
+                reverse=True,
+                username__between=['aaa', 'mmm']
+            )
+            usernames = [res['username'] for res in results['results']]
+            self.assertEqual(usernames, ['johndoe', 'jane', 'alice', 'bob'])
+            self.assertEqual(len(results['results']), 4)
+            self.assertEqual(results['last_key'], None)
+
+        mock_query.assert_called_once_with('users',
+            consistent_read=False,
+            index_name=None,
+            scan_index_forward=True,
+            limit=4,
+            key_conditions={
+                'username': {
+                    'AttributeValueList': [{'S': 'aaa'}, {'S': 'mmm'}],
+                    'ComparisonOperator': 'BETWEEN',
+                }
+            }
+        )
+
+        # Now alter the expected.
+        expected['LastEvaluatedKey'] = {
+            'username': {
+                'S': 'johndoe',
+            },
+        }
+
+        with mock.patch.object(
+                self.users.connection,
+                'query',
+                return_value=expected) as mock_query_2:
+            results = self.users._query(
+                limit=4,
+                reverse=True,
+                username__between=['aaa', 'mmm'],
+                exclusive_start_key={
+                    'username': 'adam',
+                },
+                consistent=True
+            )
+            usernames = [res['username'] for res in results['results']]
+            self.assertEqual(usernames, ['johndoe', 'jane', 'alice', 'bob'])
+            self.assertEqual(len(results['results']), 4)
+            self.assertEqual(results['last_key'], {'username': 'johndoe'})
+
+        mock_query_2.assert_called_once_with('users',
+            key_conditions={
+                'username': {
+                    'AttributeValueList': [{'S': 'aaa'}, {'S': 'mmm'}],
+                    'ComparisonOperator': 'BETWEEN',
+                }
+            },
+            index_name=None,
+            scan_index_forward=True,
+            limit=4,
+            exclusive_start_key={
+                'username': {
+                    'S': 'adam',
+                },
+            },
+            consistent_read=True
+        )
+
+    def test_private_scan(self):
+        expected = {
+            "ConsumedCapacity": {
+                "CapacityUnits": 0.5,
+                "TableName": "users"
+            },
+            "Count": 4,
+            "Items": [
+                {
+                    'username': {'S': 'alice'},
+                    'first_name': {'S': 'Alice'},
+                    'last_name': {'S': 'Expert'},
+                    'date_joined': {'N': '1366056680'},
+                    'friend_count': {'N': '1'},
+                    'friends': {'SS': ['jane']},
+                },
+                {
+                    'username': {'S': 'bob'},
+                    'first_name': {'S': 'Bob'},
+                    'last_name': {'S': 'Smith'},
+                    'date_joined': {'N': '1366056888'},
+                    'friend_count': {'N': '1'},
+                    'friends': {'SS': ['johndoe']},
+                },
+                {
+                    'username': {'S': 'jane'},
+                    'first_name': {'S': 'Jane'},
+                    'last_name': {'S': 'Doe'},
+                    'date_joined': {'N': '1366057777'},
+                    'friend_count': {'N': '2'},
+                    'friends': {'SS': ['alice', 'johndoe']},
+                },
+            ],
+            "ScannedCount": 4
+        }
+
+        with mock.patch.object(
+                self.users.connection,
+                'scan',
+                return_value=expected) as mock_scan:
+            results = self.users._scan(
+                limit=2,
+                friend_count__lte=2
+            )
+            usernames = [res['username'] for res in results['results']]
+            self.assertEqual(usernames, ['alice', 'bob', 'jane'])
+            self.assertEqual(len(results['results']), 3)
+            self.assertEqual(results['last_key'], None)
+
+        mock_scan.assert_called_once_with('users',
+            scan_filter={
+                'friend_count': {
+                    'AttributeValueList': [{'N': '2'}],
+                    'ComparisonOperator': 'LE',
+                }
+            },
+            limit=2,
+            segment=None,
+            total_segments=None
+        )
+
+        # Now alter the expected.
+        expected['LastEvaluatedKey'] = {
+            'username': {
+                'S': 'jane',
+            },
+        }
+
+        with mock.patch.object(
+                self.users.connection,
+                'scan',
+                return_value=expected) as mock_scan_2:
+            results = self.users._scan(
+                limit=3,
+                friend_count__lte=2,
+                exclusive_start_key={
+                    'username': 'adam',
+                },
+                segment=None,
+                total_segments=None
+            )
+            usernames = [res['username'] for res in results['results']]
+            self.assertEqual(usernames, ['alice', 'bob', 'jane'])
+            self.assertEqual(len(results['results']), 3)
+            self.assertEqual(results['last_key'], {'username': 'jane'})
+
+        mock_scan_2.assert_called_once_with('users',
+            scan_filter={
+                'friend_count': {
+                    'AttributeValueList': [{'N': '2'}],
+                    'ComparisonOperator': 'LE',
+                }
+            },
+            limit=3,
+            exclusive_start_key={
+                'username': {
+                    'S': 'adam',
+                },
+            },
+            segment=None,
+            total_segments=None
+        )
+
+    def test_query(self):
+        items_1 = {
+            'results': [
+                Item(self.users, data={
+                    'username': 'johndoe',
+                    'first_name': 'John',
+                    'last_name': 'Doe',
+                }),
+                Item(self.users, data={
+                    'username': 'jane',
+                    'first_name': 'Jane',
+                    'last_name': 'Doe',
+                }),
+            ],
+            'last_key': 'jane',
+        }
+
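+        # query() should hand back a lazy ResultSet bound to _query; no request
+        # is made until iteration begins.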
+        results = self.users.query(last_name__eq='Doe')
+        self.assertTrue(isinstance(results, ResultSet))
+        self.assertEqual(len(results._results), 0)
+        self.assertEqual(results.the_callable, self.users._query)
+
+        with mock.patch.object(
+                results,
+                'the_callable',
+                return_value=items_1) as mock_query:
+            res_1 = results.next()
+            # Now it should be populated.
+            self.assertEqual(len(results._results), 2)
+            self.assertEqual(res_1['username'], 'johndoe')
+            res_2 = results.next()
+            self.assertEqual(res_2['username'], 'jane')
+
+        self.assertEqual(mock_query.call_count, 1)
+
+        items_2 = {
+            'results': [
+                Item(self.users, data={
+                    'username': 'foodoe',
+                    'first_name': 'Foo',
+                    'last_name': 'Doe',
+                }),
+            ],
+        }
+
+        with mock.patch.object(
+                results,
+                'the_callable',
+                return_value=items_2) as mock_query_2:
+            res_3 = results.next()
+            # New results should have been found.
+            self.assertEqual(len(results._results), 1)
+            self.assertEqual(res_3['username'], 'foodoe')
+
+            self.assertRaises(StopIteration, results.next)
+
+        self.assertEqual(mock_query_2.call_count, 1)
+
+    def test_scan(self):
+        items_1 = {
+            'results': [
+                Item(self.users, data={
+                    'username': 'johndoe',
+                    'first_name': 'John',
+                    'last_name': 'Doe',
+                }),
+                Item(self.users, data={
+                    'username': 'jane',
+                    'first_name': 'Jane',
+                    'last_name': 'Doe',
+                }),
+            ],
+            'last_key': 'jane',
+        }
+
+        results = self.users.scan(last_name__eq='Doe')
+        self.assertTrue(isinstance(results, ResultSet))
+        self.assertEqual(len(results._results), 0)
+        self.assertEqual(results.the_callable, self.users._scan)
+
+        with mock.patch.object(
+                results,
+                'the_callable',
+                return_value=items_1) as mock_scan:
+            res_1 = results.next()
+            # Now it should be populated.
+            self.assertEqual(len(results._results), 2)
+            self.assertEqual(res_1['username'], 'johndoe')
+            res_2 = results.next()
+            self.assertEqual(res_2['username'], 'jane')
+
+        self.assertEqual(mock_scan.call_count, 1)
+
+        items_2 = {
+            'results': [
+                Item(self.users, data={
+                    'username': 'zoeydoe',
+                    'first_name': 'Zoey',
+                    'last_name': 'Doe',
+                }),
+            ],
+        }
+
+        with mock.patch.object(
+                results,
+                'the_callable',
+                return_value=items_2) as mock_scan_2:
+            res_3 = results.next()
+            # New results should have been found.
+            self.assertEqual(len(results._results), 1)
+            self.assertEqual(res_3['username'], 'zoeydoe')
+
+            self.assertRaises(StopIteration, results.next)
+
+        self.assertEqual(mock_scan_2.call_count, 1)
+
+    def test_count(self):
+        expected = {
+            "Table": {
+                "AttributeDefinitions": [
+                    {
+                        "AttributeName": "username",
+                        "AttributeType": "S"
+                    }
+                ],
+                "ItemCount": 5,
+                "KeySchema": [
+                    {
+                        "AttributeName": "username",
+                        "KeyType": "HASH"
+                    }
+                ],
+                "LocalSecondaryIndexes": [
+                    {
+                        "IndexName": "UsernameIndex",
+                        "KeySchema": [
+                            {
+                                "AttributeName": "username",
+                                "KeyType": "HASH"
+                            }
+                        ],
+                        "Projection": {
+                            "ProjectionType": "KEYS_ONLY"
+                        }
+                    }
+                ],
+                "ProvisionedThroughput": {
+                    "ReadCapacityUnits": 20,
+                    "WriteCapacityUnits": 6
+                },
+                "TableName": "Thread",
+                "TableStatus": "ACTIVE"
+            }
+        }
+
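+        # count() is expected to proxy describe() and return the table's ItemCount.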
+        with mock.patch.object(
+                self.users,
+                'describe',
+                return_value=expected) as mock_count:
+            self.assertEqual(self.users.count(), 5)
+
+    def test_private_batch_get(self):
+        expected = {
+            "ConsumedCapacity": {
+                "CapacityUnits": 0.5,
+                "TableName": "users"
+            },
+            'Responses': {
+                'users': [
+                    {
+                        'username': {'S': 'alice'},
+                        'first_name': {'S': 'Alice'},
+                        'last_name': {'S': 'Expert'},
+                        'date_joined': {'N': '1366056680'},
+                        'friend_count': {'N': '1'},
+                        'friends': {'SS': ['jane']},
+                    },
+                    {
+                        'username': {'S': 'bob'},
+                        'first_name': {'S': 'Bob'},
+                        'last_name': {'S': 'Smith'},
+                        'date_joined': {'N': '1366056888'},
+                        'friend_count': {'N': '1'},
+                        'friends': {'SS': ['johndoe']},
+                    },
+                    {
+                        'username': {'S': 'jane'},
+                        'first_name': {'S': 'Jane'},
+                        'last_name': {'S': 'Doe'},
+                        'date_joined': {'N': '1366057777'},
+                        'friend_count': {'N': '2'},
+                        'friends': {'SS': ['alice', 'johndoe']},
+                    },
+                ],
+            },
+            "UnprocessedKeys": {
+            },
+        }
+
+        with mock.patch.object(
+                self.users.connection,
+                'batch_get_item',
+                return_value=expected) as mock_batch_get:
+            results = self.users._batch_get(keys=[
+                {'username': 'alice', 'friend_count': 1},
+                {'username': 'bob', 'friend_count': 1},
+                {'username': 'jane'},
+            ])
+            usernames = [res['username'] for res in results['results']]
+            self.assertEqual(usernames, ['alice', 'bob', 'jane'])
+            self.assertEqual(len(results['results']), 3)
+            self.assertEqual(results['last_key'], None)
+            self.assertEqual(results['unprocessed_keys'], [])
+
+        mock_batch_get.assert_called_once_with(request_items={
+            'users': {
+                'Keys': [
+                    {
+                        'username': {'S': 'alice'},
+                        'friend_count': {'N': '1'}
+                    },
+                    {
+                        'username': {'S': 'bob'},
+                        'friend_count': {'N': '1'}
+                    },
+                    {
+                        'username': {'S': 'jane'},
+                    }
+                ]
+            }
+        })
+
+        # Now alter the expected.
+        del expected['Responses']['users'][2]
+        expected['UnprocessedKeys'] = {
+            'Keys': [
+                {'username': {'S': 'jane'}},
+            ],
+        }
+
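+        # The second call reports 'jane' as unprocessed; _batch_get should hand
+        # the key back as a plain dict in results['unprocessed_keys'].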
+        with mock.patch.object(
+                self.users.connection,
+                'batch_get_item',
+                return_value=expected) as mock_batch_get_2:
+            results = self.users._batch_get(keys=[
+                {'username': 'alice', 'friend_count': 1},
+                {'username': 'bob', 'friend_count': 1},
+                {'username': 'jane'},
+            ])
+            usernames = [res['username'] for res in results['results']]
+            self.assertEqual(usernames, ['alice', 'bob'])
+            self.assertEqual(len(results['results']), 2)
+            self.assertEqual(results['last_key'], None)
+            self.assertEqual(results['unprocessed_keys'], [
+                {'username': 'jane'}
+            ])
+
+        mock_batch_get_2.assert_called_once_with(request_items={
+            'users': {
+                'Keys': [
+                    {
+                        'username': {'S': 'alice'},
+                        'friend_count': {'N': '1'}
+                    },
+                    {
+                        'username': {'S': 'bob'},
+                        'friend_count': {'N': '1'}
+                    },
+                    {
+                        'username': {'S': 'jane'},
+                    }
+                ]
+            }
+        })
+
+    def test_batch_get(self):
+        items_1 = {
+            'results': [
+                Item(self.users, data={
+                    'username': 'johndoe',
+                    'first_name': 'John',
+                    'last_name': 'Doe',
+                }),
+                Item(self.users, data={
+                    'username': 'jane',
+                    'first_name': 'Jane',
+                    'last_name': 'Doe',
+                }),
+            ],
+            'last_key': None,
+            'unprocessed_keys': [
+                'zoeydoe',
+            ]
+        }
+
+        results = self.users.batch_get(keys=[
+            {'username': 'johndoe'},
+            {'username': 'jane'},
+            {'username': 'zoeydoe'},
+        ])
+        self.assertTrue(isinstance(results, BatchGetResultSet))
+        self.assertEqual(len(results._results), 0)
+        self.assertEqual(results.the_callable, self.users._batch_get)
+
+        with mock.patch.object(
+                results,
+                'the_callable',
+                return_value=items_1) as mock_batch_get:
+            res_1 = results.next()
+            # Now it should be populated.
+            self.assertEqual(len(results._results), 2)
+            self.assertEqual(res_1['username'], 'johndoe')
+            res_2 = results.next()
+            self.assertEqual(res_2['username'], 'jane')
+
+        self.assertEqual(mock_batch_get.call_count, 1)
+        self.assertEqual(results._keys_left, ['zoeydoe'])
+
+        items_2 = {
+            'results': [
+                Item(self.users, data={
+                    'username': 'zoeydoe',
+                    'first_name': 'Zoey',
+                    'last_name': 'Doe',
+                }),
+            ],
+        }
+
+        with mock.patch.object(
+                results,
+                'the_callable',
+                return_value=items_2) as mock_batch_get_2:
+            res_3 = results.next()
+            # New results should have been found.
+            self.assertEqual(len(results._results), 1)
+            self.assertEqual(res_3['username'], 'zoeydoe')
+
+            self.assertRaises(StopIteration, results.next)
+
+        self.assertEqual(mock_batch_get_2.call_count, 1)
+        self.assertEqual(results._keys_left, [])
diff --git a/tests/unit/ec2/autoscale/__init__.py b/tests/unit/ec2/autoscale/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/unit/ec2/autoscale/__init__.py
diff --git a/tests/unit/ec2/autoscale/test_group.py b/tests/unit/ec2/autoscale/test_group.py
new file mode 100644
index 0000000..2894154
--- /dev/null
+++ b/tests/unit/ec2/autoscale/test_group.py
@@ -0,0 +1,200 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from datetime import datetime
+
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+
+from boto.ec2.autoscale import AutoScaleConnection
+from boto.ec2.autoscale.group import AutoScalingGroup
+
+
+class TestAutoScaleGroup(AWSMockServiceTestCase):
+    connection_class = AutoScaleConnection
+
+    def setUp(self):
+        super(TestAutoScaleGroup, self).setUp()
+
+    def default_body(self):
+        return """
+            <CreateLaunchConfigurationResponse>
+              <ResponseMetadata>
+                <RequestId>requestid</RequestId>
+              </ResponseMetadata>
+            </CreateLaunchConfigurationResponse>
+        """
+
+    def test_autoscaling_group_with_termination_policies(self):
+        self.set_http_response(status_code=200)
+        autoscale = AutoScalingGroup(
+            name='foo', launch_config='launch_config',
+            min_size=1, max_size=2,
+            termination_policies=['OldestInstance', 'OldestLaunchConfiguration'])
+        self.service_connection.create_auto_scaling_group(autoscale)
+        self.assert_request_parameters({
+            'Action': 'CreateAutoScalingGroup',
+            'AutoScalingGroupName': 'foo',
+            'LaunchConfigurationName': 'launch_config',
+            'MaxSize': 2,
+            'MinSize': 1,
+            'TerminationPolicies.member.1': 'OldestInstance',
+            'TerminationPolicies.member.2': 'OldestLaunchConfiguration',
+        }, ignore_params_values=['Version'])
+
+
+class TestScheduledGroup(AWSMockServiceTestCase):
+    connection_class = AutoScaleConnection
+
+    def setUp(self):
+        super(TestScheduledGroup, self).setUp()
+
+    def default_body(self):
+        return """
+            <PutScheduledUpdateGroupActionResponse>
+                <ResponseMetadata>
+                  <RequestId>requestid</RequestId>
+                </ResponseMetadata>
+            </PutScheduledUpdateGroupActionResponse>
+        """
+
+    def test_scheduled_group_creation(self):
+        self.set_http_response(status_code=200)
+        self.service_connection.create_scheduled_group_action('foo',
+                                                              'scheduled-foo',
+                                                              desired_capacity=1,
+                                                              start_time=datetime(2013, 1, 1, 22, 55, 31),
+                                                              end_time=datetime(2013, 2, 1, 22, 55, 31),
+                                                              min_size=1,
+                                                              max_size=2,
+                                                              recurrence='0 10 * * *')
+        self.assert_request_parameters({
+            'Action': 'PutScheduledUpdateGroupAction',
+            'AutoScalingGroupName': 'foo',
+            'ScheduledActionName': 'scheduled-foo',
+            'MaxSize': 2,
+            'MinSize': 1,
+            'DesiredCapacity': 1,
+            'EndTime': '2013-02-01T22:55:31',
+            'StartTime': '2013-01-01T22:55:31',
+            'Recurrence': '0 10 * * *',
+        }, ignore_params_values=['Version'])
+
+
+class TestParseAutoScaleGroupResponse(AWSMockServiceTestCase):
+    connection_class = AutoScaleConnection
+
+    def default_body(self):
+        return """
+          <DescribeAutoScalingGroupsResult>
+             <AutoScalingGroups>
+               <member>
+                 <Tags/>
+                 <SuspendedProcesses/>
+                 <AutoScalingGroupName>test_group</AutoScalingGroupName>
+                 <HealthCheckType>EC2</HealthCheckType>
+                 <CreatedTime>2012-09-27T20:19:47.082Z</CreatedTime>
+                 <EnabledMetrics/>
+                 <LaunchConfigurationName>test_launchconfig</LaunchConfigurationName>
+                 <Instances>
+                   <member>
+                     <HealthStatus>Healthy</HealthStatus>
+                     <AvailabilityZone>us-east-1a</AvailabilityZone>
+                     <InstanceId>i-z118d054</InstanceId>
+                     <LaunchConfigurationName>test_launchconfig</LaunchConfigurationName>
+                     <LifecycleState>InService</LifecycleState>
+                   </member>
+                 </Instances>
+                 <DesiredCapacity>1</DesiredCapacity>
+                 <AvailabilityZones>
+                   <member>us-east-1c</member>
+                   <member>us-east-1a</member>
+                 </AvailabilityZones>
+                 <LoadBalancerNames/>
+                 <MinSize>1</MinSize>
+                 <VPCZoneIdentifier/>
+                 <HealthCheckGracePeriod>0</HealthCheckGracePeriod>
+                 <DefaultCooldown>300</DefaultCooldown>
+                 <AutoScalingGroupARN>myarn</AutoScalingGroupARN>
+                 <TerminationPolicies>
+                   <member>OldestInstance</member>
+                   <member>OldestLaunchConfiguration</member>
+                 </TerminationPolicies>
+                 <MaxSize>2</MaxSize>
+               </member>
+             </AutoScalingGroups>
+          </DescribeAutoScalingGroupsResult>
+        """
+
+    def test_get_all_groups_is_parsed_correctly(self):
+        self.set_http_response(status_code=200)
+        response = self.service_connection.get_all_groups(names=['test_group'])
+        self.assertEqual(len(response), 1, response)
+        as_group = response[0]
+        self.assertEqual(as_group.availability_zones, ['us-east-1c', 'us-east-1a'])
+        self.assertEqual(as_group.default_cooldown, 300)
+        self.assertEqual(as_group.desired_capacity, 1)
+        self.assertEqual(as_group.enabled_metrics, [])
+        self.assertEqual(as_group.health_check_period, 0)
+        self.assertEqual(as_group.health_check_type, 'EC2')
+        self.assertEqual(as_group.launch_config_name, 'test_launchconfig')
+        self.assertEqual(as_group.load_balancers, [])
+        self.assertEqual(as_group.min_size, 1)
+        self.assertEqual(as_group.max_size, 2)
+        self.assertEqual(as_group.name, 'test_group')
+        self.assertEqual(as_group.suspended_processes, [])
+        self.assertEqual(as_group.tags, [])
+        self.assertEqual(as_group.termination_policies,
+                         ['OldestInstance', 'OldestLaunchConfiguration'])
+
+
+class TestDescribeTerminationPolicies(AWSMockServiceTestCase):
+    connection_class = AutoScaleConnection
+
+    def default_body(self):
+        return """
+          <DescribeTerminationPolicyTypesResponse>
+            <DescribeTerminationPolicyTypesResult>
+              <TerminationPolicyTypes>
+                <member>ClosestToNextInstanceHour</member>
+                <member>Default</member>
+                <member>NewestInstance</member>
+                <member>OldestInstance</member>
+                <member>OldestLaunchConfiguration</member>
+              </TerminationPolicyTypes>
+            </DescribeTerminationPolicyTypesResult>
+            <ResponseMetadata>
+              <RequestId>requestid</RequestId>
+            </ResponseMetadata>
+          </DescribeTerminationPolicyTypesResponse>
+        """
+
+    def test_autoscaling_group_with_termination_policies(self):
+        self.set_http_response(status_code=200)
+        response = self.service_connection.get_termination_policies()
+        self.assertListEqual(
+            response,
+            ['ClosestToNextInstanceHour', 'Default',
+             'NewestInstance', 'OldestInstance', 'OldestLaunchConfiguration'])
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/ec2/cloudwatch/__init__.py b/tests/unit/ec2/cloudwatch/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/unit/ec2/cloudwatch/__init__.py
diff --git a/tests/unit/ec2/cloudwatch/test_connection.py b/tests/unit/ec2/cloudwatch/test_connection.py
new file mode 100644
index 0000000..d92669d
--- /dev/null
+++ b/tests/unit/ec2/cloudwatch/test_connection.py
@@ -0,0 +1,98 @@
+#!/usr/bin/env python
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import datetime
+
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+
+from boto.ec2.cloudwatch import CloudWatchConnection
+
+
+class TestCloudWatchConnection(AWSMockServiceTestCase):
+
+    connection_class = CloudWatchConnection
+
+    def test_build_put_params_multiple_everything(self):
+        # This dictionary gets modified by the method call.
+        # Check to make sure all updates happen appropriately.
+        params = {}
+        # Again, these are rubbish parameters. Pay them no mind; we care more
+        # about the functionality of the method.
+        name = ['whatever', 'goeshere']
+        value = None
+        timestamp = [
+            datetime.datetime(2013, 5, 13, 9, 2, 35),
+            datetime.datetime(2013, 5, 12, 9, 2, 35),
+        ]
+        unit = ['lbs', 'ft']
+        dimensions = None
+        statistics = [
+            {
+                'maximum': 5,
+                'minimum': 1,
+                'samplecount': 3,
+                'sum': 7,
+            },
+            {
+                'maximum': 6,
+                'minimum': 2,
+                'samplecount': 4,
+                'sum': 5,
+            },
+        ]
+
+        # The important part is that this shouldn't generate a warning (due
+        # to overwriting a variable) & should have the correct number of
+        # Metrics (2).
+        self.service_connection.build_put_params(
+            params,
+            name=name,
+            value=value,
+            timestamp=timestamp,
+            unit=unit,
+            dimensions=dimensions,
+            statistics=statistics
+        )
+
+        self.assertEqual(params, {
+            'MetricData.member.1.MetricName': 'whatever',
+            'MetricData.member.1.StatisticValues.Maximum': 5,
+            'MetricData.member.1.StatisticValues.Minimum': 1,
+            'MetricData.member.1.StatisticValues.SampleCount': 3,
+            'MetricData.member.1.StatisticValues.Sum': 7,
+            'MetricData.member.1.Timestamp': '2013-05-13T09:02:35',
+            'MetricData.member.1.Unit': 'lbs',
+            'MetricData.member.2.MetricName': 'goeshere',
+            'MetricData.member.2.StatisticValues.Maximum': 6,
+            'MetricData.member.2.StatisticValues.Minimum': 2,
+            'MetricData.member.2.StatisticValues.SampleCount': 4,
+            'MetricData.member.2.StatisticValues.Sum': 5,
+            'MetricData.member.2.Timestamp': '2013-05-12T09:02:35',
+            # If needed, comment out the next line to force a test failure and
+            # surface the logging warning.
+            'MetricData.member.2.Unit': 'ft',
+        })
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/ec2/elb/__init__.py b/tests/unit/ec2/elb/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/unit/ec2/elb/__init__.py
diff --git a/tests/unit/ec2/elb/test_loadbalancer.py b/tests/unit/ec2/elb/test_loadbalancer.py
new file mode 100644
index 0000000..d5e126c
--- /dev/null
+++ b/tests/unit/ec2/elb/test_loadbalancer.py
@@ -0,0 +1,33 @@
+#!/usr/bin/env python
+
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+
+import mock
+
+from boto.ec2.elb import ELBConnection
+
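+# Canned DisableAvailabilityZonesForLoadBalancer response; the test below
+# verifies that the AvailabilityZones member list is parsed and returned by
+# disable_availability_zones().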
+DISABLE_RESPONSE = r"""<?xml version="1.0" encoding="UTF-8"?>
+<DisableAvailabilityZonesForLoadBalancerResult xmlns="http://ec2.amazonaws.com/doc/2013-02-01/">
+    <requestId>3be1508e-c444-4fef-89cc-0b1223c4f02fEXAMPLE</requestId>
+    <AvailabilityZones>
+        <member>sample-zone</member>
+    </AvailabilityZones>
+</DisableAvailabilityZonesForLoadBalancerResult>
+"""
+
+
+class TestDisableAvailabilityZones(unittest.TestCase):
+    def test_disable_availability_zones(self):
+        elb = ELBConnection(aws_access_key_id='aws_access_key_id',
+                            aws_secret_access_key='aws_secret_access_key')
+        mock_response = mock.Mock()
+        mock_response.read.return_value = DISABLE_RESPONSE
+        mock_response.status = 200
+        elb.make_request = mock.Mock(return_value=mock_response)
+        disabled = elb.disable_availability_zones('mine', ['sample-zone'])
+        self.assertEqual(disabled, ['sample-zone'])
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/ec2/test_connection.py b/tests/unit/ec2/test_connection.py
index a93830d..d06288d 100644
--- a/tests/unit/ec2/test_connection.py
+++ b/tests/unit/ec2/test_connection.py
@@ -122,10 +122,9 @@
             'MaxDuration': '1000',
             'MaxInstanceCount': '1',
             'NextToken': 'next_token',
-            'MaxResults': '10',
-            'Version': '2012-08-15'},
+            'MaxResults': '10'},
              ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
-                                   'SignatureVersion', 'Timestamp'])
+                                   'SignatureVersion', 'Timestamp', 'Version'])
 
 
 class TestPurchaseReservedInstanceOffering(TestEC2ConnectionBase):
@@ -141,10 +140,10 @@
             'InstanceCount': 1,
             'ReservedInstancesOfferingId': 'offering_id',
             'LimitPrice.Amount': '100.0',
-            'LimitPrice.CurrencyCode': 'USD',
-            'Version': '2012-08-15'},
+            'LimitPrice.CurrencyCode': 'USD'},
              ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
-                                   'SignatureVersion', 'Timestamp'])
+                                   'SignatureVersion', 'Timestamp',
+                                   'Version'])
 
 
 class TestCancelReservedInstancesListing(TestEC2ConnectionBase):
@@ -369,10 +368,198 @@
             'PriceSchedules.0.Price': '2.5',
             'PriceSchedules.0.Term': '11',
             'PriceSchedules.1.Price': '2.0',
-            'PriceSchedules.1.Term': '8',
-            'Version': '2012-08-15'},
+            'PriceSchedules.1.Term': '8'},
              ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
-                                   'SignatureVersion', 'Timestamp'])
+                                   'SignatureVersion', 'Timestamp',
+                                   'Version'])
+
+
+class TestDescribeSpotInstanceRequests(TestEC2ConnectionBase):
+    def default_body(self):
+        return """
+        <DescribeSpotInstanceRequestsResponse>
+            <requestId>requestid</requestId>
+            <spotInstanceRequestSet>
+                <item>
+                    <spotInstanceRequestId>sir-id</spotInstanceRequestId>
+                    <spotPrice>0.003000</spotPrice>
+                    <type>one-time</type>
+                    <state>active</state>
+                    <status>
+                        <code>fulfilled</code>
+                        <updateTime>2012-10-19T18:09:26.000Z</updateTime>
+                        <message>Your Spot request is fulfilled.</message>
+                    </status>
+                    <launchGroup>mylaunchgroup</launchGroup>
+                    <launchSpecification>
+                        <imageId>ami-id</imageId>
+                        <keyName>mykeypair</keyName>
+                        <groupSet>
+                            <item>
+                                <groupId>sg-id</groupId>
+                                <groupName>groupname</groupName>
+                            </item>
+                        </groupSet>
+                        <instanceType>t1.micro</instanceType>
+                        <monitoring>
+                            <enabled>false</enabled>
+                        </monitoring>
+                    </launchSpecification>
+                    <instanceId>i-id</instanceId>
+                    <createTime>2012-10-19T18:07:05.000Z</createTime>
+                    <productDescription>Linux/UNIX</productDescription>
+                    <launchedAvailabilityZone>us-east-1d</launchedAvailabilityZone>
+                </item>
+            </spotInstanceRequestSet>
+        </DescribeSpotInstanceRequestsResponse>
+        """
+
+    def test_describe_spot_instance_requests(self):
+        self.set_http_response(status_code=200)
+        response = self.ec2.get_all_spot_instance_requests()
+        self.assertEqual(len(response), 1)
+        spotrequest = response[0]
+        self.assertEqual(spotrequest.id, 'sir-id')
+        self.assertEqual(spotrequest.price, 0.003)
+        self.assertEqual(spotrequest.type, 'one-time')
+        self.assertEqual(spotrequest.state, 'active')
+        self.assertEqual(spotrequest.fault, None)
+        self.assertEqual(spotrequest.valid_from, None)
+        self.assertEqual(spotrequest.valid_until, None)
+        self.assertEqual(spotrequest.launch_group, 'mylaunchgroup')
+        self.assertEqual(spotrequest.launched_availability_zone, 'us-east-1d')
+        self.assertEqual(spotrequest.product_description, 'Linux/UNIX')
+        self.assertEqual(spotrequest.availability_zone_group, None)
+        self.assertEqual(spotrequest.create_time,
+                         '2012-10-19T18:07:05.000Z')
+        self.assertEqual(spotrequest.instance_id, 'i-id')
+        launch_spec = spotrequest.launch_specification
+        self.assertEqual(launch_spec.key_name, 'mykeypair')
+        self.assertEqual(launch_spec.instance_type, 't1.micro')
+        self.assertEqual(launch_spec.image_id, 'ami-id')
+        self.assertEqual(launch_spec.placement, None)
+        self.assertEqual(launch_spec.kernel, None)
+        self.assertEqual(launch_spec.ramdisk, None)
+        self.assertEqual(launch_spec.monitored, False)
+        self.assertEqual(launch_spec.subnet_id, None)
+        self.assertEqual(launch_spec.block_device_mapping, None)
+        self.assertEqual(launch_spec.instance_profile, None)
+        self.assertEqual(launch_spec.ebs_optimized, False)
+        status = spotrequest.status
+        self.assertEqual(status.code, 'fulfilled')
+        self.assertEqual(status.update_time, '2012-10-19T18:09:26.000Z')
+        self.assertEqual(status.message, 'Your Spot request is fulfilled.')
+
+
+class TestCopySnapshot(TestEC2ConnectionBase):
+    def default_body(self):
+        return """
+        <CopySnapshotResponse xmlns="http://ec2.amazonaws.com/doc/2012-12-01/">
+            <requestId>request_id</requestId>
+            <snapshotId>snap-copied-id</snapshotId>
+        </CopySnapshotResponse>
+        """
+
+    def test_copy_snapshot(self):
+        self.set_http_response(status_code=200)
+        snapshot_id = self.ec2.copy_snapshot('us-west-2', 'snap-id',
+                                             'description')
+        self.assertEqual(snapshot_id, 'snap-copied-id')
+
+        self.assert_request_parameters({
+            'Action': 'CopySnapshot',
+            'Description': 'description',
+            'SourceRegion': 'us-west-2',
+            'SourceSnapshotId': 'snap-id'},
+             ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
+                                   'SignatureVersion', 'Timestamp',
+                                   'Version'])
+
+
+class TestAccountAttributes(TestEC2ConnectionBase):
+    def default_body(self):
+        return """
+        <DescribeAccountAttributesResponse xmlns="http://ec2.amazonaws.com/doc/2012-12-01/">
+            <requestId>6d042e8a-4bc3-43e8-8265-3cbc54753f14</requestId>
+            <accountAttributeSet>
+                <item>
+                    <attributeName>vpc-max-security-groups-per-interface</attributeName>
+                    <attributeValueSet>
+                        <item>
+                            <attributeValue>5</attributeValue>
+                        </item>
+                    </attributeValueSet>
+                </item>
+                <item>
+                    <attributeName>max-instances</attributeName>
+                    <attributeValueSet>
+                        <item>
+                            <attributeValue>50</attributeValue>
+                        </item>
+                    </attributeValueSet>
+                </item>
+                <item>
+                    <attributeName>supported-platforms</attributeName>
+                    <attributeValueSet>
+                        <item>
+                            <attributeValue>EC2</attributeValue>
+                        </item>
+                        <item>
+                            <attributeValue>VPC</attributeValue>
+                        </item>
+                    </attributeValueSet>
+                </item>
+                <item>
+                    <attributeName>default-vpc</attributeName>
+                    <attributeValueSet>
+                        <item>
+                            <attributeValue>none</attributeValue>
+                        </item>
+                    </attributeValueSet>
+                </item>
+            </accountAttributeSet>
+        </DescribeAccountAttributesResponse>
+        """
+
+    def test_describe_account_attributes(self):
+        self.set_http_response(status_code=200)
+        parsed = self.ec2.describe_account_attributes()
+        self.assertEqual(len(parsed), 4)
+        self.assertEqual(parsed[0].attribute_name,
+                         'vpc-max-security-groups-per-interface')
+        self.assertEqual(parsed[0].attribute_values,
+                         ['5'])
+        self.assertEqual(parsed[-1].attribute_name,
+                         'default-vpc')
+        self.assertEqual(parsed[-1].attribute_values,
+                         ['none'])
+
+
+class TestDescribeVPCAttribute(TestEC2ConnectionBase):
+    def default_body(self):
+        return """
+        <DescribeVpcAttributeResponse xmlns="http://ec2.amazonaws.com/doc/2013-02-01/">
+            <requestId>request_id</requestId>
+            <vpcId>vpc-id</vpcId>
+            <enableDnsHostnames>
+                <value>false</value>
+            </enableDnsHostnames>
+        </DescribeVpcAttributeResponse>
+        """
+
+    def test_describe_vpc_attribute(self):
+        self.set_http_response(status_code=200)
+        parsed = self.ec2.describe_vpc_attribute('vpc-id',
+                                                 'enableDnsHostnames')
+        self.assertEqual(parsed.vpc_id, 'vpc-id')
+        self.assertFalse(parsed.enable_dns_hostnames)
+        self.assert_request_parameters({
+            'Action': 'DescribeVpcAttribute',
+            'VpcId': 'vpc-id',
+            'Attribute': 'enableDnsHostnames'},
+             ignore_params_values=['AWSAccessKeyId', 'SignatureMethod',
+                                   'SignatureVersion', 'Timestamp',
+                                   'Version'])
 
 
 if __name__ == '__main__':
diff --git a/tests/unit/ec2/test_instance.py b/tests/unit/ec2/test_instance.py
index 7d304d7..c48ef11 100644
--- a/tests/unit/ec2/test_instance.py
+++ b/tests/unit/ec2/test_instance.py
@@ -1,20 +1,133 @@
 #!/usr/bin/env python
 
 from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
 
 import mock
 
 from boto.ec2.connection import EC2Connection
 
+DESCRIBE_INSTANCE_VPC = r"""<?xml version="1.0" encoding="UTF-8"?>
+<DescribeInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2012-10-01/">
+    <requestId>c6132c74-b524-4884-87f5-0f4bde4a9760</requestId>
+    <reservationSet>
+        <item>
+            <reservationId>r-72ef4a0a</reservationId>
+            <ownerId>184906166255</ownerId>
+            <groupSet/>
+            <instancesSet>
+                <item>
+                    <instanceId>i-instance</instanceId>
+                    <imageId>ami-1624987f</imageId>
+                    <instanceState>
+                        <code>16</code>
+                        <name>running</name>
+                    </instanceState>
+                    <privateDnsName/>
+                    <dnsName/>
+                    <reason/>
+                    <keyName>mykeypair</keyName>
+                    <amiLaunchIndex>0</amiLaunchIndex>
+                    <productCodes/>
+                    <instanceType>m1.small</instanceType>
+                    <launchTime>2012-12-14T23:48:37.000Z</launchTime>
+                    <placement>
+                        <availabilityZone>us-east-1d</availabilityZone>
+                        <groupName/>
+                        <tenancy>default</tenancy>
+                    </placement>
+                    <kernelId>aki-88aa75e1</kernelId>
+                    <monitoring>
+                        <state>disabled</state>
+                    </monitoring>
+                    <subnetId>subnet-0dc60667</subnetId>
+                    <vpcId>vpc-id</vpcId>
+                    <privateIpAddress>10.0.0.67</privateIpAddress>
+                    <sourceDestCheck>true</sourceDestCheck>
+                    <groupSet>
+                        <item>
+                            <groupId>sg-id</groupId>
+                            <groupName>WebServerSG</groupName>
+                        </item>
+                    </groupSet>
+                    <architecture>x86_64</architecture>
+                    <rootDeviceType>ebs</rootDeviceType>
+                    <rootDeviceName>/dev/sda1</rootDeviceName>
+                    <blockDeviceMapping>
+                        <item>
+                            <deviceName>/dev/sda1</deviceName>
+                            <ebs>
+                                <volumeId>vol-id</volumeId>
+                                <status>attached</status>
+                                <attachTime>2012-12-14T23:48:43.000Z</attachTime>
+                                <deleteOnTermination>true</deleteOnTermination>
+                            </ebs>
+                        </item>
+                    </blockDeviceMapping>
+                    <virtualizationType>paravirtual</virtualizationType>
+                    <clientToken>foo</clientToken>
+                    <tagSet>
+                        <item>
+                            <key>Name</key>
+                            <value/>
+                        </item>
+                    </tagSet>
+                    <hypervisor>xen</hypervisor>
+                    <networkInterfaceSet>
+                        <item>
+                            <networkInterfaceId>eni-id</networkInterfaceId>
+                            <subnetId>subnet-id</subnetId>
+                            <vpcId>vpc-id</vpcId>
+                            <description>Primary network interface</description>
+                            <ownerId>ownerid</ownerId>
+                            <status>in-use</status>
+                            <privateIpAddress>10.0.0.67</privateIpAddress>
+                            <sourceDestCheck>true</sourceDestCheck>
+                            <groupSet>
+                                <item>
+                                    <groupId>sg-id</groupId>
+                                    <groupName>WebServerSG</groupName>
+                                </item>
+                            </groupSet>
+                            <attachment>
+                                <attachmentId>eni-attach-id</attachmentId>
+                                <deviceIndex>0</deviceIndex>
+                                <status>attached</status>
+                                <attachTime>2012-12-14T23:48:37.000Z</attachTime>
+                                <deleteOnTermination>true</deleteOnTermination>
+                            </attachment>
+                            <privateIpAddressesSet>
+                                <item>
+                                    <privateIpAddress>10.0.0.67</privateIpAddress>
+                                    <primary>true</primary>
+                                </item>
+                                <item>
+                                    <privateIpAddress>10.0.0.54</privateIpAddress>
+                                    <primary>false</primary>
+                                </item>
+                                <item>
+                                    <privateIpAddress>10.0.0.55</privateIpAddress>
+                                    <primary>false</primary>
+                                </item>
+                            </privateIpAddressesSet>
+                        </item>
+                    </networkInterfaceSet>
+                    <ebsOptimized>false</ebsOptimized>
+                </item>
+            </instancesSet>
+        </item>
+    </reservationSet>
+</DescribeInstancesResponse>
+"""
 
-RESPONSE = r"""
+RUN_INSTANCE_RESPONSE = r"""
 <RunInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2012-06-01/">
     <requestId>ad4b83c2-f606-4c39-90c6-5dcc5be823e1</requestId>
     <reservationId>r-c5cef7a7</reservationId>
-    <ownerId>184906166255</ownerId>
+    <ownerId>ownerid</ownerId>
     <groupSet>
         <item>
-            <groupId>sg-99a710f1</groupId>
+            <groupId>sg-id</groupId>
             <groupName>SSH</groupName>
         </item>
     </groupSet>
@@ -62,8 +175,8 @@
             <hypervisor>xen</hypervisor>
             <networkInterfaceSet/>
             <iamInstanceProfile>
-                <arn>arn:aws:iam::184906166255:instance-profile/myinstanceprofile</arn>
-                <id>AIPAIQ2LVHYBCH7LYQFDK</id>
+                <arn>arn:aws:iam::ownerid:instance-profile/myinstanceprofile</arn>
+                <id>iamid</id>
             </iamInstanceProfile>
         </item>
     </instancesSet>
@@ -76,7 +189,7 @@
         ec2 = EC2Connection(aws_access_key_id='aws_access_key_id',
                             aws_secret_access_key='aws_secret_access_key')
         mock_response = mock.Mock()
-        mock_response.read.return_value = RESPONSE
+        mock_response.read.return_value = RUN_INSTANCE_RESPONSE
         mock_response.status = 200
         ec2.make_request = mock.Mock(return_value=mock_response)
         reservation = ec2.run_instances(image_id='ami-12345')
@@ -89,9 +202,41 @@
         self.assertEqual(instance.id, 'i-ff0f1299')
         self.assertDictEqual(
             instance.instance_profile,
-            {'arn': ('arn:aws:iam::184906166255:'
+            {'arn': ('arn:aws:iam::ownerid:'
                      'instance-profile/myinstanceprofile'),
-             'id': 'AIPAIQ2LVHYBCH7LYQFDK'})
+             'id': 'iamid'})
+
+
+class TestDescribeInstances(AWSMockServiceTestCase):
+    connection_class = EC2Connection
+
+    def default_body(self):
+        return DESCRIBE_INSTANCE_VPC
+
+    def test_multiple_private_ip_addresses(self):
+        self.set_http_response(status_code=200)
+
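+        # DESCRIBE_INSTANCE_VPC describes one VPC instance whose single ENI
+        # carries a primary private IP and two secondary addresses.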
+        api_response = self.service_connection.get_all_instances()
+        self.assertEqual(len(api_response), 1)
+
+        instances = api_response[0].instances
+        self.assertEqual(len(instances), 1)
+
+        instance = instances[0]
+        self.assertEqual(len(instance.interfaces), 1)
+
+        interface = instance.interfaces[0]
+        self.assertEqual(len(interface.private_ip_addresses), 3)
+
+        addresses = interface.private_ip_addresses
+        self.assertEqual(addresses[0].private_ip_address, '10.0.0.67')
+        self.assertTrue(addresses[0].primary)
+
+        self.assertEqual(addresses[1].private_ip_address, '10.0.0.54')
+        self.assertFalse(addresses[1].primary)
+
+        self.assertEqual(addresses[2].private_ip_address, '10.0.0.55')
+        self.assertFalse(addresses[2].primary)
 
 
 if __name__ == '__main__':
diff --git a/tests/unit/ec2/test_instancestatus.py b/tests/unit/ec2/test_instancestatus.py
new file mode 100644
index 0000000..67433b8
--- /dev/null
+++ b/tests/unit/ec2/test_instancestatus.py
@@ -0,0 +1,32 @@
+#!/usr/bin/env python
+
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+
+import mock
+
+from boto.ec2.connection import EC2Connection
+
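+# Minimal DescribeInstanceStatus response: an empty status set plus a
+# nextToken, enough to verify pagination-token parsing.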
+INSTANCE_STATUS_RESPONSE = r"""<?xml version="1.0" encoding="UTF-8"?>
+<DescribeInstanceStatusResponse xmlns="http://ec2.amazonaws.com/doc/2013-02-01/">
+    <requestId>3be1508e-c444-4fef-89cc-0b1223c4f02fEXAMPLE</requestId>
+    <nextToken>page-2</nextToken>
+    <instanceStatusSet />
+</DescribeInstanceStatusResponse>
+"""
+
+
+class TestInstanceStatusResponseParsing(unittest.TestCase):
+    def test_next_token(self):
+        ec2 = EC2Connection(aws_access_key_id='aws_access_key_id',
+                            aws_secret_access_key='aws_secret_access_key')
+        mock_response = mock.Mock()
+        mock_response.read.return_value = INSTANCE_STATUS_RESPONSE
+        mock_response.status = 200
+        ec2.make_request = mock.Mock(return_value=mock_response)
+        all_statuses = ec2.get_all_instance_status()
+        self.assertEqual(all_statuses.next_token, 'page-2')
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/ec2/test_networkinterface.py b/tests/unit/ec2/test_networkinterface.py
new file mode 100644
index 0000000..b23f6c3
--- /dev/null
+++ b/tests/unit/ec2/test_networkinterface.py
@@ -0,0 +1,140 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from tests.unit import unittest
+
+
+from boto.ec2.networkinterface import NetworkInterfaceCollection
+from boto.ec2.networkinterface import NetworkInterfaceSpecification
+from boto.ec2.networkinterface import PrivateIPAddress
+
+
+class TestNetworkInterfaceCollection(unittest.TestCase):
+    maxDiff = None
+
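+    # setUp builds two NetworkInterfaceSpecification objects, each carrying a
+    # top-level private_ip_address plus two additional non-primary
+    # PrivateIPAddress entries; the serialization tests below turn this pair
+    # into request parameters.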
+    def setUp(self):
+        self.private_ip_address1 = PrivateIPAddress(
+            private_ip_address='10.0.0.10', primary=False)
+        self.private_ip_address2 = PrivateIPAddress(
+            private_ip_address='10.0.0.11', primary=False)
+        self.network_interfaces_spec1 = NetworkInterfaceSpecification(
+            device_index=1, subnet_id='subnet_id',
+            description='description1',
+            private_ip_address='10.0.0.54', delete_on_termination=False,
+            private_ip_addresses=[self.private_ip_address1,
+                                  self.private_ip_address2])
+
+        self.private_ip_address3 = PrivateIPAddress(
+            private_ip_address='10.0.1.10', primary=False)
+        self.private_ip_address4 = PrivateIPAddress(
+            private_ip_address='10.0.1.11', primary=False)
+        self.network_interfaces_spec2 = NetworkInterfaceSpecification(
+            device_index=2, subnet_id='subnet_id2',
+            description='description2',
+            groups=['group_id1', 'group_id2'],
+            private_ip_address='10.0.1.54', delete_on_termination=False,
+            private_ip_addresses=[self.private_ip_address3,
+                                  self.private_ip_address4])
+
+    def test_param_serialization(self):
+        collection = NetworkInterfaceCollection(self.network_interfaces_spec1,
+                                                self.network_interfaces_spec2)
+        params = {}
+        collection.build_list_params(params)
+        self.assertDictEqual(params, {
+            'NetworkInterface.1.DeviceIndex': '1',
+            'NetworkInterface.1.DeleteOnTermination': 'false',
+            'NetworkInterface.1.Description': 'description1',
+            'NetworkInterface.1.PrivateIpAddress': '10.0.0.54',
+            'NetworkInterface.1.SubnetId': 'subnet_id',
+            'NetworkInterface.1.PrivateIpAddresses.1.Primary': 'false',
+            'NetworkInterface.1.PrivateIpAddresses.1.PrivateIpAddress':
+                '10.0.0.10',
+            'NetworkInterface.1.PrivateIpAddresses.2.Primary': 'false',
+            'NetworkInterface.1.PrivateIpAddresses.2.PrivateIpAddress':
+                '10.0.0.11',
+            'NetworkInterface.2.DeviceIndex': '2',
+            'NetworkInterface.2.Description': 'description2',
+            'NetworkInterface.2.DeleteOnTermination': 'false',
+            'NetworkInterface.2.PrivateIpAddress': '10.0.1.54',
+            'NetworkInterface.2.SubnetId': 'subnet_id2',
+            'NetworkInterface.2.SecurityGroupId.1': 'group_id1',
+            'NetworkInterface.2.SecurityGroupId.2': 'group_id2',
+            'NetworkInterface.2.PrivateIpAddresses.1.Primary': 'false',
+            'NetworkInterface.2.PrivateIpAddresses.1.PrivateIpAddress':
+                '10.0.1.10',
+            'NetworkInterface.2.PrivateIpAddresses.2.Primary': 'false',
+            'NetworkInterface.2.PrivateIpAddresses.2.PrivateIpAddress':
+                '10.0.1.11',
+        })
+
+    def test_add_prefix_to_serialization(self):
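+        # Disabled for now: bail out before exercising prefix serialization.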
+        return
+        collection = NetworkInterfaceCollection(self.network_interfaces_spec1,
+                                                self.network_interfaces_spec2)
+        params = {}
+        collection.build_list_params(params, prefix='LaunchSpecification.')
+        # We already tested the actual serialization previously, so
+        # we're just checking a few keys to make sure we get the proper
+        # prefix.
+        self.assertDictEqual(params, {
+            'LaunchSpecification.NetworkInterface.1.DeviceIndex': '1',
+            'LaunchSpecification.NetworkInterface.1.DeleteOnTermination':
+                'false',
+            'LaunchSpecification.NetworkInterface.1.Description':
+                'description1',
+            'LaunchSpecification.NetworkInterface.1.PrivateIpAddress':
+                '10.0.0.54',
+            'LaunchSpecification.NetworkInterface.1.SubnetId': 'subnet_id',
+            'LaunchSpecification.NetworkInterface.1.PrivateIpAddresses.1.Primary':
+                'false',
+            'LaunchSpecification.NetworkInterface.1.PrivateIpAddresses.1.PrivateIpAddress':
+                '10.0.0.10',
+            'LaunchSpecification.NetworkInterface.1.PrivateIpAddresses.2.Primary': 'false',
+            'LaunchSpecification.NetworkInterface.1.PrivateIpAddresses.2.PrivateIpAddress':
+                '10.0.0.11',
+            'LaunchSpecification.NetworkInterface.2.DeviceIndex': '2',
+            'LaunchSpecification.NetworkInterface.2.Description':
+                'description2',
+            'LaunchSpecification.NetworkInterface.2.DeleteOnTermination':
+                'false',
+            'LaunchSpecification.NetworkInterface.2.PrivateIpAddress':
+                '10.0.1.54',
+            'LaunchSpecification.NetworkInterface.2.SubnetId': 'subnet_id2',
+            'LaunchSpecification.NetworkInterface.2.SecurityGroupId.1':
+                'group_id1',
+            'LaunchSpecification.NetworkInterface.2.SecurityGroupId.2':
+                'group_id2',
+            'LaunchSpecification.NetworkInterface.2.PrivateIpAddresses.1.Primary':
+                'false',
+            'LaunchSpecification.NetworkInterface.2.PrivateIpAddresses.1.PrivateIpAddress':
+                '10.0.1.10',
+            'LaunchSpecification.NetworkInterface.2.PrivateIpAddresses.2.Primary':
+                'false',
+            'LaunchSpecification.NetworkInterface.2.PrivateIpAddresses.2.PrivateIpAddress':
+                '10.0.1.11',
+        })
+
+
+if __name__ == '__main__':
+    unittest.main()
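
The prefix test above pins down the wire format NetworkInterfaceCollection produces: one flat 'NetworkInterface.N.Field' parameter per attribute, optionally under a caller-supplied prefix such as 'LaunchSpecification.'. A minimal sketch of that flattening under assumed names (build_list_params_sketch is illustrative, not boto's code):

# Minimal sketch, not boto's implementation: flatten interface specs
# into 'NetworkInterface.N.Field' request parameters under a prefix.
def build_list_params_sketch(specs, params, prefix=''):
    for index, spec in enumerate(specs, 1):
        base = '%sNetworkInterface.%d.' % (prefix, index)
        for field, value in spec.items():
            params[base + field] = str(value)

params = {}
build_list_params_sketch([{'DeviceIndex': 1, 'SubnetId': 'subnet_id'}],
                         params, prefix='LaunchSpecification.')
assert params['LaunchSpecification.NetworkInterface.1.DeviceIndex'] == '1'
assert params['LaunchSpecification.NetworkInterface.1.SubnetId'] == 'subnet_id'
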
diff --git a/tests/unit/glacier/test_concurrent.py b/tests/unit/glacier/test_concurrent.py
new file mode 100644
index 0000000..b9f984e
--- /dev/null
+++ b/tests/unit/glacier/test_concurrent.py
@@ -0,0 +1,176 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import tempfile
+from Queue import Queue
+
+import mock
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+
+from boto.glacier.concurrent import ConcurrentUploader, ConcurrentDownloader
+from boto.glacier.concurrent import UploadWorkerThread
+from boto.glacier.concurrent import _END_SENTINEL
+
+
+class FakeThreadedConcurrentUploader(ConcurrentUploader):
+    def _start_upload_threads(self, results_queue, upload_id,
+                              worker_queue, filename):
+        self.results_queue = results_queue
+        self.worker_queue = worker_queue
+        self.upload_id = upload_id
+
+    def _wait_for_upload_threads(self, hash_chunks, result_queue, total_parts):
+        for i in xrange(total_parts):
+            hash_chunks[i] = 'foo'
+
+
+class FakeThreadedConcurrentDownloader(ConcurrentDownloader):
+    def _start_download_threads(self, results_queue, worker_queue):
+        self.results_queue = results_queue
+        self.worker_queue = worker_queue
+
+    def _wait_for_download_threads(self, filename, result_queue, total_parts):
+        pass
+
+
+class TestConcurrentUploader(unittest.TestCase):
+
+    def setUp(self):
+        super(TestConcurrentUploader, self).setUp()
+        self.stat_patch = mock.patch('os.stat')
+        self.stat_mock = self.stat_patch.start()
+        # Give a default value for tests that don't care
+        # what the file size is.
+        self.stat_mock.return_value.st_size = 1024 * 1024 * 8
+
+    def tearDown(self):
+        self.stat_patch.stop()
+
+    def test_calculate_required_part_size(self):
+        self.stat_mock.return_value.st_size = 1024 * 1024 * 8
+        uploader = ConcurrentUploader(mock.Mock(), 'vault_name')
+        total_parts, part_size = uploader._calculate_required_part_size(
+            1024 * 1024 * 8)
+        self.assertEqual(total_parts, 2)
+        self.assertEqual(part_size, 4 * 1024 * 1024)
+
+    def test_calculate_required_part_size_too_small(self):
+        too_small = 1 * 1024 * 1024
+        self.stat_mock.return_value.st_size = 1024 * 1024 * 1024
+        uploader = ConcurrentUploader(mock.Mock(), 'vault_name',
+                                      part_size=too_small)
+        total_parts, part_size = uploader._calculate_required_part_size(
+            1024 * 1024 * 1024)
+        self.assertEqual(total_parts, 256)
+        # Part size is 4MB, not the passed-in 1MB.
+        self.assertEqual(part_size, 4 * 1024 * 1024)
+
+    def test_work_queue_is_correctly_populated(self):
+        uploader = FakeThreadedConcurrentUploader(mock.MagicMock(),
+                                                  'vault_name')
+        uploader.upload('foofile')
+        q = uploader.worker_queue
+        items = [q.get() for i in xrange(q.qsize())]
+        self.assertEqual(items[0], (0, 4 * 1024 * 1024))
+        self.assertEqual(items[1], (1, 4 * 1024 * 1024))
+        # 2 for the parts, 10 for the end sentinels (10 threads).
+        self.assertEqual(len(items), 12)
+
+    def test_correct_low_level_api_calls(self):
+        api_mock = mock.MagicMock()
+        uploader = FakeThreadedConcurrentUploader(api_mock, 'vault_name')
+        uploader.upload('foofile')
+        # The threads call the upload_part, so we're just verifying the
+        # initiate/complete multipart API calls.
+        api_mock.initiate_multipart_upload.assert_called_with(
+            'vault_name', 4 * 1024 * 1024, None)
+        api_mock.complete_multipart_upload.assert_called_with(
+            'vault_name', mock.ANY, mock.ANY, 8 * 1024 * 1024)
+
+    def test_downloader_work_queue_is_correctly_populated(self):
+        job = mock.MagicMock()
+        job.archive_size = 8 * 1024 * 1024
+        downloader = FakeThreadedConcurrentDownloader(job)
+        downloader.download('foofile')
+        q = downloader.worker_queue
+        items = [q.get() for i in xrange(q.qsize())]
+        self.assertEqual(items[0], (0, 4 * 1024 * 1024))
+        self.assertEqual(items[1], (1, 4 * 1024 * 1024))
+        # 2 for the parts, 10 for the end sentinels (10 threads).
+        self.assertEqual(len(items), 12)
+
+
+class TestUploaderThread(unittest.TestCase):
+    def setUp(self):
+        self.fileobj = tempfile.NamedTemporaryFile()
+        self.filename = self.fileobj.name
+
+    def test_fileobj_closed_when_thread_shuts_down(self):
+        thread = UploadWorkerThread(mock.Mock(), 'vault_name',
+                                    self.filename, 'upload_id',
+                                    Queue(), Queue())
+        fileobj = thread._fileobj
+        self.assertFalse(fileobj.closed)
+        # By setting should_continue to False, the thread exits immediately,
+        # and we can still verify cleanup behavior.
+        thread.should_continue = False
+        thread.run()
+        self.assertTrue(fileobj.closed)
+
+    def test_upload_errors_have_exception_messages(self):
+        api = mock.Mock()
+        job_queue = Queue()
+        result_queue = Queue()
+        upload_thread = UploadWorkerThread(
+            api, 'vault_name', self.filename,
+            'upload_id', job_queue, result_queue, num_retries=1,
+            time_between_retries=0)
+        api.upload_part.side_effect = Exception("exception message")
+        job_queue.put((0, 1024))
+        job_queue.put(_END_SENTINEL)
+
+        upload_thread.run()
+        result = result_queue.get(timeout=1)
+        self.assertIn("exception message", str(result))
+
+    def test_num_retries_is_obeyed(self):
+        # Total attempts is 1 + num_retries, so with num_retries of 2 we
+        # attempt the upload once and, if that fails, retry up to 2 more
+        # times for a total of 3 attempts.
+        api = mock.Mock()
+        job_queue = Queue()
+        result_queue = Queue()
+        upload_thread = UploadWorkerThread(
+            api, 'vault_name', self.filename,
+            'upload_id', job_queue, result_queue, num_retries=2,
+            time_between_retries=0)
+        api.upload_part.side_effect = Exception()
+        job_queue.put((0, 1024))
+        job_queue.put(_END_SENTINEL)
+
+        upload_thread.run()
+        self.assertEqual(api.upload_part.call_count, 3)
+
+
+if __name__ == '__main__':
+    unittest.main()
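
The sentinel and retry comments above describe the worker contract: each thread consumes (part_index, part_size) tuples until it sees an end sentinel, and a failed part is retried num_retries additional times before the error is put on the result queue. A hedged sketch of that loop, with upload_part standing in for the real Glacier call (this is not UploadWorkerThread itself):

# Simplified sketch of the worker loop these tests exercise.
import time

_END_SENTINEL = object()

def worker_loop(job_queue, result_queue, upload_part, num_retries=1,
                time_between_retries=0):
    while True:
        work = job_queue.get()
        if work is _END_SENTINEL:
            return
        part_index, part_size = work
        for attempt in range(1 + num_retries):  # 1 attempt + num_retries retries
            try:
                result_queue.put(upload_part(part_index, part_size))
                break
            except Exception as error:
                last_error = error
                time.sleep(time_between_retries)
        else:
            # All attempts failed; surface the last exception on the queue.
            result_queue.put(last_error)
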
diff --git a/tests/unit/glacier/test_job.py b/tests/unit/glacier/test_job.py
new file mode 100644
index 0000000..277fb85
--- /dev/null
+++ b/tests/unit/glacier/test_job.py
@@ -0,0 +1,60 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from tests.unit import unittest
+import mock
+
+from boto.glacier.job import Job
+from boto.glacier.layer1 import Layer1
+from boto.glacier.response import GlacierResponse
+from boto.glacier.exceptions import TreeHashDoesNotMatchError
+
+
+class TestJob(unittest.TestCase):
+    def setUp(self):
+        self.api = mock.Mock(spec=Layer1)
+        self.vault = mock.Mock()
+        self.vault.layer1 = self.api
+        self.job = Job(self.vault)
+
+    def test_get_job_validate_checksum_success(self):
+        response = GlacierResponse(mock.Mock(), None)
+        response['TreeHash'] = 'tree_hash'
+        self.api.get_job_output.return_value = response
+        with mock.patch('boto.glacier.job.tree_hash_from_str') as t:
+            t.return_value = 'tree_hash'
+            self.job.get_output(byte_range=(1, 1024), validate_checksum=True)
+
+    def test_get_job_validation_fails(self):
+        response = GlacierResponse(mock.Mock(), None)
+        response['TreeHash'] = 'tree_hash'
+        self.api.get_job_output.return_value = response
+        with mock.patch('boto.glacier.job.tree_hash_from_str') as t:
+            t.return_value = 'BAD_TREE_HASH_VALUE'
+            with self.assertRaises(TreeHashDoesNotMatchError):
+                # With validate_checksum set to True, this call fails.
+                self.job.get_output(byte_range=(1, 1024), validate_checksum=True)
+            # With validate_checksum set to False, this call succeeds.
+            self.job.get_output(byte_range=(1, 1024), validate_checksum=False)
+
+
+if __name__ == '__main__':
+    unittest.main()
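
The two cases above capture the validate_checksum contract: the tree hash computed from the downloaded body must match the TreeHash the service reported, otherwise TreeHashDoesNotMatchError is raised. A minimal sketch of that comparison using the helpers from boto.glacier.utils (the function name and exception below are illustrative):

from boto.glacier.utils import bytes_to_hex, chunk_hashes, tree_hash

def validate_tree_hash(downloaded_bytes, reported_tree_hash):
    # Hash the downloaded bytes locally and compare against the header value.
    actual = bytes_to_hex(tree_hash(chunk_hashes(downloaded_bytes)))
    if actual != reported_tree_hash:
        raise ValueError('tree hash mismatch: %s != %s'
                         % (actual, reported_tree_hash))
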
diff --git a/tests/unit/glacier/test_layer1.py b/tests/unit/glacier/test_layer1.py
index 7d7b6fc..4cabbea 100644
--- a/tests/unit/glacier/test_layer1.py
+++ b/tests/unit/glacier/test_layer1.py
@@ -1,7 +1,9 @@
-from tests.unit import AWSMockServiceTestCase
-from boto.glacier.layer1 import Layer1
 import json
 import copy
+import tempfile
+
+from tests.unit import AWSMockServiceTestCase
+from boto.glacier.layer1 import Layer1
 
 
 class GlacierLayer1ConnectionBase(AWSMockServiceTestCase):
@@ -76,3 +78,21 @@
         response = self.service_connection.get_job_output(self.vault_name,
                                                          'example-job-id')
         self.assertEqual(self.job_content, response.read())
+
+
+class GlacierUploadArchiveResets(GlacierLayer1ConnectionBase):
+    def test_upload_archive(self):
+        fake_data = tempfile.NamedTemporaryFile()
+        fake_data.write('foobarbaz')
+        # First seek to a non-zero offset.
+        fake_data.seek(2)
+        self.set_http_response(status_code=201)
+        # Simulate reading the request body when we send the request.
+        self.service_connection.connection.request.side_effect = \
+                lambda *args: fake_data.read()
+        self.service_connection.upload_archive('vault_name', fake_data, 'linear_hash',
+                                               'tree_hash')
+        # Verify that we seek back to the original offset after making
+        # a request.  This ensures that if we need to resend the request we're
+        # back at the correct location within the file.
+        self.assertEqual(fake_data.tell(), 2)
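
The test above documents a small but important contract of upload_archive: whatever offset the caller's file object started at must be restored after the request, so a retry re-reads the same bytes. A minimal sketch of that pattern (upload_with_reset and send_request are hypothetical names):

# Sketch of the seek-back contract verified above.
def upload_with_reset(fileobj, send_request):
    start = fileobj.tell()
    try:
        return send_request(fileobj.read())
    finally:
        # Restore the caller's original offset even if the request fails.
        fileobj.seek(start)
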
diff --git a/tests/unit/glacier/test_layer2.py b/tests/unit/glacier/test_layer2.py
index a82a3a2..3a54924 100644
--- a/tests/unit/glacier/test_layer2.py
+++ b/tests/unit/glacier/test_layer2.py
@@ -23,13 +23,16 @@
 
 from tests.unit import unittest
 
-from mock import Mock
+from mock import call, Mock, patch, sentinel
 
 from boto.glacier.layer1 import Layer1
 from boto.glacier.layer2 import Layer2
+import boto.glacier.vault
 from boto.glacier.vault import Vault
 from boto.glacier.vault import Job
 
+from StringIO import StringIO
+
 # Some fixture data from the Glacier docs
 FIXTURE_VAULT = {
   "CreationDate" : "2012-02-20T17:01:45.198Z",
@@ -61,6 +64,53 @@
   "VaultARN": "arn:aws:glacier:us-east-1:012345678901:vaults/examplevault"
 }
 
+EXAMPLE_PART_LIST_RESULT_PAGE_1 = {
+    "ArchiveDescription": "archive description 1",
+    "CreationDate": "2012-03-20T17:03:43.221Z",
+    "Marker": "MfgsKHVjbQ6EldVl72bn3_n5h2TaGZQUO-Qb3B9j3TITf7WajQ",
+    "MultipartUploadId": "OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-khxOjyEXAMPLE",
+    "PartSizeInBytes": 4194304,
+    "Parts":
+    [ {
+      "RangeInBytes": "4194304-8388607",
+      "SHA256TreeHash": "01d34dabf7be316472c93b1ef80721f5d4"
+      }],
+    "VaultARN": "arn:aws:glacier:us-east-1:012345678901:vaults/demo1-vault"
+}
+
+# The documentation doesn't say whether the non-Parts fields are defined in
+# future pages, so assume they are not.
+EXAMPLE_PART_LIST_RESULT_PAGE_2 = {
+    "ArchiveDescription": None,
+    "CreationDate": None,
+    "Marker": None,
+    "MultipartUploadId": None,
+    "PartSizeInBytes": None,
+    "Parts":
+    [ {
+      "RangeInBytes": "0-4194303",
+      "SHA256TreeHash": "01d34dabf7be316472c93b1ef80721f5d4"
+      }],
+    "VaultARN": None
+}
+
+EXAMPLE_PART_LIST_COMPLETE = {
+    "ArchiveDescription": "archive description 1",
+    "CreationDate": "2012-03-20T17:03:43.221Z",
+    "Marker": None,
+    "MultipartUploadId": "OW2fM5iVylEpFEMM9_HpKowRapC3vn5sSL39_396UW9zLFUWVrnRHaPjUJddQ5OxSHVXjYtrN47NBZ-khxOjyEXAMPLE",
+    "PartSizeInBytes": 4194304,
+    "Parts":
+    [ {
+      "RangeInBytes": "4194304-8388607",
+      "SHA256TreeHash": "01d34dabf7be316472c93b1ef80721f5d4"
+    }, {
+      "RangeInBytes": "0-4194303",
+      "SHA256TreeHash": "01d34dabf7be316472c93b1ef80721f5d4"
+      }],
+    "VaultARN": "arn:aws:glacier:us-east-1:012345678901:vaults/demo1-vault"
+}
+
 
 class GlacierLayer2Base(unittest.TestCase):
     def setUp(self):
@@ -131,6 +181,49 @@
                          "8i1_AUyUsuhPAdTqLHy8pTl5nfCFJmDl2yEZONi5L26Omw12vcs0"
                          "1MNGntHEQL8MBfGlqrEXAMPLEArchiveId")
 
+    def test_list_all_parts_one_page(self):
+        self.mock_layer1.list_parts.return_value = (
+            dict(EXAMPLE_PART_LIST_COMPLETE)) # take a copy
+        parts_result = self.vault.list_all_parts(sentinel.upload_id)
+        expected = [call('examplevault', sentinel.upload_id)]
+        self.assertEquals(expected, self.mock_layer1.list_parts.call_args_list)
+        self.assertEquals(EXAMPLE_PART_LIST_COMPLETE, parts_result)
+
+    def test_list_all_parts_two_pages(self):
+        self.mock_layer1.list_parts.side_effect = [
+            # take copies
+            dict(EXAMPLE_PART_LIST_RESULT_PAGE_1),
+            dict(EXAMPLE_PART_LIST_RESULT_PAGE_2)
+        ]
+        parts_result = self.vault.list_all_parts(sentinel.upload_id)
+        expected = [call('examplevault', sentinel.upload_id),
+                    call('examplevault', sentinel.upload_id,
+                         marker=EXAMPLE_PART_LIST_RESULT_PAGE_1['Marker'])]
+        self.assertEquals(expected, self.mock_layer1.list_parts.call_args_list)
+        self.assertEquals(EXAMPLE_PART_LIST_COMPLETE, parts_result)
+
+    @patch('boto.glacier.vault.resume_file_upload')
+    def test_resume_archive_from_file(self, mock_resume_file_upload):
+        part_size = 4
+        mock_list_parts = Mock()
+        mock_list_parts.return_value = {
+            'PartSizeInBytes': part_size,
+            'Parts': [{
+                'RangeInBytes': '0-3',
+                'SHA256TreeHash': '12',
+                }, {
+                'RangeInBytes': '4-6',
+                'SHA256TreeHash': '34',
+                },
+        ]}
+
+        self.vault.list_all_parts = mock_list_parts
+        self.vault.resume_archive_from_file(
+            sentinel.upload_id, file_obj=sentinel.file_obj)
+        mock_resume_file_upload.assert_called_once_with(
+            self.vault, sentinel.upload_id, part_size, sentinel.file_obj,
+            {0: '12'.decode('hex'), 1: '34'.decode('hex')})
+
 
 class TestJob(GlacierLayer2Base):
     def setUp(self):
@@ -145,3 +238,29 @@
             "examplevault",
             "HkF9p6o7yjhFx-K3CGl6fuSm6VzW9T7esGQfco8nUXVYwS0jlb5gq1JZ55yHgt5vP"
             "54ZShjoQzQVVh7vEXAMPLEjobID", (0,100))
+
+
+class TestRangeStringParsing(unittest.TestCase):
+    def test_simple_range(self):
+        self.assertEquals(
+            Vault._range_string_to_part_index('0-3', 4), 0)
+
+    def test_range_one_too_big(self):
+        # Off-by-one bug in Amazon's Glacier implementation
+        # See: https://forums.aws.amazon.com/thread.jspa?threadID=106866&tstart=0
+        # Workaround is to assume that if a (start, end] range appears to be
+        # returned then that is what it is.
+        self.assertEquals(
+            Vault._range_string_to_part_index('0-4', 4), 0)
+
+    def test_range_too_big(self):
+        self.assertRaises(
+            AssertionError, Vault._range_string_to_part_index, '0-5', 4)
+
+    def test_range_start_mismatch(self):
+        self.assertRaises(
+            AssertionError, Vault._range_string_to_part_index, '1-3', 4)
+
+    def test_range_end_mismatch(self):
+        # End mismatch is OK, since the last part might be short
+        self.assertEquals(
+            Vault._range_string_to_part_index('0-2', 4), 0)
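
TestRangeStringParsing spells out how a part's 'start-end' byte range maps back to a part index, including the tolerated off-by-one on the end value and the short final part. An illustrative re-statement of those rules (not boto's _range_string_to_part_index itself):

# Sketch of the range parsing rules the tests above pin down.
def range_string_to_part_index(range_string, part_size):
    start, inside_end = [int(x) for x in range_string.split('-')]
    end = inside_end + 1
    length = end - start
    if length == part_size + 1:
        # Some responses report (start, end] instead of [start, end];
        # treat the extra byte as the documented off-by-one.
        end -= 1
        length -= 1
    assert start % part_size == 0, 'part start is not aligned to the part size'
    assert length <= part_size, 'part is larger than the part size'
    return start // part_size

assert range_string_to_part_index('0-3', 4) == 0   # exact part
assert range_string_to_part_index('0-4', 4) == 0   # tolerated off-by-one
assert range_string_to_part_index('0-2', 4) == 0   # short final part
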
diff --git a/tests/unit/glacier/test_utils.py b/tests/unit/glacier/test_utils.py
new file mode 100644
index 0000000..ee62188
--- /dev/null
+++ b/tests/unit/glacier/test_utils.py
@@ -0,0 +1,116 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import time
+import logging
+from hashlib import sha256
+from tests.unit import unittest
+
+from boto.glacier.utils import minimum_part_size, chunk_hashes, tree_hash, \
+        bytes_to_hex
+
+
+class TestPartSizeCalculations(unittest.TestCase):
+    def test_small_values_still_use_default_part_size(self):
+        self.assertEqual(minimum_part_size(1), 4 * 1024 * 1024)
+
+    def test_under_the_maximum_value(self):
+        # If we're under the maximum, we can use 4MB part sizes.
+        self.assertEqual(minimum_part_size(8 * 1024 * 1024),
+                         4 * 1024 * 1024)
+
+    def test_gigabyte_size(self):
+        # If the default part size would need more than 10,000 parts, we
+        # double the part size (powers of two) until the archive fits
+        # within the 10,000-part limit.
+        self.assertEqual(minimum_part_size(8 * 1024 * 1024 * 10000),
+                         8 * 1024 * 1024)
+
+    def test_terabyte_size(self):
+        # For a 4 TB file we need at least a 512 MB part size.
+        self.assertEqual(minimum_part_size(4 * 1024 * 1024 * 1024 * 1024),
+                         512 * 1024 * 1024)
+
+    def test_file_size_too_large(self):
+        with self.assertRaises(ValueError):
+            minimum_part_size((40000 * 1024 * 1024 * 1024) + 1)
+
+    def test_default_part_size_can_be_specified(self):
+        default_part_size = 2 * 1024 * 1024
+        self.assertEqual(minimum_part_size(8 * 1024 * 1024, default_part_size),
+                         default_part_size)
+
+
+class TestChunking(unittest.TestCase):
+    def test_chunk_hashes_exact(self):
+        chunks = chunk_hashes('a' * (2 * 1024 * 1024))
+        self.assertEqual(len(chunks), 2)
+        self.assertEqual(chunks[0], sha256('a' * 1024 * 1024).digest())
+
+    def test_chunks_with_leftovers(self):
+        bytestring = 'a' * (2 * 1024 * 1024 + 20)
+        chunks = chunk_hashes(bytestring)
+        self.assertEqual(len(chunks), 3)
+        self.assertEqual(chunks[0], sha256('a' * 1024 * 1024).digest())
+        self.assertEqual(chunks[1], sha256('a' * 1024 * 1024).digest())
+        self.assertEqual(chunks[2], sha256('a' * 20).digest())
+
+    def test_less_than_one_chunk(self):
+        chunks = chunk_hashes('aaaa')
+        self.assertEqual(len(chunks), 1)
+        self.assertEqual(chunks[0], sha256('aaaa').digest())
+
+
+class TestTreeHash(unittest.TestCase):
+    # For these tests, a set of reference tree hashes were computed.
+    # This will at least catch any regressions to the tree hash
+    # calculations.
+    def calculate_tree_hash(self, bytestring):
+        start = time.time()
+        calculated = bytes_to_hex(tree_hash(chunk_hashes(bytestring)))
+        end = time.time()
+        logging.debug("Tree hash calc time for length %s: %s",
+                      len(bytestring), end - start)
+        return calculated
+
+    def test_tree_hash_calculations(self):
+        one_meg_bytestring = 'a' * (1 * 1024 * 1024)
+        two_meg_bytestring = 'a' * (2 * 1024 * 1024)
+        four_meg_bytestring = 'a' * (4 * 1024 * 1024)
+        bigger_bytestring = four_meg_bytestring + 'a' * 20
+
+        self.assertEqual(
+            self.calculate_tree_hash(one_meg_bytestring),
+            '9bc1b2a288b26af7257a36277ae3816a7d4f16e89c1e7e77d0a5c48bad62b360')
+        self.assertEqual(
+            self.calculate_tree_hash(two_meg_bytestring),
+            '560c2c9333c719cb00cfdffee3ba293db17f58743cdd1f7e4055373ae6300afa')
+        self.assertEqual(
+            self.calculate_tree_hash(four_meg_bytestring),
+            '9491cb2ed1d4e7cd53215f4017c23ec4ad21d7050a1e6bb636c4f67e8cddb844')
+        self.assertEqual(
+            self.calculate_tree_hash(bigger_bytestring),
+            '12f3cbd6101b981cde074039f6f728071da8879d6f632de8afc7cdf00661b08f')
+
+    def test_empty_tree_hash(self):
+        self.assertEqual(
+            self.calculate_tree_hash(''),
+            'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855')
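
The part-size tests above describe the sizing rule: start from the 4 MB default and double the part size (powers of two) until the archive fits within Glacier's 10,000-part limit. A rough sketch of that search, mirroring the expected values but not boto.glacier.utils.minimum_part_size itself:

# Rough sketch of the power-of-two part-size search described above.
MB = 1024 * 1024
MAX_PARTS = 10000

def minimum_part_size_sketch(size_in_bytes, default_part_size=4 * MB):
    part_size = default_part_size
    while part_size * MAX_PARTS < size_in_bytes:
        part_size *= 2
    return part_size

# 80 GB (8 MB * 10,000) fits with 8 MB parts; a 4 TB archive needs 512 MB parts.
assert minimum_part_size_sketch(8 * MB * 10000) == 8 * MB
assert minimum_part_size_sketch(4 * 1024 * 1024 * MB) == 512 * MB
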
diff --git a/tests/unit/glacier/test_vault.py b/tests/unit/glacier/test_vault.py
new file mode 100644
index 0000000..7861b63
--- /dev/null
+++ b/tests/unit/glacier/test_vault.py
@@ -0,0 +1,175 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import unittest
+from cStringIO import StringIO
+
+import mock
+from mock import ANY
+
+from boto.glacier import vault
+from boto.glacier.job import Job
+from boto.glacier.response import GlacierResponse
+
+
+class TestVault(unittest.TestCase):
+    def setUp(self):
+        self.size_patch = mock.patch('os.path.getsize')
+        self.getsize = self.size_patch.start()
+        self.api = mock.Mock()
+        self.vault = vault.Vault(self.api, None)
+        self.vault.name = 'myvault'
+        self.mock_open = mock.mock_open()
+        stringio = StringIO('content')
+        self.mock_open.return_value.read = stringio.read
+
+    def tearDown(self):
+        self.size_patch.stop()
+
+    def test_upload_archive_small_file(self):
+        self.getsize.return_value = 1
+
+        self.api.upload_archive.return_value = {'ArchiveId': 'archive_id'}
+        with mock.patch('boto.glacier.vault.open', self.mock_open,
+                        create=True):
+            archive_id = self.vault.upload_archive(
+                'filename', 'my description')
+        self.assertEqual(archive_id, 'archive_id')
+        self.api.upload_archive.assert_called_with(
+            'myvault', self.mock_open.return_value,
+            mock.ANY, mock.ANY, 'my description')
+
+    def test_small_part_size_is_obeyed(self):
+        self.vault.DefaultPartSize = 2 * 1024 * 1024
+        self.vault.create_archive_writer = mock.Mock()
+
+        self.getsize.return_value = 1
+
+        with mock.patch('boto.glacier.vault.open', self.mock_open,
+                        create=True):
+            self.vault.create_archive_from_file('myfile')
+        # The writer should be created with the instance's default
+        # part size (2 MB).
+        self.vault.create_archive_writer.assert_called_with(
+                description=mock.ANY, part_size=self.vault.DefaultPartSize)
+
+    def test_large_part_size_is_obeyed(self):
+        self.vault.DefaultPartSize = 8 * 1024 * 1024
+        self.vault.create_archive_writer = mock.Mock()
+        self.getsize.return_value = 1
+        with mock.patch('boto.glacier.vault.open', self.mock_open,
+                        create=True):
+            self.vault.create_archive_from_file('myfile')
+        # The writer should be created with the instance's default
+        # part size (8 MB).
+        self.vault.create_archive_writer.assert_called_with(
+            description=mock.ANY, part_size=self.vault.DefaultPartSize)
+
+    def test_part_size_needs_to_be_adjusted(self):
+        # If we have a large file (400 GB)
+        self.getsize.return_value = 400 * 1024 * 1024 * 1024
+        self.vault.create_archive_writer = mock.Mock()
+        # When we try to upload the file.
+        with mock.patch('boto.glacier.vault.open', self.mock_open,
+                        create=True):
+            self.vault.create_archive_from_file('myfile')
+        # We should automatically bump up the part size used to
+        # 64 MB.
+        expected_part_size = 64 * 1024 * 1024
+        self.vault.create_archive_writer.assert_called_with(
+            description=mock.ANY, part_size=expected_part_size)
+
+    def test_retrieve_inventory(self):
+        class FakeResponse(object):
+            status = 202
+
+            def getheader(self, key, default=None):
+                if key == 'x-amz-job-id':
+                    return 'HkF9p6'
+                elif key == 'Content-Type':
+                    return 'application/json'
+
+                return 'something'
+
+            def read(self, amt=None):
+                return """{
+  "Action": "ArchiveRetrieval",
+  "ArchiveId": "NkbByEejwEggmBz2fTHgJrg0XBoDfjP4q6iu87-EXAMPLEArchiveId",
+  "ArchiveSizeInBytes": 16777216,
+  "ArchiveSHA256TreeHash": "beb0fe31a1c7ca8c6c04d574ea906e3f97",
+  "Completed": false,
+  "CreationDate": "2012-05-15T17:21:39.339Z",
+  "CompletionDate": "2012-05-15T17:21:43.561Z",
+  "InventorySizeInBytes": null,
+  "JobDescription": "My ArchiveRetrieval Job",
+  "JobId": "HkF9p6",
+  "RetrievalByteRange": "0-16777215",
+  "SHA256TreeHash": "beb0fe31a1c7ca8c6c04d574ea906e3f97b31fd",
+  "SNSTopic": "arn:aws:sns:us-east-1:012345678901:mytopic",
+  "StatusCode": "InProgress",
+  "StatusMessage": "Operation in progress.",
+  "VaultARN": "arn:aws:glacier:us-east-1:012345678901:vaults/examplevault"
+}"""
+
+        raw_resp = FakeResponse()
+        init_resp = GlacierResponse(raw_resp, [('x-amz-job-id', 'JobId')])
+        raw_resp_2 = FakeResponse()
+        desc_resp = GlacierResponse(raw_resp_2, [])
+
+        with mock.patch.object(self.vault.layer1, 'initiate_job',
+                               return_value=init_resp):
+            with mock.patch.object(self.vault.layer1, 'describe_job',
+                                   return_value=desc_resp):
+                # The old/back-compat variant of the call.
+                self.assertEqual(self.vault.retrieve_inventory(), 'HkF9p6')
+
+                # The variant that returns a full ``Job`` object.
+                job = self.vault.retrieve_inventory_job()
+                self.assertTrue(isinstance(job, Job))
+                self.assertEqual(job.id, 'HkF9p6')
+
+
+class TestConcurrentUploads(unittest.TestCase):
+
+    def test_concurrent_upload_file(self):
+        v = vault.Vault(None, None)
+        with mock.patch('boto.glacier.vault.ConcurrentUploader') as c:
+            c.return_value.upload.return_value = 'archive_id'
+            archive_id = v.concurrent_create_archive_from_file(
+                'filename', 'my description')
+            c.return_value.upload.assert_called_with('filename',
+                                                     'my description')
+        self.assertEqual(archive_id, 'archive_id')
+
+    def test_concurrent_upload_forwards_kwargs(self):
+        v = vault.Vault(None, None)
+        with mock.patch('boto.glacier.vault.ConcurrentUploader') as c:
+            c.return_value.upload.return_value = 'archive_id'
+            archive_id = v.concurrent_create_archive_from_file(
+                'filename', 'my description', num_threads=10,
+                part_size=1024 * 1024 * 1024 * 8)
+            c.assert_called_with(None, None, num_threads=10,
+                                 part_size=1024 * 1024 * 1024 * 8)
+
+
+if __name__ == '__main__':
+    unittest.main()
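
test_part_size_needs_to_be_adjusted above encodes the same sizing rule applied by create_archive_from_file: 400 GB spread over at most 10,000 parts needs roughly 41 MB per part, which rounds up to the next power of two, 64 MB. A quick check of that arithmetic using the search sketched earlier:

# Quick check of the 400 GB expectation in the test above.
MB = 1024 * 1024
archive_size = 400 * 1024 * MB            # 400 GB
part_size = 4 * MB
while part_size * 10000 < archive_size:   # stay within the 10,000-part limit
    part_size *= 2
assert part_size == 64 * MB   # ~41 MB per part -> next power of two is 64 MB
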
diff --git a/tests/unit/glacier/test_writer.py b/tests/unit/glacier/test_writer.py
index 216429f..43757eb 100644
--- a/tests/unit/glacier/test_writer.py
+++ b/tests/unit/glacier/test_writer.py
@@ -1,26 +1,230 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
 from hashlib import sha256
+import itertools
+from StringIO import StringIO
 
 from tests.unit import unittest
-import mock
+from mock import (
+    call,
+    Mock,
+    patch,
+    sentinel,
+)
+from nose.tools import assert_equal
 
-from boto.glacier.writer import Writer, chunk_hashes
+from boto.glacier.layer1 import Layer1
+from boto.glacier.vault import Vault
+from boto.glacier.writer import Writer, resume_file_upload
+from boto.glacier.utils import bytes_to_hex, chunk_hashes, tree_hash
 
 
-class TestChunking(unittest.TestCase):
-    def test_chunk_hashes_exact(self):
-        chunks = chunk_hashes('a' * (2 * 1024 * 1024))
-        self.assertEqual(len(chunks), 2)
-        self.assertEqual(chunks[0], sha256('a' * 1024 * 1024).digest())
+def create_mock_vault():
+    vault = Mock(spec=Vault)
+    vault.layer1 = Mock(spec=Layer1)
+    vault.layer1.complete_multipart_upload.return_value = dict(
+        ArchiveId=sentinel.archive_id)
+    vault.name = sentinel.vault_name
+    return vault
 
-    def test_chunks_with_leftovers(self):
-        bytestring = 'a' * (2 * 1024 * 1024 + 20)
-        chunks = chunk_hashes(bytestring)
-        self.assertEqual(len(chunks), 3)
-        self.assertEqual(chunks[0], sha256('a' * 1024 * 1024).digest())
-        self.assertEqual(chunks[1], sha256('a' * 1024 * 1024).digest())
-        self.assertEqual(chunks[2], sha256('a' * 20).digest())
 
-    def test_less_than_one_chunk(self):
-        chunks = chunk_hashes('aaaa')
-        self.assertEqual(len(chunks), 1)
-        self.assertEqual(chunks[0], sha256('aaaa').digest())
+def partify(data, part_size):
+    for i in itertools.count(0):
+        start = i * part_size
+        part = data[start:start+part_size]
+        if part:
+            yield part
+        else:
+            return
+
+
+def calculate_mock_vault_calls(data, part_size, chunk_size):
+    upload_part_calls = []
+    data_tree_hashes = []
+    for i, data_part in enumerate(partify(data, part_size)):
+        start = i * part_size
+        end = start + len(data_part)
+        data_part_tree_hash_blob = tree_hash(
+            chunk_hashes(data_part, chunk_size))
+        data_part_tree_hash = bytes_to_hex(data_part_tree_hash_blob)
+        data_part_linear_hash = sha256(data_part).hexdigest()
+        upload_part_calls.append(
+            call.layer1.upload_part(
+                sentinel.vault_name, sentinel.upload_id,
+                data_part_linear_hash, data_part_tree_hash,
+                (start, end - 1), data_part))
+        data_tree_hashes.append(data_part_tree_hash_blob)
+
+    return upload_part_calls, data_tree_hashes
+
+
+def check_mock_vault_calls(vault, upload_part_calls, data_tree_hashes,
+                           data_len):
+    vault.layer1.upload_part.assert_has_calls(
+        upload_part_calls, any_order=True)
+    assert_equal(
+        len(upload_part_calls), vault.layer1.upload_part.call_count)
+
+    data_tree_hash = bytes_to_hex(tree_hash(data_tree_hashes))
+    vault.layer1.complete_multipart_upload.assert_called_once_with(
+        sentinel.vault_name, sentinel.upload_id, data_tree_hash, data_len)
+
+
+class TestWriter(unittest.TestCase):
+    def setUp(self):
+        super(TestWriter, self).setUp()
+        self.vault = create_mock_vault()
+        self.chunk_size = 2 # power of 2
+        self.part_size = 4 # power of 2
+        upload_id = sentinel.upload_id
+        self.writer = Writer(
+            self.vault, upload_id, self.part_size, self.chunk_size)
+
+    def check_write(self, write_list):
+        for write_data in write_list:
+            self.writer.write(write_data)
+        self.writer.close()
+
+        data = ''.join(write_list)
+        upload_part_calls, data_tree_hashes = calculate_mock_vault_calls(
+            data, self.part_size, self.chunk_size)
+        check_mock_vault_calls(
+            self.vault, upload_part_calls, data_tree_hashes, len(data))
+
+    def test_single_byte_write(self):
+        self.check_write(['1'])
+
+    def test_one_part_write(self):
+        self.check_write(['1234'])
+
+    def test_split_write_1(self):
+        self.check_write(['1', '234'])
+
+    def test_split_write_2(self):
+        self.check_write(['12', '34'])
+
+    def test_split_write_3(self):
+        self.check_write(['123', '4'])
+
+    def test_one_part_plus_one_write(self):
+        self.check_write(['12345'])
+
+    def test_returns_archive_id(self):
+        self.writer.write('1')
+        self.writer.close()
+        self.assertEquals(sentinel.archive_id, self.writer.get_archive_id())
+
+    def test_current_tree_hash(self):
+        self.writer.write('1234')
+        self.writer.write('567')
+        hash_1 = self.writer.current_tree_hash
+        self.assertEqual(hash_1,
+            '\x0e\xb0\x11Z\x1d\x1f\n\x10|\xf76\xa6\xf5' +
+            '\x83\xd1\xd5"bU\x0c\x95\xa8<\xf5\x81\xef\x0e\x0f\x95\n\xb7k'
+        )
+
+        # This hash will be different, since the content has changed.
+        self.writer.write('22i3uy')
+        hash_2 = self.writer.current_tree_hash
+        self.assertEqual(hash_2,
+            '\x7f\xf4\x97\x82U]\x81R\x05#^\xe8\x1c\xd19' +
+            '\xe8\x1f\x9e\xe0\x1aO\xaad\xe5\x06"\xa5\xc0\xa8AdL'
+        )
+        self.writer.close()
+
+        # Check the final tree hash, post-close.
+        final_hash = self.writer.current_tree_hash
+        self.assertEqual(final_hash,
+            ';\x1a\xb8!=\xf0\x14#\x83\x11\xd5\x0b\x0f' +
+            '\xc7D\xe4\x8e\xd1W\x99z\x14\x06\xb9D\xd0\xf0*\x93\xa2\x8e\xf9'
+        )
+        # Then assert we don't get a different one on a subsequent call.
+        self.assertEqual(final_hash, self.writer.current_tree_hash)
+
+    def test_current_uploaded_size(self):
+        self.writer.write('1234')
+        self.writer.write('567')
+        size_1 = self.writer.current_uploaded_size
+        self.assertEqual(size_1, 4)
+
+        # The size will be different, since more content has been written.
+        self.writer.write('22i3uy')
+        size_2 = self.writer.current_uploaded_size
+        self.assertEqual(size_2, 12)
+        self.writer.close()
+
+        # Get the final size, post-close.
+        final_size = self.writer.current_uploaded_size
+        self.assertEqual(final_size, 13)
+        # Then assert we don't get a different one on a subsequent call.
+        self.assertEqual(final_size, self.writer.current_uploaded_size)
+
+    def test_upload_id(self):
+        self.assertEquals(sentinel.upload_id, self.writer.upload_id)
+
+
+class TestResume(unittest.TestCase):
+    def setUp(self):
+        super(TestResume, self).setUp()
+        self.vault = create_mock_vault()
+        self.chunk_size = 2 # power of 2
+        self.part_size = 4 # power of 2
+
+    def check_no_resume(self, data, resume_set=set()):
+        fobj = StringIO(data)
+        part_hash_map = {}
+        for part_index in resume_set:
+            start = self.part_size * part_index
+            end = start + self.part_size
+            part_data = data[start:end]
+            part_hash_map[part_index] = tree_hash(
+                chunk_hashes(part_data, self.chunk_size))
+
+        resume_file_upload(
+            self.vault, sentinel.upload_id, self.part_size, fobj,
+            part_hash_map, self.chunk_size)
+
+        upload_part_calls, data_tree_hashes = calculate_mock_vault_calls(
+            data, self.part_size, self.chunk_size)
+        resume_upload_part_calls = [
+            call for part_index, call in enumerate(upload_part_calls)
+                    if part_index not in resume_set]
+        check_mock_vault_calls(
+            self.vault, resume_upload_part_calls, data_tree_hashes, len(data))
+
+    def test_one_part_no_resume(self):
+        self.check_no_resume('1234')
+
+    def test_two_parts_no_resume(self):
+        self.check_no_resume('12345678')
+
+    def test_one_part_resume(self):
+        self.check_no_resume('1234', resume_set=set([0]))
+
+    def test_two_parts_one_resume(self):
+        self.check_no_resume('12345678', resume_set=set([1]))
+
+    def test_returns_archive_id(self):
+        archive_id = resume_file_upload(
+            self.vault, sentinel.upload_id, self.part_size, StringIO('1'), {},
+            self.chunk_size)
+        self.assertEquals(sentinel.archive_id, archive_id)
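
TestResume describes the resume contract for resume_file_upload: parts whose index already appears in the uploaded-hash map are skipped, every other part is re-sent, and the completion call still covers the full archive length. A small sketch of the part-selection step (parts_to_upload is a hypothetical helper, not boto.glacier.writer code):

# Hedged sketch of the skip logic TestResume exercises.
def parts_to_upload(archive_size, part_size, part_hash_map):
    total_parts = (archive_size + part_size - 1) // part_size
    return [index for index in range(total_parts)
            if index not in part_hash_map]

# 8-byte archive, 4-byte parts, part 1 already uploaded: only part 0 is
# re-sent (mirrors test_two_parts_one_resume above).
assert parts_to_upload(8, 4, {1: 'tree-hash-of-part-1'}) == [0]
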
diff --git a/tests/unit/provider/test_provider.py b/tests/unit/provider/test_provider.py
index 6e494d1..cbeea4a 100644
--- a/tests/unit/provider/test_provider.py
+++ b/tests/unit/provider/test_provider.py
@@ -7,6 +7,19 @@
 from boto import provider
 
 
+INSTANCE_CONFIG = {
+    'allowall': {
+        u'AccessKeyId': u'iam_access_key',
+        u'Code': u'Success',
+        u'Expiration': u'2012-09-01T03:57:34Z',
+        u'LastUpdated': u'2012-08-31T21:43:40Z',
+        u'SecretAccessKey': u'iam_secret_key',
+        u'Token': u'iam_token',
+        u'Type': u'AWS-HMAC'
+    }
+}
+
+
 class TestProvider(unittest.TestCase):
     def setUp(self):
         self.environ = {}
@@ -70,6 +83,33 @@
         self.assertEqual(p.secret_key, 'cfg_secret_key')
         self.assertIsNone(p.security_token)
 
+    def test_keyring_is_used(self):
+        self.config = {
+            'Credentials': {
+                'aws_access_key_id': 'cfg_access_key',
+                'keyring': 'test',
+            }
+        }
+        import sys
+        try:
+            import keyring
+            imported = True
+        except ImportError:
+            # No real keyring module is installed; register a stub so that
+            # mock.patch('keyring.get_password', create=True) can resolve it.
+            sys.modules['keyring'] = keyring = type(mock)('keyring', '')
+            imported = False
+
+        try:
+            with mock.patch('keyring.get_password', create=True):
+                keyring.get_password.side_effect = (
+                    lambda kr, login: kr+login+'pw')
+                p = provider.Provider('aws')
+                self.assertEqual(p.access_key, 'cfg_access_key')
+                self.assertEqual(p.secret_key, 'testcfg_access_keypw')
+                self.assertIsNone(p.security_token)
+        finally:
+            if not imported:
+                del sys.modules['keyring']
+
     def test_env_vars_beat_config_values(self):
         self.environ['AWS_ACCESS_KEY_ID'] = 'env_access_key'
         self.environ['AWS_SECRET_ACCESS_KEY'] = 'env_secret_key'
@@ -85,24 +125,14 @@
         self.assertIsNone(p.security_token)
 
     def test_metadata_server_credentials(self):
-        instance_config = {
-            'iam': {
-                'security-credentials': {
-                    'allowall': {u'AccessKeyId': u'iam_access_key',
-                                 u'Code': u'Success',
-                                 u'Expiration': u'2012-09-01T03:57:34Z',
-                                 u'LastUpdated': u'2012-08-31T21:43:40Z',
-                                 u'SecretAccessKey': u'iam_secret_key',
-                                 u'Token': u'iam_token',
-                                 u'Type': u'AWS-HMAC'}
-                }
-            }
-        }
-        self.get_instance_metadata.return_value = instance_config
+        self.get_instance_metadata.return_value = INSTANCE_CONFIG
         p = provider.Provider('aws')
         self.assertEqual(p.access_key, 'iam_access_key')
         self.assertEqual(p.secret_key, 'iam_secret_key')
         self.assertEqual(p.security_token, 'iam_token')
+        self.assertEqual(
+            self.get_instance_metadata.call_args[1]['data'],
+            'meta-data/iam/security-credentials')
 
     def test_refresh_credentials(self):
         now = datetime.now()
@@ -117,13 +147,7 @@
             u'Token': u'first_token',
             u'Type': u'AWS-HMAC'
         }
-        instance_config = {
-            'iam': {
-                'security-credentials': {
-                    'allowall': credentials
-                }
-            }
-        }
+        instance_config = {'allowall': credentials}
         self.get_instance_metadata.return_value = instance_config
         p = provider.Provider('aws')
         self.assertEqual(p.access_key, 'first_access_key')
@@ -144,6 +168,20 @@
         self.assertEqual(p.secret_key, 'second_secret_key')
         self.assertEqual(p.security_token, 'second_token')
 
+    @mock.patch('boto.provider.config.getint')
+    @mock.patch('boto.provider.config.getfloat')
+    def test_metadata_config_params(self, config_float, config_int):
+        config_int.return_value = 10
+        config_float.return_value = 4.0
+        self.get_instance_metadata.return_value = INSTANCE_CONFIG
+        p = provider.Provider('aws')
+        self.assertEqual(p.access_key, 'iam_access_key')
+        self.assertEqual(p.secret_key, 'iam_secret_key')
+        self.assertEqual(p.security_token, 'iam_token')
+        self.get_instance_metadata.assert_called_with(
+            timeout=4.0, num_retries=10,
+            data='meta-data/iam/security-credentials')
+
 
 if __name__ == '__main__':
     unittest.main()
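
The provider tests above pin down where IAM role credentials come from: get_instance_metadata is called with data='meta-data/iam/security-credentials' (plus configurable timeout and num_retries) and returns a mapping of role name to credential fields. A minimal sketch of consuming that shape (credentials_from_metadata is illustrative, not boto's API; the real lookup queries the EC2 instance metadata service):

# Minimal sketch of consuming the metadata shape used above.
def credentials_from_metadata(metadata):
    # metadata maps a role name to its credential fields, as in INSTANCE_CONFIG.
    role_name = sorted(metadata.keys())[0]
    creds = metadata[role_name]
    return creds['AccessKeyId'], creds['SecretAccessKey'], creds['Token']

access_key, secret_key, token = credentials_from_metadata({
    'allowall': {'AccessKeyId': 'iam_access_key',
                 'SecretAccessKey': 'iam_secret_key',
                 'Token': 'iam_token'}})
assert (access_key, secret_key, token) == (
    'iam_access_key', 'iam_secret_key', 'iam_token')
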
diff --git a/tests/unit/rds/__init__.py b/tests/unit/rds/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/unit/rds/__init__.py
diff --git a/tests/unit/rds/test_connection.py b/tests/unit/rds/test_connection.py
new file mode 100644
index 0000000..0d4bff8
--- /dev/null
+++ b/tests/unit/rds/test_connection.py
@@ -0,0 +1,131 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+
+from boto.rds import RDSConnection
+
+
+class TestRDSConnection(AWSMockServiceTestCase):
+    connection_class = RDSConnection
+
+    def setUp(self):
+        super(TestRDSConnection, self).setUp()
+
+    def default_body(self):
+        return """
+        <DescribeDBInstancesResponse>
+          <DescribeDBInstancesResult>
+            <DBInstances>
+                <DBInstance>
+                  <Iops>2000</Iops>
+                  <BackupRetentionPeriod>1</BackupRetentionPeriod>
+                  <MultiAZ>false</MultiAZ>
+                  <DBInstanceStatus>backing-up</DBInstanceStatus>
+                  <DBInstanceIdentifier>mydbinstance2</DBInstanceIdentifier>
+                  <PreferredBackupWindow>10:30-11:00</PreferredBackupWindow>
+                  <PreferredMaintenanceWindow>wed:06:30-wed:07:00</PreferredMaintenanceWindow>
+                  <OptionGroupMembership>
+                    <OptionGroupName>default:mysql-5-5</OptionGroupName>
+                    <Status>in-sync</Status>
+                  </OptionGroupMembership>
+                  <AvailabilityZone>us-west-2b</AvailabilityZone>
+                  <ReadReplicaDBInstanceIdentifiers/>
+                  <Engine>mysql</Engine>
+                  <PendingModifiedValues/>
+                  <LicenseModel>general-public-license</LicenseModel>
+                  <DBParameterGroups>
+                    <DBParameterGroup>
+                      <ParameterApplyStatus>in-sync</ParameterApplyStatus>
+                      <DBParameterGroupName>default.mysql5.5</DBParameterGroupName>
+                    </DBParameterGroup>
+                  </DBParameterGroups>
+                  <Endpoint>
+                    <Port>3306</Port>
+                    <Address>mydbinstance2.c0hjqouvn9mf.us-west-2.rds.amazonaws.com</Address>
+                  </Endpoint>
+                  <EngineVersion>5.5.27</EngineVersion>
+                  <DBSecurityGroups>
+                    <DBSecurityGroup>
+                      <Status>active</Status>
+                      <DBSecurityGroupName>default</DBSecurityGroupName>
+                    </DBSecurityGroup>
+                  </DBSecurityGroups>
+                  <DBName>mydb2</DBName>
+                  <AutoMinorVersionUpgrade>true</AutoMinorVersionUpgrade>
+                  <InstanceCreateTime>2012-10-03T22:01:51.047Z</InstanceCreateTime>
+                  <AllocatedStorage>200</AllocatedStorage>
+                  <DBInstanceClass>db.m1.large</DBInstanceClass>
+                  <MasterUsername>awsuser</MasterUsername>
+                </DBInstance>
+            </DBInstances>
+          </DescribeDBInstancesResult>
+        </DescribeDBInstancesResponse>
+        """
+
+    def test_get_all_db_instances(self):
+        self.set_http_response(status_code=200)
+        response = self.service_connection.get_all_dbinstances('instance_id')
+        self.assertEqual(len(response), 1)
+        self.assert_request_parameters({
+            'Action': 'DescribeDBInstances',
+            'DBInstanceIdentifier': 'instance_id',
+        }, ignore_params_values=['AWSAccessKeyId', 'Timestamp', 'Version',
+                                 'SignatureVersion', 'SignatureMethod'])
+        db = response[0]
+        self.assertEqual(db.id, 'mydbinstance2')
+        self.assertEqual(db.create_time, '2012-10-03T22:01:51.047Z')
+        self.assertEqual(db.engine, 'mysql')
+        self.assertEqual(db.status, 'backing-up')
+        self.assertEqual(db.allocated_storage, 200)
+        self.assertEqual(
+            db.endpoint,
+            (u'mydbinstance2.c0hjqouvn9mf.us-west-2.rds.amazonaws.com', 3306))
+        self.assertEqual(db.instance_class, 'db.m1.large')
+        self.assertEqual(db.master_username, 'awsuser')
+        self.assertEqual(db.availability_zone, 'us-west-2b')
+        self.assertEqual(db.backup_retention_period, '1')
+        self.assertEqual(db.preferred_backup_window, '10:30-11:00')
+        self.assertEqual(db.preferred_maintenance_window,
+                         'wed:06:30-wed:07:00')
+        self.assertEqual(db.latest_restorable_time, None)
+        self.assertEqual(db.multi_az, False)
+        self.assertEqual(db.iops, 2000)
+        self.assertEqual(db.pending_modified_values, {})
+
+        self.assertEqual(db.parameter_group.name,
+                         'default.mysql5.5')
+        self.assertEqual(db.parameter_group.description, None)
+        self.assertEqual(db.parameter_group.engine, None)
+
+        self.assertEqual(db.security_group.owner_id, None)
+        self.assertEqual(db.security_group.name, 'default')
+        self.assertEqual(db.security_group.description, None)
+        self.assertEqual(db.security_group.ec2_groups, [])
+        self.assertEqual(db.security_group.ip_ranges, [])
+
+
+if __name__ == '__main__':
+    unittest.main()
+
diff --git a/tests/unit/s3/test_bucket.py b/tests/unit/s3/test_bucket.py
new file mode 100644
index 0000000..ac2d82b
--- /dev/null
+++ b/tests/unit/s3/test_bucket.py
@@ -0,0 +1,100 @@
+# -*- coding: utf-8 -*-
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+
+from boto.s3.connection import S3Connection
+from boto.s3.bucket import Bucket
+
+
+class TestS3Bucket(AWSMockServiceTestCase):
+    connection_class = S3Connection
+
+    def setUp(self):
+        super(TestS3Bucket, self).setUp()
+
+    def test_bucket_create_bucket(self):
+        self.set_http_response(status_code=200)
+        bucket = self.service_connection.create_bucket('mybucket_create')
+        self.assertEqual(bucket.name, 'mybucket_create')
+
+    def test_bucket_constructor(self):
+        self.set_http_response(status_code=200)
+        bucket = Bucket(self.service_connection, 'mybucket_constructor')
+        self.assertEqual(bucket.name, 'mybucket_constructor')
+
+    def test_bucket_basics(self):
+        self.set_http_response(status_code=200)
+        bucket = self.service_connection.create_bucket('mybucket')
+        self.assertEqual(bucket.__repr__(), '<Bucket: mybucket>')
+
+    def test_bucket_new_key(self):
+        self.set_http_response(status_code=200)
+        bucket = self.service_connection.create_bucket('mybucket')
+        key = bucket.new_key('mykey')
+
+        self.assertEqual(key.bucket, bucket)
+        self.assertEqual(key.key, 'mykey')
+
+    def test_bucket_new_key_missing_name(self):
+        self.set_http_response(status_code=200)
+        bucket = self.service_connection.create_bucket('mybucket')
+
+        with self.assertRaises(ValueError):
+            key = bucket.new_key('')
+
+    def test_bucket_delete_key_missing_name(self):
+        self.set_http_response(status_code=200)
+        bucket = self.service_connection.create_bucket('mybucket')
+
+        with self.assertRaises(ValueError):
+            key = bucket.delete_key('')
+
+    def test__get_all_query_args(self):
+        bukket = Bucket()
+
+        # Default.
+        qa = bukket._get_all_query_args({})
+        self.assertEqual(qa, '')
+
+        # Default with initial.
+        qa = bukket._get_all_query_args({}, 'initial=1')
+        self.assertEqual(qa, 'initial=1')
+
+        # Single param.
+        qa = bukket._get_all_query_args({
+            'foo': 'true'
+        })
+        self.assertEqual(qa, 'foo=true')
+
+        # Single param with initial.
+        qa = bukket._get_all_query_args({
+            'foo': 'true'
+        }, 'initial=1')
+        self.assertEqual(qa, 'initial=1&foo=true')
+
+        # Multiple params with all the weird cases.
+        multiple_params = {
+            'foo': 'true',
+            # Ensure Unicode chars get encoded.
+            'bar': '☃',
+            # Underscores are bad, m'kay?
+            'some_other': 'thing',
+            # Change the variant of ``max-keys``.
+            'maxkeys': 0,
+            # ``None`` values get excluded.
+            'notthere': None,
+            # Empty values also get excluded.
+            'notpresenteither': '',
+        }
+        qa = bukket._get_all_query_args(multiple_params)
+        self.assertEqual(
+            qa,
+            'bar=%E2%98%83&max-keys=0&foo=true&some-other=thing'
+        )
+
+        # Multiple params with initial.
+        qa = bukket._get_all_query_args(multiple_params, 'initial=1')
+        self.assertEqual(
+            qa,
+            'initial=1&bar=%E2%98%83&max-keys=0&foo=true&some-other=thing'
+        )
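
test__get_all_query_args above documents the parameter normalisation rules: underscores become dashes, the 'maxkeys' variant becomes 'max-keys', values are URL-encoded, and None or empty values are dropped. An illustrative re-statement of those rules (Bucket._get_all_query_args is the real implementation; this sketch ignores parameter ordering):

# Sketch of the query-string normalisation rules spelled out in the test.
import urllib  # Python 2 urllib, matching the rest of this test suite

def get_all_query_args_sketch(params, initial=''):
    pieces = [initial] if initial else []
    for key, value in params.items():
        if value is None or value == '':
            continue                       # None and empty values are dropped
        if key == 'maxkeys':
            key = 'max-keys'               # normalise the common variant
        key = key.replace('_', '-')        # underscores become dashes
        pieces.append('%s=%s' % (key, urllib.quote(str(value))))
    return '&'.join(pieces)

assert get_all_query_args_sketch({'foo': 'true'}, 'initial=1') == 'initial=1&foo=true'
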
diff --git a/tests/unit/s3/test_key.py b/tests/unit/s3/test_key.py
new file mode 100644
index 0000000..5e249c1
--- /dev/null
+++ b/tests/unit/s3/test_key.py
@@ -0,0 +1,126 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+try:
+    from cStringIO import StringIO
+except ImportError:
+    from StringIO import StringIO
+
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+
+from boto.exception import BotoServerError
+from boto.s3.connection import S3Connection
+from boto.s3.bucket import Bucket
+
+
+class TestS3Key(AWSMockServiceTestCase):
+    connection_class = S3Connection
+
+    def setUp(self):
+        super(TestS3Key, self).setUp()
+
+    def default_body(self):
+        return "default body"
+
+    def test_when_no_restore_header_present(self):
+        self.set_http_response(status_code=200)
+        b = Bucket(self.service_connection, 'mybucket')
+        k = b.get_key('myglacierkey')
+        self.assertIsNone(k.ongoing_restore)
+        self.assertIsNone(k.expiry_date)
+
+    def test_restore_header_with_ongoing_restore(self):
+        self.set_http_response(
+            status_code=200,
+            header=[('x-amz-restore', 'ongoing-request="true"')])
+        b = Bucket(self.service_connection, 'mybucket')
+        k = b.get_key('myglacierkey')
+        self.assertTrue(k.ongoing_restore)
+        self.assertIsNone(k.expiry_date)
+
+    def test_restore_completed(self):
+        self.set_http_response(
+            status_code=200,
+            header=[('x-amz-restore',
+                     'ongoing-request="false", '
+                     'expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"')])
+        b = Bucket(self.service_connection, 'mybucket')
+        k = b.get_key('myglacierkey')
+        self.assertFalse(k.ongoing_restore)
+        self.assertEqual(k.expiry_date, 'Fri, 21 Dec 2012 00:00:00 GMT')
+
+    def test_delete_key_return_key(self):
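+        # Even with a 204 No Content response and an empty body, delete_key
+        # should still return a Key object rather than None.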
+        self.set_http_response(status_code=204, body='')
+        b = Bucket(self.service_connection, 'mybucket')
+        key = b.delete_key('fookey')
+        self.assertIsNotNone(key)
+
+
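+# Decorator that counts how many times the wrapped function is called; used
+# below to verify that Key.should_retry() is exercised during send_file().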
+def counter(fn):
+    def _wrapper(*args, **kwargs):
+        _wrapper.count += 1
+        return fn(*args, **kwargs)
+    _wrapper.count = 0
+    return _wrapper
+
+
+class TestS3KeyRetries(AWSMockServiceTestCase):
+    connection_class = S3Connection
+
+    def setUp(self):
+        super(TestS3KeyRetries, self).setUp()
+
+    def test_500_retry(self):
+        self.set_http_response(status_code=500)
+        b = Bucket(self.service_connection, 'mybucket')
+        k = b.new_key('test_failure')
+        fail_file = StringIO('This will attempt to retry.')
+
+        try:
+            k.send_file(fail_file)
+            self.fail("This shouldn't ever succeed.")
+        except BotoServerError:
+            pass
+
+    def test_400_timeout(self):
+        weird_timeout_body = "<Error><Code>RequestTimeout</Code></Error>"
+        self.set_http_response(status_code=400, body=weird_timeout_body)
+        b = Bucket(self.service_connection, 'mybucket')
+        k = b.new_key('test_failure')
+        fail_file = StringIO('This will pretend to be chunk-able.')
+
+        # Decorate.
+        k.should_retry = counter(k.should_retry)
+        self.assertEqual(k.should_retry.count, 0)
+
+        try:
+            k.send_file(fail_file)
+            self.fail("This shouldn't ever succeed.")
+        except BotoServerError:
+            pass
+
+        self.assertEqual(k.should_retry.count, 1)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/s3/test_keyfile.py b/tests/unit/s3/test_keyfile.py
new file mode 100644
index 0000000..bf90664
--- /dev/null
+++ b/tests/unit/s3/test_keyfile.py
@@ -0,0 +1,101 @@
+# Copyright 2013 Google Inc.
+# Copyright 2011, Nexenta Systems Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+import os
+import unittest
+from boto.s3.keyfile import KeyFile
+from tests.integration.s3.mock_storage_service import MockConnection
+from tests.integration.s3.mock_storage_service import MockBucket
+
+
+class KeyfileTest(unittest.TestCase):
+
+    def setUp(self):
+        service_connection = MockConnection()
+        self.contents = '0123456789'
+        bucket = MockBucket(service_connection, 'mybucket')
+        key = bucket.new_key('mykey')
+        key.set_contents_from_string(self.contents)
+        self.keyfile = KeyFile(key)
+
+    def tearDown(self):
+        self.keyfile.close()
+
+    def testReadFull(self):
+        self.assertEqual(self.keyfile.read(len(self.contents)), self.contents)
+
+    def testReadPartial(self):
+        self.assertEqual(self.keyfile.read(5), self.contents[:5])
+        self.assertEqual(self.keyfile.read(5), self.contents[5:])
+
+    def testTell(self):
+        self.assertEqual(self.keyfile.tell(), 0)
+        self.keyfile.read(4)
+        self.assertEqual(self.keyfile.tell(), 4)
+        self.keyfile.read(6)
+        self.assertEqual(self.keyfile.tell(), 10)
+        self.keyfile.close()
+        try:
+            self.keyfile.tell()
+        except ValueError, e:
+            self.assertEqual(str(e), 'I/O operation on closed file')
+
+    def testSeek(self):
+        self.assertEqual(self.keyfile.read(4), self.contents[:4])
+        self.keyfile.seek(0)
+        self.assertEqual(self.keyfile.read(4), self.contents[:4])
+        self.keyfile.seek(5)
+        self.assertEqual(self.keyfile.read(5), self.contents[5:])
+
+        # Seeking negative should raise.
+        try:
+            self.keyfile.seek(-5)
+        except IOError, e:
+            self.assertEqual(str(e), 'Invalid argument')
+
+        # Reading past end of file is supposed to return empty string.
+        self.keyfile.read(10)
+        self.assertEqual(self.keyfile.read(20), '')
+
+        # Seeking past end of file is supposed to silently work.
+        self.keyfile.seek(50)
+        self.assertEqual(self.keyfile.tell(), 50)
+        self.assertEqual(self.keyfile.read(1), '')
+
+    def testSeekEnd(self):
+        self.assertEqual(self.keyfile.read(4), self.contents[:4])
+        self.keyfile.seek(0, os.SEEK_END)
+        self.assertEqual(self.keyfile.read(1), '')
+        self.keyfile.seek(-1, os.SEEK_END)
+        self.assertEqual(self.keyfile.tell(), 9)
+        self.assertEqual(self.keyfile.read(1), '9')
+        # Test attempt to seek backwards past the start from the end.
+        try:
+            self.keyfile.seek(-100, os.SEEK_END)
+        except IOError, e:
+            self.assertEqual(str(e), 'Invalid argument')
+
+    def testSeekCur(self):
+        self.assertEqual(self.keyfile.read(1), self.contents[0])
+        self.keyfile.seek(1, os.SEEK_CUR)
+        self.assertEqual(self.keyfile.tell(), 2)
+        self.assertEqual(self.keyfile.read(4), self.contents[2:6])
diff --git a/tests/unit/s3/test_lifecycle.py b/tests/unit/s3/test_lifecycle.py
new file mode 100644
index 0000000..da50f3a
--- /dev/null
+++ b/tests/unit/s3/test_lifecycle.py
@@ -0,0 +1,97 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from tests.unit import AWSMockServiceTestCase
+
+from boto.s3.connection import S3Connection
+from boto.s3.bucket import Bucket
+from boto.s3.lifecycle import Rule, Lifecycle, Transition
+
+
+class TestS3LifeCycle(AWSMockServiceTestCase):
+    connection_class = S3Connection
+
+    def default_body(self):
+        return """
+        <LifecycleConfiguration>
+          <Rule>
+            <ID>rule-1</ID>
+            <Prefix>prefix/foo</Prefix>
+            <Status>Enabled</Status>
+            <Transition>
+              <Days>30</Days>
+              <StorageClass>GLACIER</StorageClass>
+            </Transition>
+            <Expiration>
+              <Days>365</Days>
+            </Expiration>
+          </Rule>
+          <Rule>
+            <ID>rule-2</ID>
+            <Prefix>prefix/bar</Prefix>
+            <Status>Disabled</Status>
+            <Transition>
+              <Date>2012-12-31T00:00:000Z</Date>
+              <StorageClass>GLACIER</StorageClass>
+            </Transition>
+          </Rule>
+        </LifecycleConfiguration>
+        """
+
+    def test_parse_lifecycle_response(self):
+        self.set_http_response(status_code=200)
+        bucket = Bucket(self.service_connection, 'mybucket')
+        response = bucket.get_lifecycle_config()
+        self.assertEqual(len(response), 2)
+        rule = response[0]
+        self.assertEqual(rule.id, 'rule-1')
+        self.assertEqual(rule.prefix, 'prefix/foo')
+        self.assertEqual(rule.status, 'Enabled')
+        self.assertEqual(rule.expiration.days, 365)
+        self.assertIsNone(rule.expiration.date)
+        transition = rule.transition
+        self.assertEqual(transition.days, 30)
+        self.assertEqual(transition.storage_class, 'GLACIER')
+        self.assertEqual(response[1].transition.date, '2012-12-31T00:00:000Z')
+
+    def test_expiration_with_no_transition(self):
+        lifecycle = Lifecycle()
+        lifecycle.add_rule('myid', 'prefix', 'Enabled', 30)
+        xml = lifecycle.to_xml()
+        self.assertIn('<Expiration><Days>30</Days></Expiration>', xml)
+
+    def test_expiration_is_optional(self):
+        t = Transition(days=30, storage_class='GLACIER')
+        r = Rule('myid', 'prefix', 'Enabled', expiration=None,
+                 transition=t)
+        xml = r.to_xml()
+        self.assertIn(
+            '<Transition><StorageClass>GLACIER</StorageClass><Days>30</Days>',
+            xml)
+
+    def test_expiration_with_expiration_and_transition(self):
+        t = Transition(date='2012-11-30T00:00:000Z', storage_class='GLACIER')
+        r = Rule('myid', 'prefix', 'Enabled', expiration=30, transition=t)
+        xml = r.to_xml()
+        self.assertIn(
+            '<Transition><StorageClass>GLACIER</StorageClass>'
+            '<Date>2012-11-30T00:00:000Z</Date>', xml)
+        self.assertIn('<Expiration><Days>30</Days></Expiration>', xml)
diff --git a/tests/unit/s3/test_tagging.py b/tests/unit/s3/test_tagging.py
index 4a0be38..02b5f53 100644
--- a/tests/unit/s3/test_tagging.py
+++ b/tests/unit/s3/test_tagging.py
@@ -2,6 +2,7 @@
 
 from boto.s3.connection import S3Connection
 from boto.s3.bucket import Bucket
+from boto.s3.tagging import Tag
 
 
 class TestS3Tagging(AWSMockServiceTestCase):
@@ -35,3 +36,12 @@
         self.assertEqual(api_response[0][0].value, 'Project One')
         self.assertEqual(api_response[0][1].key, 'User')
         self.assertEqual(api_response[0][1].value, 'jsmith')
+
+    def test_tag_equality(self):
+        t1 = Tag('foo', 'bar')
+        t2 = Tag('foo', 'bar')
+        t3 = Tag('foo', 'baz')
+        t4 = Tag('baz', 'bar')
+        self.assertEqual(t1, t2)
+        self.assertNotEqual(t1, t3)
+        self.assertNotEqual(t1, t4)
diff --git a/tests/unit/s3/test_uri.py b/tests/unit/s3/test_uri.py
new file mode 100644
index 0000000..ab68219
--- /dev/null
+++ b/tests/unit/s3/test_uri.py
@@ -0,0 +1,257 @@
+#!/usr/bin/env python
+# Copyright (c) 2013 Google, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import boto
+import tempfile
+import urllib
+from boto.exception import InvalidUriError
+from boto import storage_uri
+from boto.s3.keyfile import KeyFile
+from tests.integration.s3.mock_storage_service import MockBucket
+from tests.integration.s3.mock_storage_service import MockBucketStorageUri
+from tests.integration.s3.mock_storage_service import MockConnection
+from tests.unit import unittest
+
+"""Unit tests for StorageUri interface."""
+
+class UriTest(unittest.TestCase):
+
+    def test_provider_uri(self):
+        for prov in ('gs', 's3'):
+            uri_str = '%s://' % prov
+            uri = boto.storage_uri(uri_str, validate=False,
+                suppress_consec_slashes=False)
+            self.assertEqual(prov, uri.scheme)
+            self.assertEqual(uri_str, uri.uri)
+            self.assertFalse(hasattr(uri, 'versionless_uri'))
+            self.assertEqual('', uri.bucket_name)
+            self.assertEqual('', uri.object_name)
+            self.assertEqual(None, uri.version_id)
+            self.assertEqual(None, uri.generation)
+            self.assertEqual(uri.names_provider(), True)
+            self.assertEqual(uri.names_container(), True)
+            self.assertEqual(uri.names_bucket(), False)
+            self.assertEqual(uri.names_object(), False)
+            self.assertEqual(uri.names_directory(), False)
+            self.assertEqual(uri.names_file(), False)
+            self.assertEqual(uri.is_stream(), False)
+            self.assertEqual(uri.is_version_specific, False)
+
+    def test_bucket_uri_no_trailing_slash(self):
+        for prov in ('gs', 's3'):
+            uri_str = '%s://bucket' % prov
+            uri = boto.storage_uri(uri_str, validate=False,
+                suppress_consec_slashes=False)
+            self.assertEqual(prov, uri.scheme)
+            self.assertEqual('%s/' % uri_str, uri.uri)
+            self.assertFalse(hasattr(uri, 'versionless_uri'))
+            self.assertEqual('bucket', uri.bucket_name)
+            self.assertEqual('', uri.object_name)
+            self.assertEqual(None, uri.version_id)
+            self.assertEqual(None, uri.generation)
+            self.assertEqual(uri.names_provider(), False)
+            self.assertEqual(uri.names_container(), True)
+            self.assertEqual(uri.names_bucket(), True)
+            self.assertEqual(uri.names_object(), False)
+            self.assertEqual(uri.names_directory(), False)
+            self.assertEqual(uri.names_file(), False)
+            self.assertEqual(uri.is_stream(), False)
+            self.assertEqual(uri.is_version_specific, False)
+
+    def test_bucket_uri_with_trailing_slash(self):
+        for prov in ('gs', 's3'):
+            uri_str = '%s://bucket/' % prov
+            uri = boto.storage_uri(uri_str, validate=False,
+                suppress_consec_slashes=False)
+            self.assertEqual(prov, uri.scheme)
+            self.assertEqual(uri_str, uri.uri)
+            self.assertFalse(hasattr(uri, 'versionless_uri'))
+            self.assertEqual('bucket', uri.bucket_name)
+            self.assertEqual('', uri.object_name)
+            self.assertEqual(None, uri.version_id)
+            self.assertEqual(None, uri.generation)
+            self.assertEqual(uri.names_provider(), False)
+            self.assertEqual(uri.names_container(), True)
+            self.assertEqual(uri.names_bucket(), True)
+            self.assertEqual(uri.names_object(), False)
+            self.assertEqual(uri.names_directory(), False)
+            self.assertEqual(uri.names_file(), False)
+            self.assertEqual(uri.is_stream(), False)
+            self.assertEqual(uri.is_version_specific, False)
+
+    def test_non_versioned_object_uri(self):
+        for prov in ('gs', 's3'):
+            uri_str = '%s://bucket/obj/a/b' % prov
+            uri = boto.storage_uri(uri_str, validate=False,
+                suppress_consec_slashes=False)
+            self.assertEqual(prov, uri.scheme)
+            self.assertEqual(uri_str, uri.uri)
+            self.assertEqual(uri_str, uri.versionless_uri)
+            self.assertEqual('bucket', uri.bucket_name)
+            self.assertEqual('obj/a/b', uri.object_name)
+            self.assertEqual(None, uri.version_id)
+            self.assertEqual(None, uri.generation)
+            self.assertEqual(uri.names_provider(), False)
+            self.assertEqual(uri.names_container(), False)
+            self.assertEqual(uri.names_bucket(), False)
+            self.assertEqual(uri.names_object(), True)
+            self.assertEqual(uri.names_directory(), False)
+            self.assertEqual(uri.names_file(), False)
+            self.assertEqual(uri.is_stream(), False)
+            self.assertEqual(uri.is_version_specific, False)
+
+    def test_versioned_gs_object_uri(self):
+        uri_str = 'gs://bucket/obj/a/b#1359908801674000'
+        uri = boto.storage_uri(uri_str, validate=False,
+            suppress_consec_slashes=False)
+        self.assertEqual('gs', uri.scheme)
+        self.assertEqual(uri_str, uri.uri)
+        self.assertEqual('gs://bucket/obj/a/b', uri.versionless_uri)
+        self.assertEqual('bucket', uri.bucket_name)
+        self.assertEqual('obj/a/b', uri.object_name)
+        self.assertEqual(None, uri.version_id)
+        self.assertEqual(1359908801674000, uri.generation)
+        self.assertEqual(uri.names_provider(), False)
+        self.assertEqual(uri.names_container(), False)
+        self.assertEqual(uri.names_bucket(), False)
+        self.assertEqual(uri.names_object(), True)
+        self.assertEqual(uri.names_directory(), False)
+        self.assertEqual(uri.names_file(), False)
+        self.assertEqual(uri.is_stream(), False)
+        self.assertEqual(uri.is_version_specific, True)
+
+    def test_versioned_gs_object_uri_with_legacy_generation_value(self):
+        uri_str = 'gs://bucket/obj/a/b#1'
+        uri = boto.storage_uri(uri_str, validate=False,
+            suppress_consec_slashes=False)
+        self.assertEqual('gs', uri.scheme)
+        self.assertEqual(uri_str, uri.uri)
+        self.assertEqual('gs://bucket/obj/a/b', uri.versionless_uri)
+        self.assertEqual('bucket', uri.bucket_name)
+        self.assertEqual('obj/a/b', uri.object_name)
+        self.assertEqual(None, uri.version_id)
+        self.assertEqual(1, uri.generation)
+        self.assertEqual(uri.names_provider(), False)
+        self.assertEqual(uri.names_container(), False)
+        self.assertEqual(uri.names_bucket(), False)
+        self.assertEqual(uri.names_object(), True)
+        self.assertEqual(uri.names_directory(), False)
+        self.assertEqual(uri.names_file(), False)
+        self.assertEqual(uri.is_stream(), False)
+        self.assertEqual(uri.is_version_specific, True)
+
+    def test_roundtrip_versioned_gs_object_uri_parsed(self):
+        uri_str = 'gs://bucket/obj#1359908801674000'
+        uri = boto.storage_uri(uri_str, validate=False,
+            suppress_consec_slashes=False)
+        roundtrip_uri = boto.storage_uri(uri.uri, validate=False,
+            suppress_consec_slashes=False)
+        self.assertEqual(uri.uri, roundtrip_uri.uri)
+        self.assertEqual(uri.is_version_specific, True)
+
+    def test_versioned_s3_object_uri(self):
+        uri_str = 's3://bucket/obj/a/b#eMuM0J15HkJ9QHlktfNP5MfA.oYR2q6S'
+        uri = boto.storage_uri(uri_str, validate=False,
+            suppress_consec_slashes=False)
+        self.assertEqual('s3', uri.scheme)
+        self.assertEqual(uri_str, uri.uri)
+        self.assertEqual('s3://bucket/obj/a/b', uri.versionless_uri)
+        self.assertEqual('bucket', uri.bucket_name)
+        self.assertEqual('obj/a/b', uri.object_name)
+        self.assertEqual('eMuM0J15HkJ9QHlktfNP5MfA.oYR2q6S', uri.version_id)
+        self.assertEqual(None, uri.generation)
+        self.assertEqual(uri.names_provider(), False)
+        self.assertEqual(uri.names_container(), False)
+        self.assertEqual(uri.names_bucket(), False)
+        self.assertEqual(uri.names_object(), True)
+        self.assertEqual(uri.names_directory(), False)
+        self.assertEqual(uri.names_file(), False)
+        self.assertEqual(uri.is_stream(), False)
+        self.assertEqual(uri.is_version_specific, True)
+
+    def test_explicit_file_uri(self):
+        tmp_dir = tempfile.tempdir
+        uri_str = 'file://%s' % urllib.pathname2url(tmp_dir)
+        uri = boto.storage_uri(uri_str, validate=False,
+            suppress_consec_slashes=False)
+        self.assertEqual('file', uri.scheme)
+        self.assertEqual(uri_str, uri.uri)
+        self.assertFalse(hasattr(uri, 'versionless_uri'))
+        self.assertEqual('', uri.bucket_name)
+        self.assertEqual(tmp_dir, uri.object_name)
+        self.assertFalse(hasattr(uri, 'version_id'))
+        self.assertFalse(hasattr(uri, 'generation'))
+        self.assertFalse(hasattr(uri, 'is_version_specific'))
+        self.assertEqual(uri.names_provider(), False)
+        self.assertEqual(uri.names_bucket(), False)
+        # Don't check uri.names_container(), uri.names_directory(),
+        # uri.names_file(), or uri.names_object(), because for file URIs these
+        # functions look at the file system and apparently unit tests run
+        # chroot'd.
+        self.assertEqual(uri.is_stream(), False)
+
+    def test_implicit_file_uri(self):
+        tmp_dir = tempfile.tempdir
+        uri_str = '%s' % urllib.pathname2url(tmp_dir)
+        uri = boto.storage_uri(uri_str, validate=False,
+            suppress_consec_slashes=False)
+        self.assertEqual('file', uri.scheme)
+        self.assertEqual('file://%s' % tmp_dir, uri.uri)
+        self.assertFalse(hasattr(uri, 'versionless_uri'))
+        self.assertEqual('', uri.bucket_name)
+        self.assertEqual(tmp_dir, uri.object_name)
+        self.assertFalse(hasattr(uri, 'version_id'))
+        self.assertFalse(hasattr(uri, 'generation'))
+        self.assertFalse(hasattr(uri, 'is_version_specific'))
+        self.assertEqual(uri.names_provider(), False)
+        self.assertEqual(uri.names_bucket(), False)
+        # Don't check uri.names_container(), uri.names_directory(),
+        # uri.names_file(), or uri.names_object(), because for file URIs these
+        # functions look at the file system and apparently unit tests run
+        # chroot'd.
+        self.assertEqual(uri.is_stream(), False)
+
+    def test_gs_object_uri_contains_sharp_not_matching_version_syntax(self):
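+        # The text after '#' is not a valid integer generation, so it stays
+        # part of the object name instead of being parsed as a version.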
+        uri_str = 'gs://bucket/obj#13a990880167400'
+        uri = boto.storage_uri(uri_str, validate=False,
+            suppress_consec_slashes=False)
+        self.assertEqual('gs', uri.scheme)
+        self.assertEqual(uri_str, uri.uri)
+        self.assertEqual('gs://bucket/obj#13a990880167400',
+                         uri.versionless_uri)
+        self.assertEqual('bucket', uri.bucket_name)
+        self.assertEqual('obj#13a990880167400', uri.object_name)
+        self.assertEqual(None, uri.version_id)
+        self.assertEqual(None, uri.generation)
+        self.assertEqual(uri.names_provider(), False)
+        self.assertEqual(uri.names_container(), False)
+        self.assertEqual(uri.names_bucket(), False)
+        self.assertEqual(uri.names_object(), True)
+        self.assertEqual(uri.names_directory(), False)
+        self.assertEqual(uri.names_file(), False)
+        self.assertEqual(uri.is_stream(), False)
+        self.assertEqual(uri.is_version_specific, False)
+
+    def test_invalid_scheme(self):
+        uri_str = 'mars://bucket/object'
+        try:
+            boto.storage_uri(uri_str, validate=False,
+                suppress_consec_slashes=False)
+        except InvalidUriError as e:
+            self.assertIn('Unrecognized scheme', e.message)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/s3/test_website.py b/tests/unit/s3/test_website.py
new file mode 100644
index 0000000..74c2585
--- /dev/null
+++ b/tests/unit/s3/test_website.py
@@ -0,0 +1,230 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from tests.unit import unittest
+import xml.dom.minidom
+import xml.sax
+
+from boto.s3.website import WebsiteConfiguration
+from boto.s3.website import RedirectLocation
+from boto.s3.website import RoutingRules
+from boto.s3.website import Condition
+from boto.s3.website import RoutingRule
+from boto.s3.website import Redirect
+from boto import handler
+
+
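+# Canonicalize XML for comparison: strip per-line whitespace, re-parse, and
+# pretty-print so logically identical documents compare equal in the tests.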
+def pretty_print_xml(text):
+    text = ''.join(t.strip() for t in text.splitlines())
+    x = xml.dom.minidom.parseString(text)
+    return x.toprettyxml()
+
+
+class TestS3WebsiteConfiguration(unittest.TestCase):
+    maxDiff = None
+
+    def setUp(self):
+        pass
+
+    def tearDown(self):
+        pass
+
+    def test_suffix_only(self):
+        config = WebsiteConfiguration(suffix='index.html')
+        xml = config.to_xml()
+        self.assertIn(
+            '<IndexDocument><Suffix>index.html</Suffix></IndexDocument>', xml)
+
+    def test_suffix_and_error(self):
+        config = WebsiteConfiguration(suffix='index.html',
+                                      error_key='error.html')
+        xml = config.to_xml()
+        self.assertIn(
+            '<ErrorDocument><Key>error.html</Key></ErrorDocument>', xml)
+
+    def test_redirect_all_request_to_with_just_host(self):
+        location = RedirectLocation(hostname='example.com')
+        config = WebsiteConfiguration(redirect_all_requests_to=location)
+        xml = config.to_xml()
+        self.assertIn(
+            ('<RedirectAllRequestsTo><HostName>'
+             'example.com</HostName></RedirectAllRequestsTo>'), xml)
+
+    def test_redirect_all_requests_with_protocol(self):
+        location = RedirectLocation(hostname='example.com', protocol='https')
+        config = WebsiteConfiguration(redirect_all_requests_to=location)
+        xml = config.to_xml()
+        self.assertIn(
+            ('<RedirectAllRequestsTo><HostName>'
+             'example.com</HostName><Protocol>https</Protocol>'
+             '</RedirectAllRequestsTo>'), xml)
+
+    def test_routing_rules_key_prefix(self):
+        x = pretty_print_xml
+        # This rule redirects requests for docs/* to documents/*.
+        rules = RoutingRules()
+        condition = Condition(key_prefix='docs/')
+        redirect = Redirect(replace_key_prefix='documents/')
+        rules.add_rule(RoutingRule(condition, redirect))
+        config = WebsiteConfiguration(suffix='index.html', routing_rules=rules)
+        xml = config.to_xml()
+
+        expected_xml = """<?xml version="1.0" encoding="UTF-8"?>
+            <WebsiteConfiguration xmlns='http://s3.amazonaws.com/doc/2006-03-01/'>
+              <IndexDocument>
+                <Suffix>index.html</Suffix>
+              </IndexDocument>
+              <RoutingRules>
+                <RoutingRule>
+                <Condition>
+                  <KeyPrefixEquals>docs/</KeyPrefixEquals>
+                </Condition>
+                <Redirect>
+                  <ReplaceKeyPrefixWith>documents/</ReplaceKeyPrefixWith>
+                </Redirect>
+                </RoutingRule>
+              </RoutingRules>
+            </WebsiteConfiguration>
+        """
+        self.assertEqual(x(expected_xml), x(xml))
+
+    def test_routing_rules_to_host_on_404(self):
+        x = pretty_print_xml
+        # Another example from the docs:
+        # Redirect requests to a specific host in the event of a 404.
+        # Also, the redirect inserts a report-404/ prefix.  For example,
+        # if a request for ExamplePage.html results in a 404, the request
+        # is routed to report-404/ExamplePage.html.
+        rules = RoutingRules()
+        condition = Condition(http_error_code=404)
+        redirect = Redirect(hostname='example.com',
+                            replace_key_prefix='report-404/')
+        rules.add_rule(RoutingRule(condition, redirect))
+        config = WebsiteConfiguration(suffix='index.html', routing_rules=rules)
+        xml = config.to_xml()
+
+        expected_xml = """<?xml version="1.0" encoding="UTF-8"?>
+            <WebsiteConfiguration xmlns='http://s3.amazonaws.com/doc/2006-03-01/'>
+              <IndexDocument>
+                <Suffix>index.html</Suffix>
+              </IndexDocument>
+              <RoutingRules>
+                <RoutingRule>
+                <Condition>
+                  <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
+                </Condition>
+                <Redirect>
+                  <HostName>example.com</HostName>
+                  <ReplaceKeyPrefixWith>report-404/</ReplaceKeyPrefixWith>
+                </Redirect>
+                </RoutingRule>
+              </RoutingRules>
+            </WebsiteConfiguration>
+        """
+        self.assertEqual(x(expected_xml), x(xml))
+
+    def test_key_prefix(self):
+        x = pretty_print_xml
+        rules = RoutingRules()
+        condition = Condition(key_prefix="images/")
+        redirect = Redirect(replace_key='folderdeleted.html')
+        rules.add_rule(RoutingRule(condition, redirect))
+        config = WebsiteConfiguration(suffix='index.html', routing_rules=rules)
+        xml = config.to_xml()
+
+        expected_xml = """<?xml version="1.0" encoding="UTF-8"?>
+            <WebsiteConfiguration xmlns='http://s3.amazonaws.com/doc/2006-03-01/'>
+              <IndexDocument>
+                <Suffix>index.html</Suffix>
+              </IndexDocument>
+              <RoutingRules>
+                <RoutingRule>
+                <Condition>
+                  <KeyPrefixEquals>images/</KeyPrefixEquals>
+                </Condition>
+                <Redirect>
+                  <ReplaceKeyWith>folderdeleted.html</ReplaceKeyWith>
+                </Redirect>
+                </RoutingRule>
+              </RoutingRules>
+            </WebsiteConfiguration>
+        """
+        self.assertEqual(x(expected_xml), x(xml))
+
+    def test_builders(self):
+        x = pretty_print_xml
+        # This is a more declarative way to create rules.
+        # First the long way.
+        rules = RoutingRules()
+        condition = Condition(http_error_code=404)
+        redirect = Redirect(hostname='example.com',
+                            replace_key_prefix='report-404/')
+        rules.add_rule(RoutingRule(condition, redirect))
+        xml = rules.to_xml()
+
+        # Then the more concise way.
+        rules2 = RoutingRules().add_rule(
+            RoutingRule.when(http_error_code=404).then_redirect(
+                hostname='example.com', replace_key_prefix='report-404/'))
+        xml2 = rules2.to_xml()
+        self.assertEqual(x(xml), x(xml2))
+
+    def test_parse_xml(self):
+        x = pretty_print_xml
+        xml_in = """<?xml version="1.0" encoding="UTF-8"?>
+            <WebsiteConfiguration xmlns='http://s3.amazonaws.com/doc/2006-03-01/'>
+              <IndexDocument>
+                <Suffix>index.html</Suffix>
+              </IndexDocument>
+              <ErrorDocument>
+                <Key>error.html</Key>
+              </ErrorDocument>
+              <RoutingRules>
+                <RoutingRule>
+                <Condition>
+                  <KeyPrefixEquals>docs/</KeyPrefixEquals>
+                </Condition>
+                <Redirect>
+                  <Protocol>https</Protocol>
+                  <HostName>www.example.com</HostName>
+                  <ReplaceKeyWith>documents/</ReplaceKeyWith>
+                  <HttpRedirectCode>302</HttpRedirectCode>
+                </Redirect>
+                </RoutingRule>
+                <RoutingRule>
+                <Condition>
+                  <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
+                </Condition>
+                <Redirect>
+                  <HostName>example.com</HostName>
+                  <ReplaceKeyPrefixWith>report-404/</ReplaceKeyPrefixWith>
+                </Redirect>
+                </RoutingRule>
+              </RoutingRules>
+            </WebsiteConfiguration>
+        """
+        webconfig = WebsiteConfiguration()
+        h = handler.XmlHandler(webconfig, None)
+        xml.sax.parseString(xml_in, h)
+        xml_out = webconfig.to_xml()
+        self.assertEqual(x(xml_in), x(xml_out))
diff --git a/tests/unit/sns/__init__.py b/tests/unit/sns/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/unit/sns/__init__.py
diff --git a/tests/unit/sns/test_connection.py b/tests/unit/sns/test_connection.py
new file mode 100644
index 0000000..9eedf04
--- /dev/null
+++ b/tests/unit/sns/test_connection.py
@@ -0,0 +1,99 @@
+#!/usr/bin/env python
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import json
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+from mock import Mock
+
+from boto.sns.connection import SNSConnection
+
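+# Canned policy returned by the mocked SQS queue in the existing-policy test;
+# subscribe_sqs_queue() is expected to append a SendMessage statement to it.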
+QUEUE_POLICY = {
+    u'Policy':
+        (u'{"Version":"2008-10-17","Id":"arn:aws:sqs:us-east-1:'
+         'idnum:testqueuepolicy/SQSDefaultPolicy","Statement":'
+         '[{"Sid":"sidnum","Effect":"Allow","Principal":{"AWS":"*"},'
+         '"Action":"SQS:GetQueueUrl","Resource":'
+         '"arn:aws:sqs:us-east-1:idnum:testqueuepolicy"}]}')}
+
+
+class TestSNSConnection(AWSMockServiceTestCase):
+    connection_class = SNSConnection
+
+    def setUp(self):
+        super(TestSNSConnection, self).setUp()
+
+    def default_body(self):
+        return "{}"
+
+    def test_sqs_with_existing_policy(self):
+        self.set_http_response(status_code=200)
+
+        queue = Mock()
+        queue.get_attributes.return_value = QUEUE_POLICY
+        queue.arn = 'arn:aws:sqs:us-east-1:idnum:queuename'
+
+        self.service_connection.subscribe_sqs_queue('topic_arn', queue)
+        self.assert_request_parameters({
+               'Action': 'Subscribe',
+               'ContentType': 'JSON',
+               'Endpoint': 'arn:aws:sqs:us-east-1:idnum:queuename',
+               'Protocol': 'sqs',
+               'SignatureMethod': 'HmacSHA256',
+               'SignatureVersion': 2,
+               'TopicArn': 'topic_arn',
+               'Version': '2010-03-31',
+        }, ignore_params_values=['AWSAccessKeyId', 'Timestamp'])
+
+        # Verify that the queue policy was properly updated.
+        actual_policy = json.loads(queue.set_attribute.call_args[0][1])
+        self.assertEqual(actual_policy['Version'], '2008-10-17')
+        # A new statement should be appended to the end of the statement list.
+        self.assertEqual(len(actual_policy['Statement']), 2)
+        self.assertEqual(actual_policy['Statement'][1]['Action'],
+                         'SQS:SendMessage')
+
+    def test_sqs_with_no_previous_policy(self):
+        self.set_http_response(status_code=200)
+
+        queue = Mock()
+        queue.get_attributes.return_value = {}
+        queue.arn = 'arn:aws:sqs:us-east-1:idnum:queuename'
+
+        self.service_connection.subscribe_sqs_queue('topic_arn', queue)
+        self.assert_request_parameters({
+               'Action': 'Subscribe',
+               'ContentType': 'JSON',
+               'Endpoint': 'arn:aws:sqs:us-east-1:idnum:queuename',
+               'Protocol': 'sqs',
+               'SignatureMethod': 'HmacSHA256',
+               'SignatureVersion': 2,
+               'TopicArn': 'topic_arn',
+               'Version': '2010-03-31',
+        }, ignore_params_values=['AWSAccessKeyId', 'Timestamp'])
+        actual_policy = json.loads(queue.set_attribute.call_args[0][1])
+        # Only a single statement should be part of the policy.
+        self.assertEqual(len(actual_policy['Statement']), 1)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/sqs/__init__.py b/tests/unit/sqs/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/unit/sqs/__init__.py
diff --git a/tests/unit/sqs/test_connection.py b/tests/unit/sqs/test_connection.py
new file mode 100644
index 0000000..7fee36c
--- /dev/null
+++ b/tests/unit/sqs/test_connection.py
@@ -0,0 +1,98 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+
+from boto.sqs.connection import SQSConnection
+from boto.sqs.regioninfo import SQSRegionInfo
+
+
+class SQSAuthParams(AWSMockServiceTestCase):
+    connection_class = SQSConnection
+
+    def setUp(self):
+        super(SQSAuthParams, self).setUp()
+
+    def default_body(self):
+        return """<?xml version="1.0"?>
+            <CreateQueueResponse>
+              <CreateQueueResult>
+                <QueueUrl>
+                  https://queue.amazonaws.com/599169622985/myqueue1
+                </QueueUrl>
+              </CreateQueueResult>
+              <ResponseMetadata>
+                <RequestId>54d4c94d-2307-54a8-bb27-806a682a5abd</RequestId>
+              </ResponseMetadata>
+            </CreateQueueResponse>"""
+
+    def test_auth_service_name_override(self):
+        self.set_http_response(status_code=200)
+        # We can use the auth_service_name to change what service
+        # name to use for the credential scope for sigv4.
+        self.service_connection.auth_service_name = 'service_override'
+
+        self.service_connection.create_queue('my_queue')
+        # Note the service_override value instead.
+        self.assertIn('us-east-1/service_override/aws4_request',
+                      self.actual_request.headers['Authorization'])
+
+    def test_class_attribute_can_set_service_name(self):
+        self.set_http_response(status_code=200)
+        # The SQS class has an 'AuthServiceName' param of 'sqs':
+        self.assertEqual(self.service_connection.AuthServiceName, 'sqs')
+
+        self.service_connection.create_queue('my_queue')
+        # And because of this, the value of 'sqs' will be used instead of
+        # 'queue' for the credential scope:
+        self.assertIn('us-east-1/sqs/aws4_request',
+                      self.actual_request.headers['Authorization'])
+
+    def test_auth_region_name_is_automatically_updated(self):
+        region = SQSRegionInfo(name='us-west-2',
+                               endpoint='us-west-2.queue.amazonaws.com')
+        self.service_connection = SQSConnection(
+            https_connection_factory=self.https_connection_factory,
+            aws_access_key_id='aws_access_key_id',
+            aws_secret_access_key='aws_secret_access_key',
+            region=region)
+        self.initialize_service_connection()
+        self.set_http_response(status_code=200)
+
+        self.service_connection.create_queue('my_queue')
+        # Note the region name below is 'us-west-2'.
+        self.assertIn('us-west-2/sqs/aws4_request',
+                      self.actual_request.headers['Authorization'])
+
+    def test_set_get_auth_service_and_region_names(self):
+        self.service_connection.auth_service_name = 'service_name'
+        self.service_connection.auth_region_name = 'region_name'
+
+        self.assertEqual(self.service_connection.auth_service_name,
+                         'service_name')
+        self.assertEqual(self.service_connection.auth_region_name, 'region_name')
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/sqs/test_queue.py b/tests/unit/sqs/test_queue.py
new file mode 100644
index 0000000..a8c86ca
--- /dev/null
+++ b/tests/unit/sqs/test_queue.py
@@ -0,0 +1,40 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from tests.unit import unittest
+from mock import Mock
+
+from boto.sqs.queue import Queue
+
+
+class TestQueue(unittest.TestCase):
+
+    def test_queue_arn(self):
+        connection = Mock()
+        connection.region.name = 'us-east-1'
+        q = Queue(
+            connection=connection,
+            url='https://sqs.us-east-1.amazonaws.com/id/queuename')
+        self.assertEqual(q.arn, 'arn:aws:sqs:us-east-1:id:queuename')
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/sts/test_connection.py b/tests/unit/sts/test_connection.py
new file mode 100644
index 0000000..f874caf
--- /dev/null
+++ b/tests/unit/sts/test_connection.py
@@ -0,0 +1,74 @@
+#!/usr/bin/env python
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from tests.unit import unittest
+from boto.sts.connection import STSConnection
+from tests.unit import AWSMockServiceTestCase
+
+
+class TestSTSConnection(AWSMockServiceTestCase):
+    connection_class = STSConnection
+
+    def setUp(self):
+        super(TestSTSConnection, self).setUp()
+
+    def default_body(self):
+        return """
+            <AssumeRoleResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
+              <AssumeRoleResult>
+                <AssumedRoleUser>
+                  <Arn>arn:role</Arn>
+                  <AssumedRoleId>roleid:myrolesession</AssumedRoleId>
+                </AssumedRoleUser>
+                <Credentials>
+                  <SessionToken>session_token</SessionToken>
+                  <SecretAccessKey>secretkey</SecretAccessKey>
+                  <Expiration>2012-10-18T10:18:14.789Z</Expiration>
+                  <AccessKeyId>accesskey</AccessKeyId>
+                </Credentials>
+              </AssumeRoleResult>
+              <ResponseMetadata>
+                <RequestId>8b7418cb-18a8-11e2-a706-4bd22ca68ab7</RequestId>
+              </ResponseMetadata>
+            </AssumeRoleResponse>
+        """
+
+    def test_assume_role(self):
+        self.set_http_response(status_code=200)
+        response = self.service_connection.assume_role('arn:role', 'mysession')
+        self.assert_request_parameters(
+            {'Action': 'AssumeRole',
+             'RoleArn': 'arn:role',
+             'RoleSessionName': 'mysession'},
+            ignore_params_values=['Timestamp', 'AWSAccessKeyId',
+                                  'SignatureMethod', 'SignatureVersion',
+                                  'Version'])
+        self.assertEqual(response.credentials.access_key, 'accesskey')
+        self.assertEqual(response.credentials.secret_key, 'secretkey')
+        self.assertEqual(response.credentials.session_token, 'session_token')
+        self.assertEqual(response.user.arn, 'arn:role')
+        self.assertEqual(response.user.assume_role_id, 'roleid:myrolesession')
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/test_connection.py b/tests/unit/test_connection.py
new file mode 100644
index 0000000..d71587f
--- /dev/null
+++ b/tests/unit/test_connection.py
@@ -0,0 +1,287 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import urlparse
+from tests.unit import unittest
+from httpretty import HTTPretty
+
+from boto.connection import AWSQueryConnection, AWSAuthConnection
+from boto.exception import BotoServerError
+from boto.regioninfo import RegionInfo
+from boto.compat import json
+
+
+class TestListParamsSerialization(unittest.TestCase):
+    maxDiff = None
+
+    def setUp(self):
+        self.connection = AWSQueryConnection('access_key', 'secret_key')
+
+    def test_complex_list_serialization(self):
+        # This example is taken from the doc string of
+        # build_complex_list_params.
+        params = {}
+        self.connection.build_complex_list_params(
+            params, [('foo', 'bar', 'baz'), ('foo2', 'bar2', 'baz2')],
+            'ParamName.member', ('One', 'Two', 'Three'))
+        self.assertDictEqual({
+            'ParamName.member.1.One': 'foo',
+            'ParamName.member.1.Two': 'bar',
+            'ParamName.member.1.Three': 'baz',
+            'ParamName.member.2.One': 'foo2',
+            'ParamName.member.2.Two': 'bar2',
+            'ParamName.member.2.Three': 'baz2',
+        }, params)
+
+    def test_simple_list_serialization(self):
+        params = {}
+        self.connection.build_list_params(
+            params, ['foo', 'bar', 'baz'], 'ParamName.member')
+        self.assertDictEqual({
+            'ParamName.member.1': 'foo',
+            'ParamName.member.2': 'bar',
+            'ParamName.member.3': 'baz',
+        }, params)
+
+
+class MockAWSService(AWSQueryConnection):
+    """
+    Fake AWS Service
+
+    This is used to test the AWSQueryConnection object is behaving properly.
+    """
+
+    APIVersion = '2012-01-01'
+    def _required_auth_capability(self):
+        return ['sign-v2']
+
+    def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
+                 is_secure=True, host=None, port=None,
+                 proxy=None, proxy_port=None,
+                 proxy_user=None, proxy_pass=None, debug=0,
+                 https_connection_factory=None, region=None, path='/',
+                 api_version=None, security_token=None,
+                 validate_certs=True):
+        self.region = region
+        AWSQueryConnection.__init__(self, aws_access_key_id,
+                                    aws_secret_access_key,
+                                    is_secure, port, proxy, proxy_port,
+                                    proxy_user, proxy_pass,
+                                    self.region.endpoint, debug,
+                                    https_connection_factory, path,
+                                    security_token,
+                                    validate_certs=validate_certs)
+
+class TestAWSAuthConnection(unittest.TestCase):
+    def test_get_path(self):
+        conn = AWSAuthConnection(
+            'mockservice.cc-zone-1.amazonaws.com',
+            aws_access_key_id='access_key',
+            aws_secret_access_key='secret',
+            suppress_consec_slashes=False
+        )
+        # Test some sample paths for mangling.
+        self.assertEqual(conn.get_path('/'), '/')
+        self.assertEqual(conn.get_path('image.jpg'), '/image.jpg')
+        self.assertEqual(conn.get_path('folder/image.jpg'), '/folder/image.jpg')
+        self.assertEqual(conn.get_path('folder//image.jpg'), '/folder//image.jpg')
+
+        # Ensure leading slashes aren't removed.
+        # See https://github.com/boto/boto/issues/1387
+        self.assertEqual(conn.get_path('/folder//image.jpg'), '/folder//image.jpg')
+        self.assertEqual(conn.get_path('/folder////image.jpg'), '/folder////image.jpg')
+        self.assertEqual(conn.get_path('///folder////image.jpg'), '///folder////image.jpg')
+
+
+class TestAWSQueryConnection(unittest.TestCase):
+    def setUp(self):
+        self.region = RegionInfo(name='cc-zone-1',
+                            endpoint='mockservice.cc-zone-1.amazonaws.com',
+                            connection_cls=MockAWSService)
+
+        HTTPretty.enable()
+
+    def tearDown(self):
+        HTTPretty.disable()
+
+class TestAWSQueryConnectionSimple(TestAWSQueryConnection):
+    def test_query_connection_basis(self):
+        HTTPretty.register_uri(HTTPretty.POST,
+                               'https://%s/' % self.region.endpoint,
+                               json.dumps({'test': 'secure'}),
+                               content_type='application/json')
+
+        conn = self.region.connect(aws_access_key_id='access_key',
+                                   aws_secret_access_key='secret')
+
+        self.assertEqual(conn.host, 'mockservice.cc-zone-1.amazonaws.com')
+
+    def test_single_command(self):
+        HTTPretty.register_uri(HTTPretty.POST,
+                               'https://%s/' % self.region.endpoint,
+                               json.dumps({'test': 'secure'}),
+                               content_type='application/json')
+
+        conn = self.region.connect(aws_access_key_id='access_key',
+                                   aws_secret_access_key='secret')
+        resp = conn.make_request('myCmd',
+                                 {'par1': 'foo', 'par2': 'baz'},
+                                 "/",
+                                 "POST")
+
+        args = urlparse.parse_qs(HTTPretty.last_request.body)
+        self.assertEqual(args['AWSAccessKeyId'], ['access_key'])
+        self.assertEqual(args['SignatureMethod'], ['HmacSHA256'])
+        self.assertEqual(args['Version'], [conn.APIVersion])
+        self.assertEqual(args['par1'], ['foo'])
+        self.assertEqual(args['par2'], ['baz'])
+
+        self.assertEqual(resp.read(), '{"test": "secure"}')
+
+    def test_multi_commands(self):
+        """Check connection re-use"""
+        HTTPretty.register_uri(HTTPretty.POST,
+                               'https://%s/' % self.region.endpoint,
+                               json.dumps({'test': 'secure'}),
+                               content_type='application/json')
+
+        conn = self.region.connect(aws_access_key_id='access_key',
+                                   aws_secret_access_key='secret')
+
+        resp1 = conn.make_request('myCmd1',
+                                  {'par1': 'foo', 'par2': 'baz'},
+                                  "/",
+                                  "POST")
+        body1 = urlparse.parse_qs(HTTPretty.last_request.body)
+
+        resp2 = conn.make_request('myCmd2',
+                                  {'par3': 'bar', 'par4': 'narf'},
+                                  "/",
+                                  "POST")
+        body2 = urlparse.parse_qs(HTTPretty.last_request.body)
+
+        self.assertEqual(body1['par1'], ['foo'])
+        self.assertEqual(body1['par2'], ['baz'])
+        with self.assertRaises(KeyError):
+            body1['par3']
+
+        self.assertEqual(body2['par3'], ['bar'])
+        self.assertEqual(body2['par4'], ['narf'])
+        with self.assertRaises(KeyError):
+            body2['par1']
+
+        self.assertEqual(resp1.read(), '{"test": "secure"}')
+        self.assertEqual(resp2.read(), '{"test": "secure"}')
+
+    def test_non_secure(self):
+        HTTPretty.register_uri(HTTPretty.POST,
+                               'http://%s/' % self.region.endpoint,
+                               json.dumps({'test': 'normal'}),
+                               content_type='application/json')
+
+        conn = self.region.connect(aws_access_key_id='access_key',
+                                   aws_secret_access_key='secret',
+                                   is_secure=False)
+        resp = conn.make_request('myCmd1',
+                                 {'par1': 'foo', 'par2': 'baz'},
+                                 "/",
+                                 "POST")
+
+        self.assertEqual(resp.read(), '{"test": "normal"}')
+
+    def test_alternate_port(self):
+        HTTPretty.register_uri(HTTPretty.POST,
+                               'http://%s:8080/' % self.region.endpoint,
+                               json.dumps({'test': 'alternate'}),
+                               content_type='application/json')
+
+        conn = self.region.connect(aws_access_key_id='access_key',
+                                   aws_secret_access_key='secret',
+                                   port=8080,
+                                   is_secure=False)
+        resp = conn.make_request('myCmd1',
+                                 {'par1': 'foo', 'par2': 'baz'},
+                                 "/",
+                                 "POST")
+
+        self.assertEqual(resp.read(), '{"test": "alternate"}')
+
+    def test_temp_failure(self):
+        responses = [HTTPretty.Response(body="{'test': 'fail'}", status=500),
+                     HTTPretty.Response(body="{'test': 'success'}", status=200)]
+
+        HTTPretty.register_uri(HTTPretty.POST,
+                               'https://%s/temp_fail/' % self.region.endpoint,
+                               responses=responses)
+
+        conn = self.region.connect(aws_access_key_id='access_key',
+                                   aws_secret_access_key='secret')
+        resp = conn.make_request('myCmd1',
+                                 {'par1': 'foo', 'par2': 'baz'},
+                                 '/temp_fail/',
+                                 'POST')
+        self.assertEqual(resp.read(), "{'test': 'success'}")
+
+class TestAWSQueryStatus(TestAWSQueryConnection):
+
+    def test_get_status(self):
+        HTTPretty.register_uri(HTTPretty.GET,
+                               'https://%s/status' % self.region.endpoint,
+                               '<status>ok</status>',
+                               content_type='text/xml')
+
+        conn = self.region.connect(aws_access_key_id='access_key',
+                                   aws_secret_access_key='secret')
+        resp = conn.get_status('getStatus',
+                               {'par1': 'foo', 'par2': 'baz'},
+                               'status')
+
+        self.assertEqual(resp, "ok")
+
+    def test_get_status_blank_error(self):
+        HTTPretty.register_uri(HTTPretty.GET,
+                               'https://%s/status' % self.region.endpoint,
+                               '',
+                               content_type='text/xml')
+
+        conn = self.region.connect(aws_access_key_id='access_key',
+                aws_secret_access_key='secret')
+        with self.assertRaises(BotoServerError):
+            resp = conn.get_status('getStatus',
+                                   {'par1': 'foo', 'par2': 'baz'},
+                                   'status')
+
+    def test_get_status_error(self):
+        HTTPretty.register_uri(HTTPretty.GET,
+                               'https://%s/status' % self.region.endpoint,
+                               '<status>error</status>',
+                               content_type='text/xml',
+                               status=400)
+
+        conn = self.region.connect(aws_access_key_id='access_key',
+                                   aws_secret_access_key='secret')
+        with self.assertRaises(BotoServerError):
+            resp = conn.get_status('getStatus',
+                                   {'par1': 'foo', 'par2': 'baz'},
+                                   'status')
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/unit/test_exception.py b/tests/unit/test_exception.py
new file mode 100644
index 0000000..684ca0c
--- /dev/null
+++ b/tests/unit/test_exception.py
@@ -0,0 +1,81 @@
+from tests.unit import unittest
+
+from boto.exception import BotoServerError
+
+from httpretty import HTTPretty, httprettified
+
+class TestBotoServerError(unittest.TestCase):
+
+    def test_botoservererror_basics(self):
+        bse = BotoServerError('400', 'Bad Request')
+        self.assertEqual(bse.status, '400')
+        self.assertEqual(bse.reason, 'Bad Request')
+
+    def test_message_elb_xml(self):
+        # This test XML response comes from #509
+        xml = """
+<ErrorResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2011-11-15/">
+  <Error>
+    <Type>Sender</Type>
+    <Code>LoadBalancerNotFound</Code>
+    <Message>Cannot find Load Balancer webapp-balancer2</Message>
+  </Error>
+  <RequestId>093f80d0-4473-11e1-9234-edce8ec08e2d</RequestId>
+</ErrorResponse>"""
+        bse = BotoServerError('400', 'Bad Request', body=xml)
+
+        self.assertEqual(bse.error_message, 'Cannot find Load Balancer webapp-balancer2')
+        self.assertEqual(bse.request_id, '093f80d0-4473-11e1-9234-edce8ec08e2d')
+        self.assertEqual(bse.error_code, 'LoadBalancerNotFound')
+        self.assertEqual(bse.status, '400')
+        self.assertEqual(bse.reason, 'Bad Request')
+
+    def test_message_sd_xml(self):
+        # Sample XML response from: https://forums.aws.amazon.com/thread.jspa?threadID=87393
+        xml = """
+<Response>
+  <Errors>
+    <Error>
+      <Code>AuthorizationFailure</Code>
+      <Message>Session does not have permission to perform (sdb:CreateDomain) on resource (arn:aws:sdb:us-east-1:xxxxxxx:domain/test_domain). Contact account owner.</Message>
+      <BoxUsage>0.0055590278</BoxUsage>
+    </Error>
+  </Errors>
+  <RequestID>e73bb2bb-63e3-9cdc-f220-6332de66dbbe</RequestID>
+</Response>"""
+        bse = BotoServerError('403', 'Forbidden', body=xml)
+        self.assertEqual(bse.error_message,
+            'Session does not have permission to perform (sdb:CreateDomain) on '
+            'resource (arn:aws:sdb:us-east-1:xxxxxxx:domain/test_domain). '
+            'Contact account owner.')
+        self.assertEqual(bse.box_usage, '0.0055590278')
+        self.assertEqual(bse.error_code, 'AuthorizationFailure')
+        self.assertEqual(bse.status, '403')
+        self.assertEqual(bse.reason, 'Forbidden')
+
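+    # The two tests below run under @httprettified so that any attempt to
+    # fetch the XML namespace or the external entity while parsing the error
+    # body would be captured in HTTPretty.latest_requests.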
+    @httprettified
+    def test_xmlns_not_loaded(self):
+        xml = '<ErrorResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2011-11-15/">'
+        bse = BotoServerError('403', 'Forbidden', body=xml)
+        self.assertEqual([], HTTPretty.latest_requests)
+
+    @httprettified
+    def test_xml_entity_not_loaded(self):
+        xml = '<!DOCTYPE Message [<!ENTITY xxe SYSTEM "http://aws.amazon.com/">]><Message>error:&xxe;</Message>'
+        bse = BotoServerError('403', 'Forbidden', body=xml)
+        self.assertEqual([], HTTPretty.latest_requests)
+
+    def test_message_not_xml(self):
+        body = 'This is not XML'
+
+        bse = BotoServerError('400', 'Bad Request', body=body)
+        self.assertEqual(bse.error_message, 'This is not XML')
+
+    def test_getters(self):
+        body = "This is the body"
+
+        bse = BotoServerError('400', 'Bad Request', body=body)
+        self.assertEqual(bse.code, bse.error_code)
diff --git a/tests/unit/utils/__init__.py b/tests/unit/utils/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/unit/utils/__init__.py
diff --git a/tests/unit/utils/test_utils.py b/tests/unit/utils/test_utils.py
index 205d3d8..abb8535 100644
--- a/tests/unit/utils/test_utils.py
+++ b/tests/unit/utils/test_utils.py
@@ -23,8 +23,12 @@
 import hashlib
 import hmac
 
+import mock
+
 from boto.utils import Password
 from boto.utils import pythonize_name
+from boto.utils import _build_instance_metadata_url
+from boto.utils import retry_url
 
 
 class TestPassword(unittest.TestCase):
@@ -105,5 +109,85 @@
         self.assertEqual(pythonize_name('HTTPStatus200Ok'), 'http_status_200_ok')
 
 
+class TestBuildInstanceMetadataURL(unittest.TestCase):
+    def test_normal(self):
+        # This is the all-defaults case.
+        self.assertEqual(_build_instance_metadata_url(
+                'http://169.254.169.254',
+                'latest',
+                'meta-data'
+            ),
+            'http://169.254.169.254/latest/meta-data/'
+        )
+
+    def test_custom_path(self):
+        self.assertEqual(_build_instance_metadata_url(
+                'http://169.254.169.254',
+                'latest',
+                'dynamic'
+            ),
+            'http://169.254.169.254/latest/dynamic/'
+        )
+
+    def test_custom_version(self):
+        self.assertEqual(_build_instance_metadata_url(
+                'http://169.254.169.254',
+                '1.0',
+                'meta-data'
+            ),
+            'http://169.254.169.254/1.0/meta-data/'
+        )
+
+    def test_custom_url(self):
+        self.assertEqual(_build_instance_metadata_url(
+                'http://10.0.1.5',
+                'latest',
+                'meta-data'
+            ),
+            'http://10.0.1.5/latest/meta-data/'
+        )
+
+    def test_all_custom(self):
+        self.assertEqual(_build_instance_metadata_url(
+                'http://10.0.1.5',
+                '2013-03-22',
+                'user-data'
+            ),
+            'http://10.0.1.5/2013-03-22/user-data/'
+        )
+
+
+class TestRetryURL(unittest.TestCase):
+    def setUp(self):
+        self.urlopen_patch = mock.patch('urllib2.urlopen')
+        self.opener_patch = mock.patch('urllib2.build_opener')
+        self.urlopen = self.urlopen_patch.start()
+        self.opener = self.opener_patch.start()
+
+    def tearDown(self):
+        self.urlopen_patch.stop()
+        self.opener_patch.stop()
+
+    def set_normal_response(self, response):
+        fake_response = mock.Mock()
+        fake_response.read.return_value = response
+        self.urlopen.return_value = fake_response
+
+    def set_no_proxy_allowed_response(self, response):
+        fake_response = mock.Mock()
+        fake_response.read.return_value = response
+        self.opener.return_value.open.return_value = fake_response
+
+    def test_retry_url_uses_proxy(self):
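+        # urllib2.urlopen is patched to return 'normal response' and the
+        # opener from urllib2.build_opener to return 'no proxy response', so
+        # the assertion below shows retry_url goes through the built opener.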
+        self.set_normal_response('normal response')
+        self.set_no_proxy_allowed_response('no proxy response')
+
+        response = retry_url('http://10.10.10.10/foo', num_retries=1)
+        self.assertEqual(response, 'no proxy response')
+
+
 if __name__ == '__main__':
     unittest.main()
diff --git a/tests/unit/vpc/__init__.py b/tests/unit/vpc/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/unit/vpc/__init__.py
diff --git a/tests/unit/vpc/test_vpc.py b/tests/unit/vpc/test_vpc.py
new file mode 100644
index 0000000..499d158
--- /dev/null
+++ b/tests/unit/vpc/test_vpc.py
@@ -0,0 +1,42 @@
+# -*- coding: UTF-8 -*-
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+
+from boto.vpc import VPCConnection
+
+DESCRIBE_VPCS = r'''<?xml version="1.0" encoding="UTF-8"?>
+<DescribeVpcsResponse xmlns="http://ec2.amazonaws.com/doc/2013-02-01/">
+    <requestId>623040d1-b51c-40bc-8080-93486f38d03d</requestId>
+    <vpcSet>
+        <item>
+            <vpcId>vpc-12345678</vpcId>
+            <state>available</state>
+            <cidrBlock>172.16.0.0/16</cidrBlock>
+            <dhcpOptionsId>dopt-12345678</dhcpOptionsId>
+            <instanceTenancy>default</instanceTenancy>
+            <isDefault>false</isDefault>
+        </item>
+    </vpcSet>
+</DescribeVpcsResponse>'''
+
+class TestDescribeVPCs(AWSMockServiceTestCase):
+
+    connection_class = VPCConnection
+
+    def default_body(self):
+        return DESCRIBE_VPCS
+
+    def test_get_vpcs(self):
+        self.set_http_response(status_code=200)
+
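+        # get_all_vpcs should parse the single <item> in vpcSet above into
+        # one VPC object.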
+        api_response = self.service_connection.get_all_vpcs()
+        self.assertEqual(len(api_response), 1)
+
+        vpc = api_response[0]
+        self.assertFalse(vpc.is_default)
+        self.assertEqual(vpc.instance_tenancy, 'default')
+
+if __name__ == '__main__':
+    unittest.main()
\ No newline at end of file
diff --git a/tests/unit/vpc/test_vpnconnection.py b/tests/unit/vpc/test_vpnconnection.py
new file mode 100644
index 0000000..dfce90f
--- /dev/null
+++ b/tests/unit/vpc/test_vpnconnection.py
@@ -0,0 +1,126 @@
+# -*- coding: UTF-8 -*-
+from tests.unit import unittest
+from tests.unit import AWSMockServiceTestCase
+
+from boto.vpc import VPCConnection
+
+DESCRIBE_VPNCONNECTIONS = r'''<?xml version="1.0" encoding="UTF-8"?>
+<DescribeVpnConnectionsResponse xmlns="http://ec2.amazonaws.com/doc/2013-02-01/">
+    <requestId>12345678-asdf-ghjk-zxcv-0987654321nb</requestId>
+    <vpnConnectionSet>
+        <item>
+            <vpnConnectionId>vpn-12qw34er56ty</vpnConnectionId>
+            <state>available</state>
+            <customerGatewayConfiguration>
+                &lt;?xml version="1.0" encoding="UTF-8"?&gt;
+            </customerGatewayConfiguration>
+            <type>ipsec.1</type>
+            <customerGatewayId>cgw-1234qwe9</customerGatewayId>
+            <vpnGatewayId>vgw-lkjh1234</vpnGatewayId>
+            <tagSet>
+                <item>
+                    <key>Name</key>
+                    <value>VPN 1</value>
+                </item>
+            </tagSet>
+            <vgwTelemetry>
+                <item>
+                    <outsideIpAddress>123.45.67.89</outsideIpAddress>
+                    <status>DOWN</status>
+                    <lastStatusChange>2013-03-19T19:20:34.000Z</lastStatusChange>
+                    <statusMessage/>
+                    <acceptedRouteCount>0</acceptedRouteCount>
+                </item>
+                <item>
+                    <outsideIpAddress>123.45.67.90</outsideIpAddress>
+                    <status>UP</status>
+                    <lastStatusChange>2013-03-20T08:00:14.000Z</lastStatusChange>
+                    <statusMessage/>
+                    <acceptedRouteCount>0</acceptedRouteCount>
+                </item>
+            </vgwTelemetry>
+            <options>
+                <staticRoutesOnly>true</staticRoutesOnly>
+            </options>
+            <routes>
+                <item>
+                    <destinationCidrBlock>192.168.0.0/24</destinationCidrBlock>
+                    <source>static</source>
+                    <state>available</state>
+                </item>
+            </routes>
+        </item>
+        <item>
+            <vpnConnectionId>vpn-qwerty12</vpnConnectionId>
+            <state>pending</state>
+            <customerGatewayConfiguration>
+                &lt;?xml version="1.0" encoding="UTF-8"?&gt;
+            </customerGatewayConfiguration>
+            <type>ipsec.1</type>
+            <customerGatewayId>cgw-01234567</customerGatewayId>
+            <vpnGatewayId>vgw-asdfghjk</vpnGatewayId>
+            <vgwTelemetry>
+                <item>
+                    <outsideIpAddress>134.56.78.78</outsideIpAddress>
+                    <status>UP</status>
+                    <lastStatusChange>2013-03-20T01:46:30.000Z</lastStatusChange>
+                    <statusMessage/>
+                    <acceptedRouteCount>0</acceptedRouteCount>
+                </item>
+                <item>
+                    <outsideIpAddress>134.56.78.79</outsideIpAddress>
+                    <status>UP</status>
+                    <lastStatusChange>2013-03-19T19:23:59.000Z</lastStatusChange>
+                    <statusMessage/>
+                    <acceptedRouteCount>0</acceptedRouteCount>
+                </item>
+            </vgwTelemetry>
+            <options>
+                <staticRoutesOnly>true</staticRoutesOnly>
+            </options>
+            <routes>
+                <item>
+                    <destinationCidrBlock>10.0.0.0/16</destinationCidrBlock>
+                    <source>static</source>
+                    <state>pending</state>
+                </item>
+            </routes>
+        </item>
+    </vpnConnectionSet>
+</DescribeVpnConnectionsResponse>'''
+
+class TestDescribeVPNConnections(AWSMockServiceTestCase):
+
+    connection_class = VPCConnection
+
+    def default_body(self):
+        return DESCRIBE_VPNCONNECTIONS
+
+    def test_get_vpn_connections(self):
+        self.set_http_response(status_code=200)
+
+        api_response = self.service_connection.get_all_vpn_connections()
+        self.assertEqual(len(api_response), 2)
+
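+        # Each <item> in vpnConnectionSet above becomes one connection object;
+        # tunnels come from vgwTelemetry, static_routes from routes, and tags
+        # from tagSet in the fixture.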
+        vpn0 = api_response[0]
+        self.assertEqual(vpn0.type, 'ipsec.1')
+        self.assertEqual(vpn0.customer_gateway_id, 'cgw-1234qwe9')
+        self.assertEqual(vpn0.vpn_gateway_id, 'vgw-lkjh1234')
+        self.assertEqual(len(vpn0.tunnels), 2)
+        self.assertDictEqual(vpn0.tags, {'Name': 'VPN 1'})
+
+        vpn1 = api_response[1]
+        self.assertEqual(vpn1.state, 'pending')
+        self.assertEqual(len(vpn1.static_routes), 1)
+        self.assertTrue(vpn1.options.static_routes_only)
+        self.assertEqual(vpn1.tunnels[0].status, 'UP')
+        self.assertEqual(vpn1.tunnels[1].status, 'UP')
+        self.assertDictEqual(vpn1.tags, {})
+        self.assertEqual(vpn1.static_routes[0].source, 'static')
+        self.assertEqual(vpn1.static_routes[0].state, 'pending')
+
+if __name__ == '__main__':
+    unittest.main()
\ No newline at end of file