Updated to boto 2.9.5 in preparation for the gsutil roll.

Gsutil is being updated, which means we'll need a newer version of boto.
This updates boto to cb44274755436cda44dabedaa7ca204efa7d78f6 from
https://github.com/boto/boto.git.

R=maruel
AUTHOR=hinoka
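
A quick post-roll sanity check (a sketch; assumes the vendored boto is first
on sys.path):

  $ python -c "import boto; print boto.__version__"
  2.9.5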


git-svn-id: svn://svn.chromium.org/boto@8 4f2e627c-b00b-48dd-b1fb-2c643665b734
diff --git a/CONTRIBUTING b/CONTRIBUTING
new file mode 100644
index 0000000..f29942f
--- /dev/null
+++ b/CONTRIBUTING
@@ -0,0 +1,47 @@
+============
+Contributing
+============
+
+For more information, please see the official contribution docs at
+http://docs.pythonboto.org/en/latest/contributing.html.
+
+
+Contributing Code
+=================
+
+* A good patch:
+
+  * is clear.
+  * works across all supported versions of Python.
+  * follows the existing style of the code base (PEP-8).
+  * has comments included as needed.
+
+* A test case that demonstrates the previous flaw and now passes
+  with the included patch.
+* If it adds/changes a public API, it must also include documentation
+  for those changes.
+* Must be appropriately licensed (New BSD).
+
+
+Reporting An Issue/Feature
+==========================
+
+* Check to see if there's an existing issue/pull request for the
+  bug/feature. All issues are at https://github.com/boto/boto/issues
+  and pull reqs are at https://github.com/boto/boto/pulls.
+* If there isn't an existing issue there, please file an issue. The ideal
+  report includes:
+
+  * A description of the problem/suggestion.
+  * How to recreate the bug.
+  * If relevant, include the versions of your:
+
+    * Python interpreter
+    * boto
+    * Optionally, the other dependencies involved
+
+  * If possible, create a pull request with a (failing) test case demonstrating
+    what's wrong. This makes the process of fixing bugs quicker & gets issues
+    resolved sooner.
+
+
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..07d9e8c
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,18 @@
+Permission is hereby granted, free of charge, to any person obtaining a
+copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish, dis-
+tribute, sublicense, and/or sell copies of the Software, and to permit
+persons to whom the Software is furnished to do so, subject to the fol-
+lowing conditions:
+
+The above copyright notice and this permission notice shall be included
+in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+IN THE SOFTWARE.
diff --git a/README.chromium b/README.chromium
index 054e7e4..4a5d087 100644
--- a/README.chromium
+++ b/README.chromium
@@ -1,5 +1,5 @@
 URL: http://github.com/boto/boto
-Version: 2.6.0
+Version: 2.9.5
 License: MIT License
 
-This is a forked copy of boto at revision 3968
+This is a forked copy of boto at revision cb44274755436cda44dabedaa7ca204efa7d78f6
diff --git a/README.rst b/README.rst
index 3499ad4..77c64aa 100644
--- a/README.rst
+++ b/README.rst
@@ -1,11 +1,15 @@
 ####
 boto
 ####
-boto 2.6.0
-19-Sep-2012
+boto 2.9.5
 
-.. image:: https://secure.travis-ci.org/boto/boto.png?branch=develop
-        :target: https://secure.travis-ci.org/boto/boto
+Released: 28-May-2013
+
+.. image:: https://travis-ci.org/boto/boto.png?branch=develop
+        :target: https://travis-ci.org/boto/boto
+
+.. image:: https://pypip.in/d/boto/badge.png
+        :target: https://crate.io/packages/boto/
 
 ************
 Introduction
@@ -15,41 +19,70 @@
 At the moment, boto supports:
 
 * Compute
+
   * Amazon Elastic Compute Cloud (EC2)
   * Amazon Elastic Map Reduce (EMR)
   * AutoScaling
-  * Elastic Load Balancing (ELB)
+
 * Content Delivery
+
   * Amazon CloudFront
+
 * Database
+
   * Amazon Relational Data Service (RDS)
   * Amazon DynamoDB
   * Amazon SimpleDB
+  * Amazon ElastiCache
+  * Amazon Redshift
+
 * Deployment and Management
-  * AWS Identity and Access Management (IAM)
-  * Amazon CloudWatch
+
   * AWS Elastic Beanstalk
   * AWS CloudFormation
+  * AWS Data Pipeline
+
+* Identity & Access
+
+  * AWS Identity and Access Management (IAM)
+
 * Application Services
+
   * Amazon CloudSearch
   * Amazon Simple Workflow Service (SWF)
   * Amazon Simple Queue Service (SQS)
   * Amazon Simple Notification Server (SNS)
   * Amazon Simple Email Service (SES)
+
+* Monitoring
+
+  * Amazon CloudWatch
+
 * Networking
+
   * Amazon Route53
   * Amazon Virtual Private Cloud (VPC)
+  * Elastic Load Balancing (ELB)
+
 * Payments and Billing
+
   * Amazon Flexible Payment Service (FPS)
+
 * Storage
+
   * Amazon Simple Storage Service (S3)
   * Amazon Glacier
   * Amazon Elastic Block Store (EBS)
   * Google Cloud Storage
+
 * Workforce
+
   * Amazon Mechanical Turk
+
 * Other
+
   * Marketplace Web Services
+  * AWS Support
 
 The goal of boto is to support the full breadth and depth of Amazon
 Web Services.  In addition, boto provides support for other public
@@ -87,16 +120,6 @@
 To see what has changed over time in boto, you can check out the
 `release notes`_ in the wiki.
 
-*********************************
-Special Note for Python 3.x Users
-*********************************
-
-If you are interested in trying out boto with Python 3.x, check out the
-`neo`_ branch.  This is under active development and the goal is a version
-of boto that works in Python 2.6, 2.7, and 3.x.  Not everything is working
-just yet but many things are and it's worth a look if you are an active
-Python 3.x user.
-
 ***************************
 Finding Out More About Boto
 ***************************
@@ -113,12 +136,14 @@
 Join our IRC channel `#boto` on FreeNode.
 Webchat IRC channel: http://webchat.freenode.net/?channels=boto
 
+Join the `boto-users Google Group`_.
+
 *************************
 Getting Started with Boto
 *************************
 
 Your credentials can be passed into the methods that create
-connections.  Alternatively, boto will check for the existance of the
+connections.  Alternatively, boto will check for the existence of the
 following environment variables to ascertain your credentials:
 
 **AWS_ACCESS_KEY_ID** - Your AWS Access Key ID
@@ -141,3 +166,4 @@
 .. _this: http://code.google.com/p/boto/wiki/BotoConfig
 .. _gitflow: http://nvie.com/posts/a-successful-git-branching-model/
 .. _neo: https://github.com/boto/boto/tree/neo
+.. _boto-users Google Group: https://groups.google.com/forum/?fromgroups#!forum/boto-users
diff --git a/bin/cfadmin b/bin/cfadmin
index 7073452..6fcdd86 100755
--- a/bin/cfadmin
+++ b/bin/cfadmin
@@ -65,6 +65,26 @@
         sys.exit(1)
     cf.create_invalidation_request(dist.id, paths)
 
+def listinvalidations(cf, origin_or_id):
+    """List invalidation requests for a given origin"""
+    dist = None
+    for d in cf.get_all_distributions():
+        if d.id == origin_or_id or d.origin.dns_name == origin_or_id:
+            dist = d
+            break
+    if not dist:
+        print "Distribution not found: %s" % origin_or_id
+        sys.exit(1)
+    results = cf.get_invalidation_requests(dist.id)
+    if results:
+        for result in results:
+            if result.status == "InProgress":
+                result = result.get_invalidation_request()
+                print result.id, result.status, result.paths
+            else:
+                print result.id, result.status
+
+
 if __name__ == "__main__":
     import boto
     import sys
diff --git a/bin/dynamodb_dump b/bin/dynamodb_dump
new file mode 100755
index 0000000..8b6aada
--- /dev/null
+++ b/bin/dynamodb_dump
@@ -0,0 +1,75 @@
+#!/usr/bin/env python
+
+import argparse
+import errno
+import os
+
+import boto
+from boto.compat import json
+
+
+DESCRIPTION = """Dump the contents of one or more DynamoDB tables to the local filesystem.
+
+Each table is dumped into two files:
+  - {table_name}.metadata stores the table's name, schema and provisioned
+    throughput.
+  - {table_name}.data stores the table's actual contents.
+
+Both files are created in the current directory. To write them somewhere else,
+use the --out-dir parameter (the target directory will be created if needed).
+"""
+
+
+def dump_table(table, out_dir):
+    metadata_file = os.path.join(out_dir, "%s.metadata" % table.name)
+    data_file = os.path.join(out_dir, "%s.data" % table.name)
+
+    with open(metadata_file, "w") as metadata_fd:
+        json.dump(
+            {
+                "name": table.name,
+                "schema": table.schema.dict,
+                "read_units": table.read_units,
+                "write_units": table.write_units,
+            },
+            metadata_fd
+        )
+
+    with open(data_file, "w") as data_fd:
+        for item in table.scan():
+            # JSON can't serialize sets -- convert those to lists.
+            data = {}
+            for k, v in item.iteritems():
+                if isinstance(v, (set, frozenset)):
+                    data[k] = list(v)
+                else:
+                    data[k] = v
+
+            data_fd.write(json.dumps(data))
+            data_fd.write("\n")
+
+
+def dynamodb_dump(tables, out_dir):
+    try:
+        os.makedirs(out_dir)
+    except OSError as e:
+        # We don't care if the dir already exists.
+        if e.errno != errno.EEXIST:
+            raise
+
+    conn = boto.connect_dynamodb()
+    for t in tables:
+        dump_table(conn.get_table(t), out_dir)
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(
+        prog="dynamodb_dump",
+        description=DESCRIPTION
+    )
+    parser.add_argument("--out-dir", default=".")
+    parser.add_argument("tables", metavar="TABLES", nargs="+")
+
+    namespace = parser.parse_args()
+
+    dynamodb_dump(namespace.tables, namespace.out_dir)
diff --git a/bin/dynamodb_load b/bin/dynamodb_load
new file mode 100755
index 0000000..21dfa17
--- /dev/null
+++ b/bin/dynamodb_load
@@ -0,0 +1,109 @@
+#!/usr/bin/env python
+
+import argparse
+import os
+
+import boto
+from boto.compat import json
+from boto.dynamodb.schema import Schema
+
+
+DESCRIPTION = """Load data into one or more DynamoDB tables.
+
+For each table, data is read from two files:
+  - {table_name}.metadata for the table's name, schema and provisioned
+    throughput (only required if creating the table).
+  - {table_name}.data for the table's actual contents.
+
+Both files are searched for in the current directory. To read them from
+somewhere else, use the --in-dir parameter.
+
+This program does not wipe the tables prior to loading data. However, any
+items present in the data files will overwrite the table's contents.
+"""
+
+
+def _json_iterload(fd):
+    """Lazily load newline-separated JSON objects from a file-like object."""
+    buffer = ""
+    eof = False
+    while not eof:
+        try:
+            # Add a line to the buffer
+            buffer += fd.next()
+        except StopIteration:
+            # We can't let that exception bubble up, otherwise the last
+            # object in the file will never be decoded.
+            eof = True
+        try:
+            # Try to decode a JSON object.
+            json_object = json.loads(buffer.strip())
+
+            # Success: clear the buffer (everything was decoded).
+            buffer = ""
+        except ValueError:
+            if eof and buffer.strip():
+                # No more lines to load and the buffer contains something other
+                # than whitespace: the file is, in fact, malformed.
+                raise
+            # We couldn't decode a complete JSON object: load more lines.
+            continue
+
+        yield json_object
+
+
+def create_table(metadata_fd):
+    """Create a table from a metadata file-like object."""
+
+
+def load_table(table, in_fd):
+    """Load items into a table from a file-like object."""
+    for i in _json_iterload(in_fd):
+        # Convert lists back to sets.
+        data = {}
+        for k, v in i.iteritems():
+            if isinstance(v, list):
+                data[k] = set(v)
+            else:
+                data[k] = v
+        table.new_item(attrs=data).put()
+
+
+def dynamodb_load(tables, in_dir, create_tables):
+    conn = boto.connect_dynamodb()
+    for t in tables:
+        metadata_file = os.path.join(in_dir, "%s.metadata" % t)
+        data_file = os.path.join(in_dir, "%s.data" % t)
+        if create_tables:
+            with open(metadata_file) as meta_fd:
+                metadata = json.load(meta_fd)
+            table = conn.create_table(
+                name=t,
+                schema=Schema(metadata["schema"]),
+                read_units=metadata["read_units"],
+                write_units=metadata["write_units"],
+            )
+            table.refresh(wait_for_active=True)
+        else:
+            table = conn.get_table(t)
+
+        with open(data_file) as in_fd:
+            load_table(table, in_fd)
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(
+        prog="dynamodb_load",
+        description=DESCRIPTION
+    )
+    parser.add_argument(
+        "--create-tables",
+        action="store_true",
+        help="Create the tables if they don't exist already (without this flag, attempts to load data into non-existing tables fail)."
+    )
+    parser.add_argument("--in-dir", default=".")
+    parser.add_argument("tables", metavar="TABLES", nargs="+")
+
+    namespace = parser.parse_args()
+
+    dynamodb_load(namespace.tables, namespace.in_dir, namespace.create_tables)
diff --git a/bin/elbadmin b/bin/elbadmin
index 6c8a8c7..e0aaf9d 100755
--- a/bin/elbadmin
+++ b/bin/elbadmin
@@ -107,12 +107,30 @@
 
         print
 
+        # Make map of all instance Id's to Name tags
+        ec2 = boto.connect_ec2()
+
+        instance_health = b.get_instance_health()
+        instances = [state.instance_id for state in instance_health]
+
+        names = {}
+        for r in ec2.get_all_instances(instances):
+            for i in r.instances:
+                names[i.id] = i.tags.get('Name', '')
+
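+        # Size the NAME column to fit the longest Name tag (at least the
+        # four characters of the "NAME" header), plus two spaces of padding.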
+        name_column_width = max([4] + [len(v) for k,v in names.iteritems()]) + 2
+
         print "Instances"
         print "---------"
-        print "%-12s %-15s %s" % ("ID", "STATE", "DESCRIPTION")
-        for state in b.get_instance_health():
-            print "%-12s %-15s %s" % (state.instance_id, state.state,
-                                      state.description)
+        print "%-12s %-15s %-*s %s" % ("ID",
+                                       "STATE",
+                                       name_column_width, "NAME",
+                                       "DESCRIPTION")
+        for state in instance_health:
+            print "%-12s %-15s %-*s %s" % (state.instance_id,
+                                           state.state,
+                                           name_column_width, names[state.instance_id],
+                                           state.description)
 
         print
 
diff --git a/bin/fetch_file b/bin/fetch_file
index 6b8c4da..9315aec 100755
--- a/bin/fetch_file
+++ b/bin/fetch_file
@@ -15,23 +15,29 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
 if __name__ == "__main__":
-	from optparse import OptionParser
-	parser = OptionParser(version="0.1", usage="Usage: %prog [options] url")
-	parser.add_option("-o", "--out-file", help="Output file", dest="outfile")
+    from optparse import OptionParser
+    usage = """%prog [options] URI
+Fetch a URI using the boto library and (by default) pipe contents to STDOUT
+The URI can be either an HTTP URL, or "s3://bucket_name/key_name"
+"""
+    parser = OptionParser(version="0.1", usage=usage)
+    parser.add_option("-o", "--out-file",
+                      help="File to receive output instead of STDOUT",
+                      dest="outfile")
 
-	(options, args) = parser.parse_args()
-	if len(args) < 1:
-		parser.print_help()
-		exit(1)
-	from boto.utils import fetch_file
-	f = fetch_file(args[0])
-	if options.outfile:
-		open(options.outfile, "w").write(f.read())
-	else:
-		print f.read()
+    (options, args) = parser.parse_args()
+    if len(args) < 1:
+        parser.print_help()
+        exit(1)
+    from boto.utils import fetch_file
+    f = fetch_file(args[0])
+    if options.outfile:
+        open(options.outfile, "w").write(f.read())
+    else:
+        print f.read()
diff --git a/bin/glacier b/bin/glacier
index aad1e8b..bd28adf 100755
--- a/bin/glacier
+++ b/bin/glacier
@@ -51,15 +51,15 @@
                     created
 
     Common args:
-        access_key - Your AWS Access Key ID.  If not supplied, boto will
-                     use the value of the environment variable
-                     AWS_ACCESS_KEY_ID
-        secret_key - Your AWS Secret Access Key.  If not supplied, boto
-                     will use the value of the environment variable
-                     AWS_SECRET_ACCESS_KEY
-        region     - AWS region to use. Possible vaules: us-east-1, us-west-1,
-                     us-west-2, ap-northeast-1, eu-west-1.
-                     Default: us-east-1
+        --access_key - Your AWS Access Key ID.  If not supplied, boto will
+                       use the value of the environment variable
+                       AWS_ACCESS_KEY_ID
+        --secret_key - Your AWS Secret Access Key.  If not supplied, boto
+                       will use the value of the environment variable
+                       AWS_SECRET_ACCESS_KEY
+        --region     - AWS region to use. Possible values: us-east-1, us-west-1,
+                       us-west-2, ap-northeast-1, eu-west-1.
+                       Default: us-east-1
 
     Vaults operations:
 
@@ -91,18 +91,18 @@
 
 
 def list_vaults(region, access_key=None, secret_key=None):
-    layer2 = connect(region, access_key, secret_key)
+    layer2 = connect(region, access_key = access_key, secret_key = secret_key)
     for vault in layer2.list_vaults():
         print vault.arn
 
 
 def list_jobs(vault_name, region, access_key=None, secret_key=None):
-    layer2 = connect(region, access_key, secret_key)
+    layer2 = connect(region, access_key = access_key, secret_key = secret_key)
     print layer2.layer1.list_jobs(vault_name)
 
 
 def upload_files(vault_name, filenames, region, access_key=None, secret_key=None):
-    layer2 = connect(region, access_key, secret_key)
+    layer2 = connect(region, access_key = access_key, secret_key = secret_key)
     layer2.create_vault(vault_name)
     glacier_vault = layer2.get_vault(vault_name)
     for filename in filenames:
@@ -131,11 +131,11 @@
     access_key = secret_key = None
     region = 'us-east-1'
     for option, value in opts:
-        if option in ('a', '--access_key'):
+        if option in ('-a', '--access_key'):
             access_key = value
-        elif option in ('s', '--secret_key'):
+        elif option in ('-s', '--secret_key'):
             secret_key = value
-        elif option in ('r', '--region'):
+        elif option in ('-r', '--region'):
             region = value
     # handle each command
     if command == 'vaults':
diff --git a/bin/list_instances b/bin/list_instances
index 4da5596..a8de4ad 100755
--- a/bin/list_instances
+++ b/bin/list_instances
@@ -35,8 +35,10 @@
     parser.add_option("-r", "--region", help="Region (default us-east-1)", dest="region", default="us-east-1")
     parser.add_option("-H", "--headers", help="Set headers (use 'T:tagname' for including tags)", default=None, action="store", dest="headers", metavar="ID,Zone,Groups,Hostname,State,T:Name")
     parser.add_option("-t", "--tab", help="Tab delimited, skip header - useful in shell scripts", action="store_true", default=False)
+    parser.add_option("-f", "--filter", help="Filter option sent to DescribeInstances API call, format is key1=value1,key2=value2,...", default=None)
     (options, args) = parser.parse_args()
 
+
     # Connect the region
     for r in regions():
         if r.name == options.region:
@@ -62,13 +64,19 @@
             format_string += "%%-%ds" % HEADERS[h]['length']
 
 
+    # Parse filters (if any)
+    if options.filter:
+        filters = dict([entry.split('=') for entry in options.filter.split(',')])
+    else:
+        filters = {}
+
     # List and print
 
     if not options.tab:
         print format_string % headers
         print "-" * len(format_string % headers)
 
-    for r in ec2.get_all_instances():
+    for r in ec2.get_all_instances(filters=filters):
         groups = [g.name for g in r.groups]
         for i in r.instances:
             i.groups = ','.join(groups)
diff --git a/bin/mturk b/bin/mturk
new file mode 100755
index 0000000..e0b4bab
--- /dev/null
+++ b/bin/mturk
@@ -0,0 +1,465 @@
+#!/usr/bin/env python
+# Copyright 2012 Kodi Arfer
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+import argparse # Hence, Python 2.7 is required.
+import sys
+import os.path
+import string
+import inspect
+import datetime, calendar
+import boto.mturk.connection, boto.mturk.price, boto.mturk.question, boto.mturk.qualification
+from boto.compat import json
+
+# --------------------------------------------------
+# Globals
+# --------------------------------------------------
+
+interactive = False
+con = None
+mturk_website = None
+
+default_nicknames_path = os.path.expanduser('~/.boto_mturkcli_hit_nicknames')
+nicknames = {}
+nickname_pool = set(string.ascii_lowercase)
+
+time_units = dict(
+    s = 1,
+    min = 60,
+    h = 60 * 60,
+    d = 24 * 60 * 60)
+
+qual_requirements = dict(
+    Adult = '00000000000000000060',
+    Locale = '00000000000000000071',
+    NumberHITsApproved = '00000000000000000040',
+    PercentAssignmentsSubmitted = '00000000000000000000',
+    PercentAssignmentsAbandoned = '00000000000000000070',
+    PercentAssignmentsReturned = '000000000000000000E0',
+    PercentAssignmentsApproved = '000000000000000000L0',
+    PercentAssignmentsRejected = '000000000000000000S0')
+
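+# Inverted mapping: comparator symbol ('<', '>=', 'exists', ...) to the
+# MTurk API's comparator name ('LessThan', 'GreaterThanOrEqualTo', ...).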
+qual_comparators = {v : k for k, v in dict(
+    LessThan = '<', LessThanOrEqualTo = '<=',
+    GreaterThan = '>', GreaterThanOrEqualTo = '>=',
+    EqualTo = '==', NotEqualTo = '!=',
+    Exists = 'exists').items()}
+
+example_config_file = '''Example configuration file:
+
+  {
+    "title": "Pick your favorite color",
+    "description": "In this task, you are asked to pick your favorite color.",
+    "reward": 0.50,
+    "assignments": 10,
+    "duration": "20 min",
+    "keywords": ["color", "favorites", "survey"],
+    "lifetime": "7 d",
+    "approval_delay": "14 d",
+    "qualifications": [
+        "PercentAssignmentsApproved > 90",
+        "Locale == US",
+        "2ARFPLSP75KLA8M8DH1HTEQVJT3SY6 exists"
+    ],
+    "question_url": "http://example.com/myhit",
+    "question_frame_height": 450
+  }'''
+
+# --------------------------------------------------
+# Subroutines
+# --------------------------------------------------
+
+def unjson(path):
+    with open(path) as o:
+        return json.load(o)
+
+def add_argparse_arguments(parser):
+    parser.add_argument('-P', '--production',
+        dest = 'sandbox', action = 'store_false', default = True,
+        help = 'use the production site (default: use the sandbox)')
+    parser.add_argument('--nicknames',
+        dest = 'nicknames_path', metavar = 'PATH',
+        default = default_nicknames_path,
+        help = 'where to store HIT nicknames (default: {})'.format(
+            default_nicknames_path))
+
+def init_by_args(args):
+    init(args.sandbox, args.nicknames_path)
+
+def init(sandbox = False, nicknames_path = default_nicknames_path):
+    global con, mturk_website, nicknames, original_nicknames
+
+    mturk_website = 'workersandbox.mturk.com' if sandbox else 'www.mturk.com'
+    con = boto.mturk.connection.MTurkConnection(
+        host = 'mechanicalturk.sandbox.amazonaws.com' if sandbox else 'mechanicalturk.amazonaws.com')
+
+    try:
+        nicknames = unjson(nicknames_path)
+    except IOError:
+        nicknames = {}
+    original_nicknames = nicknames.copy()
+
+def save_nicknames(nicknames_path = default_nicknames_path):
+    if nicknames != original_nicknames:
+        with open(nicknames_path, 'w') as o:
+            json.dump(nicknames, o, sort_keys = True, indent = 4)
+            print >>o
+
+def parse_duration(s):
+    '''Parses durations like "2 d", "48 h", "2880 min",
+"172800 s", or "172800".'''
+    x = s.split()
+    return int(x[0]) * time_units['s' if len(x) == 1 else x[1]]
+def display_duration(n):
+    for unit, m in sorted(time_units.items(), key = lambda x: -x[1]):
+        if n % m == 0:
+            return '{} {}'.format(n / m, unit)
+
+def parse_qualification(inp):
+    '''Parses qualifications like "PercentAssignmentsApproved > 90",
+"Locale == US", and "2ARFPLSP75KLA8M8DH1HTEQVJT3SY6 exists".'''
+    inp = inp.split()
+    name, comparator, value = inp.pop(0), inp.pop(0), (inp[0] if len(inp) else None)
+    qtid = qual_requirements.get(name)
+    if qtid is None:
+        # Treat "name" as a Qualification Type ID.
+        qtid = name
+    if qtid == qual_requirements['Locale']:
+        return boto.mturk.qualification.LocaleRequirement(
+            qual_comparators[comparator],
+            value,
+            required_to_preview = False)
+    return boto.mturk.qualification.Requirement(
+        qtid,
+        qual_comparators[comparator],
+        value,
+        required_to_preview = qtid == qual_requirements['Adult'])
+          # Thus required_to_preview is true only for the
+          # Worker_Adult requirement.
+
+def preview_url(hit):
+    return 'https://{}/mturk/preview?groupId={}'.format(
+        mturk_website, hit.HITTypeId)
+
+def parse_timestamp(s):
+    '''Takes a timestamp like "2012-11-24T16:34:41Z".
+
+Returns a datetime object in the local time zone.'''
+    return datetime.datetime.fromtimestamp(
+        calendar.timegm(
+        datetime.datetime.strptime(s, '%Y-%m-%dT%H:%M:%SZ').timetuple()))
+
+def get_hitid(nickname_or_hitid):
+    return nicknames.get(nickname_or_hitid) or nickname_or_hitid
+
+def get_nickname(hitid):
+    for k, v in nicknames.items():
+        if v == hitid:
+            return k
+    return None
+
+def display_datetime(dt):
+    return dt.strftime('%e %b %Y, %l:%M %P')
+
+def display_hit(hit, verbose = False):
+    et = parse_timestamp(hit.Expiration)
+    return '\n'.join([
+        '{} - {} ({}, {}, {})'.format(
+            get_nickname(hit.HITId),
+            hit.Title,
+            hit.FormattedPrice,
+            display_duration(int(hit.AssignmentDurationInSeconds)),
+            hit.HITStatus),
+        'HIT ID: ' + hit.HITId,
+        'Type ID: ' + hit.HITTypeId,
+        'Group ID: ' + hit.HITGroupId,
+        'Preview: ' + preview_url(hit),
+        'Created {}   {}'.format(
+            display_datetime(parse_timestamp(hit.CreationTime)),
+            'Expired' if et <= datetime.datetime.now() else
+                'Expires ' + display_datetime(et)),
+        'Assignments: {} -- {} avail, {} pending, {} reviewable, {} reviewed'.format(
+            hit.MaxAssignments,
+            hit.NumberOfAssignmentsAvailable,
+            hit.NumberOfAssignmentsPending,
+            int(hit.MaxAssignments) - (int(hit.NumberOfAssignmentsAvailable) + int(hit.NumberOfAssignmentsPending) + int(hit.NumberOfAssignmentsCompleted)),
+            hit.NumberOfAssignmentsCompleted)
+            if hasattr(hit, 'NumberOfAssignmentsAvailable')
+            else 'Assignments: {} total'.format(hit.MaxAssignments),
+            # For some reason, SearchHITs includes the
+            # NumberOfAssignmentsFoobar fields but GetHIT doesn't.
+        ] + ([] if not verbose else [
+            '\nDescription: ' + hit.Description,
+            '\nKeywords: ' + hit.Keywords
+        ])) + '\n'
+
+def digest_assignment(a):
+    return dict(
+        answers = {str(x.qid): str(x.fields[0]) for x in a.answers[0]},
+        **{k: str(getattr(a, k)) for k in (
+            'AcceptTime', 'SubmitTime',
+            'HITId', 'AssignmentId', 'WorkerId',
+            'AssignmentStatus')})
+
+# --------------------------------------------------
+# Commands
+# --------------------------------------------------
+
+def get_balance():
+    return con.get_account_balance()
+
+def show_hit(hit):
+    return display_hit(con.get_hit(hit)[0], verbose = True)
+
+def list_hits():
+    'Lists your 10 most recently created HITs, with the most recent last.'
+    return '\n'.join(reversed(map(display_hit, con.search_hits(
+        sort_by = 'CreationTime',
+        sort_direction = 'Descending',
+        page_size = 10))))
+
+def make_hit(title, description, keywords, reward, question_url, question_frame_height, duration, assignments, approval_delay, lifetime, qualifications = []):
+    r = con.create_hit(
+        title = title,
+        description = description,
+        keywords = con.get_keywords_as_string(keywords),
+        reward = con.get_price_as_price(reward),
+        question = boto.mturk.question.ExternalQuestion(
+            question_url,
+            question_frame_height),
+        duration = parse_duration(duration),
+        qualifications = boto.mturk.qualification.Qualifications(
+            map(parse_qualification, qualifications)),
+        max_assignments = assignments,
+        approval_delay = parse_duration(approval_delay),
+        lifetime = parse_duration(lifetime))
+    nick = None
+    available_nicks = nickname_pool - set(nicknames.keys())
+    if available_nicks:
+        nick = min(available_nicks)
+        nicknames[nick] = r[0].HITId
+    if interactive:
+        print 'Nickname:', nick
+        print 'HIT ID:', r[0].HITId
+        print 'Preview:', preview_url(r[0])
+    else:
+        return r[0]
+
+def extend_hit(hit, assignments_increment = None, expiration_increment = None):
+    con.extend_hit(hit, assignments_increment, expiration_increment)
+
+def expire_hit(hit):
+    con.expire_hit(hit)
+
+def delete_hit(hit):
+    '''Deletes a HIT using DisableHIT.
+
+Unreviewed assignments get automatically approved. Unsubmitted
+assignments get automatically approved upon submission.
+
+The API docs say DisableHIT doesn't work with Reviewable HITs,
+but apparently, it does.'''
+    con.disable_hit(hit)
+    global nicknames
+    nicknames = {k: v for k, v in nicknames.items() if v != hit}
+
+def list_assignments(hit, only_reviewable = False):
+    assignments = map(digest_assignment, con.get_assignments(
+        hit_id = hit,
+        page_size = 100,
+        status = 'Submitted' if only_reviewable else None))
+    if interactive:
+        print json.dumps(assignments, sort_keys = True, indent = 4)
+        print ' '.join([a['AssignmentId'] for a in assignments])
+        print ' '.join([a['WorkerId'] + ',' + a['AssignmentId'] for a in assignments])
+    else:
+        return assignments
+
+def grant_bonus(message, amount, pairs):
+    for worker, assignment in pairs:
+        con.grant_bonus(worker, assignment, con.get_price_as_price(amount), message)
+        if interactive: print 'Bonused', worker
+
+def approve_assignments(message, assignments):
+    for a in assignments:
+        con.approve_assignment(a, message)
+        if interactive: print 'Approved', a
+
+def reject_assignments(message, assignments):
+    for a in assignments:
+        con.reject_assignment(a, message)
+        if interactive: print 'Rejected', a
+
+def unreject_assignments(message, assignments):
+    for a in assignments:
+        con.approve_rejected_assignment(a, message)
+        if interactive: print 'Unrejected', a
+
+def notify_workers(subject, text, workers):
+    con.notify_workers(workers, subject, text)
+
+# --------------------------------------------------
+# Mainline code
+# --------------------------------------------------
+
+if __name__ == '__main__':
+    interactive = True
+
+    parser = argparse.ArgumentParser()
+    add_argparse_arguments(parser)
+    subs = parser.add_subparsers()
+
+    sub = subs.add_parser('bal',
+        help = 'display your prepaid balance')
+    sub.set_defaults(f = get_balance, a = lambda: [])
+
+    sub = subs.add_parser('hit',
+        help = 'get information about a HIT')
+    sub.add_argument('hit',
+        help = 'nickname or ID of the HIT to show')
+    sub.set_defaults(f = show_hit, a = lambda:
+        [get_hitid(args.hit)])
+
+    sub = subs.add_parser('hits',
+        help = 'list all your HITs')
+    sub.set_defaults(f = list_hits, a = lambda: [])
+
+    sub = subs.add_parser('new',
+        help = 'create a new HIT (external questions only)',
+        epilog = example_config_file,
+        formatter_class = argparse.RawDescriptionHelpFormatter)
+    sub.add_argument('json_path',
+        help = 'path to JSON configuration file for the HIT')
+    sub.add_argument('-u', '--question-url', dest = 'question_url',
+        metavar = 'URL',
+        help = 'URL for the external question')
+    sub.add_argument('-a', '--assignments', dest = 'assignments',
+        type = int, metavar = 'N',
+        help = 'number of assignments')
+    sub.add_argument('-r', '--reward', dest = 'reward',
+        type = float, metavar = 'PRICE',
+        help = 'reward amount, in USD')
+    sub.set_defaults(f = make_hit, a = lambda: dict(
+        unjson(args.json_path).items() + [(k, getattr(args, k))
+            for k in ('question_url', 'assignments', 'reward')
+            if getattr(args, k) is not None]))
+
+    sub = subs.add_parser('extend',
+        help = 'add assignments or time to a HIT')
+    sub.add_argument('hit',
+        help = 'nickname or ID of the HIT to extend')
+    sub.add_argument('-a', '--assignments', dest = 'assignments',
+        metavar = 'N', type = int,
+        help = 'number of assignments to add')
+    sub.add_argument('-t', '--time', dest = 'time',
+        metavar = 'T',
+        help = 'amount of time to add to the expiration date')
+    sub.set_defaults(f = extend_hit, a = lambda:
+        [get_hitid(args.hit), args.assignments,
+            args.time and parse_duration(args.time)])
+
+    sub = subs.add_parser('expire',
+        help = 'force a HIT to expire without deleting it')
+    sub.add_argument('hit',
+        help = 'nickname or ID of the HIT to expire')
+    sub.set_defaults(f = expire_hit, a = lambda:
+        [get_hitid(args.hit)])
+
+    sub = subs.add_parser('rm',
+        help = 'delete a HIT')
+    sub.add_argument('hit',
+        help = 'nickname or ID of the HIT to delete')
+    sub.set_defaults(f = delete_hit, a = lambda:
+        [get_hitid(args.hit)])
+
+    sub = subs.add_parser('as',
+        help = "list a HIT's submitted assignments")
+    sub.add_argument('hit',
+        help = 'nickname or ID of the HIT to get assignments for')
+    sub.add_argument('-r', '--reviewable', dest = 'only_reviewable',
+        action = 'store_true',
+        help = 'show only unreviewed assignments')
+    sub.set_defaults(f = list_assignments, a = lambda:
+        [get_hitid(args.hit), args.only_reviewable])
+
+    for command, fun, helpmsg in [
+            ('approve', approve_assignments, 'approve assignments'),
+            ('reject', reject_assignments, 'reject assignments'),
+            ('unreject', unreject_assignments, 'approve previously rejected assignments')]:
+        sub = subs.add_parser(command, help = helpmsg)
+        sub.add_argument('assignment', nargs = '+',
+            help = 'ID of an assignment')
+        sub.add_argument('-m', '--message', dest = 'message',
+            metavar = 'TEXT',
+            help = 'feedback message shown to workers')
+        sub.set_defaults(f = fun, a = lambda:
+            [args.message, args.assignment])
+
+    sub = subs.add_parser('bonus',
+        help = 'give some workers a bonus')
+    sub.add_argument('amount', type = float,
+        help = 'bonus amount, in USD')
+    sub.add_argument('message',
+        help = 'the reason for the bonus (shown to workers in an email sent by MTurk)')
+    sub.add_argument('widaid', nargs = '+',
+        help = 'a WORKER_ID,ASSIGNMENT_ID pair')
+    sub.set_defaults(f = grant_bonus, a = lambda:
+        [args.message, args.amount,
+            [p.split(',') for p in args.widaid]])
+
+    sub = subs.add_parser('notify',
+        help = 'send a message to some workers')
+    sub.add_argument('subject',
+        help = 'subject of the message')
+    sub.add_argument('message',
+        help = 'text of the message')
+    sub.add_argument('worker', nargs = '+',
+        help = 'ID of a worker')
+    sub.set_defaults(f = notify_workers, a = lambda:
+        [args.subject, args.message, args.worker])
+
+    args = parser.parse_args()
+
+    init_by_args(args)
+
+    f = args.f
+    a = args.a()
+    if isinstance(a, dict):
+        # We do some introspective gymnastics so we can produce a
+        # less incomprehensible error message if some arguments
+        # are missing.
+        spec = inspect.getargspec(f)
+        missing = set(spec.args[: len(spec.args) - len(spec.defaults)]) - set(a.keys())
+        if missing:
+            raise ValueError('Missing arguments: ' + ', '.join(missing))
+        doit = lambda: f(**a)
+    else:
+        doit = lambda: f(*a)
+
+    try:
+        x = doit()
+    except boto.mturk.connection.MTurkRequestError as e:
+        print 'MTurk error:', e.error_message
+        sys.exit(1)
+
+    if x is not None:
+        print x
+
+    save_nicknames()
diff --git a/bin/s3put b/bin/s3put
index 9e5c5f2..01d9fcb 100755
--- a/bin/s3put
+++ b/bin/s3put
@@ -15,22 +15,45 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-import getopt, sys, os
+import getopt
+import sys
+import os
 import boto
-from boto.exception import S3ResponseError
+
+try:
+    # multipart portions copyright Fabian Topfstedt
+    # https://gist.github.com/924094
+
+    import math
+    import mimetypes
+    from multiprocessing import Pool
+    from boto.s3.connection import S3Connection
+    from filechunkio import FileChunkIO
+    multipart_capable = True
+    usage_flag_multipart_capable = """ [--multipart]"""
+    usage_string_multipart_capable = """
+        multipart - Upload files as multiple parts. This needs filechunkio."""
+except ImportError as err:
+    multipart_capable = False
+    usage_flag_multipart_capable = ""
+    usage_string_multipart_capable = '\n\n     "' + \
+        err.message[len('No module named '):] + \
+        '" is missing for multipart support '
+
 
 usage_string = """
 SYNOPSIS
     s3put [-a/--access_key <access_key>] [-s/--secret_key <secret_key>]
           -b/--bucket <bucket_name> [-c/--callback <num_cb>]
           [-d/--debug <debug_level>] [-i/--ignore <ignore_dirs>]
-          [-n/--no_op] [-p/--prefix <prefix>] [-q/--quiet]
-          [-g/--grant grant] [-w/--no_overwrite] [-r/--reduced] path
+          [-n/--no_op] [-p/--prefix <prefix>] [-k/--key_prefix <key_prefix>]
+          [-q/--quiet] [-g/--grant grant] [-w/--no_overwrite] [-r/--reduced]
+          [--header] [--host <s3_host>]""" + usage_flag_multipart_capable + """ path [path...]
 
     Where
         access_key - Your AWS Access Key ID.  If not supplied, boto will
@@ -44,7 +67,7 @@
         path - A path to a directory or file that represents the items
                to be uploaded.  If the path points to an individual file,
                that file will be uploaded to the specified bucket.  If the
-               path points to a directory, s3_it will recursively traverse
+               path points to a directory, it will recursively traverse
                the directory and upload all files to the specified bucket.
         debug_level - 0 means no debug output (default), 1 means normal
                       debug output from boto, and 2 means boto debug output
@@ -64,63 +87,176 @@
                      /bar/fie.baz
                  The prefix must end in a trailing separator and if it
                  does not then one will be added.
+        key_prefix - A prefix to be added to the S3 key name, after any
+                     stripping of the file path is done based on the
+                     "-p/--prefix" option.
+        reduced - Use Reduced Redundancy storage
         grant - A canned ACL policy that will be granted on each file
                 transferred to S3.  The value of provided must be one
                 of the "canned" ACL policies supported by S3:
                 private|public-read|public-read-write|authenticated-read
-        no_overwrite - No files will be overwritten on S3, if the file/key 
-                       exists on s3 it will be kept. This is useful for 
-                       resuming interrupted transfers. Note this is not a 
-                       sync, even if the file has been updated locally if 
-                       the key exists on s3 the file on s3 will not be 
+        no_overwrite - No files will be overwritten on S3, if the file/key
+                       exists on s3 it will be kept. This is useful for
+                       resuming interrupted transfers. Note this is not a
+                       sync, even if the file has been updated locally if
+                       the key exists on s3 the file on s3 will not be
                        updated.
-        reduced - Use Reduced Redundancy storage
+        header - key=value pairs of extra header(s) to pass along in the
+                 request
+        host - Hostname override, for using an endpoint other than AWS S3
+""" + usage_string_multipart_capable + """
 
 
      If the -n option is provided, no files will be transferred to S3 but
      informational messages will be printed about what would happen.
 """
+
+
 def usage():
     print usage_string
     sys.exit()
-  
+
+
 def submit_cb(bytes_so_far, total_bytes):
     print '%d bytes transferred / %d bytes total' % (bytes_so_far, total_bytes)
 
-def get_key_name(fullpath, prefix):
-    key_name = fullpath[len(prefix):]
+
+def get_key_name(fullpath, prefix, key_prefix):
+    if fullpath.startswith(prefix):
+        key_name = fullpath[len(prefix):]
+    else:
+        key_name = fullpath
     l = key_name.split(os.sep)
-    return '/'.join(l)
+    return key_prefix + '/'.join(l)
+
+
+def _upload_part(bucketname, aws_key, aws_secret, multipart_id, part_num,
+                 source_path, offset, bytes, debug, cb, num_cb,
+                 amount_of_retries=10):
+    """
+    Uploads a part with retries.
+    """
+    if debug == 1:
+        print "_upload_part(%s, %s, %s)" % (source_path, offset, bytes)
+
+    def _upload(retries_left=amount_of_retries):
+        try:
+            if debug == 1:
+                print 'Start uploading part #%d ...' % part_num
+            conn = S3Connection(aws_key, aws_secret)
+            conn.debug = debug
+            bucket = conn.get_bucket(bucketname)
+            for mp in bucket.get_all_multipart_uploads():
+                if mp.id == multipart_id:
+                    with FileChunkIO(source_path, 'r', offset=offset,
+                                     bytes=bytes) as fp:
+                        mp.upload_part_from_file(fp=fp, part_num=part_num,
+                                                 cb=cb, num_cb=num_cb)
+                    break
+        except Exception, exc:
+            if retries_left:
+                _upload(retries_left=retries_left - 1)
+            else:
+                print 'Failed uploading part #%d' % part_num
+                raise exc
+        else:
+            if debug == 1:
+                print '... Uploaded part #%d' % part_num
+
+    _upload()
+
+
+def multipart_upload(bucketname, aws_key, aws_secret, source_path, keyname,
+                     reduced, debug, cb, num_cb, acl='private', headers={},
+                     guess_mimetype=True, parallel_processes=4):
+    """
+    Parallel multipart upload.
+    """
+    conn = S3Connection(aws_key, aws_secret)
+    conn.debug = debug
+    bucket = conn.get_bucket(bucketname)
+
+    if guess_mimetype:
+        mtype = mimetypes.guess_type(keyname)[0] or 'application/octet-stream'
+        headers.update({'Content-Type': mtype})
+
+    mp = bucket.initiate_multipart_upload(keyname, headers=headers,
+                                          reduced_redundancy=reduced)
+
+    source_size = os.stat(source_path).st_size
+    bytes_per_chunk = max(int(math.sqrt(5242880) * math.sqrt(source_size)),
+                          5242880)
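+    # i.e. the geometric mean of the 5 MB S3 minimum part size and the file
+    # size, floored at 5 MB, so the part count grows as sqrt(source_size).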
+    chunk_amount = int(math.ceil(source_size / float(bytes_per_chunk)))
+
+    pool = Pool(processes=parallel_processes)
+    for i in range(chunk_amount):
+        offset = i * bytes_per_chunk
+        remaining_bytes = source_size - offset
+        bytes = min([bytes_per_chunk, remaining_bytes])
+        part_num = i + 1
+        pool.apply_async(_upload_part, [bucketname, aws_key, aws_secret, mp.id,
+                                        part_num, source_path, offset, bytes,
+                                        debug, cb, num_cb])
+    pool.close()
+    pool.join()
+
+    if len(mp.get_all_parts()) == chunk_amount:
+        mp.complete_upload()
+        key = bucket.get_key(keyname)
+        key.set_acl(acl)
+    else:
+        mp.cancel_upload()
+
+
+def singlepart_upload(bucket, key_name, fullpath, *kargs, **kwargs):
+    """
+    Single upload.
+    """
+    k = bucket.new_key(key_name)
+    k.set_contents_from_filename(fullpath, *kargs, **kwargs)
+
+
+def expand_path(path):
+    path = os.path.expanduser(path)
+    path = os.path.expandvars(path)
+    return os.path.abspath(path)
+
 
 def main():
-    try:
-        opts, args = getopt.getopt(
-                sys.argv[1:], 'a:b:c::d:g:hi:np:qs:vwr',
-                ['access_key=', 'bucket=', 'callback=', 'debug=', 'help',
-                 'grant=', 'ignore=', 'no_op', 'prefix=', 'quiet',
-                 'secret_key=', 'no_overwrite', 'reduced', "header="]
-                )
-    except:
-        usage()
-    ignore_dirs = []
+
+    # default values
     aws_access_key_id = None
     aws_secret_access_key = None
     bucket_name = ''
-    total = 0
+    ignore_dirs = []
     debug = 0
     cb = None
     num_cb = 0
     quiet = False
     no_op = False
     prefix = '/'
+    key_prefix = ''
     grant = None
     no_overwrite = False
     reduced = False
     headers = {}
+    host = None
+    multipart_requested = False
+
+    try:
+        opts, args = getopt.getopt(
+            sys.argv[1:], 'a:b:c::d:g:hi:k:np:qs:wr',
+            ['access_key=', 'bucket=', 'callback=', 'debug=', 'help', 'grant=',
+             'ignore=', 'key_prefix=', 'no_op', 'prefix=', 'quiet',
+             'secret_key=', 'no_overwrite', 'reduced', 'header=', 'multipart',
+             'host='])
+    except:
+        usage()
+
+    # parse opts
     for o, a in opts:
         if o in ('-h', '--help'):
             usage()
-            sys.exit()
         if o in ('-a', '--access_key'):
             aws_access_key_id = a
         if o in ('-b', '--bucket'):
@@ -138,78 +274,101 @@
             no_op = True
         if o in ('-w', '--no_overwrite'):
             no_overwrite = True
-        if o in ('-r', '--reduced'):
-            reduced = True
         if o in ('-p', '--prefix'):
             prefix = a
             if prefix[-1] != os.sep:
                 prefix = prefix + os.sep
+            prefix = expand_path(prefix)
+        if o in ('-k', '--key_prefix'):
+            key_prefix = a
         if o in ('-q', '--quiet'):
             quiet = True
         if o in ('-s', '--secret_key'):
             aws_secret_access_key = a
+        if o in ('-r', '--reduced'):
+            reduced = True
         if o in ('--header'):
-            (k,v) = a.split("=")
+            (k, v) = a.split("=")
             headers[k] = v
-    if len(args) != 1:
-        print usage()
-    path = os.path.expanduser(args[0])
-    path = os.path.expandvars(path)
-    path = os.path.abspath(path)
-    if bucket_name:
+        if o in ('--host'):
+            host = a
+        if o in ('--multipart'):
+            if multipart_capable:
+                multipart_requested = True
+            else:
+                print "multipart upload requested but not capable"
+                sys.exit()
+
+    if len(args) < 1:
+        usage()
+
+    if not bucket_name:
+        print "bucket name is required!"
+        usage()
+
+    if host:
+        c = boto.connect_s3(host=host, aws_access_key_id=aws_access_key_id,
+                        aws_secret_access_key=aws_secret_access_key)
+    else:
         c = boto.connect_s3(aws_access_key_id=aws_access_key_id,
-                            aws_secret_access_key=aws_secret_access_key)
-        c.debug = debug
-        b = c.get_bucket(bucket_name)
+                        aws_secret_access_key=aws_secret_access_key)
+    c.debug = debug
+    b = c.get_bucket(bucket_name)
+    existing_keys_to_check_against = []
+    files_to_check_for_upload = []
+
+    for path in args:
+        path = expand_path(path)
+        # upload a directory of files recursively
         if os.path.isdir(path):
             if no_overwrite:
                 if not quiet:
                     print 'Getting list of existing keys to check against'
-                keys = []
-                for key in b.list(get_key_name(path, prefix)):
-                    keys.append(key.name)
+                for key in b.list(get_key_name(path, prefix, key_prefix)):
+                    existing_keys_to_check_against.append(key.name)
             for root, dirs, files in os.walk(path):
                 for ignore in ignore_dirs:
                     if ignore in dirs:
                         dirs.remove(ignore)
-                for file in files:
-                    if file.startswith("."):
+                for path in files:
+                    if path.startswith("."):
                         continue
-                    fullpath = os.path.join(root, file)
-                    key_name = get_key_name(fullpath, prefix)
-                    copy_file = True
-                    if no_overwrite:
-                        if key_name in keys:
-                            copy_file = False
-                            if not quiet:
-                                print 'Skipping %s as it exists in s3' % file
-                    if copy_file:
-                        if not quiet:
-                            print 'Copying %s to %s/%s' % (file, bucket_name, key_name)
-                        if not no_op:
-                            k = b.new_key(key_name)
-                            k.set_contents_from_filename(
-                                    fullpath, cb=cb, num_cb=num_cb,
-                                    policy=grant, reduced_redundancy=reduced,
-                                    headers=headers
-                                    )
-                    total += 1
+                    files_to_check_for_upload.append(os.path.join(root, path))
+
+        # upload a single file
         elif os.path.isfile(path):
-            key_name = get_key_name(path, prefix)
-            copy_file = True
-            if no_overwrite:
-                if b.get_key(key_name):
-                    copy_file = False
-                    if not quiet:
-                        print 'Skipping %s as it exists in s3' % path
-            if copy_file:
-                k = b.new_key(key_name)
-                k.set_contents_from_filename(path, cb=cb, num_cb=num_cb,
-                                             policy=grant,
-                                             reduced_redundancy=reduced, headers=headers)
-    else:
-        print usage()
+            fullpath = os.path.abspath(path)
+            key_name = get_key_name(fullpath, prefix, key_prefix)
+            files_to_check_for_upload.append(fullpath)
+            existing_keys_to_check_against.append(key_name)
+
+        # we are trying to upload something unknown
+        else:
+            print "I don't know what %s is, so i can't upload it" % path
+
+    for fullpath in files_to_check_for_upload:
+        key_name = get_key_name(fullpath, prefix, key_prefix)
+
+        if no_overwrite and key_name in existing_keys_to_check_against:
+            if not quiet:
+                print 'Skipping %s as it exists in s3' % fullpath
+            continue
+
+        if not quiet:
+            print 'Copying %s to %s/%s' % (fullpath, bucket_name, key_name)
+
+        if not no_op:
+            # 0-byte files don't work and also don't need multipart upload
+            if os.stat(fullpath).st_size != 0 and multipart_capable and \
+                    multipart_requested:
+                multipart_upload(bucket_name, aws_access_key_id,
+                                 aws_secret_access_key, fullpath, key_name,
+                                 reduced, debug, cb, num_cb,
+                                 grant or 'private', headers)
+            else:
+                singlepart_upload(b, key_name, fullpath, cb=cb, num_cb=num_cb,
+                                  policy=grant, reduced_redundancy=reduced,
+                                  headers=headers)
 
 if __name__ == "__main__":
     main()
-
diff --git a/bin/sdbadmin b/bin/sdbadmin
index 7e87c7b..3fbd3f4 100755
--- a/bin/sdbadmin
+++ b/bin/sdbadmin
@@ -26,15 +26,7 @@
 import boto
 import time
 from boto import sdb
-
-# Allow support for JSON
-try:
-    import simplejson as json
-except:
-    try:
-        import json
-    except:
-        json = False
+from boto.compat import json
 
 def choice_input(options, default=None, title=None):
     """
diff --git a/boto/__init__.py b/boto/__init__.py
index b0eb6bd..2166670 100644
--- a/boto/__init__.py
+++ b/boto/__init__.py
@@ -2,6 +2,7 @@
 # Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
 # Copyright (c) 2011, Nexenta Systems Inc.
 # Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# Copyright (c) 2010, Google, Inc.
 # All rights reserved.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
@@ -35,12 +36,20 @@
 import urlparse
 from boto.exception import InvalidUriError
 
-__version__ = '2.6.0-dev'
+__version__ = '2.9.5'
 Version = __version__  # for backware compatibility
 
 UserAgent = 'Boto/%s (%s)' % (__version__, sys.platform)
 config = Config()
 
+# Regex to disallow buckets violating charset or not [3..255] chars total.
+BUCKET_NAME_RE = re.compile(r'^[a-zA-Z0-9][a-zA-Z0-9\._-]{1,253}[a-zA-Z0-9]$')
+# Regex to disallow buckets with individual DNS labels longer than 63.
+TOO_LONG_DNS_NAME_COMP = re.compile(r'[-_a-z0-9]{64}')
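+# Regexes to split a trailing '#<generation>' (GCS) or '#<version_id>' (S3)
+# suffix off a versioned storage URI string.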
+GENERATION_RE = re.compile(r'(?P<versionless_uri_str>.+)'
+                           r'#(?P<generation>[0-9]+)$')
+VERSION_RE = re.compile('(?P<versionless_uri_str>.+)#(?P<version_id>.+)$')
+
 
 def init_logging():
     for file in BotoConfigLocations:
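
A quick illustration of the new suffix regexes (the generation value below is
invented):

    import re
    GENERATION_RE = re.compile(r'(?P<versionless_uri_str>.+)'
                               r'#(?P<generation>[0-9]+)$')
    m = GENERATION_RE.search('bucket/obj#1360887697105000')
    assert m.group('versionless_uri_str') == 'bucket/obj'
    assert m.group('generation') == '1360887697105000'
    assert GENERATION_RE.search('bucket/obj') is None
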
@@ -635,9 +644,81 @@
     return Layer1(aws_access_key_id, aws_secret_access_key, **kwargs)
 
 
+def connect_elastictranscoder(aws_access_key_id=None,
+                              aws_secret_access_key=None,
+                              **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.ets.layer1.ElasticTranscoderConnection`
+    :return: A connection to Amazon's Elastic Transcoder service
+    """
+    from boto.elastictranscoder.layer1 import ElasticTranscoderConnection
+    return ElasticTranscoderConnection(
+        aws_access_key_id=aws_access_key_id,
+        aws_secret_access_key=aws_secret_access_key,
+        **kwargs)
+
+
+def connect_opsworks(aws_access_key_id=None,
+                     aws_secret_access_key=None,
+                     **kwargs):
+    from boto.opsworks.layer1 import OpsWorksConnection
+    return OpsWorksConnection(
+        aws_access_key_id=aws_access_key_id,
+        aws_secret_access_key=aws_secret_access_key,
+        **kwargs)
+
+
+def connect_redshift(aws_access_key_id=None,
+                     aws_secret_access_key=None,
+                     **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.redshift.layer1.RedshiftConnection`
+    :return: A connection to Amazon's Redshift service
+    """
+    from boto.redshift.layer1 import RedshiftConnection
+    return RedshiftConnection(
+        aws_access_key_id=aws_access_key_id,
+        aws_secret_access_key=aws_secret_access_key,
+        **kwargs
+    )
+
+
+def connect_support(aws_access_key_id=None,
+                    aws_secret_access_key=None,
+                    **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.support.layer1.SupportConnection`
+    :return: A connection to Amazon's Support service
+    """
+    from boto.support.layer1 import SupportConnection
+    return SupportConnection(
+        aws_access_key_id=aws_access_key_id,
+        aws_secret_access_key=aws_secret_access_key,
+        **kwargs
+    )
+
+
 def storage_uri(uri_str, default_scheme='file', debug=0, validate=True,
                 bucket_storage_uri_class=BucketStorageUri,
-                suppress_consec_slashes=True):
+                suppress_consec_slashes=True, is_latest=False):
     """
     Instantiate a StorageUri from a URI string.
 
@@ -653,6 +734,9 @@
     :param bucket_storage_uri_class: Allows mocking for unit tests.
     :param suppress_consec_slashes: If provided, controls whether
         consecutive slashes will be suppressed in key paths.
+    :type is_latest: bool
+    :param is_latest: whether this versioned object represents the
+        current version.
 
     We allow validate to be disabled to allow caller
     to implement bucket-level wildcarding (outside the boto library;
@@ -664,14 +748,17 @@
     ``uri_str`` must be one of the following formats:
 
     * gs://bucket/name
+    * gs://bucket/name#ver
     * s3://bucket/name
     * gs://bucket
     * s3://bucket
     * filename (which could be a Unix path like /a/b/c or a Windows path like
       C:\a\b\c)
 
-    The last example uses the default scheme ('file', unless overridden)
+    The last example uses the default scheme ('file', unless overridden).
     """
+    version_id = None
+    generation = None
 
     # Manually parse URI components instead of using urlparse.urlparse because
     # what we're calling URIs don't really fit the standard syntax for URIs
@@ -688,7 +775,8 @@
             if not (platform.system().lower().startswith('windows')
                     and colon_pos == 1
                     and drive_char >= 'a' and drive_char <= 'z'):
-              raise InvalidUriError('"%s" contains ":" instead of "://"' % uri_str)
+              raise InvalidUriError('"%s" contains ":" instead of "://"' %
+                                    uri_str)
         scheme = default_scheme.lower()
         path = uri_str
     else:
@@ -707,23 +795,38 @@
     else:
         path_parts = path.split('/', 1)
         bucket_name = path_parts[0]
-        if (validate and bucket_name and
-            # Disallow buckets violating charset or not [3..255] chars total.
-            (not re.match('^[a-z0-9][a-z0-9\._-]{1,253}[a-z0-9]$', bucket_name)
-            # Disallow buckets with individual DNS labels longer than 63.
-             or re.search('[-_a-z0-9]{64}', bucket_name))):
-            raise InvalidUriError('Invalid bucket name in URI "%s"' % uri_str)
-        # If enabled, ensure the bucket name is valid, to avoid possibly
-        # confusing other parts of the code. (For example if we didn't
+        object_name = ''
+        # If validate enabled, ensure the bucket name is valid, to avoid
+        # possibly confusing other parts of the code. (For example if we didn't
         # catch bucket names containing ':', when a user tried to connect to
         # the server with that name they might get a confusing error about
         # non-integer port numbers.)
-        object_name = ''
+        if (validate and bucket_name and
+            (not BUCKET_NAME_RE.match(bucket_name)
+             or TOO_LONG_DNS_NAME_COMP.search(bucket_name))):
+            raise InvalidUriError('Invalid bucket name in URI "%s"' % uri_str)
+        if scheme == 'gs':
+            match = GENERATION_RE.search(path)
+            if match:
+                md = match.groupdict()
+                versionless_uri_str = md['versionless_uri_str']
+                path_parts = versionless_uri_str.split('/', 1)
+                generation = int(md['generation'])
+        elif scheme == 's3':
+            match = VERSION_RE.search(path)
+            if match:
+                md = match.groupdict()
+                versionless_uri_str = md['versionless_uri_str']
+                path_parts = versionless_uri_str.split('/', 1)
+                version_id = md['version_id']
+        else:
+            raise InvalidUriError('Unrecognized scheme "%s"' % scheme)
         if len(path_parts) > 1:
             object_name = path_parts[1]
         return bucket_storage_uri_class(
             scheme, bucket_name, object_name, debug,
-            suppress_consec_slashes=suppress_consec_slashes)
+            suppress_consec_slashes=suppress_consec_slashes,
+            version_id=version_id, generation=generation, is_latest=is_latest)
 
 
 def storage_uri_for_key(key):
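
With the new parsing in place, versioned object URIs go through storage_uri;
a hedged example (the generation value is invented, and the generation and
version_id attributes are assumed to be exposed by BucketStorageUri, to which
they are passed above):

    import boto
    uri = boto.storage_uri('gs://mybucket/obj#1360887697105000',
                           validate=False)
    # For gs URIs the '#' suffix populates generation; for s3 URIs a
    # '#' suffix populates version_id instead.
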
diff --git a/boto/auth.py b/boto/auth.py
index 29f9ac5..cd7ac68 100644
--- a/boto/auth.py
+++ b/boto/auth.py
@@ -32,13 +32,14 @@
 import boto.exception
 import boto.plugin
 import boto.utils
+import copy
+import datetime
+from email.utils import formatdate
 import hmac
 import sys
-import urllib
 import time
-import datetime
-import copy
-from email.utils import formatdate
+import urllib
+import posixpath
 
 from boto.auth_handler import AuthHandler
 from boto.exception import BotoClientError
@@ -164,9 +165,9 @@
         boto.log.debug('StringToSign:\n%s' % string_to_sign)
         b64_hmac = self.sign_string(string_to_sign)
         auth_hdr = self._provider.auth_header
-        headers['Authorization'] = ("%s %s:%s" %
-                                    (auth_hdr,
-                                     self._provider.access_key, b64_hmac))
+        auth = ("%s %s:%s" % (auth_hdr, self._provider.access_key, b64_hmac))
+        boto.log.debug('Signature:\n%s' % auth)
+        headers['Authorization'] = auth
 
 
 class HmacAuthV2Handler(AuthHandler, HmacKeys):
@@ -188,6 +189,9 @@
         headers = http_request.headers
         if 'Date' not in headers:
             headers['Date'] = formatdate(usegmt=True)
+        if self._provider.security_token:
+            key = self._provider.security_token_header
+            headers[key] = self._provider.security_token
 
         b64_hmac = self.sign_string(headers['Date'])
         auth_hdr = self._provider.auth_header
@@ -264,7 +268,7 @@
         headers_to_sign = self.headers_to_sign(http_request)
         canonical_headers = self.canonical_headers(headers_to_sign)
         string_to_sign = '\n'.join([http_request.method,
-                                    http_request.path,
+                                    http_request.auth_path,
                                     '',
                                     canonical_headers,
                                     '',
@@ -303,9 +307,15 @@
 
     capability = ['hmac-v4']
 
-    def __init__(self, host, config, provider):
+    def __init__(self, host, config, provider,
+                 service_name=None, region_name=None):
         AuthHandler.__init__(self, host, config, provider)
         HmacKeys.__init__(self, host, config, provider)
+        # You can set the service_name and region_name to override the
+        # values which would otherwise come from the endpoint, e.g.
+        # <service>.<region>.amazonaws.com.
+        self.service_name = service_name
+        self.region_name = region_name
 
     def _sign(self, key, msg, hex=False):
         if hex:
@@ -319,14 +329,22 @@
         Select the headers from the request that need to be included
         in the StringToSign.
         """
+        host_header_value = self.host_header(self.host, http_request)
         headers_to_sign = {}
-        headers_to_sign = {'Host': self.host}
+        headers_to_sign = {'Host': host_header_value}
         for name, value in http_request.headers.items():
             lname = name.lower()
             if lname.startswith('x-amz'):
                 headers_to_sign[name] = value
         return headers_to_sign
 
+    def host_header(self, host, http_request):
+        port = http_request.port
+        secure = http_request.protocol == 'https'
+        if ((port == 80 and not secure) or (port == 443 and secure)):
+            return host
+        return '%s:%s' % (host, port)
+
     def query_string(self, http_request):
         parameter_names = sorted(http_request.params.keys())
         pairs = []
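
host_header only appends the port when it is non-default for the scheme; the
rule in isolation (a standalone sketch, not boto's API):

    def host_header(host, port, secure):
        # Omit :80 on plain HTTP and :443 on HTTPS.
        if (port == 80 and not secure) or (port == 443 and secure):
            return host
        return '%s:%s' % (host, port)

    assert host_header('sns.us-east-1.amazonaws.com', 443, True) == \
        'sns.us-east-1.amazonaws.com'
    assert host_header('localhost', 8080, False) == 'localhost:8080'
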
@@ -337,12 +355,15 @@
         return '&'.join(pairs)
 
     def canonical_query_string(self, http_request):
+        # POST requests pass parameters in through the
+        # http_request.body field.
+        if http_request.method == 'POST':
+            return ""
         l = []
-        for param in http_request.params:
+        for param in sorted(http_request.params):
             value = str(http_request.params[param])
             l.append('%s=%s' % (urllib.quote(param, safe='-_.~'),
                                 urllib.quote(value, safe='-_.~')))
-        l = sorted(l)
         return '&'.join(l)
 
     def canonical_headers(self, headers_to_sign):
@@ -352,9 +373,9 @@
         case, sorting them in alphabetical order and then joining
         them into a string, separated by newlines.
         """
-        l = ['%s:%s' % (n.lower().strip(),
-                      headers_to_sign[n].strip()) for n in headers_to_sign]
-        l = sorted(l)
+        l = sorted(['%s:%s' % (n.lower().strip(),
+                    ' '.join(headers_to_sign[n].strip().split()))
+                    for n in headers_to_sign])
         return '\n'.join(l)
 
     def signed_headers(self, headers_to_sign):
@@ -363,7 +384,11 @@
         return ';'.join(l)
 
     def canonical_uri(self, http_request):
-        return http_request.path
+        # Normalize the path.
+        normalized = posixpath.normpath(http_request.auth_path)
+        # Then urlencode whatever's left.
+        encoded = urllib.quote(normalized)
+        return encoded
 
     def payload(self, http_request):
         body = http_request.body
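
canonical_uri now normalizes the path before percent-encoding it; with the
stdlib alone (the path is invented):

    import posixpath
    import urllib

    path = '/foo/./bar//baz'
    print urllib.quote(posixpath.normpath(path))  # -> /foo/bar/baz
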
@@ -396,13 +421,26 @@
         scope = []
         http_request.timestamp = http_request.headers['X-Amz-Date'][0:8]
         scope.append(http_request.timestamp)
+        # The service_name and region_name either come from:
+        # * The service_name/region_name attrs or (if these values are None)
+        # * parsed from the endpoint <service>.<region>.amazonaws.com.
         parts = http_request.host.split('.')
-        if len(parts) == 3:
-            http_request.region_name = 'us-east-1'
+        if self.region_name is not None:
+            region_name = self.region_name
         else:
-            http_request.region_name = parts[1]
+            if len(parts) == 3:
+                region_name = 'us-east-1'
+            else:
+                region_name = parts[1]
+        if self.service_name is not None:
+            service_name = self.service_name
+        else:
+            service_name = parts[0]
+
+        http_request.service_name = service_name
+        http_request.region_name = region_name
+
         scope.append(http_request.region_name)
-        http_request.service_name = parts[0]
         scope.append(http_request.service_name)
         scope.append('aws4_request')
         return '/'.join(scope)
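
The endpoint fallback parses <service>.<region>.amazonaws.com, with
three-part hosts (no region label) defaulting to us-east-1; the derivation by
itself:

    parts = 'dynamodb.eu-west-1.amazonaws.com'.split('.')
    region_name = 'us-east-1' if len(parts) == 3 else parts[1]
    service_name = parts[0]
    # -> service_name 'dynamodb', region_name 'eu-west-1'
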
@@ -443,6 +481,18 @@
         req.headers['X-Amz-Date'] = now.strftime('%Y%m%dT%H%M%SZ')
         if self._provider.security_token:
             req.headers['X-Amz-Security-Token'] = self._provider.security_token
+        qs = self.query_string(req)
+        if qs and req.method == 'POST':
+            # Stash request parameters into post body
+            # before we generate the signature.
+            req.body = qs
+            req.headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'
+            req.headers['Content-Length'] = str(len(req.body))
+        else:
+            # Safe to modify req.path here since
+            # the signature will use req.auth_path.
+            req.path = req.path.split('?')[0]
+            req.path = req.path + '?' + qs
         canonical_request = self.canonical_request(req)
         boto.log.debug('CanonicalRequest:\n%s' % canonical_request)
         string_to_sign = self.string_to_sign(req, canonical_request)
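
So for POST the (already sorted and encoded) parameters are signed as part of
the payload rather than the query string; roughly (the parameter names are
invented):

    qs = 'Action=DescribeEvents&Version=2010-12-01'
    body = qs
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
        'Content-Length': str(len(body)),
    }
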
@@ -454,10 +504,6 @@
         l.append('SignedHeaders=%s' % self.signed_headers(headers_to_sign))
         l.append('Signature=%s' % signature)
         req.headers['Authorization'] = ','.join(l)
-        qs = self.query_string(req)
-        if qs:
-            req.path = req.path.split('?')[0]
-            req.path = req.path + '?' + qs
 
 
 class QuerySignatureHelper(HmacKeys):
@@ -519,6 +565,11 @@
     SignatureVersion = 1
     capability = ['sign-v1', 'mturk']
 
+    def __init__(self, *args, **kw):
+        QuerySignatureHelper.__init__(self, *args, **kw)
+        AuthHandler.__init__(self, *args, **kw)
+        self._hmac_256 = None
+
     def _calc_signature(self, params, *args):
         boto.log.debug('using _calc_signature_1')
         hmac = self._get_hmac()
@@ -612,8 +663,7 @@
         An implementation of AuthHandler.
 
     Raises:
-        boto.exception.NoAuthHandlerFound:
-        boto.exception.TooManyAuthHandlerReadyToAuthenticate:
+        boto.exception.NoAuthHandlerFound
     """
     ready_handlers = []
     auth_handlers = boto.plugin.get_plugin(AuthHandler, requested_capability)
@@ -632,18 +682,14 @@
               ' %s '
               'Check your credentials' % (len(names), str(names)))
 
-    if len(ready_handlers) > 1:
-        # NOTE: Even though it would be nice to accept more than one handler
-        # by using one of the many ready handlers, we are never sure that each
-        # of them are referring to the same storage account. Since we cannot
-        # easily guarantee that, it is always safe to fail, rather than operate
-        # on the wrong account.
-        names = [handler.__class__.__name__ for handler in ready_handlers]
-        raise boto.exception.TooManyAuthHandlerReadyToAuthenticate(
-               '%d AuthHandlers %s ready to authenticate for requested_capability '
-               '%s, only 1 expected. This happens if you import multiple '
-               'pluging.Plugin implementations that declare support for the '
-               'requested_capability.' % (len(names), str(names),
-               requested_capability))
-
-    return ready_handlers[0]
+    # We select the last ready auth handler that was loaded, to allow users to
+    # customize how auth works in environments where there are shared boto
+    # config files (e.g., /etc/boto.cfg and ~/.boto): The more general,
+    # system-wide shared configs should be loaded first, and the user's
+    # customizations loaded last. That way, for example, the system-wide
+    # config might include a plugin_directory that includes a service account
+    # auth plugin shared by all users of a Google Compute Engine instance
+    # (allowing sharing of non-user data between various services), and the
+    # user could override this with a .boto config that includes user-specific
+    # credentials (for access to user data).
+    return ready_handlers[-1]
diff --git a/boto/beanstalk/__init__.py b/boto/beanstalk/__init__.py
index e69de29..904d855 100644
--- a/boto/beanstalk/__init__.py
+++ b/boto/beanstalk/__init__.py
@@ -0,0 +1,65 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.regioninfo import RegionInfo
+
+
+def regions():
+    """
+    Get all available regions for the AWS Elastic Beanstalk service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo`
+    """
+    import boto.beanstalk.layer1
+    return [RegionInfo(name='us-east-1',
+                       endpoint='elasticbeanstalk.us-east-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='us-west-1',
+                       endpoint='elasticbeanstalk.us-west-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='us-west-2',
+                       endpoint='elasticbeanstalk.us-west-2.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='ap-northeast-1',
+                       endpoint='elasticbeanstalk.ap-northeast-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='ap-southeast-1',
+                       endpoint='elasticbeanstalk.ap-southeast-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='ap-southeast-2',
+                       endpoint='elasticbeanstalk.ap-southeast-2.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='eu-west-1',
+                       endpoint='elasticbeanstalk.eu-west-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            RegionInfo(name='sa-east-1',
+                       endpoint='elasticbeanstalk.sa-east-1.amazonaws.com',
+                       connection_cls=boto.beanstalk.layer1.Layer1),
+            ]
+
+
+def connect_to_region(region_name, **kw_params):
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
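
connect_to_region returns None for a name not present in regions(); typical
use:

    import boto.beanstalk

    conn = boto.beanstalk.connect_to_region('us-west-2')
    if conn is None:
        raise ValueError('unknown Elastic Beanstalk region')
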
diff --git a/boto/beanstalk/exception.py b/boto/beanstalk/exception.py
index c209cef..f6f9ffa 100644
--- a/boto/beanstalk/exception.py
+++ b/boto/beanstalk/exception.py
@@ -1,5 +1,5 @@
 import sys
-import json
+from boto.compat import json
 from boto.exception import BotoServerError
 
 
diff --git a/boto/beanstalk/layer1.py b/boto/beanstalk/layer1.py
index 5e994e1..e63f70e 100644
--- a/boto/beanstalk/layer1.py
+++ b/boto/beanstalk/layer1.py
@@ -21,10 +21,10 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-import json
 
 import boto
 import boto.jsonresponse
+from boto.compat import json
 from boto.regioninfo import RegionInfo
 from boto.connection import AWSQueryConnection
 
@@ -54,7 +54,7 @@
                                     security_token)
 
     def _required_auth_capability(self):
-        return ['sign-v2']
+        return ['hmac-v4']
 
     def _encode_bool(self, v):
         v = bool(v)
@@ -75,7 +75,7 @@
 
         :type cname_prefix: string
         :param cname_prefix: The prefix used when this CNAME is
-        reserved.
+            reserved.
         """
         params = {'CNAMEPrefix': cname_prefix}
         return self._get_response('CheckDNSAvailability', params)
@@ -87,9 +87,9 @@
 
         :type application_name: string
         :param application_name: The name of the application.
-        Constraint: This name must be unique within your account. If the
-        specified name already exists, the action returns an
-        InvalidParameterValue error.
+            Constraint: This name must be unique within your account. If the
+            specified name already exists, the action returns an
+            InvalidParameterValue error.
 
         :type description: string
         :param description: Describes the application.
@@ -108,37 +108,34 @@
 
         :type application_name: string
         :param application_name: The name of the application. If no
-        application is found with this name, and AutoCreateApplication
-        is false, returns an InvalidParameterValue error.
+            application is found with this name, and AutoCreateApplication is
+            false, returns an InvalidParameterValue error.
 
         :type version_label: string
-        :param version_label: A label identifying this
-        version.Constraint: Must be unique per application. If an
-        application version already exists with this label for the
-        specified application, AWS Elastic Beanstalk returns an
-        InvalidParameterValue error.
+        :param version_label: A label identifying this version. Constraint:
+            Must be unique per application. If an application version already
+            exists with this label for the specified application, AWS Elastic
+            Beanstalk returns an InvalidParameterValue error.
 
         :type description: string
         :param description: Describes this version.
 
         :type s3_bucket: string
-        :param s3_bucket: The Amazon S3 bucket where the data is
-        located.
+        :param s3_bucket: The Amazon S3 bucket where the data is located.
 
         :type s3_key: string
-        :param s3_key: The Amazon S3 key where the data is located.
-        Both s3_bucket and s3_key must be specified in order to use
-        a specific source bundle.  If both of these values are not specified
-        the sample application will be used.
+        :param s3_key: The Amazon S3 key where the data is located.  Both
+            s3_bucket and s3_key must be specified in order to use a specific
+            source bundle.  If both of these values are not specified the
+            sample application will be used.
 
         :type auto_create_application: boolean
-        :param auto_create_application: Determines how the system
-        behaves if the specified application for this version does not
-        already exist:  true: Automatically creates the specified
-        application for this version if it does not already exist.
-        false: Returns an InvalidParameterValue if the specified
-        application for this version does not already exist.  Default:
-        false  Valid Values: true | false
+        :param auto_create_application: Determines how the system behaves if
+            the specified application for this version does not already exist:
+            true: Automatically creates the specified application for this
+            version if it does not already exist.  false: Returns an
+            InvalidParameterValue if the specified application for this version
+            does not already exist.  Default: false  Valid Values: true | false
 
         :raises: TooManyApplicationsException,
                  TooManyApplicationVersionsException,
@@ -171,52 +168,49 @@
         configuration settings.
 
         :type application_name: string
-        :param application_name: The name of the application to
-        associate with this configuration template. If no application is
-        found with this name, AWS Elastic Beanstalk returns an
-        InvalidParameterValue error.
+        :param application_name: The name of the application to associate with
+            this configuration template. If no application is found with this
+            name, AWS Elastic Beanstalk returns an InvalidParameterValue error.
 
         :type template_name: string
-        :param template_name: The name of the configuration
-        template.Constraint: This name must be unique per application.
-        Default: If a configuration template already exists with this
-        name, AWS Elastic Beanstalk returns an InvalidParameterValue
-        error.
+        :param template_name: The name of the configuration template.
+            Constraint: This name must be unique per application.  Default: If
+            a configuration template already exists with this name, AWS Elastic
+            Beanstalk returns an InvalidParameterValue error.
 
         :type solution_stack_name: string
-        :param solution_stack_name: The name of the solution stack used
-        by this configuration. The solution stack specifies the
-        operating system, architecture, and application server for a
-        configuration template. It determines the set of configuration
-        options as well as the possible and default values.  Use
-        ListAvailableSolutionStacks to obtain a list of available
-        solution stacks.  Default: If the SolutionStackName is not
-        specified and the source configuration parameter is blank, AWS
-        Elastic Beanstalk uses the default solution stack. If not
-        specified and the source configuration parameter is specified,
-        AWS Elastic Beanstalk uses the same solution stack as the source
-        configuration template.
+        :param solution_stack_name: The name of the solution stack used by this
+            configuration. The solution stack specifies the operating system,
+            architecture, and application server for a configuration template.
+            It determines the set of configuration options as well as the
+            possible and default values.  Use ListAvailableSolutionStacks to
+            obtain a list of available solution stacks.  Default: If the
+            SolutionStackName is not specified and the source configuration
+            parameter is blank, AWS Elastic Beanstalk uses the default solution
+            stack. If not specified and the source configuration parameter is
+            specified, AWS Elastic Beanstalk uses the same solution stack as
+            the source configuration template.
 
         :type source_configuration_application_name: string
         :param source_configuration_application_name: The name of the
-        application associated with the configuration.
+            application associated with the configuration.
 
         :type source_configuration_template_name: string
         :param source_configuration_template_name: The name of the
-        configuration template.
+            configuration template.
 
         :type environment_id: string
         :param environment_id: The ID of the environment used with this
-        configuration template.
+            configuration template.
 
         :type description: string
         :param description: Describes this configuration.
 
         :type option_settings: list
-        :param option_settings: If specified, AWS Elastic Beanstalk sets
-        the specified configuration option to the requested value. The
-        new value overrides the value obtained from the solution stack
-        or the source configuration template.
+        :param option_settings: If specified, AWS Elastic Beanstalk sets the
+            specified configuration option to the requested value. The new
+            value overrides the value obtained from the solution stack or the
+            source configuration template.
 
         :raises: InsufficientPrivilegesException,
                  TooManyConfigurationTemplatesException
@@ -226,9 +220,9 @@
         if solution_stack_name:
             params['SolutionStackName'] = solution_stack_name
         if source_configuration_application_name:
-            params['ApplicationName'] = source_configuration_application_name
+            params['SourceConfiguration.ApplicationName'] = source_configuration_application_name
         if source_configuration_template_name:
-            params['TemplateName'] = source_configuration_template_name
+            params['SourceConfiguration.TemplateName'] = source_configuration_template_name
         if environment_id:
             params['EnvironmentId'] = environment_id
         if description:
@@ -247,73 +241,72 @@
         """Launches an environment for the application using a configuration.
 
         :type application_name: string
-        :param application_name: The name of the application that
-        contains the version to be deployed.  If no application is found
-        with this name, CreateEnvironment returns an
-        InvalidParameterValue error.
+        :param application_name: The name of the application that contains the
+            version to be deployed.  If no application is found with this name,
+            CreateEnvironment returns an InvalidParameterValue error.
 
         :type version_label: string
-        :param version_label: The name of the application version to
-        deploy. If the specified application has no associated
-        application versions, AWS Elastic Beanstalk UpdateEnvironment
-        returns an InvalidParameterValue error.  Default: If not
-        specified, AWS Elastic Beanstalk attempts to launch the most
-        recently created application version.
+        :param version_label: The name of the application version to deploy. If
+            the specified application has no associated application versions,
+            AWS Elastic Beanstalk UpdateEnvironment returns an
+            InvalidParameterValue error.  Default: If not specified, AWS
+            Elastic Beanstalk attempts to launch the most recently created
+            application version.
 
         :type environment_name: string
-        :param environment_name: A unique name for the deployment
-        environment. Used in the application URL. Constraint: Must be
-        from 4 to 23 characters in length. The name can contain only
-        letters, numbers, and hyphens. It cannot start or end with a
-        hyphen. This name must be unique in your account. If the
-        specified name already exists, AWS Elastic Beanstalk returns an
-        InvalidParameterValue error. Default: If the CNAME parameter is
-        not specified, the environment name becomes part of the CNAME,
-        and therefore part of the visible URL for your application.
+        :param environment_name: A unique name for the deployment environment.
+            Used in the application URL. Constraint: Must be from 4 to 23
+            characters in length. The name can contain only letters, numbers,
+            and hyphens. It cannot start or end with a hyphen. This name must
+            be unique in your account. If the specified name already exists,
+            AWS Elastic Beanstalk returns an InvalidParameterValue error.
+            Default: If the CNAME parameter is not specified, the environment
+            name becomes part of the CNAME, and therefore part of the visible
+            URL for your application.
 
         :type template_name: string
         :param template_name: The name of the configuration template to
-        use in deployment. If no configuration template is found with
-        this name, AWS Elastic Beanstalk returns an
-        InvalidParameterValue error.  Condition: You must specify either
-        this parameter or a SolutionStackName, but not both. If you
-        specify both, AWS Elastic Beanstalk returns an
-        InvalidParameterCombination error. If you do not specify either,
-        AWS Elastic Beanstalk returns a MissingRequiredParameter error.
+            use in deployment. If no configuration template is found with this
+            name, AWS Elastic Beanstalk returns an InvalidParameterValue error.
+            Condition: You must specify either this parameter or a
+            SolutionStackName, but not both. If you specify both, AWS Elastic
+            Beanstalk returns an InvalidParameterCombination error. If you do
+            not specify either, AWS Elastic Beanstalk returns a
+            MissingRequiredParameter error.
 
         :type solution_stack_name: string
-        :param solution_stack_name: This is an alternative to specifying
-        a configuration name. If specified, AWS Elastic Beanstalk sets
-        the configuration values to the default values associated with
-        the specified solution stack.  Condition: You must specify
-        either this or a TemplateName, but not both. If you specify
-        both, AWS Elastic Beanstalk returns an
-        InvalidParameterCombination error. If you do not specify either,
-        AWS Elastic Beanstalk returns a MissingRequiredParameter error.
+        :param solution_stack_name: This is an alternative to specifying a
+            configuration name. If specified, AWS Elastic Beanstalk sets the
+            configuration values to the default values associated with the
+            specified solution stack.  Condition: You must specify either this
+            or a TemplateName, but not both. If you specify both, AWS Elastic
+            Beanstalk returns an InvalidParameterCombination error. If you do
+            not specify either, AWS Elastic Beanstalk returns a
+            MissingRequiredParameter error.
 
         :type cname_prefix: string
-        :param cname_prefix: If specified, the environment attempts to
-        use this value as the prefix for the CNAME. If not specified,
-        the environment uses the environment name.
+        :param cname_prefix: If specified, the environment attempts to use this
+            value as the prefix for the CNAME. If not specified, the
+            environment uses the environment name.
 
         :type description: string
         :param description: Describes this environment.
 
         :type option_settings: list
-        :param option_settings: If specified, AWS Elastic Beanstalk sets
-        the specified configuration options to the requested value in
-        the configuration set for the new environment. These override
-        the values obtained from the solution stack or the configuration
-        template.  Each element in the list is a tuple of (Namespace,
-        OptionName, Value), for example::
+        :param option_settings: If specified, AWS Elastic Beanstalk sets the
+            specified configuration options to the requested value in the
+            configuration set for the new environment. These override the
+            values obtained from the solution stack or the configuration
+            template.  Each element in the list is a tuple of (Namespace,
+            OptionName, Value), for example::
 
-            [('aws:autoscaling:launchconfiguration',
-              'Ec2KeyName', 'mykeypair')]
+                [('aws:autoscaling:launchconfiguration',
+                    'Ec2KeyName', 'mykeypair')]
 
         :type options_to_remove: list
-        :param options_to_remove: A list of custom user-defined
-        configuration options to remove from the configuration set for
-        this new environment.
+        :param options_to_remove: A list of custom user-defined configuration
+            options to remove from the configuration set for this new
+            environment.
 
         :raises: TooManyEnvironmentsException, InsufficientPrivilegesException
 
@@ -363,7 +356,7 @@
 
         :type terminate_env_by_force: boolean
         :param terminate_env_by_force: When set to true, running
-        environments will be terminated before deleting the application.
+            environments will be terminated before deleting the application.
 
         :raises: OperationInProgressException
 
@@ -380,14 +373,15 @@
 
         :type application_name: string
         :param application_name: The name of the application to delete
-        releases from.
+            releases from.
 
         :type version_label: string
         :param version_label: The label of the version to delete.
 
         :type delete_source_bundle: boolean
         :param delete_source_bundle: Indicates whether to delete the
-        associated source bundle from Amazon S3.  Valid Values: true | false
+            associated source bundle from Amazon S3.  Valid Values: true |
+            false
 
         :raises: SourceBundleDeletionException,
                  InsufficientPrivilegesException,
@@ -406,11 +400,11 @@
 
         :type application_name: string
         :param application_name: The name of the application to delete
-        the configuration template from.
+            the configuration template from.
 
         :type template_name: string
         :param template_name: The name of the configuration template to
-        delete.
+            delete.
 
         :raises: OperationInProgressException
 
@@ -434,11 +428,11 @@
 
         :type application_name: string
         :param application_name: The name of the application the
-        environment is associated with.
+            environment is associated with.
 
         :type environment_name: string
         :param environment_name: The name of the environment to delete
-        the draft configuration from.
+            the draft configuration from.
 
         """
         params = {'ApplicationName': application_name,
@@ -450,14 +444,14 @@
         """Returns descriptions for existing application versions.
 
         :type application_name: string
-        :param application_name: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to only include ones that
-        are associated with the specified application.
+        :param application_name: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to only include ones that are associated
+            with the specified application.
 
         :type version_labels: list
         :param version_labels: If specified, restricts the returned
-        descriptions to only include ones that have the specified
-        version labels.
+            descriptions to only include ones that have the specified version
+            labels.
 
         """
         params = {}
@@ -472,9 +466,9 @@
         """Returns the descriptions of existing applications.
 
         :type application_names: list
-        :param application_names: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to only include those with
-        the specified names.
+        :param application_names: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to only include those with the specified
+            names.
 
         """
         params = {}
@@ -497,26 +491,26 @@
         is changed.
 
         :type application_name: string
-        :param application_name: The name of the application associated
-        with the configuration template or environment. Only needed if
-        you want to describe the configuration options associated with
-        either the configuration template or environment.
+        :param application_name: The name of the application associated with
+            the configuration template or environment. Only needed if you want
+            to describe the configuration options associated with either the
+            configuration template or environment.
 
         :type template_name: string
-        :param template_name: The name of the configuration template
-        whose configuration options you want to describe.
+        :param template_name: The name of the configuration template whose
+            configuration options you want to describe.
 
         :type environment_name: string
         :param environment_name: The name of the environment whose
-        configuration options you want to describe.
+            configuration options you want to describe.
 
         :type solution_stack_name: string
         :param solution_stack_name: The name of the solution stack whose
-        configuration options you want to describe.
+            configuration options you want to describe.
 
         :type options: list
         :param options: If specified, restricts the descriptions to only
-        the specified options.
+            the specified options.
         """
         params = {}
         if application_name:
@@ -547,23 +541,22 @@
 
         :type application_name: string
         :param application_name: The application for the environment or
-        configuration template.
+            configuration template.
 
         :type template_name: string
         :param template_name: The name of the configuration template to
-        describe.  Conditional: You must specify either this parameter
-        or an EnvironmentName, but not both. If you specify both, AWS
-        Elastic Beanstalk returns an InvalidParameterCombination error.
-        If you do not specify either, AWS Elastic Beanstalk returns a
-        MissingRequiredParameter error.
+            describe.  Conditional: You must specify either this parameter or
+            an EnvironmentName, but not both. If you specify both, AWS Elastic
+            Beanstalk returns an InvalidParameterCombination error.  If you do
+            not specify either, AWS Elastic Beanstalk returns a
+            MissingRequiredParameter error.
 
         :type environment_name: string
-        :param environment_name: The name of the environment to
-        describe.  Condition: You must specify either this or a
-        TemplateName, but not both. If you specify both, AWS Elastic
-        Beanstalk returns an InvalidParameterCombination error. If you
-        do not specify either, AWS Elastic Beanstalk returns
-        MissingRequiredParameter error.
+        :param environment_name: The name of the environment to describe.
+            Condition: You must specify either this or a TemplateName, but not
+            both. If you specify both, AWS Elastic Beanstalk returns an
+            InvalidParameterCombination error. If you do not specify either,
+            AWS Elastic Beanstalk returns MissingRequiredParameter error.
         """
         params = {'ApplicationName': application_name}
         if template_name:
@@ -578,15 +571,15 @@
 
         :type environment_id: string
         :param environment_id: The ID of the environment to retrieve AWS
-        resource usage data.  Condition: You must specify either this or
-        an EnvironmentName, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+            resource usage data.  Condition: You must specify either this or an
+            EnvironmentName, or both. If you do not specify either, AWS Elastic
+            Beanstalk returns MissingRequiredParameter error.
 
         :type environment_name: string
         :param environment_name: The name of the environment to retrieve
-        AWS resource usage data.  Condition: You must specify either
-        this or an EnvironmentId, or both. If you do not specify either,
-        AWS Elastic Beanstalk returns MissingRequiredParameter error.
+            AWS resource usage data.  Condition: You must specify either this
+            or an EnvironmentId, or both. If you do not specify either, AWS
+            Elastic Beanstalk returns MissingRequiredParameter error.
 
         :raises: InsufficientPrivilegesException
         """
@@ -604,35 +597,35 @@
         """Returns descriptions for existing environments.
 
         :type application_name: string
-        :param application_name: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to include only those that
-        are associated with this application.
+        :param application_name: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to include only those that are associated
+            with this application.
 
         :type version_label: string
-        :param version_label: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to include only those that
-        are associated with this application version.
+        :param version_label: If specified, AWS Elastic Beanstalk restricts the
+            returned descriptions to include only those that are associated
+            with this application version.
 
         :type environment_ids: list
-        :param environment_ids: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to include only those that
-        have the specified IDs.
+        :param environment_ids: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to include only those that have the
+            specified IDs.
 
         :type environment_names: list
-        :param environment_names: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to include only those that
-        have the specified names.
+        :param environment_names: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to include only those that have the
+            specified names.
 
         :type include_deleted: boolean
         :param include_deleted: Indicates whether to include deleted
-        environments:  true: Environments that have been deleted after
-        IncludedDeletedBackTo are displayed.  false: Do not include
-        deleted environments.
+            environments:  true: Environments that have been deleted after
+            IncludedDeletedBackTo are displayed.  false: Do not include deleted
+            environments.
 
         :type included_deleted_back_to: timestamp
-        :param included_deleted_back_to: If specified when
-        IncludeDeleted is set to true, then environments deleted after
-        this date are displayed.
+        :param included_deleted_back_to: If specified when IncludeDeleted is
+            set to true, then environments deleted after this date are
+            displayed.
         """
         params = {}
         if application_name:
@@ -659,57 +652,55 @@
         """Returns event descriptions matching criteria up to the last 6 weeks.
 
         :type application_name: string
-        :param application_name: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to include only those
-        associated with this application.
+        :param application_name: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to include only those associated with
+            this application.
 
         :type version_label: string
-        :param version_label: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to those associated with
-        this application version.
+        :param version_label: If specified, AWS Elastic Beanstalk restricts the
+            returned descriptions to those associated with this application
+            version.
 
         :type template_name: string
-        :param template_name: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to those that are associated
-        with this environment configuration.
+        :param template_name: If specified, AWS Elastic Beanstalk restricts the
+            returned descriptions to those that are associated with this
+            environment configuration.
 
         :type environment_id: string
-        :param environment_id: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to those associated with
-        this environment.
+        :param environment_id: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to those associated with this
+            environment.
 
         :type environment_name: string
-        :param environment_name: If specified, AWS Elastic Beanstalk
-        restricts the returned descriptions to those associated with
-        this environment.
+        :param environment_name: If specified, AWS Elastic Beanstalk restricts
+            the returned descriptions to those associated with this
+            environment.
 
         :type request_id: string
-        :param request_id: If specified, AWS Elastic Beanstalk restricts
-        the described events to include only those associated with this
-        request ID.
+        :param request_id: If specified, AWS Elastic Beanstalk restricts the
+            described events to include only those associated with this request
+            ID.
 
         :type severity: string
-        :param severity: If specified, limits the events returned from
-        this call to include only those with the specified severity or
-        higher.
+        :param severity: If specified, limits the events returned from this
+            call to include only those with the specified severity or higher.
 
         :type start_time: timestamp
-        :param start_time: If specified, AWS Elastic Beanstalk restricts
-        the returned descriptions to those that occur on or after this
-        time.
+        :param start_time: If specified, AWS Elastic Beanstalk restricts the
+            returned descriptions to those that occur on or after this time.
 
         :type end_time: timestamp
-        :param end_time: If specified, AWS Elastic Beanstalk restricts
-        the returned descriptions to those that occur up to, but not
-        including, the EndTime.
+        :param end_time: If specified, AWS Elastic Beanstalk restricts the
+            returned descriptions to those that occur up to, but not including,
+            the EndTime.
 
         :type max_records: integer
-        :param max_records: Specifies the maximum number of events that
-        can be returned, beginning with the most recent event.
+        :param max_records: Specifies the maximum number of events that can be
+            returned, beginning with the most recent event.
 
         :type next_token: string
-        :param next_token: Pagination token. If specified, the events
-        return the next batch of results.
+        :param next_token: Pagination token. If specified, the events return
+            the next batch of results.
         """
         params = {}
         if application_name:
@@ -748,15 +739,15 @@
 
         :type environment_id: string
         :param environment_id: The ID of the environment to rebuild.
-        Condition: You must specify either this or an EnvironmentName,
-        or both. If you do not specify either, AWS Elastic Beanstalk
-        returns MissingRequiredParameter error.
+            Condition: You must specify either this or an EnvironmentName, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :type environment_name: string
         :param environment_name: The name of the environment to rebuild.
-        Condition: You must specify either this or an EnvironmentId, or
-        both. If you do not specify either, AWS Elastic Beanstalk
-        returns MissingRequiredParameter error.
+            Condition: You must specify either this or an EnvironmentId, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :raises: InsufficientPrivilegesException
         """
@@ -781,19 +772,19 @@
 
         :type environment_id: string
         :param environment_id: The ID of the environment of the
-        requested data. If no such environment is found,
-        RequestEnvironmentInfo returns an InvalidParameterValue error.
-        Condition: You must specify either this or an EnvironmentName,
-        or both. If you do not specify either, AWS Elastic Beanstalk
-        returns MissingRequiredParameter error.
+            requested data. If no such environment is found,
+            RequestEnvironmentInfo returns an InvalidParameterValue error.
+            Condition: You must specify either this or an EnvironmentName, or
+            both. If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :type environment_name: string
         :param environment_name: The name of the environment of the
-        requested data. If no such environment is found,
-        RequestEnvironmentInfo returns an InvalidParameterValue error.
-        Condition: You must specify either this or an EnvironmentId, or
-        both. If you do not specify either, AWS Elastic Beanstalk
-        returns MissingRequiredParameter error.
+            requested data. If no such environment is found,
+            RequestEnvironmentInfo returns an InvalidParameterValue error.
+            Condition: You must specify either this or an EnvironmentId, or
+            both. If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
         """
         params = {'InfoType': info_type}
         if environment_id:
@@ -808,16 +799,16 @@
         server running on each Amazon EC2 instance.
 
         :type environment_id: string
-        :param environment_id: The ID of the environment to restart the
-        server for.  Condition: You must specify either this or an
-        EnvironmentName, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+        :param environment_id: The ID of the environment to restart the server
+            for.  Condition: You must specify either this or an
+            EnvironmentName, or both. If you do not specify either, AWS Elastic
+            Beanstalk returns MissingRequiredParameter error.
 
         :type environment_name: string
-        :param environment_name: The name of the environment to restart
-        the server for.  Condition: You must specify either this or an
-        EnvironmentId, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+        :param environment_name: The name of the environment to restart the
+            server for.  Condition: You must specify either this or an
+            EnvironmentId, or both. If you do not specify either, AWS Elastic
+            Beanstalk returns MissingRequiredParameter error.
         """
         params = {}
         if environment_id:
@@ -836,18 +827,18 @@
         :param info_type: The type of information to retrieve.
 
         :type environment_id: string
-        :param environment_id: The ID of the data's environment. If no
-        such environment is found, returns an InvalidParameterValue
-        error.  Condition: You must specify either this or an
-        EnvironmentName, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+        :param environment_id: The ID of the data's environment. If no such
+            environment is found, returns an InvalidParameterValue error.
+            Condition: You must specify either this or an EnvironmentName, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :type environment_name: string
-        :param environment_name: The name of the data's environment. If
-        no such environment is found, returns an InvalidParameterValue
-        error.  Condition: You must specify either this or an
-        EnvironmentId, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+        :param environment_name: The name of the data's environment. If no such
+            environment is found, returns an InvalidParameterValue error.
+            Condition: You must specify either this or an EnvironmentId, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
         """
         params = {'InfoType': info_type}
         if environment_id:
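
[Editor's note: request_environment_info and retrieve_environment_info pair up as a two-step flow: first ask Elastic Beanstalk to compile the data, then fetch it. A hedged sketch of that flow; the method names are taken from the hunks above and the fixed sleep is only schematic:

    import time
    from boto.beanstalk.layer1 import Layer1

    conn = Layer1()
    conn.request_environment_info(info_type='tail', environment_name='my-env')
    time.sleep(30)  # give the service time to gather the logs
    result = conn.retrieve_environment_info(info_type='tail',
                                            environment_name='my-env')
]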
@@ -864,31 +855,31 @@
 
         :type source_environment_id: string
         :param source_environment_id: The ID of the source environment.
-        Condition: You must specify at least the SourceEnvironmentID or
-        the SourceEnvironmentName. You may also specify both. If you
-        specify the SourceEnvironmentId, you must specify the
-        DestinationEnvironmentId.
+            Condition: You must specify at least the SourceEnvironmentID or the
+            SourceEnvironmentName. You may also specify both. If you specify
+            the SourceEnvironmentId, you must specify the
+            DestinationEnvironmentId.
 
         :type source_environment_name: string
-        :param source_environment_name: The name of the source
-        environment.  Condition: You must specify at least the
-        SourceEnvironmentID or the SourceEnvironmentName. You may also
-        specify both. If you specify the SourceEnvironmentName, you must
-        specify the DestinationEnvironmentName.
+        :param source_environment_name: The name of the source environment.
+            Condition: You must specify at least the SourceEnvironmentID or the
+            SourceEnvironmentName. You may also specify both. If you specify
+            the SourceEnvironmentName, you must specify the
+            DestinationEnvironmentName.
 
         :type destination_environment_id: string
         :param destination_environment_id: The ID of the destination
-        environment.  Condition: You must specify at least the
-        DestinationEnvironmentID or the DestinationEnvironmentName. You
-        may also specify both. You must specify the SourceEnvironmentId
-        with the DestinationEnvironmentId.
+            environment.  Condition: You must specify at least the
+            DestinationEnvironmentID or the DestinationEnvironmentName. You may
+            also specify both. You must specify the SourceEnvironmentId with
+            the DestinationEnvironmentId.
 
         :type destination_environment_name: string
         :param destination_environment_name: The name of the destination
-        environment.  Condition: You must specify at least the
-        DestinationEnvironmentID or the DestinationEnvironmentName. You
-        may also specify both. You must specify the
-        SourceEnvironmentName with the DestinationEnvironmentName.
+            environment.  Condition: You must specify at least the
+            DestinationEnvironmentID or the DestinationEnvironmentName. You may
+            also specify both. You must specify the SourceEnvironmentName with
+            the DestinationEnvironmentName.
         """
         params = {}
         if source_environment_id:
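
[Editor's note: swap_environment_cnames is the blue/green primitive; it exchanges the CNAMEs of two running environments. As the docstring above notes, an ID must be paired with an ID and a name with a name. A sketch with placeholder environment names:

    from boto.beanstalk.layer1 import Layer1

    conn = Layer1()
    conn.swap_environment_cnames(source_environment_name='my-env-blue',
                                 destination_environment_name='my-env-green')
]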
@@ -907,25 +898,25 @@
 
         :type environment_id: string
         :param environment_id: The ID of the environment to terminate.
-        Condition: You must specify either this or an EnvironmentName,
-        or both. If you do not specify either, AWS Elastic Beanstalk
-        returns MissingRequiredParameter error.
+            Condition: You must specify either this or an EnvironmentName, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :type environment_name: string
-        :param environment_name: The name of the environment to
-        terminate. Condition: You must specify either this or an
-        EnvironmentId, or both. If you do not specify either, AWS
-        Elastic Beanstalk returns MissingRequiredParameter error.
+        :param environment_name: The name of the environment to terminate.
+            Condition: You must specify either this or an EnvironmentId, or
+            both.  If you do not specify either, AWS Elastic Beanstalk returns
+            MissingRequiredParameter error.
 
         :type terminate_resources: boolean
         :param terminate_resources: Indicates whether the associated AWS
-        resources should shut down when the environment is terminated:
-        true: (default) The user AWS resources (for example, the Auto
-        Scaling group, LoadBalancer, etc.) are terminated along with the
-        environment.  false: The environment is removed from the AWS
-        Elastic Beanstalk but the AWS resources continue to operate.
-        For more information, see the  AWS Elastic Beanstalk User Guide.
-        Default: true  Valid Values: true | false
+            resources should shut down when the environment is terminated:
+            true: (default) The user AWS resources (for example, the Auto
+            Scaling group, LoadBalancer, etc.) are terminated along with the
+            environment.  false: The environment is removed from the AWS
+            Elastic Beanstalk but the AWS resources continue to operate.  For
+            more information, see the  AWS Elastic Beanstalk User Guide.
+            Default: true  Valid Values: true | false
 
         :raises: InsufficientPrivilegesException
         """
@@ -946,13 +937,13 @@
 
         :type application_name: string
         :param application_name: The name of the application to update.
-        If no such application is found, UpdateApplication returns an
-        InvalidParameterValue error.
+            If no such application is found, UpdateApplication returns an
+            InvalidParameterValue error.
 
         :type description: string
-        :param description: A new description for the application.
-        Default: If not specified, AWS Elastic Beanstalk does not update
-        the description.
+        :param description: A new description for the application.  Default: If
+            not specified, AWS Elastic Beanstalk does not update the
+            description.
         """
         params = {'ApplicationName': application_name}
         if description:
@@ -964,14 +955,14 @@
         """Updates the application version to have the properties.
 
         :type application_name: string
-        :param application_name: The name of the application associated
-        with this version.  If no application is found with this name,
-        UpdateApplication returns an InvalidParameterValue error.
+        :param application_name: The name of the application associated with
+            this version.  If no application is found with this name,
+            UpdateApplication returns an InvalidParameterValue error.
 
         :type version_label: string
         :param version_label: The name of the version to update. If no
-        application version is found with this label, UpdateApplication
-        returns an InvalidParameterValue error.
+            application version is found with this label, UpdateApplication
+            returns an InvalidParameterValue error.
 
         :type description: string
         :param description: A new description for this release.
@@ -990,28 +981,27 @@
         specified properties or configuration option values.
 
         :type application_name: string
-        :param application_name: The name of the application associated
-        with the configuration template to update. If no application is
-        found with this name, UpdateConfigurationTemplate returns an
-        InvalidParameterValue error.
+        :param application_name: The name of the application associated with
+            the configuration template to update. If no application is found
+            with this name, UpdateConfigurationTemplate returns an
+            InvalidParameterValue error.
 
         :type template_name: string
-        :param template_name: The name of the configuration template to
-        update. If no configuration template is found with this name,
-        UpdateConfigurationTemplate returns an InvalidParameterValue
-        error.
+        :param template_name: The name of the configuration template to update.
+            If no configuration template is found with this name,
+            UpdateConfigurationTemplate returns an InvalidParameterValue error.
 
         :type description: string
         :param description: A new description for the configuration.
 
         :type option_settings: list
-        :param option_settings: A list of configuration option settings
-        to update with the new specified option value.
+        :param option_settings: A list of configuration option settings to
+            update with the new specified option value.
 
         :type options_to_remove: list
-        :param options_to_remove: A list of configuration options to
-        remove from the configuration set.  Constraint: You can remove
-        only UserDefined configuration options.
+        :param options_to_remove: A list of configuration options to remove
+            from the configuration set.  Constraint: You can remove only
+            UserDefined configuration options.
 
         :raises: InsufficientPrivilegesException
         """
@@ -1045,47 +1035,43 @@
         setting descriptions with different DeploymentStatus values.
 
         :type environment_id: string
-        :param environment_id: The ID of the environment to update. If
-        no environment with this ID exists, AWS Elastic Beanstalk
-        returns an InvalidParameterValue error.  Condition: You must
-        specify either this or an EnvironmentName, or both. If you do
-        not specify either, AWS Elastic Beanstalk returns
-        MissingRequiredParameter error.
+        :param environment_id: The ID of the environment to update. If no
+            environment with this ID exists, AWS Elastic Beanstalk returns an
+            InvalidParameterValue error.  Condition: You must specify either
+            this or an EnvironmentName, or both. If you do not specify either,
+            AWS Elastic Beanstalk returns MissingRequiredParameter error.
 
         :type environment_name: string
-        :param environment_name: The name of the environment to update.
-        If no environment with this name exists, AWS Elastic Beanstalk
-        returns an InvalidParameterValue error.  Condition: You must
-        specify either this or an EnvironmentId, or both. If you do not
-        specify either, AWS Elastic Beanstalk returns
-        MissingRequiredParameter error.
+        :param environment_name: The name of the environment to update.  If no
+            environment with this name exists, AWS Elastic Beanstalk returns an
+            InvalidParameterValue error.  Condition: You must specify either
+            this or an EnvironmentId, or both. If you do not specify either,
+            AWS Elastic Beanstalk returns MissingRequiredParameter error.
 
         :type version_label: string
-        :param version_label: If this parameter is specified, AWS
-        Elastic Beanstalk deploys the named application version to the
-        environment. If no such application version is found, returns an
-        InvalidParameterValue error.
+        :param version_label: If this parameter is specified, AWS Elastic
+            Beanstalk deploys the named application version to the environment.
+            If no such application version is found, returns an
+            InvalidParameterValue error.
 
         :type template_name: string
-        :param template_name: If this parameter is specified, AWS
-        Elastic Beanstalk deploys this configuration template to the
-        environment. If no such configuration template is found, AWS
-        Elastic Beanstalk returns an InvalidParameterValue error.
+        :param template_name: If this parameter is specified, AWS Elastic
+            Beanstalk deploys this configuration template to the environment.
+            If no such configuration template is found, AWS Elastic Beanstalk
+            returns an InvalidParameterValue error.
 
         :type description: string
         :param description: If this parameter is specified, AWS Elastic
-        Beanstalk updates the description of this environment.
+            Beanstalk updates the description of this environment.
 
         :type option_settings: list
-        :param option_settings: If specified, AWS Elastic Beanstalk
-        updates the configuration set associated with the running
-        environment and sets the specified configuration options to the
-        requested value.
+        :param option_settings: If specified, AWS Elastic Beanstalk updates the
+            configuration set associated with the running environment and sets
+            the specified configuration options to the requested value.
 
         :type options_to_remove: list
-        :param options_to_remove: A list of custom user-defined
-        configuration options to remove from the configuration set for
-        this environment.
+        :param options_to_remove: A list of custom user-defined configuration
+            options to remove from the configuration set for this environment.
 
         :raises: InsufficientPrivilegesException
         """
@@ -1121,21 +1107,21 @@
 
         :type application_name: string
         :param application_name: The name of the application that the
-        configuration template or environment belongs to.
+            configuration template or environment belongs to.
 
         :type template_name: string
         :param template_name: The name of the configuration template to
-        validate the settings against.  Condition: You cannot specify
-        both this and an environment name.
+            validate the settings against.  Condition: You cannot specify both
+            this and an environment name.
 
         :type environment_name: string
-        :param environment_name: The name of the environment to validate
-        the settings against.  Condition: You cannot specify both this
-        and a configuration template name.
+        :param environment_name: The name of the environment to validate the
+            settings against.  Condition: You cannot specify both this and a
+            configuration template name.
 
         :type option_settings: list
-        :param option_settings: A list of the options and desired values
-        to evaluate.
+        :param option_settings: A list of the options and desired values to
+            evaluate.
 
         :raises: InsufficientPrivilegesException
         """
diff --git a/boto/beanstalk/response.py b/boto/beanstalk/response.py
index 22bc102..2d071bc 100644
--- a/boto/beanstalk/response.py
+++ b/boto/beanstalk/response.py
@@ -175,7 +175,7 @@
 
 class EnvironmentInfoDescription(BaseObject):
     def __init__(self, response):
-        EnvironmentInfoDescription(Response, self).__init__()
+        super(EnvironmentInfoDescription, self).__init__()
 
         self.ec2_instance_id = str(response['Ec2InstanceId'])
         self.info_type = str(response['InfoType'])
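
[Editor's note: the one-line fix above is worth unpacking. The old code called the class itself with a stray, undefined Response name, which raised a NameError before the base initializer ever ran; super() delegates to the parent class correctly. A minimal, self-contained illustration; BaseObject is stubbed here and is only an assumption about the real base class:

    class BaseObject(object):
        def __init__(self):
            self._loaded = True  # stand-in for the real base initializer

    class EnvironmentInfoDescription(BaseObject):
        def __init__(self, response):
            # The buggy version read:
            #     EnvironmentInfoDescription(Response, self).__init__()
            # which fails with NameError: 'Response' is not defined.
            super(EnvironmentInfoDescription, self).__init__()
            self.ec2_instance_id = str(response['Ec2InstanceId'])

    info = EnvironmentInfoDescription({'Ec2InstanceId': 'i-12345678'})
]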
diff --git a/boto/cacerts/cacerts.txt b/boto/cacerts/cacerts.txt
index e65f21d..f6e0ee6 100644
--- a/boto/cacerts/cacerts.txt
+++ b/boto/cacerts/cacerts.txt
@@ -631,3 +631,1567 @@
 95K+8cPV1ZVqBLssziY2ZcgxxufuP+NXdYR6Ee9GTxj005i7qIcyunL2POI9n9cd
 2cNgQ4xYDiKWL2KjLB+6rQXvqzJ4h6BUcxm1XAX5Uj5tLUUL9wqT6u0G+bI=
 -----END CERTIFICATE-----
+
+GTE CyberTrust Global Root
+==========================
+
+-----BEGIN CERTIFICATE-----
+MIICWjCCAcMCAgGlMA0GCSqGSIb3DQEBBAUAMHUxCzAJBgNVBAYTAlVTMRgwFgYD
+VQQKEw9HVEUgQ29ycG9yYXRpb24xJzAlBgNVBAsTHkdURSBDeWJlclRydXN0IFNv
+bHV0aW9ucywgSW5jLjEjMCEGA1UEAxMaR1RFIEN5YmVyVHJ1c3QgR2xvYmFsIFJv
+b3QwHhcNOTgwODEzMDAyOTAwWhcNMTgwODEzMjM1OTAwWjB1MQswCQYDVQQGEwJV
+UzEYMBYGA1UEChMPR1RFIENvcnBvcmF0aW9uMScwJQYDVQQLEx5HVEUgQ3liZXJU
+cnVzdCBTb2x1dGlvbnMsIEluYy4xIzAhBgNVBAMTGkdURSBDeWJlclRydXN0IEds
+b2JhbCBSb290MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCVD6C28FCc6HrH
+iM3dFw4usJTQGz0O9pTAipTHBsiQl8i4ZBp6fmw8U+E3KHNgf7KXUwefU/ltWJTS
+r41tiGeA5u2ylc9yMcqlHHK6XALnZELn+aks1joNrI1CqiQBOeacPwGFVw1Yh0X4
+04Wqk2kmhXBIgD8SFcd5tB8FLztimQIDAQABMA0GCSqGSIb3DQEBBAUAA4GBAG3r
+GwnpXtlR22ciYaQqPEh346B8pt5zohQDhT37qw4wxYMWM4ETCJ57NE7fQMh017l9
+3PR2VX2bY1QY6fDq81yx2YtCHrnAlU66+tXifPVoYb+O7AWXX1uw16OFNMQkpw0P
+lZPvy5TYnh+dXIVtx6quTx8itc2VrbqnzPmrC3p/
+-----END CERTIFICATE-----
+
+GlobalSign Root CA
+==================
+
+-----BEGIN CERTIFICATE-----
+MIIDdTCCAl2gAwIBAgILBAAAAAABFUtaw5QwDQYJKoZIhvcNAQEFBQAwVzELMAkG
+A1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYtc2ExEDAOBgNVBAsTB1Jv
+b3QgQ0ExGzAZBgNVBAMTEkdsb2JhbFNpZ24gUm9vdCBDQTAeFw05ODA5MDExMjAw
+MDBaFw0yODAxMjgxMjAwMDBaMFcxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9i
+YWxTaWduIG52LXNhMRAwDgYDVQQLEwdSb290IENBMRswGQYDVQQDExJHbG9iYWxT
+aWduIFJvb3QgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDaDuaZ
+jc6j40+Kfvvxi4Mla+pIH/EqsLmVEQS98GPR4mdmzxzdzxtIK+6NiY6arymAZavp
+xy0Sy6scTHAHoT0KMM0VjU/43dSMUBUc71DuxC73/OlS8pF94G3VNTCOXkNz8kHp
+1Wrjsok6Vjk4bwY8iGlbKk3Fp1S4bInMm/k8yuX9ifUSPJJ4ltbcdG6TRGHRjcdG
+snUOhugZitVtbNV4FpWi6cgKOOvyJBNPc1STE4U6G7weNLWLBYy5d4ux2x8gkasJ
+U26Qzns3dLlwR5EiUWMWea6xrkEmCMgZK9FGqkjWZCrXgzT/LCrBbBlDSgeF59N8
+9iFo7+ryUp9/k5DPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8E
+BTADAQH/MB0GA1UdDgQWBBRge2YaRQ2XyolQL30EzTSo//z9SzANBgkqhkiG9w0B
+AQUFAAOCAQEA1nPnfE920I2/7LqivjTFKDK1fPxsnCwrvQmeU79rXqoRSLblCKOz
+yj1hTdNGCbM+w6DjY1Ub8rrvrTnhQ7k4o+YviiY776BQVvnGCv04zcQLcFGUl5gE
+38NflNUVyRRBnMRddWQVDf9VMOyGj/8N7yy5Y0b2qvzfvGn9LhJIZJrglfCm7ymP
+AbEVtQwdpf5pLGkkeB6zpxxxYu7KyJesF12KwvhHhm4qxFYxldBniYUr+WymXUad
+DKqC5JlR3XC321Y9YeRq4VzW9v493kHMB65jUr9TU/Qr6cf9tveCX4XSQRjbgbME
+HMUfpIBvFSDJ3gyICh3WZlXi/EjJKSZp4A==
+-----END CERTIFICATE-----
+
+GlobalSign Root CA - R2
+=======================
+
+-----BEGIN CERTIFICATE-----
+MIIDujCCAqKgAwIBAgILBAAAAAABD4Ym5g0wDQYJKoZIhvcNAQEFBQAwTDEgMB4G
+A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjIxEzARBgNVBAoTCkdsb2JhbFNp
+Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDYxMjE1MDgwMDAwWhcNMjExMjE1
+MDgwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMjETMBEG
+A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI
+hvcNAQEBBQADggEPADCCAQoCggEBAKbPJA6+Lm8omUVCxKs+IVSbC9N/hHD6ErPL
+v4dfxn+G07IwXNb9rfF73OX4YJYJkhD10FPe+3t+c4isUoh7SqbKSaZeqKeMWhG8
+eoLrvozps6yWJQeXSpkqBy+0Hne/ig+1AnwblrjFuTosvNYSuetZfeLQBoZfXklq
+tTleiDTsvHgMCJiEbKjNS7SgfQx5TfC4LcshytVsW33hoCmEofnTlEnLJGKRILzd
+C9XZzPnqJworc5HGnRusyMvo4KD0L5CLTfuwNhv2GXqF4G3yYROIXJ/gkwpRl4pa
+zq+r1feqCapgvdzZX99yqWATXgAByUr6P6TqBwMhAo6CygPCm48CAwEAAaOBnDCB
+mTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUm+IH
+V2ccHsBqBt5ZtJot39wZhi4wNgYDVR0fBC8wLTAroCmgJ4YlaHR0cDovL2NybC5n
+bG9iYWxzaWduLm5ldC9yb290LXIyLmNybDAfBgNVHSMEGDAWgBSb4gdXZxwewGoG
+3lm0mi3f3BmGLjANBgkqhkiG9w0BAQUFAAOCAQEAmYFThxxol4aR7OBKuEQLq4Gs
+J0/WwbgcQ3izDJr86iw8bmEbTUsp9Z8FHSbBuOmDAGJFtqkIk7mpM0sYmsL4h4hO
+291xNBrBVNpGP+DTKqttVCL1OmLNIG+6KYnX3ZHu01yiPqFbQfXf5WRDLenVOavS
+ot+3i9DAgBkcRcAtjOj4LaR0VknFBbVPFd5uRHg5h6h+u/N5GJG79G+dwfCMNYxd
+AfvDbbnvRG15RjF+Cv6pgsH/76tuIMRQyV+dTZsXjAzlAcmgQWpzU/qlULRuJQ/7
+TBj0/VLZjmmx6BEP3ojY+x1J96relc8geMJgEtslQIxq/H5COEBkEveegeGTLg==
+-----END CERTIFICATE-----
+
+ValiCert Class 1 VA
+===================
+
+-----BEGIN CERTIFICATE-----
+MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0
+IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAz
+BgNVBAsTLFZhbGlDZXJ0IENsYXNzIDEgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9y
+aXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG
+9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTk5MDYyNTIyMjM0OFoXDTE5MDYy
+NTIyMjM0OFowgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29y
+azEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENs
+YXNzIDEgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRw
+Oi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNl
+cnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDYWYJ6ibiWuqYvaG9Y
+LqdUHAZu9OqNSLwxlBfw8068srg1knaw0KWlAdcAAxIiGQj4/xEjm84H9b9pGib+
+TunRf50sQB1ZaG6m+FiwnRqP0z/x3BkGgagO4DrdyFNFCQbmD3DD+kCmDuJWBQ8Y
+TfwggtFzVXSNdnKgHZ0dwN0/cQIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAFBoPUn0
+LBwGlN+VYH+Wexf+T3GtZMjdd9LvWVXoP+iOBSoh8gfStadS/pyxtuJbdxdA6nLW
+I8sogTLDAHkY7FkXicnGah5xyf23dKUlRWnFSKsZ4UWKJWsZ7uW7EvV/96aNUcPw
+nXS3qT6gpf+2SQMT2iLM7XGCK5nPOrf1LXLI
+-----END CERTIFICATE-----
+
+ValiCert Class 2 VA
+===================
+
+-----BEGIN CERTIFICATE-----
+MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0
+IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAz
+BgNVBAsTLFZhbGlDZXJ0IENsYXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9y
+aXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG
+9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTk5MDYyNjAwMTk1NFoXDTE5MDYy
+NjAwMTk1NFowgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29y
+azEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENs
+YXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRw
+Oi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNl
+cnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDOOnHK5avIWZJV16vY
+dA757tn2VUdZZUcOBVXc65g2PFxTXdMwzzjsvUGJ7SVCCSRrCl6zfN1SLUzm1NZ9
+WlmpZdRJEy0kTRxQb7XBhVQ7/nHk01xC+YDgkRoKWzk2Z/M/VXwbP7RfZHM047QS
+v4dk+NoS/zcnwbNDu+97bi5p9wIDAQABMA0GCSqGSIb3DQEBBQUAA4GBADt/UG9v
+UJSZSWI4OB9L+KXIPqeCgfYrx+jFzug6EILLGACOTb2oWH+heQC1u+mNr0HZDzTu
+IYEZoDJJKPTEjlbVUjP9UNV+mWwD5MlM/Mtsq2azSiGM5bUMMj4QssxsodyamEwC
+W/POuZ6lcg5Ktz885hZo+L7tdEy8W9ViH0Pd
+-----END CERTIFICATE-----
+
+RSA Root Certificate 1
+======================
+
+-----BEGIN CERTIFICATE-----
+MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0
+IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAz
+BgNVBAsTLFZhbGlDZXJ0IENsYXNzIDMgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9y
+aXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG
+9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTk5MDYyNjAwMjIzM1oXDTE5MDYy
+NjAwMjIzM1owgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29y
+azEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENs
+YXNzIDMgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRw
+Oi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNl
+cnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDjmFGWHOjVsQaBalfD
+cnWTq8+epvzzFlLWLU2fNUSoLgRNB0mKOCn1dzfnt6td3zZxFJmP3MKS8edgkpfs
+2Ejcv8ECIMYkpChMMFp2bbFc893enhBxoYjHW5tBbcqwuI4V7q0zK89HBFx1cQqY
+JJgpp0lZpd34t0NiYfPT4tBVPwIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAFa7AliE
+Zwgs3x/be0kz9dNnnfS0ChCzycUs4pJqcXgn8nCDQtM+z6lU9PHYkhaM0QTLS6vJ
+n0WuPIqpsHEzXcjFV9+vqDWzf4mH6eglkrh/hXqu1rweN1gqZ8mRzyqBPu3GOd/A
+PhmcGcwTTYJBtYze4D1gCCAPRX5ron+jjBXu
+-----END CERTIFICATE-----
+
+Entrust.net Premium 2048 Secure Server CA
+=========================================
+
+-----BEGIN CERTIFICATE-----
+MIIEXDCCA0SgAwIBAgIEOGO5ZjANBgkqhkiG9w0BAQUFADCBtDEUMBIGA1UEChML
+RW50cnVzdC5uZXQxQDA+BgNVBAsUN3d3dy5lbnRydXN0Lm5ldC9DUFNfMjA0OCBp
+bmNvcnAuIGJ5IHJlZi4gKGxpbWl0cyBsaWFiLikxJTAjBgNVBAsTHChjKSAxOTk5
+IEVudHJ1c3QubmV0IExpbWl0ZWQxMzAxBgNVBAMTKkVudHJ1c3QubmV0IENlcnRp
+ZmljYXRpb24gQXV0aG9yaXR5ICgyMDQ4KTAeFw05OTEyMjQxNzUwNTFaFw0xOTEy
+MjQxODIwNTFaMIG0MRQwEgYDVQQKEwtFbnRydXN0Lm5ldDFAMD4GA1UECxQ3d3d3
+LmVudHJ1c3QubmV0L0NQU18yMDQ4IGluY29ycC4gYnkgcmVmLiAobGltaXRzIGxp
+YWIuKTElMCMGA1UECxMcKGMpIDE5OTkgRW50cnVzdC5uZXQgTGltaXRlZDEzMDEG
+A1UEAxMqRW50cnVzdC5uZXQgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgKDIwNDgp
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArU1LqRKGsuqjIAcVFmQq
+K0vRvwtKTY7tgHalZ7d4QMBzQshowNtTK91euHaYNZOLGp18EzoOH1u3Hs/lJBQe
+sYGpjX24zGtLA/ECDNyrpUAkAH90lKGdCCmziAv1h3edVc3kw37XamSrhRSGlVuX
+MlBvPci6Zgzj/L24ScF2iUkZ/cCovYmjZy/Gn7xxGWC4LeksyZB2ZnuU4q941mVT
+XTzWnLLPKQP5L6RQstRIzgUyVYr9smRMDuSYB3Xbf9+5CFVghTAp+XtIpGmG4zU/
+HoZdenoVve8AjhUiVBcAkCaTvA5JaJG/+EfTnZVCwQ5N328mz8MYIWJmQ3DW1cAH
+4QIDAQABo3QwcjARBglghkgBhvhCAQEEBAMCAAcwHwYDVR0jBBgwFoAUVeSB0RGA
+vtiJuQijMfmhJAkWuXAwHQYDVR0OBBYEFFXkgdERgL7YibkIozH5oSQJFrlwMB0G
+CSqGSIb2fQdBAAQQMA4bCFY1LjA6NC4wAwIEkDANBgkqhkiG9w0BAQUFAAOCAQEA
+WUesIYSKF8mciVMeuoCFGsY8Tj6xnLZ8xpJdGGQC49MGCBFhfGPjK50xA3B20qMo
+oPS7mmNz7W3lKtvtFKkrxjYR0CvrB4ul2p5cGZ1WEvVUKcgF7bISKo30Axv/55IQ
+h7A6tcOdBTcSo8f0FbnVpDkWm1M6I5HxqIKiaohowXkCIryqptau37AUX7iH0N18
+f3v/rxzP5tsHrV7bhZ3QKw0z2wTR5klAEyt2+z7pnIkPFc4YsIV4IU9rTw76NmfN
+B/L/CNDi3tm/Kq+4h4YhPATKt5Rof8886ZjXOP/swNlQ8C5LWK5Gb9Auw2DaclVy
+vUxFnmG6v4SBkgPR0ml8xQ==
+-----END CERTIFICATE-----
+
+Baltimore CyberTrust Root
+=========================
+
+-----BEGIN CERTIFICATE-----
+MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJ
+RTESMBAGA1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJlclRydXN0MSIwIAYD
+VQQDExlCYWx0aW1vcmUgQ3liZXJUcnVzdCBSb290MB4XDTAwMDUxMjE4NDYwMFoX
+DTI1MDUxMjIzNTkwMFowWjELMAkGA1UEBhMCSUUxEjAQBgNVBAoTCUJhbHRpbW9y
+ZTETMBEGA1UECxMKQ3liZXJUcnVzdDEiMCAGA1UEAxMZQmFsdGltb3JlIEN5YmVy
+VHJ1c3QgUm9vdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKMEuyKr
+mD1X6CZymrV51Cni4eiVgLGw41uOKymaZN+hXe2wCQVt2yguzmKiYv60iNoS6zjr
+IZ3AQSsBUnuId9Mcj8e6uYi1agnnc+gRQKfRzMpijS3ljwumUNKoUMMo6vWrJYeK
+mpYcqWe4PwzV9/lSEy/CG9VwcPCPwBLKBsua4dnKM3p31vjsufFoREJIE9LAwqSu
+XmD+tqYF/LTdB1kC1FkYmGP1pWPgkAx9XbIGevOF6uvUA65ehD5f/xXtabz5OTZy
+dc93Uk3zyZAsuT3lySNTPx8kmCFcB5kpvcY67Oduhjprl3RjM71oGDHweI12v/ye
+jl0qhqdNkNwnGjkCAwEAAaNFMEMwHQYDVR0OBBYEFOWdWTCCR1jMrPoIVDaGezq1
+BE3wMBIGA1UdEwEB/wQIMAYBAf8CAQMwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3
+DQEBBQUAA4IBAQCFDF2O5G9RaEIFoN27TyclhAO992T9Ldcw46QQF+vaKSm2eT92
+9hkTI7gQCvlYpNRhcL0EYWoSihfVCr3FvDB81ukMJY2GQE/szKN+OMY3EU/t3Wgx
+jkzSswF07r51XgdIGn9w/xZchMB5hbgF/X++ZRGjD8ACtPhSNzkE1akxehi/oCr0
+Epn3o0WC4zxe9Z2etciefC7IpJ5OCBRLbf1wbWsaY71k5h+3zvDyny67G7fyUIhz
+ksLi4xaNmjICq44Y3ekQEe5+NauQrz4wlHrQMz2nZQ/1/I6eYs9HRCwBXbsdtTLS
+R9I4LtD+gdwyah617jzV/OeBHRnDJELqYzmp
+-----END CERTIFICATE-----
+
+AddTrust Low-Value Services Root
+================================
+
+-----BEGIN CERTIFICATE-----
+MIIEGDCCAwCgAwIBAgIBATANBgkqhkiG9w0BAQUFADBlMQswCQYDVQQGEwJTRTEU
+MBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3
+b3JrMSEwHwYDVQQDExhBZGRUcnVzdCBDbGFzcyAxIENBIFJvb3QwHhcNMDAwNTMw
+MTAzODMxWhcNMjAwNTMwMTAzODMxWjBlMQswCQYDVQQGEwJTRTEUMBIGA1UEChML
+QWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3b3JrMSEwHwYD
+VQQDExhBZGRUcnVzdCBDbGFzcyAxIENBIFJvb3QwggEiMA0GCSqGSIb3DQEBAQUA
+A4IBDwAwggEKAoIBAQCWltQhSWDia+hBBwzexODcEyPNwTXH+9ZOEQpnXvUGW2ul
+CDtbKRY654eyNAbFvAWlA3yCyykQruGIgb3WntP+LVbBFc7jJp0VLhD7Bo8wBN6n
+tGO0/7Gcrjyvd7ZWxbWroulpOj0OM3kyP3CCkplhbY0wCI9xP6ZIVxn4JdxLZlyl
+dI+Yrsj5wAYi56xz36Uu+1LcsRVlIPo1Zmne3yzxbrww2ywkEtvrNTVokMsAsJch
+PXQhI2U0K7t4WaPW4XY5mqRJjox0r26kmqPZm9I4XJuiGMx1I4S+6+JNM3GOGvDC
++Mcdoq0Dlyz4zyXG9rgkMbFjXZJ/Y/AlyVMuH79NAgMBAAGjgdIwgc8wHQYDVR0O
+BBYEFJWxtPCUtr3H2tERCSG+wa9J/RB7MAsGA1UdDwQEAwIBBjAPBgNVHRMBAf8E
+BTADAQH/MIGPBgNVHSMEgYcwgYSAFJWxtPCUtr3H2tERCSG+wa9J/RB7oWmkZzBl
+MQswCQYDVQQGEwJTRTEUMBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFk
+ZFRydXN0IFRUUCBOZXR3b3JrMSEwHwYDVQQDExhBZGRUcnVzdCBDbGFzcyAxIENB
+IFJvb3SCAQEwDQYJKoZIhvcNAQEFBQADggEBACxtZBsfzQ3duQH6lmM0MkhHma6X
+7f1yFqZzR1r0693p9db7RcwpiURdv0Y5PejuvE1Uhh4dbOMXJ0PhiVYrqW9yTkkz
+43J8KiOavD7/KCrto/8cI7pDVwlnTUtiBi34/2ydYB7YHEt9tTEv2dB8Xfjea4MY
+eDdXL+gzB2ffHsdrKpV2ro9Xo/D0UrSpUwjP4E/TelOL/bscVjby/rK25Xa71SJl
+pz/+0WatC7xrmYbvP33zGDLKe8bjq2RGlfgmadlVg3sslgf/WSxEo8bl6ancoWOA
+WiFeIc9TVPC6b4nbqKqVz4vjccweGyBECMB6tkD9xOQ14R0WHNC8K47Wcdk=
+-----END CERTIFICATE-----
+
+AddTrust External Root
+======================
+
+-----BEGIN CERTIFICATE-----
+MIIENjCCAx6gAwIBAgIBATANBgkqhkiG9w0BAQUFADBvMQswCQYDVQQGEwJTRTEU
+MBIGA1UEChMLQWRkVHJ1c3QgQUIxJjAkBgNVBAsTHUFkZFRydXN0IEV4dGVybmFs
+IFRUUCBOZXR3b3JrMSIwIAYDVQQDExlBZGRUcnVzdCBFeHRlcm5hbCBDQSBSb290
+MB4XDTAwMDUzMDEwNDgzOFoXDTIwMDUzMDEwNDgzOFowbzELMAkGA1UEBhMCU0Ux
+FDASBgNVBAoTC0FkZFRydXN0IEFCMSYwJAYDVQQLEx1BZGRUcnVzdCBFeHRlcm5h
+bCBUVFAgTmV0d29yazEiMCAGA1UEAxMZQWRkVHJ1c3QgRXh0ZXJuYWwgQ0EgUm9v
+dDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALf3GjPm8gAELTngTlvt
+H7xsD821+iO2zt6bETOXpClMfZOfvUq8k+0DGuOPz+VtUFrWlymUWoCwSXrbLpX9
+uMq/NzgtHj6RQa1wVsfwTz/oMp50ysiQVOnGXw94nZpAPA6sYapeFI+eh6FqUNzX
+mk6vBbOmcZSccbNQYArHE504B4YCqOmoaSYYkKtMsE8jqzpPhNjfzp/haW+710LX
+a0Tkx63ubUFfclpxCDezeWWkWaCUN/cALw3CknLa0Dhy2xSoRcRdKn23tNbE7qzN
+E0S3ySvdQwAl+mG5aWpYIxG3pzOPVnVZ9c0p10a3CitlttNCbxWyuHv77+ldU9U0
+WicCAwEAAaOB3DCB2TAdBgNVHQ4EFgQUrb2YejS0Jvf6xCZU7wO94CTLVBowCwYD
+VR0PBAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wgZkGA1UdIwSBkTCBjoAUrb2YejS0
+Jvf6xCZU7wO94CTLVBqhc6RxMG8xCzAJBgNVBAYTAlNFMRQwEgYDVQQKEwtBZGRU
+cnVzdCBBQjEmMCQGA1UECxMdQWRkVHJ1c3QgRXh0ZXJuYWwgVFRQIE5ldHdvcmsx
+IjAgBgNVBAMTGUFkZFRydXN0IEV4dGVybmFsIENBIFJvb3SCAQEwDQYJKoZIhvcN
+AQEFBQADggEBALCb4IUlwtYj4g+WBpKdQZic2YR5gdkeWxQHIzZlj7DYd7usQWxH
+YINRsPkyPef89iYTx4AWpb9a/IfPeHmJIZriTAcKhjW88t5RxNKWt9x+Tu5w/Rw5
+6wwCURQtjr0W4MHfRnXnJK3s9EK0hZNwEGe6nQY1ShjTK3rMUUKhemPR5ruhxSvC
+Nr4TDea9Y355e6cJDUCrat2PisP29owaQgVR1EX1n6diIWgVIEM8med8vSTYqZEX
+c4g/VhsxOBi0cQ+azcgOno4uG+GMmIPLHzHxREzGBHNJdmAPx/i9F4BrLunMTA5a
+mnkPIAou1Z5jJh5VkpTYghdae9C8x49OhgQ=
+-----END CERTIFICATE-----
+
+AddTrust Public Services Root
+=============================
+
+-----BEGIN CERTIFICATE-----
+MIIEFTCCAv2gAwIBAgIBATANBgkqhkiG9w0BAQUFADBkMQswCQYDVQQGEwJTRTEU
+MBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3
+b3JrMSAwHgYDVQQDExdBZGRUcnVzdCBQdWJsaWMgQ0EgUm9vdDAeFw0wMDA1MzAx
+MDQxNTBaFw0yMDA1MzAxMDQxNTBaMGQxCzAJBgNVBAYTAlNFMRQwEgYDVQQKEwtB
+ZGRUcnVzdCBBQjEdMBsGA1UECxMUQWRkVHJ1c3QgVFRQIE5ldHdvcmsxIDAeBgNV
+BAMTF0FkZFRydXN0IFB1YmxpYyBDQSBSb290MIIBIjANBgkqhkiG9w0BAQEFAAOC
+AQ8AMIIBCgKCAQEA6Rowj4OIFMEg2Dybjxt+A3S72mnTRqX4jsIMEZBRpS9mVEBV
+6tsfSlbunyNu9DnLoblv8n75XYcmYZ4c+OLspoH4IcUkzBEMP9smcnrHAZcHF/nX
+GCwwfQ56HmIexkvA/X1id9NEHif2P0tEs7c42TkfYNVRknMDtABp4/MUTu7R3AnP
+dzRGULD4EfL+OHn3Bzn+UZKXC1sIXzSGAa2Il+tmzV7R/9x98oTaunet3IAIx6eH
+1lWfl2royBFkuucZKT8Rs3iQhCBSWxHveNCD9tVIkNAwHM+A+WD+eeSI8t0A65RF
+62WUaUC6wNW0uLp9BBGo6zEFlpROWCGOn9Bg/QIDAQABo4HRMIHOMB0GA1UdDgQW
+BBSBPjfYkrAfd59ctKtzquf2NGAv+jALBgNVHQ8EBAMCAQYwDwYDVR0TAQH/BAUw
+AwEB/zCBjgYDVR0jBIGGMIGDgBSBPjfYkrAfd59ctKtzquf2NGAv+qFopGYwZDEL
+MAkGA1UEBhMCU0UxFDASBgNVBAoTC0FkZFRydXN0IEFCMR0wGwYDVQQLExRBZGRU
+cnVzdCBUVFAgTmV0d29yazEgMB4GA1UEAxMXQWRkVHJ1c3QgUHVibGljIENBIFJv
+b3SCAQEwDQYJKoZIhvcNAQEFBQADggEBAAP3FUr4JNojVhaTdt02KLmuG7jD8WS6
+IBh4lSknVwW8fCr0uVFV2ocC3g8WFzH4qnkuCRO7r7IgGRLlk/lL+YPoRNWyQSW/
+iHVv/xD8SlTQX/D67zZzfRs2RcYhbbQVuE7PnFylPVoAjgbjPGsye/Kf8Lb93/Ao
+GEjwxrzQvzSAlsJKsW2Ox5BF3i9nrEUEo3rcVZLJR2bYGozH7ZxOmuASu7VqTITh
+4SINhwBk/ox9Yjllpu9CtoAlEmEBqCQTcAARJl/6NVDFSMwGR+gn2HCNX2TmoUQm
+XiLsks3/QppEIW1cxeMiHV9HEufOX1362KqxMy3ZdvJOOjMMK7MtkAY=
+-----END CERTIFICATE-----
+
+AddTrust Qualified Certificates Root
+====================================
+
+-----BEGIN CERTIFICATE-----
+MIIEHjCCAwagAwIBAgIBATANBgkqhkiG9w0BAQUFADBnMQswCQYDVQQGEwJTRTEU
+MBIGA1UEChMLQWRkVHJ1c3QgQUIxHTAbBgNVBAsTFEFkZFRydXN0IFRUUCBOZXR3
+b3JrMSMwIQYDVQQDExpBZGRUcnVzdCBRdWFsaWZpZWQgQ0EgUm9vdDAeFw0wMDA1
+MzAxMDQ0NTBaFw0yMDA1MzAxMDQ0NTBaMGcxCzAJBgNVBAYTAlNFMRQwEgYDVQQK
+EwtBZGRUcnVzdCBBQjEdMBsGA1UECxMUQWRkVHJ1c3QgVFRQIE5ldHdvcmsxIzAh
+BgNVBAMTGkFkZFRydXN0IFF1YWxpZmllZCBDQSBSb290MIIBIjANBgkqhkiG9w0B
+AQEFAAOCAQ8AMIIBCgKCAQEA5B6a/twJWoekn0e+EV+vhDTbYjx5eLfpMLXsDBwq
+xBb/4Oxx64r1EW7tTw2R0hIYLUkVAcKkIhPHEWT/IhKauY5cLwjPcWqzZwFZ8V1G
+87B4pfYOQnrjfxvM0PC3KP0q6p6zsLkEqv32x7SxuCqg+1jxGaBvcCV+PmlKfw8i
+2O+tCBGaKZnhqkRFmhJePp1tUvznoD1oL/BLcHwTOK28FSXx1s6rosAx1i+f4P8U
+WfyEk9mHfExUE+uf0S0R+Bg6Ot4l2ffTQO2kBhLEO+GRwVY18BTcZTYJbqukB8c1
+0cIDMzZbdSZtQvESa0NvS3GU+jQd7RNuyoB/mC9suWXY6QIDAQABo4HUMIHRMB0G
+A1UdDgQWBBQ5lYtii1zJ1IC6WA+XPxUIQ8yYpzALBgNVHQ8EBAMCAQYwDwYDVR0T
+AQH/BAUwAwEB/zCBkQYDVR0jBIGJMIGGgBQ5lYtii1zJ1IC6WA+XPxUIQ8yYp6Fr
+pGkwZzELMAkGA1UEBhMCU0UxFDASBgNVBAoTC0FkZFRydXN0IEFCMR0wGwYDVQQL
+ExRBZGRUcnVzdCBUVFAgTmV0d29yazEjMCEGA1UEAxMaQWRkVHJ1c3QgUXVhbGlm
+aWVkIENBIFJvb3SCAQEwDQYJKoZIhvcNAQEFBQADggEBABmrder4i2VhlRO6aQTv
+hsoToMeqT2QbPxj2qC0sVY8FtzDqQmodwCVRLae/DLPt7wh/bDxGGuoYQ992zPlm
+hpwsaPXpF/gxsxjE1kh9I0xowX67ARRvxdlu3rsEQmr49lx95dr6h+sNNVJn0J6X
+dgWTP5XHAeZpVTh/EGGZyeNfpso+gmNIquIISD6q8rKFYqa0p9m9N5xotS1WfbC3
+P6CxB9bpT9zeRXEwMn8bLgn5v1Kh7sKAPgZcLlVAwRv1cEWw3F369nJad9Jjzc9Y
+iQBCYz95OdBEsIJuQRno3eDBiFrRHnGTHyQwdOUeqN48Jzd/g66ed8/wMLH/S5no
+xqE=
+-----END CERTIFICATE-----
+
+Entrust Root Certification Authority
+====================================
+
+-----BEGIN CERTIFICATE-----
+MIIEkTCCA3mgAwIBAgIERWtQVDANBgkqhkiG9w0BAQUFADCBsDELMAkGA1UEBhMC
+VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xOTA3BgNVBAsTMHd3dy5lbnRydXN0
+Lm5ldC9DUFMgaXMgaW5jb3Jwb3JhdGVkIGJ5IHJlZmVyZW5jZTEfMB0GA1UECxMW
+KGMpIDIwMDYgRW50cnVzdCwgSW5jLjEtMCsGA1UEAxMkRW50cnVzdCBSb290IENl
+cnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA2MTEyNzIwMjM0MloXDTI2MTEyNzIw
+NTM0MlowgbAxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMTkw
+NwYDVQQLEzB3d3cuZW50cnVzdC5uZXQvQ1BTIGlzIGluY29ycG9yYXRlZCBieSBy
+ZWZlcmVuY2UxHzAdBgNVBAsTFihjKSAyMDA2IEVudHJ1c3QsIEluYy4xLTArBgNV
+BAMTJEVudHJ1c3QgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASIwDQYJ
+KoZIhvcNAQEBBQADggEPADCCAQoCggEBALaVtkNC+sZtKm9I35RMOVcF7sN5EUFo
+Nu3s/poBj6E4KPz3EEZmLk0eGrEaTsbRwJWIsMn/MYszA9u3g3s+IIRe7bJWKKf4
+4LlAcTfFy0cOlypowCKVYhXbR9n10Cv/gkvJrT7eTNuQgFA/CYqEAOwwCj0Yzfv9
+KlmaI5UXLEWeH25DeW0MXJj+SKfFI0dcXv1u5x609mhF0YaDW6KKjbHjKYD+JXGI
+rb68j6xSlkuqUY3kEzEZ6E5Nn9uss2rVvDlUccp6en+Q3X0dgNmBu1kmwhH+5pPi
+94DkZfs0Nw4pgHBNrziGLp5/V6+eF67rHMsoIV+2HNjnogQi+dPa2MsCAwEAAaOB
+sDCBrTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zArBgNVHRAEJDAi
+gA8yMDA2MTEyNzIwMjM0MlqBDzIwMjYxMTI3MjA1MzQyWjAfBgNVHSMEGDAWgBRo
+kORnpKZTgMeGZqTx90tD+4S9bTAdBgNVHQ4EFgQUaJDkZ6SmU4DHhmak8fdLQ/uE
+vW0wHQYJKoZIhvZ9B0EABBAwDhsIVjcuMTo0LjADAgSQMA0GCSqGSIb3DQEBBQUA
+A4IBAQCT1DCw1wMgKtD5Y+iRDAUgqV8ZyntyTtSx29CW+1RaGSwMCPeyvIWonX9t
+O1KzKtvn1ISMY/YPyyYBkVBs9F8U4pN0wBOeMDpQ47RgxRzwIkSNcUesyBrJ6Zua
+AGAT/3B+XxFNSRuzFVJ7yVTav52Vr2ua2J7p8eRDjeIRRDq/r72DQnNSi6q7pynP
+9WQcCk3RvKqsnyrQ/39/2n3qse0wJcGE2jTSW3iDVuycNsMm4hH2Z0kdkquM++v/
+eu6FSqdQgPCnXEqULl8FmTxSQeDNtGPPAUO6nIPcj2A781q0tHuu2guQOHXvgR1m
+0vdXcDazv/wor3ElhVsT/h5/WrQ8
+-----END CERTIFICATE-----
+
+GeoTrust Global CA
+==================
+
+-----BEGIN CERTIFICATE-----
+MIIDVDCCAjygAwIBAgIDAjRWMA0GCSqGSIb3DQEBBQUAMEIxCzAJBgNVBAYTAlVT
+MRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMRswGQYDVQQDExJHZW9UcnVzdCBHbG9i
+YWwgQ0EwHhcNMDIwNTIxMDQwMDAwWhcNMjIwNTIxMDQwMDAwWjBCMQswCQYDVQQG
+EwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEbMBkGA1UEAxMSR2VvVHJ1c3Qg
+R2xvYmFsIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2swYYzD9
+9BcjGlZ+W988bDjkcbd4kdS8odhM+KhDtgPpTSEHCIjaWC9mOSm9BXiLnTjoBbdq
+fnGk5sRgprDvgOSJKA+eJdbtg/OtppHHmMlCGDUUna2YRpIuT8rxh0PBFpVXLVDv
+iS2Aelet8u5fa9IAjbkU+BQVNdnARqN7csiRv8lVK83Qlz6cJmTM386DGXHKTubU
+1XupGc1V3sjs0l44U+VcT4wt/lAjNvxm5suOpDkZALeVAjmRCw7+OC7RHQWa9k0+
+bw8HHa8sHo9gOeL6NlMTOdReJivbPagUvTLrGAMoUgRx5aszPeE4uwc2hGKceeoW
+MPRfwCvocWvk+QIDAQABo1MwUTAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTA
+ephojYn7qwVkDBF9qn1luMrMTjAfBgNVHSMEGDAWgBTAephojYn7qwVkDBF9qn1l
+uMrMTjANBgkqhkiG9w0BAQUFAAOCAQEANeMpauUvXVSOKVCUn5kaFOSPeCpilKIn
+Z57QzxpeR+nBsqTP3UEaBU6bS+5Kb1VSsyShNwrrZHYqLizz/Tt1kL/6cdjHPTfS
+tQWVYrmm3ok9Nns4d0iXrKYgjy6myQzCsplFAMfOEVEiIuCl6rYVSAlk6l5PdPcF
+PseKUgzbFbS9bZvlxrFUaKnjaZC2mqUPuLk/IH2uSrW4nOQdtqvmlKXBx4Ot2/Un
+hw4EbNX/3aBd7YdStysVAq45pmp06drE57xNNB6pXE0zX5IJL4hmXXeXxx12E6nV
+5fEWCRE11azbJHFwLJhWC9kXtNHjUStedejV0NxPNO3CBWaAocvmMw==
+-----END CERTIFICATE-----
+
+GeoTrust Global CA 2
+====================
+
+-----BEGIN CERTIFICATE-----
+MIIDZjCCAk6gAwIBAgIBATANBgkqhkiG9w0BAQUFADBEMQswCQYDVQQGEwJVUzEW
+MBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEdMBsGA1UEAxMUR2VvVHJ1c3QgR2xvYmFs
+IENBIDIwHhcNMDQwMzA0MDUwMDAwWhcNMTkwMzA0MDUwMDAwWjBEMQswCQYDVQQG
+EwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEdMBsGA1UEAxMUR2VvVHJ1c3Qg
+R2xvYmFsIENBIDIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDvPE1A
+PRDfO1MA4Wf+lGAVPoWI8YkNkMgoI5kF6CsgncbzYEbYwbLVjDHZ3CB5JIG/NTL8
+Y2nbsSpr7iFY8gjpeMtvy/wWUsiRxP89c96xPqfCfWbB9X5SJBri1WeR0IIQ13hL
+TytCOb1kLUCgsBDTOEhGiKEMuzozKmKY+wCdE1l/bztyqu6mD4b5BWHqZ38MN5aL
+5mkWRxHCJ1kDs6ZgwiFAVvqgx306E+PsV8ez1q6diYD3Aecs9pYrEw15LNnA5IZ7
+S4wMcoKK+xfNAGw6EzywhIdLFnopsk/bHdQL82Y3vdj2V7teJHq4PIu5+pIaGoSe
+2HSPqht/XvT+RSIhAgMBAAGjYzBhMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYE
+FHE4NvICMVNHK266ZUapEBVYIAUJMB8GA1UdIwQYMBaAFHE4NvICMVNHK266ZUap
+EBVYIAUJMA4GA1UdDwEB/wQEAwIBhjANBgkqhkiG9w0BAQUFAAOCAQEAA/e1K6td
+EPx7srJerJsOflN4WT5CBP51o62sgU7XAotexC3IUnbHLB/8gTKY0UvGkpMzNTEv
+/NgdRN3ggX+d6YvhZJFiCzkIjKx0nVnZellSlxG5FntvRdOW2TF9AjYPnDtuzywN
+A0ZF66D0f0hExghAzN4bcLUprbqLOzRldRtxIR0sFAqwlpW41uryZfspuk/qkZN0
+abby/+Ea0AzRdoXLiiW9l14sbxWZJue2Kf8i7MkCx1YAzUm5s2x7UwQa4qjJqhIF
+I8LO57sEAszAR6LkxCkvW0VXiVHuPOtSCP8HNR6fNWpHSlaY0VqFH4z1Ir+rzoPz
+4iIprn2DQKi6bA==
+-----END CERTIFICATE-----
+
+GeoTrust Universal CA
+=====================
+
+-----BEGIN CERTIFICATE-----
+MIIFaDCCA1CgAwIBAgIBATANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJVUzEW
+MBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEeMBwGA1UEAxMVR2VvVHJ1c3QgVW5pdmVy
+c2FsIENBMB4XDTA0MDMwNDA1MDAwMFoXDTI5MDMwNDA1MDAwMFowRTELMAkGA1UE
+BhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xHjAcBgNVBAMTFUdlb1RydXN0
+IFVuaXZlcnNhbCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAKYV
+VaCjxuAfjJ0hUNfBvitbtaSeodlyWL0AG0y/YckUHUWCq8YdgNY96xCcOq9tJPi8
+cQGeBvV8Xx7BDlXKg5pZMK4ZyzBIle0iN430SppyZj6tlcDgFgDgEB8rMQ7XlFTT
+QjOgNB0eRXbdT8oYN+yFFXoZCPzVx5zw8qkuEKmS5j1YPakWaDwvdSEYfyh3peFh
+F7em6fgemdtzbvQKoiFs7tqqhZJmr/Z6a4LauiIINQ/PQvE1+mrufislzDoR5G2v
+c7J2Ha3QsnhnGqQ5HFELZ1aD/ThdDc7d8Lsrlh/eezJS/R27tQahsiFepdaVaH/w
+mZ7cRQg+59IJDTWU3YBOU5fXtQlEIGQWFwMCTFMNaN7VqnJNk22CDtucvc+081xd
+VHppCZbW2xHBjXWotM85yM48vCR85mLK4b19p71XZQvk/iXttmkQ3CgaRr0BHdCX
+teGYO8A3ZNY9lO4L4fUorgtWv3GLIylBjobFS1J72HGrH4oVpjuDWtdYAVHGTEHZ
+f9hBZ3KiKN9gg6meyHv8U3NyWfWTehd2Ds735VzZC1U0oqpbtWpU5xPKV+yXbfRe
+Bi9Fi1jUIxaS5BZuKGNZMN9QAZxjiRqf2xeUgnA3wySemkfWWspOqGmJch+RbNt+
+nhutxx9z3SxPGWX9f5NAEC7S8O08ni4oPmkmM8V7AgMBAAGjYzBhMA8GA1UdEwEB
+/wQFMAMBAf8wHQYDVR0OBBYEFNq7LqqwDLiIJlF0XG0D08DYj3rWMB8GA1UdIwQY
+MBaAFNq7LqqwDLiIJlF0XG0D08DYj3rWMA4GA1UdDwEB/wQEAwIBhjANBgkqhkiG
+9w0BAQUFAAOCAgEAMXjmx7XfuJRAyXHEqDXsRh3ChfMoWIawC/yOsjmPRFWrZIRc
+aanQmjg8+uUfNeVE44B5lGiku8SfPeE0zTBGi1QrlaXv9z+ZhP015s8xxtxqv6fX
+IwjhmF7DWgh2qaavdy+3YL1ERmrvl/9zlcGO6JP7/TG37FcREUWbMPEaiDnBTzyn
+ANXH/KttgCJwpQzgXQQpAvvLoJHRfNbDflDVnVi+QTjruXU8FdmbyUqDWcDaU/0z
+uzYYm4UPFd3uLax2k7nZAY1IEKj79TiG8dsKxr2EoyNB3tZ3b4XUhRxQ4K5RirqN
+Pnbiucon8l+f725ZDQbYKxek0nxru18UGkiPGkzns0ccjkxFKyDuSN/n3QmOGKja
+QI2SJhFTYXNd673nxE0pN2HrrDktZy4W1vUAg4WhzH92xH3kt0tm7wNFYGm2DFKW
+koRepqO1pD4r2czYG0eq8kTaT/kD6PAUyz/zg97QwVTjt+gKN02LIFkDMBmhLMi9
+ER/frslKxfMnZmaGrGiR/9nmUxwPi1xpZQomyB40w11Re9epnAahNt3ViZS82eQt
+DF4JbAiXfKM9fJP/P6EUp8+1Xevb2xzEdt+Iub1FBZUbrvxGakyvSOPOrg/Sfuvm
+bJxPgWp6ZKy7PtXny3YuxadIwVyQD8vIP/rmMuGNG2+k5o7Y+SlIis5z/iw=
+-----END CERTIFICATE-----
+
+GeoTrust Universal CA 2
+=======================
+
+-----BEGIN CERTIFICATE-----
+MIIFbDCCA1SgAwIBAgIBATANBgkqhkiG9w0BAQUFADBHMQswCQYDVQQGEwJVUzEW
+MBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEgMB4GA1UEAxMXR2VvVHJ1c3QgVW5pdmVy
+c2FsIENBIDIwHhcNMDQwMzA0MDUwMDAwWhcNMjkwMzA0MDUwMDAwWjBHMQswCQYD
+VQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEgMB4GA1UEAxMXR2VvVHJ1
+c3QgVW5pdmVyc2FsIENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC
+AQCzVFLByT7y2dyxUxpZKeexw0Uo5dfR7cXFS6GqdHtXr0om/Nj1XqduGdt0DE81
+WzILAePb63p3NeqqWuDW6KFXlPCQo3RWlEQwAx5cTiuFJnSCegx2oG9NzkEtoBUG
+FF+3Qs17j1hhNNwqCPkuwwGmIkQcTAeC5lvO0Ep8BNMZcyfwqph/Lq9O64ceJHdq
+XbboW0W63MOhBW9Wjo8QJqVJwy7XQYci4E+GymC16qFjwAGXEHm9ADwSbSsVsaxL
+se4YuU6W3Nx2/zu+z18DwPw76L5GG//aQMJS9/7jOvdqdzXQ2o3rXhhqMcceujwb
+KNZrVMaqW9eiLBsZzKIC9ptZvTdrhrVtgrrY6slWvKk2WP0+GfPtDCapkzj4T8Fd
+IgbQl+rhrcZV4IErKIM6+vR7IVEAvlI4zs1meaj0gVbi0IMJR1FbUGrP20gaXT73
+y/Zl92zxlfgCOzJWgjl6W70viRu/obTo/3+NjN8D8WBOWBFM66M/ECuDmgFz2ZRt
+hAAnZqzwcEAJQpKtT5MNYQlRJNiS1QuUYbKHsu3/mjX/hVTK7URDrBs8FmtISgoc
+QIgfksILAAX/8sgCSqSqqcyZlpwvWOB94b67B9xfBHJcMTTD7F8t4D1kkCLm0ey4
+Lt1ZrtmhN79UNdxzMk+MBB4zsslG8dhcyFVQyWi9qLo2CQIDAQABo2MwYTAPBgNV
+HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR281Xh+qQ2+/CfXGJx7Tz0RzgQKzAfBgNV
+HSMEGDAWgBR281Xh+qQ2+/CfXGJx7Tz0RzgQKzAOBgNVHQ8BAf8EBAMCAYYwDQYJ
+KoZIhvcNAQEFBQADggIBAGbBxiPz2eAubl/oz66wsCVNK/g7WJtAJDday6sWSf+z
+dXkzoS9tcBc0kf5nfo/sm+VegqlVHy/c1FEHEv6sFj4sNcZj/NwQ6w2jqtB8zNHQ
+L1EuxBRa3ugZ4T7GzKQp5y6EqgYweHZUcyiYWTjgAA1i00J9IZ+uPTqM1fp3DRgr
+Fg5fNuH8KrUwJM/gYwx7WBr+mbpCErGR9Hxo4sjoryzqyX6uuyo9DRXcNJW2GHSo
+ag/HtPQTxORb7QrSpJdMKu0vbBKJPfEncKpqA1Ihn0CoZ1Dy81of398j9tx4TuaY
+T1U6U+Pv8vSfx3zYWK8pIpe44L2RLrB27FcRz+8pRPPphXpgY+RdM4kX2TGq2tbz
+GDVyz4crL2MjhF2EjD9XoIj8mZEoJmmZ1I+XRL6O1UixpCgp8RW04eWe3fiPpm8m
+1wk8OhwRDqZsN/etRIcsKMfYdIKz0G9KV7s1KSegi+ghp4dkNl3M2Basx7InQJJV
+OCiNUW7dFGdTbHFcJoRNdVq2fmBWqU2t+5sel/MN2dKXVHfaPRK34B7vCAas+YWH
+6aLcr34YEoP9VhdBLtUpgn2Z9DH2canPLAEnpQW5qrJITirvn5NSUZU8UnOOVkwX
+QMAJKOSLakhT2+zNVVXxxvjpoixMptEmX36vWkzaH6byHCx+rgIW0lbQL1dTR+iS
+-----END CERTIFICATE-----
+
+America Online Root Certification Authority 1
+=============================================
+
+-----BEGIN CERTIFICATE-----
+MIIDpDCCAoygAwIBAgIBATANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEc
+MBoGA1UEChMTQW1lcmljYSBPbmxpbmUgSW5jLjE2MDQGA1UEAxMtQW1lcmljYSBP
+bmxpbmUgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAxMB4XDTAyMDUyODA2
+MDAwMFoXDTM3MTExOTIwNDMwMFowYzELMAkGA1UEBhMCVVMxHDAaBgNVBAoTE0Ft
+ZXJpY2EgT25saW5lIEluYy4xNjA0BgNVBAMTLUFtZXJpY2EgT25saW5lIFJvb3Qg
+Q2VydGlmaWNhdGlvbiBBdXRob3JpdHkgMTCCASIwDQYJKoZIhvcNAQEBBQADggEP
+ADCCAQoCggEBAKgv6KRpBgNHw+kqmP8ZonCaxlCyfqXfaE0bfA+2l2h9LaaLl+lk
+hsmj76CGv2BlnEtUiMJIxUo5vxTjWVXlGbR0yLQFOVwWpeKVBeASrlmLojNoWBym
+1BW32J/X3HGrfpq/m44zDyL9Hy7nBzbvYjnF3cu6JRQj3gzGPTzOggjmZj7aUTsW
+OqMFf6Dch9Wc/HKpoH145LcxVR5lu9RhsCFg7RAycsWSJR74kEoYeEfffjA3PlAb
+2xzTa5qGUwew76wGePiEmf4hjUyAtgyC9mZweRrTT6PP8c9GsEsPPt2IYriMqQko
+O3rHl+Ee5fSfwMCuJKDIodkP1nsmgmkyPacCAwEAAaNjMGEwDwYDVR0TAQH/BAUw
+AwEB/zAdBgNVHQ4EFgQUAK3Zo/Z59m50qX8zPYEX10zPM94wHwYDVR0jBBgwFoAU
+AK3Zo/Z59m50qX8zPYEX10zPM94wDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEB
+BQUAA4IBAQB8itEfGDeC4Liwo+1WlchiYZwFos3CYiZhzRAW18y0ZTTQEYqtqKkF
+Zu90821fnZmv9ov761KyBZiibyrFVL0lvV+uyIbqRizBs73B6UlwGBaXCBOMIOAb
+LjpHyx7kADCVW/RFo8AasAFOq73AI25jP4BKxQft3OJvx8Fi8eNy1gTIdGcL+oir
+oQHIb/AUr9KZzVGTfu0uOMe9zkZQPXLjeSWdm4grECDdpbgyn43gKd8hdIaC2y+C
+MMbHNYaz+ZZfRtsMRf3zUMNvxsNIrUam4SdHCh0Om7bCd39j8uB9Gr784N/Xx6ds
+sPmuujz9dLQR6FgNgLzTqIA6me11zEZ7
+-----END CERTIFICATE-----
+
+America Online Root Certification Authority 2
+=============================================
+
+-----BEGIN CERTIFICATE-----
+MIIFpDCCA4ygAwIBAgIBATANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEc
+MBoGA1UEChMTQW1lcmljYSBPbmxpbmUgSW5jLjE2MDQGA1UEAxMtQW1lcmljYSBP
+bmxpbmUgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAyMB4XDTAyMDUyODA2
+MDAwMFoXDTM3MDkyOTE0MDgwMFowYzELMAkGA1UEBhMCVVMxHDAaBgNVBAoTE0Ft
+ZXJpY2EgT25saW5lIEluYy4xNjA0BgNVBAMTLUFtZXJpY2EgT25saW5lIFJvb3Qg
+Q2VydGlmaWNhdGlvbiBBdXRob3JpdHkgMjCCAiIwDQYJKoZIhvcNAQEBBQADggIP
+ADCCAgoCggIBAMxBRR3pPU0Q9oyxQcngXssNt79Hc9PwVU3dxgz6sWYFas14tNwC
+206B89enfHG8dWOgXeMHDEjsJcQDIPT/DjsS/5uN4cbVG7RtIuOx238hZK+GvFci
+KtZHgVdEglZTvYYUAQv8f3SkWq7xuhG1m1hagLQ3eAkzfDJHA1zEpYNI9FdWboE2
+JxhP7JsowtS013wMPgwr38oE18aO6lhOqKSlGBxsRZijQdEt0sdtjRnxrXm3gT+9
+BoInLRBYBbV4Bbkv2wxrkJB+FFk4u5QkE+XRnRTf04JNRvCAOVIyD+OEsnpD8l7e
+Xz8d3eOyG6ChKiMDbi4BFYdcpnV1x5dhvt6G3NRI270qv0pV2uh9UPu0gBe4lL8B
+PeraunzgWGcXuVjgiIZGZ2ydEEdYMtA1fHkqkKJaEBEjNa0vzORKW6fIJ/KD3l67
+Xnfn6KVuY8INXWHQjNJsWiEOyiijzirplcdIz5ZvHZIlyMbGwcEMBawmxNJ10uEq
+Z8A9W6Wa6897GqidFEXlD6CaZd4vKL3Ob5Rmg0gp2OpljK+T2WSfVVcmv2/LNzGZ
+o2C7HK2JNDJiuEMhBnIMoVxtRsX6Kc8w3onccVvdtjc+31D1uAclJuW8tf48ArO3
++L5DwYcRlJ4jbBeKuIonDFRH8KmzwICMoCfrHRnjB453cMor9H124HhnAgMBAAGj
+YzBhMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFE1FwWg4u3OpaaEg5+31IqEj
+FNeeMB8GA1UdIwQYMBaAFE1FwWg4u3OpaaEg5+31IqEjFNeeMA4GA1UdDwEB/wQE
+AwIBhjANBgkqhkiG9w0BAQUFAAOCAgEAZ2sGuV9FOypLM7PmG2tZTiLMubekJcmn
+xPBUlgtk87FYT15R/LKXeydlwuXK5w0MJXti4/qftIe3RUavg6WXSIylvfEWK5t2
+LHo1YGwRgJfMqZJS5ivmae2p+DYtLHe/YUjRYwu5W1LtGLBDQiKmsXeu3mnFzccc
+obGlHBD7GL4acN3Bkku+KVqdPzW+5X1R+FXgJXUjhx5c3LqdsKyzadsXg8n33gy8
+CNyRnqjQ1xU3c6U1uPx+xURABsPr+CKAXEfOAuMRn0T//ZoyzH1kUQ7rVyZ2OuMe
+IjzCpjbdGe+n/BLzJsBZMYVMnNjP36TMzCmT/5RtdlwTCJfy7aULTd3oyWgOZtMA
+DjMSW7yV5TKQqLPGbIOtd+6Lfn6xqavT4fG2wLHqiMDn05DpKJKUe2h7lyoKZy2F
+AjgQ5ANh1NolNscIWC2hp1GvMApJ9aZphwctREZ2jirlmjvXGKL8nDgQzMY70rUX
+Om/9riW99XJZZLF0KjhfGEzfz3EEWjbUvy+ZnOjZurGV5gJLIaFb1cFPj65pbVPb
+AZO1XB4Y3WRayhgoPmMEEf0cjQAPuDffZ4qdZqkCapH/E8ovXYO8h5Ns3CRRFgQl
+Zvqz2cK6Kb6aSDiCmfS/O0oxGfm/jiEzFMpPVF/7zvuPcX/9XhmgD0uRuMRUvAaw
+RY8mkaKO/qk=
+-----END CERTIFICATE-----
+
+Comodo AAA Services root
+========================
+
+-----BEGIN CERTIFICATE-----
+MIIEMjCCAxqgAwIBAgIBATANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJHQjEb
+MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow
+GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEhMB8GA1UEAwwYQUFBIENlcnRpZmlj
+YXRlIFNlcnZpY2VzMB4XDTA0MDEwMTAwMDAwMFoXDTI4MTIzMTIzNTk1OVowezEL
+MAkGA1UEBhMCR0IxGzAZBgNVBAgMEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE
+BwwHU2FsZm9yZDEaMBgGA1UECgwRQ29tb2RvIENBIExpbWl0ZWQxITAfBgNVBAMM
+GEFBQSBDZXJ0aWZpY2F0ZSBTZXJ2aWNlczCCASIwDQYJKoZIhvcNAQEBBQADggEP
+ADCCAQoCggEBAL5AnfRu4ep2hxxNRUSOvkbIgwadwSr+GB+O5AL686tdUIoWMQua
+BtDFcCLNSS1UY8y2bmhGC1Pqy0wkwLxyTurxFa70VJoSCsN6sjNg4tqJVfMiWPPe
+3M/vg4aijJRPn2jymJBGhCfHdr/jzDUsi14HZGWCwEiwqJH5YZ92IFCokcdmtet4
+YgNW8IoaE+oxox6gmf049vYnMlhvB/VruPsUK6+3qszWY19zjNoFmag4qMsXeDZR
+rOme9Hg6jc8P2ULimAyrL58OAd7vn5lJ8S3frHRNG5i1R8XlKdH5kBjHYpy+g8cm
+ez6KJcfA3Z3mNWgQIJ2P2N7Sw4ScDV7oL8kCAwEAAaOBwDCBvTAdBgNVHQ4EFgQU
+oBEKIz6W8Qfs4q8p74Klf9AwpLQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQF
+MAMBAf8wewYDVR0fBHQwcjA4oDagNIYyaHR0cDovL2NybC5jb21vZG9jYS5jb20v
+QUFBQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmwwNqA0oDKGMGh0dHA6Ly9jcmwuY29t
+b2RvLm5ldC9BQUFDZXJ0aWZpY2F0ZVNlcnZpY2VzLmNybDANBgkqhkiG9w0BAQUF
+AAOCAQEACFb8AvCb6P+k+tZ7xkSAzk/ExfYAWMymtrwUSWgEdujm7l3sAg9g1o1Q
+GE8mTgHj5rCl7r+8dFRBv/38ErjHT1r0iWAFf2C3BUrz9vHCv8S5dIa2LX1rzNLz
+Rt0vxuBqw8M0Ayx9lt1awg6nCpnBBYurDC/zXDrPbDdVCYfeU0BsWO/8tqtlbgT2
+G9w84FoVxp7Z8VlIMCFlA2zs6SFz7JsDoeA3raAVGI/6ugLOpyypEBMs1OUIJqsi
+l2D4kF501KKaU73yqWjgom7C12yxow+ev+to51byrvLjKzg6CYG1a4XXvi3tPxq3
+smPi9WIsgtRqAEFQ8TmDn5XpNpaYbg==
+-----END CERTIFICATE-----
+
+Comodo Secure Services root
+===========================
+
+-----BEGIN CERTIFICATE-----
+MIIEPzCCAyegAwIBAgIBATANBgkqhkiG9w0BAQUFADB+MQswCQYDVQQGEwJHQjEb
+MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow
+GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEkMCIGA1UEAwwbU2VjdXJlIENlcnRp
+ZmljYXRlIFNlcnZpY2VzMB4XDTA0MDEwMTAwMDAwMFoXDTI4MTIzMTIzNTk1OVow
+fjELMAkGA1UEBhMCR0IxGzAZBgNVBAgMEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G
+A1UEBwwHU2FsZm9yZDEaMBgGA1UECgwRQ29tb2RvIENBIExpbWl0ZWQxJDAiBgNV
+BAMMG1NlY3VyZSBDZXJ0aWZpY2F0ZSBTZXJ2aWNlczCCASIwDQYJKoZIhvcNAQEB
+BQADggEPADCCAQoCggEBAMBxM4KK0HDrc4eCQNUd5MvJDkKQ+d40uaG6EfQlhfPM
+cm3ye5drswfxdySRXyWP9nQ95IDC+DwN879A6vfIUtFyb+/Iq0G4bi4XKpVpDM3S
+HpR7LZQdqnXXs5jLrLxkU0C8j6ysNstcrbvd4JQX7NFc0L/vpZXJkMWwrPsbQ996
+CF23uPJAGysnnlDOXmWCiIxe004MeuoIkbY2qitC++rCoznl2yY4rYsK7hljxxwk
+3wN42ubqwUcaCwtGCd0C/N7Lh1/XMGNooa7cMqG6vv5Eq2i2pRcV/b3Vp6ea5EQz
+6YiO/O1R65NxTq0B50SOqy3LqP4BSUjwwN3HaNiS/j0CAwEAAaOBxzCBxDAdBgNV
+HQ4EFgQUPNiTiMLAggnMAZkGkyDpnnAJY08wDgYDVR0PAQH/BAQDAgEGMA8GA1Ud
+EwEB/wQFMAMBAf8wgYEGA1UdHwR6MHgwO6A5oDeGNWh0dHA6Ly9jcmwuY29tb2Rv
+Y2EuY29tL1NlY3VyZUNlcnRpZmljYXRlU2VydmljZXMuY3JsMDmgN6A1hjNodHRw
+Oi8vY3JsLmNvbW9kby5uZXQvU2VjdXJlQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmww
+DQYJKoZIhvcNAQEFBQADggEBAIcBbSMdflsXfcFhMs+P5/OKlFlm4J4oqF7Tt/Q0
+5qo5spcWxYJvMqTpjOev/e/C6LlLqqP05tqNZSH7uoDrJiiFGv45jN5bBAS0VPmj
+Z55B+glSzAVIqMk/IQQezkhr/IXownuvf7fM+F86/TXGDe+X3EyrEeFryzHRbPtI
+gKvcnDe4IRRLDXE97IMzbtFuMhbsmMcWi1mmNKsFVy2T96oTy9IT4rcuO81rUBcJ
+aD61JlfutuC23bkpgHl9j6PwpCikFcSF9CfUa7/lXORlAnZUtOM3ZiTTGWHIUhDl
+izeauan5Hb/qmZJhlv8BzaFfDbxxvA6sCx1HRR3B7Hzs/Sk=
+-----END CERTIFICATE-----
+
+Comodo Trusted Services root
+============================
+
+-----BEGIN CERTIFICATE-----
+MIIEQzCCAyugAwIBAgIBATANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJHQjEb
+MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow
+GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDElMCMGA1UEAwwcVHJ1c3RlZCBDZXJ0
+aWZpY2F0ZSBTZXJ2aWNlczAeFw0wNDAxMDEwMDAwMDBaFw0yODEyMzEyMzU5NTla
+MH8xCzAJBgNVBAYTAkdCMRswGQYDVQQIDBJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAO
+BgNVBAcMB1NhbGZvcmQxGjAYBgNVBAoMEUNvbW9kbyBDQSBMaW1pdGVkMSUwIwYD
+VQQDDBxUcnVzdGVkIENlcnRpZmljYXRlIFNlcnZpY2VzMIIBIjANBgkqhkiG9w0B
+AQEFAAOCAQ8AMIIBCgKCAQEA33FvNlhTWvI2VFeAxHQIIO0Yfyod5jWaHiWsnOWW
+fnJSoBVC21ndZHoa0Lh73TkVvFVIxO06AOoxEbrycXQaZ7jPM8yoMa+j49d/vzMt
+TGo87IvDktJTdyR0nAducPy9C1t2ul/y/9c3S0pgePfw+spwtOpZqqPOSC+pw7IL
+fhdyFgymBwwbOM/JYrc/oJOlh0Hyt3BAd9i+FHzjqMB6juljatEPmsbS9Is6FARW
+1O24zG71++IsWL1/T2sr92AkWCTOJu80kTrV44HQsvAEAtdbtz6SrGsSivnkBbA7
+kUlcsutT6vifR4buv5XAwAaf0lteERv0xwQ1KdJVXOTt6wIDAQABo4HJMIHGMB0G
+A1UdDgQWBBTFe1i97doladL3WRaoszLAeydb9DAOBgNVHQ8BAf8EBAMCAQYwDwYD
+VR0TAQH/BAUwAwEB/zCBgwYDVR0fBHwwejA8oDqgOIY2aHR0cDovL2NybC5jb21v
+ZG9jYS5jb20vVHJ1c3RlZENlcnRpZmljYXRlU2VydmljZXMuY3JsMDqgOKA2hjRo
+dHRwOi8vY3JsLmNvbW9kby5uZXQvVHJ1c3RlZENlcnRpZmljYXRlU2VydmljZXMu
+Y3JsMA0GCSqGSIb3DQEBBQUAA4IBAQDIk4E7ibSvuIQSTI3S8NtwuleGFTQQuS9/
+HrCoiWChisJ3DFBKmwCL2Iv0QeLQg4pKHBQGsKNoBXAxMKdTmw7pSqBYaWcOrp32
+pSxBvzwGa+RZzG0Q8ZZvH9/0BAKkn0U+yNj6NkZEUD+Cl5EfKNsYEYwq5GWDVxIS
+jBc/lDb+XbDABHcTuPQV1T84zJQ6VdCsmPW6AF/ghhmBeC8owH7TzEIK9a5QoNE+
+xqFx7D+gIIxmOom0jtTYsU0lR+4viMi14QVFwL4Ucd56/Y57fU0IlqUSc/Atyjcn
+dBInTMu2l+nZrghtWjlA3QVHdWpaIbOjGM9O9y5Xt5hwXsjEeLBi
+-----END CERTIFICATE-----
+
+UTN DATACorp SGC Root CA
+========================
+
+-----BEGIN CERTIFICATE-----
+MIIEXjCCA0agAwIBAgIQRL4Mi1AAIbQR0ypoBqmtaTANBgkqhkiG9w0BAQUFADCB
+kzELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2Ug
+Q2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExho
+dHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xGzAZBgNVBAMTElVUTiAtIERBVEFDb3Jw
+IFNHQzAeFw05OTA2MjQxODU3MjFaFw0xOTA2MjQxOTA2MzBaMIGTMQswCQYDVQQG
+EwJVUzELMAkGA1UECBMCVVQxFzAVBgNVBAcTDlNhbHQgTGFrZSBDaXR5MR4wHAYD
+VQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxITAfBgNVBAsTGGh0dHA6Ly93d3cu
+dXNlcnRydXN0LmNvbTEbMBkGA1UEAxMSVVROIC0gREFUQUNvcnAgU0dDMIIBIjAN
+BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3+5YEKIrblXEjr8uRgnn4AgPLit6
+E5Qbvfa2gI5lBZMAHryv4g+OGQ0SR+ysraP6LnD43m77VkIVni5c7yPeIbkFdicZ
+D0/Ww5y0vpQZY/KmEQrrU0icvvIpOxboGqBMpsn0GFlowHDyUwDAXlCCpVZvNvlK
+4ESGoE1O1kduSUrLZ9emxAW5jh70/P/N5zbgnAVssjMiFdC04MwXwLLA9P4yPykq
+lXvY8qdOD1R8oQ2AswkDwf9c3V6aPryuvEeKaq5xyh+xKrhfQgUL7EYw0XILyulW
+bfXv33i+Ybqypa4ETLyorGkVl73v67SMvzX41MPRKA5cOp9wGDMgd8SirwIDAQAB
+o4GrMIGoMAsGA1UdDwQEAwIBxjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRT
+MtGzz3/64PGgXYVOktKeRR20TzA9BgNVHR8ENjA0MDKgMKAuhixodHRwOi8vY3Js
+LnVzZXJ0cnVzdC5jb20vVVROLURBVEFDb3JwU0dDLmNybDAqBgNVHSUEIzAhBggr
+BgEFBQcDAQYKKwYBBAGCNwoDAwYJYIZIAYb4QgQBMA0GCSqGSIb3DQEBBQUAA4IB
+AQAnNZcAiosovcYzMB4p/OL31ZjUQLtgyr+rFywJNn9Q+kHcrpY6CiM+iVnJowft
+Gzet/Hy+UUla3joKVAgWRcKZsYfNjGjgaQPpxE6YsjuMFrMOoAyYUJuTqXAJyCyj
+j98C5OBxOvG0I3KgqgHf35g+FFCgMSa9KOlaMCZ1+XtgHI3zzVAmbQQnmt/VDUVH
+KWss5nbZqSl9Mt3JNjy9rjXxEZ4du5A/EkdOjtd+D2JzHVImOBwYSf0wdJrE5SIv
+2MCN7ZF6TACPcn9d2t0bi0Vr591pl6jFVkwPDPafepE39peC4N1xaf92P2BNPM/3
+mfnGV/TJVTl4uix5yaaIK/QI
+-----END CERTIFICATE-----
+
+UTN USERFirst Hardware Root CA
+==============================
+
+-----BEGIN CERTIFICATE-----
+MIIEdDCCA1ygAwIBAgIQRL4Mi1AAJLQR0zYq/mUK/TANBgkqhkiG9w0BAQUFADCB
+lzELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2Ug
+Q2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExho
+dHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xHzAdBgNVBAMTFlVUTi1VU0VSRmlyc3Qt
+SGFyZHdhcmUwHhcNOTkwNzA5MTgxMDQyWhcNMTkwNzA5MTgxOTIyWjCBlzELMAkG
+A1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2UgQ2l0eTEe
+MBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExhodHRwOi8v
+d3d3LnVzZXJ0cnVzdC5jb20xHzAdBgNVBAMTFlVUTi1VU0VSRmlyc3QtSGFyZHdh
+cmUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCx98M4P7Sof885glFn
+0G2f0v9Y8+efK+wNiVSZuTiZFvfgIXlIwrthdBKWHTxqctU8EGc6Oe0rE81m65UJ
+M6Rsl7HoxuzBdXmcRl6Nq9Bq/bkqVRcQVLMZ8Jr28bFdtqdt++BxF2uiiPsA3/4a
+MXcMmgF6sTLjKwEHOG7DpV4jvEWbe1DByTCP2+UretNb+zNAHqDVmBe8i4fDidNd
+oI6yqqr2jmmIBsX6iSHzCJ1pLgkzmykNRg+MzEk0sGlRvfkGzWitZky8PqxhvQqI
+DsjfPe58BEydCl5rkdbux+0ojatNh4lz0G6k0B4WixThdkQDf2Os5M1JnMWS9Ksy
+oUhbAgMBAAGjgbkwgbYwCwYDVR0PBAQDAgHGMA8GA1UdEwEB/wQFMAMBAf8wHQYD
+VR0OBBYEFKFyXyYbKJhDlV0HN9WFlp1L0sNFMEQGA1UdHwQ9MDswOaA3oDWGM2h0
+dHA6Ly9jcmwudXNlcnRydXN0LmNvbS9VVE4tVVNFUkZpcnN0LUhhcmR3YXJlLmNy
+bDAxBgNVHSUEKjAoBggrBgEFBQcDAQYIKwYBBQUHAwUGCCsGAQUFBwMGBggrBgEF
+BQcDBzANBgkqhkiG9w0BAQUFAAOCAQEARxkP3nTGmZev/K0oXnWO6y1n7k57K9cM
+//bey1WiCuFMVGWTYGufEpytXoMs61quwOQt9ABjHbjAbPLPSbtNk28Gpgoiskli
+CE7/yMgUsogWXecB5BKV5UU0s4tpvc+0hY91UZ59Ojg6FEgSxvunOxqNDYJAB+gE
+CJChicsZUN/KHAG8HQQZexB2lzvukJDKxA4fFm517zP4029bHpbj4HR3dHuKom4t
+3XbWOTCC8KucUvIqx69JXn7HaOWCgchqJ/kniCrVWFCVH/A7HFe7fRQ5YiuayZSS
+KqMiDP+JJn1fIytH1xUdqWqeUQ0qUZ6B+dQ7XnASfxAynB67nfhmqA==
+-----END CERTIFICATE-----
+
+XRamp Global CA Root
+====================
+
+-----BEGIN CERTIFICATE-----
+MIIEMDCCAxigAwIBAgIQUJRs7Bjq1ZxN1ZfvdY+grTANBgkqhkiG9w0BAQUFADCB
+gjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3dy54cmFtcHNlY3VyaXR5LmNvbTEk
+MCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2VydmljZXMgSW5jMS0wKwYDVQQDEyRY
+UmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQxMTAxMTcx
+NDA0WhcNMzUwMTAxMDUzNzE5WjCBgjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3
+dy54cmFtcHNlY3VyaXR5LmNvbTEkMCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2Vy
+dmljZXMgSW5jMS0wKwYDVQQDEyRYUmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBB
+dXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCYJB69FbS6
+38eMpSe2OAtp87ZOqCwuIR1cRN8hXX4jdP5efrRKt6atH67gBhbim1vZZ3RrXYCP
+KZ2GG9mcDZhtdhAoWORlsH9KmHmf4MMxfoArtYzAQDsRhtDLooY2YKTVMIJt2W7Q
+DxIEM5dfT2Fa8OT5kavnHTu86M/0ay00fOJIYRyO82FEzG+gSqmUsE3a56k0enI4
+qEHMPJQRfevIpoy3hsvKMzvZPTeL+3o+hiznc9cKV6xkmxnr9A8ECIqsAxcZZPRa
+JSKNNCyy9mgdEm3Tih4U2sSPpuIjhdV6Db1q4Ons7Be7QhtnqiXtRYMh/MHJfNVi
+PvryxS3T/dRlAgMBAAGjgZ8wgZwwEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0P
+BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMZPoj0GY4QJnM5i5ASs
+jVy16bYbMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwueHJhbXBzZWN1cml0
+eS5jb20vWEdDQS5jcmwwEAYJKwYBBAGCNxUBBAMCAQEwDQYJKoZIhvcNAQEFBQAD
+ggEBAJEVOQMBG2f7Shz5CmBbodpNl2L5JFMn14JkTpAuw0kbK5rc/Kh4ZzXxHfAR
+vbdI4xD2Dd8/0sm2qlWkSLoC295ZLhVbO50WfUfXN+pfTXYSNrsf16GBBEYgoyxt
+qZ4Bfj8pzgCT3/3JknOJiWSe5yvkHJEs0rnOfc5vMZnT5r7SHpDwCRR5XCOrTdLa
+IR9NmXmd4c8nnxCbHIgNsIpkQTG4DmyQJKSbXHGPurt+HBvbaoAPIbzp26a3QPSy
+i6mx5O+aGtA9aZnuqCij4Tyz8LIRnM98QObd50N9otg6tamN8jSZxNQQ4Qb9CYQQ
+O+7ETPTsJ3xCwnR8gooJybQDJbw=
+-----END CERTIFICATE-----
+
+Go Daddy Class 2 CA
+===================
+
+-----BEGIN CERTIFICATE-----
+MIIEADCCAuigAwIBAgIBADANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEh
+MB8GA1UEChMYVGhlIEdvIERhZGR5IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBE
+YWRkeSBDbGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA0MDYyOTE3
+MDYyMFoXDTM0MDYyOTE3MDYyMFowYzELMAkGA1UEBhMCVVMxITAfBgNVBAoTGFRo
+ZSBHbyBEYWRkeSBHcm91cCwgSW5jLjExMC8GA1UECxMoR28gRGFkZHkgQ2xhc3Mg
+MiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASAwDQYJKoZIhvcNAQEBBQADggEN
+ADCCAQgCggEBAN6d1+pXGEmhW+vXX0iG6r7d/+TvZxz0ZWizV3GgXne77ZtJ6XCA
+PVYYYwhv2vLM0D9/AlQiVBDYsoHUwHU9S3/Hd8M+eKsaA7Ugay9qK7HFiH7Eux6w
+wdhFJ2+qN1j3hybX2C32qRe3H3I2TqYXP2WYktsqbl2i/ojgC95/5Y0V4evLOtXi
+EqITLdiOr18SPaAIBQi2XKVlOARFmR6jYGB0xUGlcmIbYsUfb18aQr4CUWWoriMY
+avx4A6lNf4DD+qta/KFApMoZFv6yyO9ecw3ud72a9nmYvLEHZ6IVDd2gWMZEewo+
+YihfukEHU1jPEX44dMX4/7VpkI+EdOqXG68CAQOjgcAwgb0wHQYDVR0OBBYEFNLE
+sNKR1EwRcbNhyz2h/t2oatTjMIGNBgNVHSMEgYUwgYKAFNLEsNKR1EwRcbNhyz2h
+/t2oatTjoWekZTBjMQswCQYDVQQGEwJVUzEhMB8GA1UEChMYVGhlIEdvIERhZGR5
+IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBEYWRkeSBDbGFzcyAyIENlcnRpZmlj
+YXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQAD
+ggEBADJL87LKPpH8EsahB4yOd6AzBhRckB4Y9wimPQoZ+YeAEW5p5JYXMP80kWNy
+OO7MHAGjHZQopDH2esRU1/blMVgDoszOYtuURXO1v0XJJLXVggKtI3lpjbi2Tc7P
+TMozI+gciKqdi0FuFskg5YmezTvacPd+mSYgFFQlq25zheabIZ0KbIIOqPjCDPoQ
+HmyW74cNxA9hi63ugyuV+I6ShHI56yDqg+2DzZduCLzrTia2cyvk0/ZM/iZx4mER
+dEr/VxqHD3VILs9RaRegAhJhldXRQLIQTO7ErBBDpqWeCtWVYpoNz4iCxTIM5Cuf
+ReYNnyicsbkqWletNw+vHX/bvZ8=
+-----END CERTIFICATE-----
+
+Starfield Class 2 CA
+====================
+
+-----BEGIN CERTIFICATE-----
+MIIEDzCCAvegAwIBAgIBADANBgkqhkiG9w0BAQUFADBoMQswCQYDVQQGEwJVUzEl
+MCMGA1UEChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMp
+U3RhcmZpZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQw
+NjI5MTczOTE2WhcNMzQwNjI5MTczOTE2WjBoMQswCQYDVQQGEwJVUzElMCMGA1UE
+ChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMpU3RhcmZp
+ZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggEgMA0GCSqGSIb3
+DQEBAQUAA4IBDQAwggEIAoIBAQC3Msj+6XGmBIWtDBFk385N78gDGIc/oav7PKaf
+8MOh2tTYbitTkPskpD6E8J7oX+zlJ0T1KKY/e97gKvDIr1MvnsoFAZMej2YcOadN
++lq2cwQlZut3f+dZxkqZJRRU6ybH838Z1TBwj6+wRir/resp7defqgSHo9T5iaU0
+X9tDkYI22WY8sbi5gv2cOj4QyDvvBmVmepsZGD3/cVE8MC5fvj13c7JdBmzDI1aa
+K4UmkhynArPkPw2vCHmCuDY96pzTNbO8acr1zJ3o/WSNF4Azbl5KXZnJHoe0nRrA
+1W4TNSNe35tfPe/W93bC6j67eA0cQmdrBNj41tpvi/JEoAGrAgEDo4HFMIHCMB0G
+A1UdDgQWBBS/X7fRzt0fhvRbVazc1xDCDqmI5zCBkgYDVR0jBIGKMIGHgBS/X7fR
+zt0fhvRbVazc1xDCDqmI56FspGowaDELMAkGA1UEBhMCVVMxJTAjBgNVBAoTHFN0
+YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAsTKVN0YXJmaWVsZCBD
+bGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8w
+DQYJKoZIhvcNAQEFBQADggEBAAWdP4id0ckaVaGsafPzWdqbAYcaT1epoXkJKtv3
+L7IezMdeatiDh6GX70k1PncGQVhiv45YuApnP+yz3SFmH8lU+nLMPUxA2IGvd56D
+eruix/U0F47ZEUD0/CwqTRV/p2JdLiXTAAsgGh1o+Re49L2L7ShZ3U0WixeDyLJl
+xy16paq8U4Zt3VekyvggQQto8PT7dL5WXXp59fkdheMtlb71cZBDzI0fmgAKhynp
+VSJYACPq4xJDKVtHCN2MQWplBqjlIapBtJUhlbl90TSrE9atvNziPTnNvT51cKEY
+WQPJIrSPnNVeKtelttQKbfi3QBFGmh95DmK/D5fs4C8fF5Q=
+-----END CERTIFICATE-----
+
+StartCom Certification Authority
+================================
+
+-----BEGIN CERTIFICATE-----
+MIIHyTCCBbGgAwIBAgIBATANBgkqhkiG9w0BAQUFADB9MQswCQYDVQQGEwJJTDEW
+MBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMiU2VjdXJlIERpZ2l0YWwg
+Q2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3RhcnRDb20gQ2VydGlmaWNh
+dGlvbiBBdXRob3JpdHkwHhcNMDYwOTE3MTk0NjM2WhcNMzYwOTE3MTk0NjM2WjB9
+MQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMi
+U2VjdXJlIERpZ2l0YWwgQ2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3Rh
+cnRDb20gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUA
+A4ICDwAwggIKAoICAQDBiNsJvGxGfHiflXu1M5DycmLWwTYgIiRezul38kMKogZk
+pMyONvg45iPwbm2xPN1yo4UcodM9tDMr0y+v/uqwQVlntsQGfQqedIXWeUyAN3rf
+OQVSWff0G0ZDpNKFhdLDcfN1YjS6LIp/Ho/u7TTQEceWzVI9ujPW3U3eCztKS5/C
+Ji/6tRYccjV3yjxd5srhJosaNnZcAdt0FCX+7bWgiA/deMotHweXMAEtcnn6RtYT
+Kqi5pquDSR3l8u/d5AGOGAqPY1MWhWKpDhk6zLVmpsJrdAfkK+F2PrRt2PZE4XNi
+HzvEvqBTViVsUQn3qqvKv3b9bZvzndu/PWa8DFaqr5hIlTpL36dYUNk4dalb6kMM
+Av+Z6+hsTXBbKWWc3apdzK8BMewM69KN6Oqce+Zu9ydmDBpI125C4z/eIT574Q1w
++2OqqGwaVLRcJXrJosmLFqa7LH4XXgVNWG4SHQHuEhANxjJ/GP/89PrNbpHoNkm+
+Gkhpi8KWTRoSsmkXwQqQ1vp5Iki/untp+HDH+no32NgN0nZPV/+Qt+OR0t3vwmC3
+Zzrd/qqc8NSLf3Iizsafl7b4r4qgEKjZ+xjGtrVcUjyJthkqcwEKDwOzEmDyei+B
+26Nu/yYwl/WL3YlXtq09s68rxbd2AvCl1iuahhQqcvbjM4xdCUsT37uMdBNSSwID
+AQABo4ICUjCCAk4wDAYDVR0TBAUwAwEB/zALBgNVHQ8EBAMCAa4wHQYDVR0OBBYE
+FE4L7xqkQFulF2mHMMo0aEPQQa7yMGQGA1UdHwRdMFswLKAqoCiGJmh0dHA6Ly9j
+ZXJ0LnN0YXJ0Y29tLm9yZy9zZnNjYS1jcmwuY3JsMCugKaAnhiVodHRwOi8vY3Js
+LnN0YXJ0Y29tLm9yZy9zZnNjYS1jcmwuY3JsMIIBXQYDVR0gBIIBVDCCAVAwggFM
+BgsrBgEEAYG1NwEBATCCATswLwYIKwYBBQUHAgEWI2h0dHA6Ly9jZXJ0LnN0YXJ0
+Y29tLm9yZy9wb2xpY3kucGRmMDUGCCsGAQUFBwIBFilodHRwOi8vY2VydC5zdGFy
+dGNvbS5vcmcvaW50ZXJtZWRpYXRlLnBkZjCB0AYIKwYBBQUHAgIwgcMwJxYgU3Rh
+cnQgQ29tbWVyY2lhbCAoU3RhcnRDb20pIEx0ZC4wAwIBARqBl0xpbWl0ZWQgTGlh
+YmlsaXR5LCByZWFkIHRoZSBzZWN0aW9uICpMZWdhbCBMaW1pdGF0aW9ucyogb2Yg
+dGhlIFN0YXJ0Q29tIENlcnRpZmljYXRpb24gQXV0aG9yaXR5IFBvbGljeSBhdmFp
+bGFibGUgYXQgaHR0cDovL2NlcnQuc3RhcnRjb20ub3JnL3BvbGljeS5wZGYwEQYJ
+YIZIAYb4QgEBBAQDAgAHMDgGCWCGSAGG+EIBDQQrFilTdGFydENvbSBGcmVlIFNT
+TCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTANBgkqhkiG9w0BAQUFAAOCAgEAFmyZ
+9GYMNPXQhV59CuzaEE44HF7fpiUFS5Eyweg78T3dRAlbB0mKKctmArexmvclmAk8
+jhvh3TaHK0u7aNM5Zj2gJsfyOZEdUauCe37Vzlrk4gNXcGmXCPleWKYK34wGmkUW
+FjgKXlf2Ysd6AgXmvB618p70qSmD+LIU424oh0TDkBreOKk8rENNZEXO3SipXPJz
+ewT4F+irsfMuXGRuczE6Eri8sxHkfY+BUZo7jYn0TZNmezwD7dOaHZrzZVD1oNB1
+ny+v8OqCQ5j4aZyJecRDjkZy42Q2Eq/3JR44iZB3fsNrarnDy0RLrHiQi+fHLB5L
+EUTINFInzQpdn4XBidUaePKVEFMy3YCEZnXZtWgo+2EuvoSoOMCZEoalHmdkrQYu
+L6lwhceWD3yJZfWOQ1QOq92lgDmUYMA0yZZwLKMS9R9Ie70cfmu3nZD0Ijuu+Pwq
+yvqCUqDvr0tVk+vBtfAii6w0TiYiBKGHLHVKt+V9E9e4DGTANtLJL4YSjCMJwRuC
+O3NJo2pXh5Tl1njFmUNj403gdy3hZZlyaQQaRwnmDwFWJPsfvw55qVguucQJAX6V
+um0ABj6y6koQOdjQK/W/7HW/lwLFCRsI3FU34oH7N4RDYiDK51ZLZer+bMEkkySh
+NOsF/5oirpt9P/FlUQqmMGqz9IgcgA38corog14=
+-----END CERTIFICATE-----
+
+DigiCert Assured ID Root CA
+===========================
+
+-----BEGIN CERTIFICATE-----
+MIIDtzCCAp+gAwIBAgIQDOfg5RfYRv6P5WD8G/AwOTANBgkqhkiG9w0BAQUFADBl
+MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
+d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJv
+b3QgQ0EwHhcNMDYxMTEwMDAwMDAwWhcNMzExMTEwMDAwMDAwWjBlMQswCQYDVQQG
+EwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNl
+cnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgQ0EwggEi
+MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCtDhXO5EOAXLGH87dg+XESpa7c
+JpSIqvTO9SA5KFhgDPiA2qkVlTJhPLWxKISKityfCgyDF3qPkKyK53lTXDGEKvYP
+mDI2dsze3Tyoou9q+yHyUmHfnyDXH+Kx2f4YZNISW1/5WBg1vEfNoTb5a3/UsDg+
+wRvDjDPZ2C8Y/igPs6eD1sNuRMBhNZYW/lmci3Zt1/GiSw0r/wty2p5g0I6QNcZ4
+VYcgoc/lbQrISXwxmDNsIumH0DJaoroTghHtORedmTpyoeb6pNnVFzF1roV9Iq4/
+AUaG9ih5yLHa5FcXxH4cDrC0kqZWs72yl+2qp/C3xag/lRbQ/6GW6whfGHdPAgMB
+AAGjYzBhMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
+BBRF66Kv9JLLgjEtUYunpyGd823IDzAfBgNVHSMEGDAWgBRF66Kv9JLLgjEtUYun
+pyGd823IDzANBgkqhkiG9w0BAQUFAAOCAQEAog683+Lt8ONyc3pklL/3cmbYMuRC
+dWKuh+vy1dneVrOfzM4UKLkNl2BcEkxY5NM9g0lFWJc1aRqoR+pWxnmrEthngYTf
+fwk8lOa4JiwgvT2zKIn3X/8i4peEH+ll74fg38FnSbNd67IJKusm7Xi+fT8r87cm
+NW1fiQG2SVufAQWbqz0lwcy2f8Lxb4bG+mRo64EtlOtCt/qMHt1i8b5QZ7dsvfPx
+H2sMNgcWfzd8qVttevESRmCD1ycEvkvOl77DZypoEd+A5wwzZr8TDRRu838fYxAe
++o0bJW1sj6W3YQGx0qMmoRBxna3iw/nDmVG3KwcIzi7mULKn+gpFL6Lw8g==
+-----END CERTIFICATE-----
+
+DigiCert Global Root CA
+=======================
+
+-----BEGIN CERTIFICATE-----
+MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh
+MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
+d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD
+QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT
+MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j
+b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG
+9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB
+CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97
+nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt
+43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P
+T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4
+gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO
+BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR
+TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw
+DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr
+hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg
+06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF
+PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls
+YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk
+CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4=
+-----END CERTIFICATE-----
+
+DigiCert High Assurance EV Root CA
+==================================
+
+-----BEGIN CERTIFICATE-----
+MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs
+MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
+d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j
+ZSBFViBSb290IENBMB4XDTA2MTExMDAwMDAwMFoXDTMxMTExMDAwMDAwMFowbDEL
+MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3
+LmRpZ2ljZXJ0LmNvbTErMCkGA1UEAxMiRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug
+RVYgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMbM5XPm
++9S75S0tMqbf5YE/yc0lSbZxKsPVlDRnogocsF9ppkCxxLeyj9CYpKlBWTrT3JTW
+PNt0OKRKzE0lgvdKpVMSOO7zSW1xkX5jtqumX8OkhPhPYlG++MXs2ziS4wblCJEM
+xChBVfvLWokVfnHoNb9Ncgk9vjo4UFt3MRuNs8ckRZqnrG0AFFoEt7oT61EKmEFB
+Ik5lYYeBQVCmeVyJ3hlKV9Uu5l0cUyx+mM0aBhakaHPQNAQTXKFx01p8VdteZOE3
+hzBWBOURtCmAEvF5OYiiAhF8J2a3iLd48soKqDirCmTCv2ZdlYTBoSUeh10aUAsg
+EsxBu24LUTi4S8sCAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQF
+MAMBAf8wHQYDVR0OBBYEFLE+w2kD+L9HAdSYJhoIAu9jZCvDMB8GA1UdIwQYMBaA
+FLE+w2kD+L9HAdSYJhoIAu9jZCvDMA0GCSqGSIb3DQEBBQUAA4IBAQAcGgaX3Nec
+nzyIZgYIVyHbIUf4KmeqvxgydkAQV8GK83rZEWWONfqe/EW1ntlMMUu4kehDLI6z
+eM7b41N5cdblIZQB2lWHmiRk9opmzN6cN82oNLFpmyPInngiK3BD41VHMWEZ71jF
+hS9OMPagMRYjyOfiZRYzy78aG6A9+MpeizGLYAiJLQwGXFK3xPkKmNEVX58Svnw2
+Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe
+vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep
++OkuE6N36B9K
+-----END CERTIFICATE-----
+
+GeoTrust Primary Certification Authority
+========================================
+
+-----BEGIN CERTIFICATE-----
+MIIDfDCCAmSgAwIBAgIQGKy1av1pthU6Y2yv2vrEoTANBgkqhkiG9w0BAQUFADBY
+MQswCQYDVQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjExMC8GA1UEAxMo
+R2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjEx
+MjcwMDAwMDBaFw0zNjA3MTYyMzU5NTlaMFgxCzAJBgNVBAYTAlVTMRYwFAYDVQQK
+Ew1HZW9UcnVzdCBJbmMuMTEwLwYDVQQDEyhHZW9UcnVzdCBQcmltYXJ5IENlcnRp
+ZmljYXRpb24gQXV0aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
+AQEAvrgVe//UfH1nrYNke8hCUy3f9oQIIGHWAVlqnEQRr+92/ZV+zmEwu3qDXwK9
+AWbK7hWNb6EwnL2hhZ6UOvNWiAAxz9juapYC2e0DjPt1befquFUWBRaa9OBesYjA
+ZIVcFU2Ix7e64HXprQU9nceJSOC7KMgD4TCTZF5SwFlwIjVXiIrxlQqD17wxcwE0
+7e9GceBrAqg1cmuXm2bgyxx5X9gaBGgeRwLmnWDiNpcB3841kt++Z8dtd1k7j53W
+kBWUvEI0EME5+bEnPn7WinXFsq+W06Lem+SYvn3h6YGttm/81w7a4DSwDRp35+MI
+mO9Y+pyEtzavwt+s0vQQBnBxNQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4G
+A1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQULNVQQZcVi/CPNmFbSvtr2ZnJM5IwDQYJ
+KoZIhvcNAQEFBQADggEBAFpwfyzdtzRP9YZRqSa+S7iq8XEN3GHHoOo0Hnp3DwQ1
+6CePbJC/kRYkRj5KTs4rFtULUh38H2eiAkUxT87z+gOneZ1TatnaYzr4gNfTmeGl
+4b7UVXGYNTq+k+qurUKykG/g/CFNNWMziUnWm07Kx+dOCQD32sfvmWKZd7aVIl6K
+oKv0uHiYyjgZmclynnjNS6yvGaBzEi38wkG6gZHaFloxt/m0cYASSJlyc1pZU8Fj
+UjPtp8nSOQJw+uCxQmYpqptR7TBUIhRf2asdweSU8Pj1K/fqynhG1riR/aYNKxoU
+AT6A8EKglQdebc3MS6RFjasS6LPeWuWgfOgPIh1a6Vk=
+-----END CERTIFICATE-----
+
+COMODO Certification Authority
+==============================
+
+-----BEGIN CERTIFICATE-----
+MIIEHTCCAwWgAwIBAgIQToEtioJl4AsC7j41AkblPTANBgkqhkiG9w0BAQUFADCB
+gTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G
+A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxJzAlBgNV
+BAMTHkNPTU9ETyBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjEyMDEwMDAw
+MDBaFw0yOTEyMzEyMzU5NTlaMIGBMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3Jl
+YXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRowGAYDVQQKExFDT01P
+RE8gQ0EgTGltaXRlZDEnMCUGA1UEAxMeQ09NT0RPIENlcnRpZmljYXRpb24gQXV0
+aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0ECLi3LjkRv3
+UcEbVASY06m/weaKXTuH+7uIzg3jLz8GlvCiKVCZrts7oVewdFFxze1CkU1B/qnI
+2GqGd0S7WWaXUF601CxwRM/aN5VCaTwwxHGzUvAhTaHYujl8HJ6jJJ3ygxaYqhZ8
+Q5sVW7euNJH+1GImGEaaP+vB+fGQV+useg2L23IwambV4EajcNxo2f8ESIl33rXp
++2dtQem8Ob0y2WIC8bGoPW43nOIv4tOiJovGuFVDiOEjPqXSJDlqR6sA1KGzqSX+
+DT+nHbrTUcELpNqsOO9VUCQFZUaTNE8tja3G1CEZ0o7KBWFxB3NH5YoZEr0ETc5O
+nKVIrLsm9wIDAQABo4GOMIGLMB0GA1UdDgQWBBQLWOWLxkwVN6RAqTCpIb5HNlpW
+/zAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zBJBgNVHR8EQjBAMD6g
+PKA6hjhodHRwOi8vY3JsLmNvbW9kb2NhLmNvbS9DT01PRE9DZXJ0aWZpY2F0aW9u
+QXV0aG9yaXR5LmNybDANBgkqhkiG9w0BAQUFAAOCAQEAPpiem/Yb6dc5t3iuHXIY
+SdOH5EOC6z/JqvWote9VfCFSZfnVDeFs9D6Mk3ORLgLETgdxb8CPOGEIqB6BCsAv
+IC9Bi5HcSEW88cbeunZrM8gALTFGTO3nnc+IlP8zwFboJIYmuNg4ON8qa90SzMc/
+RxdMosIGlgnW2/4/PEZB31jiVg88O8EckzXZOFKs7sjsLjBOlDW0JB9LeGna8gI4
+zJVSk/BwJVmcIGfE7vmLV2H0knZ9P4SNVbfo5azV8fUZVqZa+5Acr5Pr5RzUZ5dd
+BA6+C4OmF4O5MBKgxTMVBbkN+8cFduPYSo38NBejxiEovjBFMR7HeL5YYTisO+IB
+ZQ==
+-----END CERTIFICATE-----
+
+Network Solutions Certificate Authority
+=======================================
+
+-----BEGIN CERTIFICATE-----
+MIID5jCCAs6gAwIBAgIQV8szb8JcFuZHFhfjkDFo4DANBgkqhkiG9w0BAQUFADBi
+MQswCQYDVQQGEwJVUzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMu
+MTAwLgYDVQQDEydOZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3Jp
+dHkwHhcNMDYxMjAxMDAwMDAwWhcNMjkxMjMxMjM1OTU5WjBiMQswCQYDVQQGEwJV
+UzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMuMTAwLgYDVQQDEydO
+ZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwggEiMA0GCSqG
+SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDkvH6SMG3G2I4rC7xGzuAnlt7e+foS0zwz
+c7MEL7xxjOWftiJgPl9dzgn/ggwbmlFQGiaJ3dVhXRncEg8tCqJDXRfQNJIg6nPP
+OCwGJgl6cvf6UDL4wpPTaaIjzkGxzOTVHzbRijr4jGPiFFlp7Q3Tf2vouAPlT2rl
+mGNpSAW+Lv8ztumXWWn4Zxmuk2GWRBXTcrA/vGp97Eh/jcOrqnErU2lBUzS1sLnF
+BgrEsEX1QV1uiUV7PTsmjHTC5dLRfbIR1PtYMiKagMnc/Qzpf14Dl847ABSHJ3A4
+qY5usyd2mFHgBeMhqxrVhSI8KbWaFsWAqPS7azCPL0YCorEMIuDTAgMBAAGjgZcw
+gZQwHQYDVR0OBBYEFCEwyfsA106Y2oeqKtCnLrFAMadMMA4GA1UdDwEB/wQEAwIB
+BjAPBgNVHRMBAf8EBTADAQH/MFIGA1UdHwRLMEkwR6BFoEOGQWh0dHA6Ly9jcmwu
+bmV0c29sc3NsLmNvbS9OZXR3b3JrU29sdXRpb25zQ2VydGlmaWNhdGVBdXRob3Jp
+dHkuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQC7rkvnt1frf6ott3NHhWrB5KUd5Oc8
+6fRZZXe1eltajSU24HqXLjjAV2CDmAaDn7l2em5Q4LqILPxFzBiwmZVRDuwduIj/
+h1AcgsLj4DKAv6ALR8jDMe+ZZzKATxcheQxpXN5eNK4CtSbqUN9/GGUsyfJj4akH
+/nxxH2szJGoeBfcFaMBqEssuXmHLrijTfsK0ZpEmXzwuJF/LWA/rKOyvEZbz3Htv
+wKeI8lN3s2Berq4o2jUsbzRF0ybh3uxbTydrFny9RAQYgrOJeRcQcT16ohZO9QHN
+pGxlaKFJdlxDydi8NmdspZS11My5vWo1ViHe2MPr+8ukYEywVaCge1ey
+-----END CERTIFICATE-----
+
+COMODO ECC Certification Authority
+==================================
+
+-----BEGIN CERTIFICATE-----
+MIICiTCCAg+gAwIBAgIQH0evqmIAcFBUTAGem2OZKjAKBggqhkjOPQQDAzCBhTEL
+MAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE
+BxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNVBAMT
+IkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwMzA2MDAw
+MDAwWhcNMzgwMTE4MjM1OTU5WjCBhTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdy
+ZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09N
+T0RPIENBIExpbWl0ZWQxKzApBgNVBAMTIkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlv
+biBBdXRob3JpdHkwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQDR3svdcmCFYX7deSR
+FtSrYpn1PlILBs5BAH+X4QokPB0BBO490o0JlwzgdeT6+3eKKvUDYEs2ixYjFq0J
+cfRK9ChQtP6IHG4/bC8vCVlbpVsLM5niwz2J+Wos77LTBumjQjBAMB0GA1UdDgQW
+BBR1cacZSBm8nZ3qQUfflMRId5nTeTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/
+BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjEA7wNbeqy3eApyt4jf/7VGFAkK+qDm
+fQjGGoe9GKhzvSbKYAydzpmfz1wPMOG+FDHqAjAU9JM8SaczepBGR7NjfRObTrdv
+GDeAU/7dIOA1mjbRxwG55tzd8/8dLDoWV9mSOdY=
+-----END CERTIFICATE-----
+
+TC TrustCenter Class 2 CA II
+============================
+
+-----BEGIN CERTIFICATE-----
+MIIEqjCCA5KgAwIBAgIOLmoAAQACH9dSISwRXDswDQYJKoZIhvcNAQEFBQAwdjEL
+MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxIjAgBgNV
+BAsTGVRDIFRydXN0Q2VudGVyIENsYXNzIDIgQ0ExJTAjBgNVBAMTHFRDIFRydXN0
+Q2VudGVyIENsYXNzIDIgQ0EgSUkwHhcNMDYwMTEyMTQzODQzWhcNMjUxMjMxMjI1
+OTU5WjB2MQswCQYDVQQGEwJERTEcMBoGA1UEChMTVEMgVHJ1c3RDZW50ZXIgR21i
+SDEiMCAGA1UECxMZVEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMiBDQTElMCMGA1UEAxMc
+VEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMiBDQSBJSTCCASIwDQYJKoZIhvcNAQEBBQAD
+ggEPADCCAQoCggEBAKuAh5uO8MN8h9foJIIRszzdQ2Lu+MNF2ujhoF/RKrLqk2jf
+tMjWQ+nEdVl//OEd+DFwIxuInie5e/060smp6RQvkL4DUsFJzfb95AhmC1eKokKg
+uNV/aVyQMrKXDcpK3EY+AlWJU+MaWss2xgdW94zPEfRMuzBwBJWl9jmM/XOBCH2J
+XjIeIqkiRUuwZi4wzJ9l/fzLganx4Duvo4bRierERXlQXa7pIXSSTYtZgo+U4+lK
+8edJsBTj9WLL1XK9H7nSn6DNqPoByNkN39r8R52zyFTfSUrxIan+GE7uSNQZu+99
+5OKdy1u2bv/jzVrndIIFuoAlOMvkaZ6vQaoahPUCAwEAAaOCATQwggEwMA8GA1Ud
+EwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBTjq1RMgKHbVkO3
+kUrL84J6E1wIqzCB7QYDVR0fBIHlMIHiMIHfoIHcoIHZhjVodHRwOi8vd3d3LnRy
+dXN0Y2VudGVyLmRlL2NybC92Mi90Y19jbGFzc18yX2NhX0lJLmNybIaBn2xkYXA6
+Ly93d3cudHJ1c3RjZW50ZXIuZGUvQ049VEMlMjBUcnVzdENlbnRlciUyMENsYXNz
+JTIwMiUyMENBJTIwSUksTz1UQyUyMFRydXN0Q2VudGVyJTIwR21iSCxPVT1yb290
+Y2VydHMsREM9dHJ1c3RjZW50ZXIsREM9ZGU/Y2VydGlmaWNhdGVSZXZvY2F0aW9u
+TGlzdD9iYXNlPzANBgkqhkiG9w0BAQUFAAOCAQEAjNfffu4bgBCzg/XbEeprS6iS
+GNn3Bzn1LL4GdXpoUxUc6krtXvwjshOg0wn/9vYua0Fxec3ibf2uWWuFHbhOIprt
+ZjluS5TmVfwLG4t3wVMTZonZKNaL80VKY7f9ewthXbhtvsPcW3nS7Yblok2+XnR8
+au0WOB9/WIFaGusyiC2y8zl3gK9etmF1KdsjTYjKUCjLhdLTEKJZbtOTVAB6okaV
+hgWcqRmY5TFyDADiZ9lA4CQze28suVyrZZ0srHbqNZn1l7kPJOzHdiEoZa5X6AeI
+dUpWoNIFOqTmjZKILPPy4cHGYdtBxceb9w4aUUXCYWvcZCcXjFq32nQozZfkvQ==
+-----END CERTIFICATE-----
+
+TC TrustCenter Class 3 CA II
+============================
+
+-----BEGIN CERTIFICATE-----
+MIIEqjCCA5KgAwIBAgIOSkcAAQAC5aBd1j8AUb8wDQYJKoZIhvcNAQEFBQAwdjEL
+MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxIjAgBgNV
+BAsTGVRDIFRydXN0Q2VudGVyIENsYXNzIDMgQ0ExJTAjBgNVBAMTHFRDIFRydXN0
+Q2VudGVyIENsYXNzIDMgQ0EgSUkwHhcNMDYwMTEyMTQ0MTU3WhcNMjUxMjMxMjI1
+OTU5WjB2MQswCQYDVQQGEwJERTEcMBoGA1UEChMTVEMgVHJ1c3RDZW50ZXIgR21i
+SDEiMCAGA1UECxMZVEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMyBDQTElMCMGA1UEAxMc
+VEMgVHJ1c3RDZW50ZXIgQ2xhc3MgMyBDQSBJSTCCASIwDQYJKoZIhvcNAQEBBQAD
+ggEPADCCAQoCggEBALTgu1G7OVyLBMVMeRwjhjEQY0NVJz/GRcekPewJDRoeIMJW
+Ht4bNwcwIi9v8Qbxq63WyKthoy9DxLCyLfzDlml7forkzMA5EpBCYMnMNWju2l+Q
+Vl/NHE1bWEnrDgFPZPosPIlY2C8u4rBo6SI7dYnWRBpl8huXJh0obazovVkdKyT2
+1oQDZogkAHhg8fir/gKya/si+zXmFtGt9i4S5Po1auUZuV3bOx4a+9P/FRQI2Alq
+ukWdFHlgfa9Aigdzs5OW03Q0jTo3Kd5c7PXuLjHCINy+8U9/I1LZW+Jk2ZyqBwi1
+Rb3R0DHBq1SfqdLDYmAD8bs5SpJKPQq5ncWg/jcCAwEAAaOCATQwggEwMA8GA1Ud
+EwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBTUovyfs8PYA9NX
+XAek0CSnwPIA1DCB7QYDVR0fBIHlMIHiMIHfoIHcoIHZhjVodHRwOi8vd3d3LnRy
+dXN0Y2VudGVyLmRlL2NybC92Mi90Y19jbGFzc18zX2NhX0lJLmNybIaBn2xkYXA6
+Ly93d3cudHJ1c3RjZW50ZXIuZGUvQ049VEMlMjBUcnVzdENlbnRlciUyMENsYXNz
+JTIwMyUyMENBJTIwSUksTz1UQyUyMFRydXN0Q2VudGVyJTIwR21iSCxPVT1yb290
+Y2VydHMsREM9dHJ1c3RjZW50ZXIsREM9ZGU/Y2VydGlmaWNhdGVSZXZvY2F0aW9u
+TGlzdD9iYXNlPzANBgkqhkiG9w0BAQUFAAOCAQEANmDkcPcGIEPZIxpC8vijsrlN
+irTzwppVMXzEO2eatN9NDoqTSheLG43KieHPOh6sHfGcMrSOWXaiQYUlN6AT0PV8
+TtXqluJucsG7Kv5sbviRmEb8yRtXW+rIGjs/sFGYPAfaLFkB2otE6OF0/ado3VS6
+g0bsyEa1+K+XwDsJHI/OcpY9M1ZwvJbL2NV9IJqDnxrcOfHFcqMRA/07QlIp2+gB
+95tejNaNhk4Z+rwcvsUhpYeeeC422wlxo3I0+GzjBgnyXlal092Y+tTmBvTwtiBj
+S+opvaqCZh77gaqnN60TGOaSw4HBM7uIHqHn4rS9MWwOUT1v+5ZWgOI2F9Hc5A==
+-----END CERTIFICATE-----
+
+TC TrustCenter Universal CA I
+=============================
+
+-----BEGIN CERTIFICATE-----
+MIID3TCCAsWgAwIBAgIOHaIAAQAC7LdggHiNtgYwDQYJKoZIhvcNAQEFBQAweTEL
+MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxJDAiBgNV
+BAsTG1RDIFRydXN0Q2VudGVyIFVuaXZlcnNhbCBDQTEmMCQGA1UEAxMdVEMgVHJ1
+c3RDZW50ZXIgVW5pdmVyc2FsIENBIEkwHhcNMDYwMzIyMTU1NDI4WhcNMjUxMjMx
+MjI1OTU5WjB5MQswCQYDVQQGEwJERTEcMBoGA1UEChMTVEMgVHJ1c3RDZW50ZXIg
+R21iSDEkMCIGA1UECxMbVEMgVHJ1c3RDZW50ZXIgVW5pdmVyc2FsIENBMSYwJAYD
+VQQDEx1UQyBUcnVzdENlbnRlciBVbml2ZXJzYWwgQ0EgSTCCASIwDQYJKoZIhvcN
+AQEBBQADggEPADCCAQoCggEBAKR3I5ZEr5D0MacQ9CaHnPM42Q9e3s9B6DGtxnSR
+JJZ4Hgmgm5qVSkr1YnwCqMqs+1oEdjneX/H5s7/zA1hV0qq34wQi0fiU2iIIAI3T
+fCZdzHd55yx4Oagmcw6iXSVphU9VDprvxrlE4Vc93x9UIuVvZaozhDrzznq+VZeu
+jRIPFDPiUHDDSYcTvFHe15gSWu86gzOSBnWLknwSaHtwag+1m7Z3W0hZneTvWq3z
+wZ7U10VOylY0Ibw+F1tvdwxIAUMpsN0/lm7mlaoMwCC2/T42J5zjXM9OgdwZu5GQ
+fezmlwQek8wiSdeXhrYTCjxDI3d+8NzmzSQfO4ObNDqDNOMCAwEAAaNjMGEwHwYD
+VR0jBBgwFoAUkqR1LKSevoFE63n8isWVpesQdXMwDwYDVR0TAQH/BAUwAwEB/zAO
+BgNVHQ8BAf8EBAMCAYYwHQYDVR0OBBYEFJKkdSyknr6BROt5/IrFlaXrEHVzMA0G
+CSqGSIb3DQEBBQUAA4IBAQAo0uCG1eb4e/CX3CJrO5UUVg8RMKWaTzqwOuAGy2X1
+7caXJ/4l8lfmXpWMPmRgFVp/Lw0BxbFg/UU1z/CyvwbZ71q+s2IhtNerNXxTPqYn
+8aEt2hojnczd7Dwtnic0XQ/CNnm8yUpiLe1r2X1BQ3y2qsrtYbE3ghUJGooWMNjs
+ydZHcnhLEEYUjl8Or+zHL6sQ17bxbuyGssLoDZJz3KL0Dzq/YSMQiZxIQG5wALPT
+ujdEWBF6AmqI8Dc08BnprNRlc/ZpjGSUOnmFKbAWKwyCPwacx/0QK54PLLae4xW/
+2TYcuiUaUj0a7CIMHOCkoj3w6DnPgcB77V0fb8XQC9eY
+-----END CERTIFICATE-----
+
+Cybertrust Global Root
+======================
+
+-----BEGIN CERTIFICATE-----
+MIIDoTCCAomgAwIBAgILBAAAAAABD4WqLUgwDQYJKoZIhvcNAQEFBQAwOzEYMBYG
+A1UEChMPQ3liZXJ0cnVzdCwgSW5jMR8wHQYDVQQDExZDeWJlcnRydXN0IEdsb2Jh
+bCBSb290MB4XDTA2MTIxNTA4MDAwMFoXDTIxMTIxNTA4MDAwMFowOzEYMBYGA1UE
+ChMPQ3liZXJ0cnVzdCwgSW5jMR8wHQYDVQQDExZDeWJlcnRydXN0IEdsb2JhbCBS
+b290MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA+Mi8vRRQZhP/8NN5
+7CPytxrHjoXxEnOmGaoQ25yiZXRadz5RfVb23CO21O1fWLE3TdVJDm71aofW0ozS
+J8bi/zafmGWgE07GKmSb1ZASzxQG9Dvj1Ci+6A74q05IlG2OlTEQXO2iLb3VOm2y
+HLtgwEZLAfVJrn5GitB0jaEMAs7u/OePuGtm839EAL9mJRQr3RAwHQeWP032a7iP
+t3sMpTjr3kfb1V05/Iin89cqdPHoWqI7n1C6poxFNcJQZZXcY4Lv3b93TZxiyWNz
+FtApD0mpSPCzqrdsxacwOUBdrsTiXSZT8M4cIwhhqJQZugRiQOwfOHB3EgZxpzAY
+XSUnpQIDAQABo4GlMIGiMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/
+MB0GA1UdDgQWBBS2CHsNesysIEyGVjJez6tuhS1wVzA/BgNVHR8EODA2MDSgMqAw
+hi5odHRwOi8vd3d3Mi5wdWJsaWMtdHJ1c3QuY29tL2NybC9jdC9jdHJvb3QuY3Js
+MB8GA1UdIwQYMBaAFLYIew16zKwgTIZWMl7Pq26FLXBXMA0GCSqGSIb3DQEBBQUA
+A4IBAQBW7wojoFROlZfJ+InaRcHUowAl9B8Tq7ejhVhpwjCt2BWKLePJzYFa+HMj
+Wqd8BfP9IjsO0QbE2zZMcwSO5bAi5MXzLqXZI+O4Tkogp24CJJ8iYGd7ix1yCcUx
+XOl5n4BHPa2hCwcUPUf/A2kaDAtE52Mlp3+yybh2hO0j9n0Hq0V+09+zv+mKts2o
+omcrUtW3ZfA5TGOgkXmTUg9U3YO7n9GPp1Nzw8v/MOx8BLjYRB+TX3EJIrduPuoc
+A06dGiBh+4E37F78CkWr1+cXVdCg6mCbpvbjjFspwgZgFJ0tl0ypkxWdYcQBX0jW
+WL1WMRJOEcgh4LMRkWXbtKaIOM5V
+-----END CERTIFICATE-----
+
+GeoTrust Primary Certification Authority - G3
+=============================================
+
+-----BEGIN CERTIFICATE-----
+MIID/jCCAuagAwIBAgIQFaxulBmyeUtB9iepwxgPHzANBgkqhkiG9w0BAQsFADCB
+mDELMAkGA1UEBhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xOTA3BgNVBAsT
+MChjKSAyMDA4IEdlb1RydXN0IEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25s
+eTE2MDQGA1UEAxMtR2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhv
+cml0eSAtIEczMB4XDTA4MDQwMjAwMDAwMFoXDTM3MTIwMTIzNTk1OVowgZgxCzAJ
+BgNVBAYTAlVTMRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMTkwNwYDVQQLEzAoYykg
+MjAwOCBHZW9UcnVzdCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxNjA0
+BgNVBAMTLUdlb1RydXN0IFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkg
+LSBHMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANziXmJYHTNXOTIz
++uvLh4yn1ErdBojqZI4xmKU4kB6Yzy5jK/BGvESyiaHAKAxJcCGVn2TAppMSAmUm
+hsalifD614SgcK9PGpc/BkTVyetyEH3kMSj7HGHmKAdEc5IiaacDiGydY8hS2pgn
+5whMcD60yRLBxWeDXTPzAxHsatBT4tG6NmCUgLthY2xbF37fQJQeqw3CIShwiP/W
+JmxsYAQlTlV+fe+/lEjetx3dcI0FX4ilm/LC7urRQEFtYjgdVgbFA0dRIBn8exAL
+DmKudlW/X3e+PkkBUz2YJQN2JFodtNuJ6nnltrM7P7pMKEF/BqxqjsHQ9gUdfeZC
+huOl1UcCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYw
+HQYDVR0OBBYEFMR5yo6hTgMdHNxr2zFblD4/MH8tMA0GCSqGSIb3DQEBCwUAA4IB
+AQAtxRPPVoB7eni9n64smefv2t+UXglpp+duaIy9cr5HqQ6XErhK8WTTOd8lNNTB
+zU6B8A8ExCSzNJbGpqow32hhc9f5joWJ7w5elShKKiePEI4ufIbEAp7aDHdlDkQN
+kv39sxY2+hENHYwOB4lqKVb3cvTdFZx3NWZXqxNT2I7BQMXXExZacse3aQHEerGD
+AWh9jUGhlBjBJVz88P6DAod8DQ3PLghcSkANPuyBYeYk28rgDi0Hsj5W3I31QYUH
+SJsMC8tJP33st/3LjWeJGqvtux6jAAgIFyqCXDFdRootD4abdNlF+9RAsXqqaC2G
+spki4cErx5z481+oghLrGREt
+-----END CERTIFICATE-----
+
+thawte Primary Root CA - G2
+===========================
+
+-----BEGIN CERTIFICATE-----
+MIICiDCCAg2gAwIBAgIQNfwmXNmET8k9Jj1Xm67XVjAKBggqhkjOPQQDAzCBhDEL
+MAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjE4MDYGA1UECxMvKGMp
+IDIwMDcgdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxJDAi
+BgNVBAMTG3RoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EgLSBHMjAeFw0wNzExMDUwMDAw
+MDBaFw0zODAxMTgyMzU5NTlaMIGEMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMdGhh
+d3RlLCBJbmMuMTgwNgYDVQQLEy8oYykgMjAwNyB0aGF3dGUsIEluYy4gLSBGb3Ig
+YXV0aG9yaXplZCB1c2Ugb25seTEkMCIGA1UEAxMbdGhhd3RlIFByaW1hcnkgUm9v
+dCBDQSAtIEcyMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEotWcgnuVnfFSeIf+iha/
+BebfowJPDQfGAFG6DAJSLSKkQjnE/o/qycG+1E3/n3qe4rF8mq2nhglzh9HnmuN6
+papu+7qzcMBniKI11KOasf2twu8x+qi58/sIxpHR+ymVo0IwQDAPBgNVHRMBAf8E
+BTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUmtgAMADna3+FGO6Lts6K
+DPgR4bswCgYIKoZIzj0EAwMDaQAwZgIxAN344FdHW6fmCsO99YCKlzUNG4k8VIZ3
+KMqh9HneteY4sPBlcIx/AlTCv//YoT7ZzwIxAMSNlPzcU9LcnXgWHxUzI1NS41ox
+XZ3Krr0TKUQNJ1uo52icEvdYPy5yAlejj6EULg==
+-----END CERTIFICATE-----
+
+thawte Primary Root CA - G3
+===========================
+
+-----BEGIN CERTIFICATE-----
+MIIEKjCCAxKgAwIBAgIQYAGXt0an6rS0mtZLL/eQ+zANBgkqhkiG9w0BAQsFADCB
+rjELMAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjEoMCYGA1UECxMf
+Q2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjE4MDYGA1UECxMvKGMpIDIw
+MDggdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxJDAiBgNV
+BAMTG3RoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EgLSBHMzAeFw0wODA0MDIwMDAwMDBa
+Fw0zNzEyMDEyMzU5NTlaMIGuMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMdGhhd3Rl
+LCBJbmMuMSgwJgYDVQQLEx9DZXJ0aWZpY2F0aW9uIFNlcnZpY2VzIERpdmlzaW9u
+MTgwNgYDVQQLEy8oYykgMjAwOCB0aGF3dGUsIEluYy4gLSBGb3IgYXV0aG9yaXpl
+ZCB1c2Ugb25seTEkMCIGA1UEAxMbdGhhd3RlIFByaW1hcnkgUm9vdCBDQSAtIEcz
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsr8nLPvb2FvdeHsbnndm
+gcs+vHyu86YnmjSjaDFxODNi5PNxZnmxqWWjpYvVj2AtP0LMqmsywCPLLEHd5N/8
+YZzic7IilRFDGF/Eth9XbAoFWCLINkw6fKXRz4aviKdEAhN0cXMKQlkC+BsUa0Lf
+b1+6a4KinVvnSr0eAXLbS3ToO39/fR8EtCab4LRarEc9VbjXsCZSKAExQGbY2SS9
+9irY7CFJXJv2eul/VTV+lmuNk5Mny5K76qxAwJ/C+IDPXfRa3M50hqY+bAtTyr2S
+zhkGcuYMXDhpxwTWvGzOW/b3aJzcJRVIiKHpqfiYnODz1TEoYRFsZ5aNOZnLwkUk
+OQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNV
+HQ4EFgQUrWyqlGCc7eT/+j4KdCtjA/e2Wb8wDQYJKoZIhvcNAQELBQADggEBABpA
+2JVlrAmSicY59BDlqQ5mU1143vokkbvnRFHfxhY0Cu9qRFHqKweKA3rD6z8KLFIW
+oCtDuSWQP3CpMyVtRRooOyfPqsMpQhvfO0zAMzRbQYi/aytlryjvsvXDqmbOe1bu
+t8jLZ8HJnBoYuMTDSQPxYA5QzUbF83d597YV4Djbxy8ooAw/dyZ02SUS2jHaGh7c
+KUGRIjxpp7sC8rZcJwOJ9Abqm+RyguOhCcHpABnTPtRwa7pxpqpYrvS76Wy274fM
+m7v/OeZWYdMKp8RcTGB7BXcmer/YB1IsYvdwY9k5vG8cwnncdimvzsUsZAReiDZu
+MdRAGmI0Nj81Aa6sY6A=
+-----END CERTIFICATE-----
+
+GeoTrust Primary Certification Authority - G2
+=============================================
+
+-----BEGIN CERTIFICATE-----
+MIICrjCCAjWgAwIBAgIQPLL0SAoA4v7rJDteYD7DazAKBggqhkjOPQQDAzCBmDEL
+MAkGA1UEBhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xOTA3BgNVBAsTMChj
+KSAyMDA3IEdlb1RydXN0IEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25seTE2
+MDQGA1UEAxMtR2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0
+eSAtIEcyMB4XDTA3MTEwNTAwMDAwMFoXDTM4MDExODIzNTk1OVowgZgxCzAJBgNV
+BAYTAlVTMRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMTkwNwYDVQQLEzAoYykgMjAw
+NyBHZW9UcnVzdCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxNjA0BgNV
+BAMTLUdlb1RydXN0IFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgLSBH
+MjB2MBAGByqGSM49AgEGBSuBBAAiA2IABBWx6P0DFUPlrOuHNxFi79KDNlJ9RVcL
+So17VDs6bl8VAsBQps8lL33KSLjHUGMcKiEIfJo22Av+0SbFWDEwKCXzXV2juLal
+tJLtbCyf691DiaI8S0iRHVDsJt/WYC69IaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAO
+BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFBVfNVdRVfslsq0DafwBo/q+EVXVMAoG
+CCqGSM49BAMDA2cAMGQCMGSWWaboCd6LuvpaiIjwH5HTRqjySkwCY/tsXzjbLkGT
+qQ7mndwxHLKgpxgceeHHNgIwOlavmnRs9vuD4DPTCF+hnMJbn0bWtsuRBmOiBucz
+rD6ogRLQy7rQkgu2npaqBA+K
+-----END CERTIFICATE-----
+
+VeriSign Universal Root Certification Authority
+===============================================
+
+-----BEGIN CERTIFICATE-----
+MIIEuTCCA6GgAwIBAgIQQBrEZCGzEyEDDrvkEhrFHTANBgkqhkiG9w0BAQsFADCB
+vTELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL
+ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwOCBWZXJp
+U2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MTgwNgYDVQQDEy9W
+ZXJpU2lnbiBVbml2ZXJzYWwgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAe
+Fw0wODA0MDIwMDAwMDBaFw0zNzEyMDEyMzU5NTlaMIG9MQswCQYDVQQGEwJVUzEX
+MBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlTaWduIFRydXN0
+IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAyMDA4IFZlcmlTaWduLCBJbmMuIC0gRm9y
+IGF1dGhvcml6ZWQgdXNlIG9ubHkxODA2BgNVBAMTL1ZlcmlTaWduIFVuaXZlcnNh
+bCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEF
+AAOCAQ8AMIIBCgKCAQEAx2E3XrEBNNti1xWb/1hajCMj1mCOkdeQmIN65lgZOIzF
+9uVkhbSicfvtvbnazU0AtMgtc6XHaXGVHzk8skQHnOgO+k1KxCHfKWGPMiJhgsWH
+H26MfF8WIFFE0XBPV+rjHOPMee5Y2A7Cs0WTwCznmhcrewA3ekEzeOEz4vMQGn+H
+LL729fdC4uW/h2KJXwBL38Xd5HVEMkE6HnFuacsLdUYI0crSK5XQz/u5QGtkjFdN
+/BMReYTtXlT2NJ8IAfMQJQYXStrxHXpma5hgZqTZ79IugvHw7wnqRMkVauIDbjPT
+rJ9VAMf2CGqUuV/c4DPxhGD5WycRtPwW8rtWaoAljQIDAQABo4GyMIGvMA8GA1Ud
+EwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMG0GCCsGAQUFBwEMBGEwX6FdoFsw
+WTBXMFUWCWltYWdlL2dpZjAhMB8wBwYFKw4DAhoEFI/l0xqGrI2Oa8PPgGrUSBgs
+exkuMCUWI2h0dHA6Ly9sb2dvLnZlcmlzaWduLmNvbS92c2xvZ28uZ2lmMB0GA1Ud
+DgQWBBS2d/ppSEefUxLVwuoHMnYH0ZcHGTANBgkqhkiG9w0BAQsFAAOCAQEASvj4
+sAPmLGd75JR3Y8xuTPl9Dg3cyLk1uXBPY/ok+myDjEedO2Pzmvl2MpWRsXe8rJq+
+seQxIcaBlVZaDrHC1LGmWazxY8u4TB1ZkErvkBYoH1quEPuBUDgMbMzxPcP1Y+Oz
+4yHJJDnp/RVmRvQbEdBNc6N9Rvk97ahfYtTxP/jgdFcrGJ2BtMQo2pSXpXDrrB2+
+BxHw1dvd5Yzw1TKwg+ZX4o+/vqGqvz0dtdQ46tewXDpPaj+PwGZsY6rp2aQW9IHR
+lRQOfc2VNNnSj3BzgXucfr2YYdhFh5iQxeuGMMY1v/D/w1WIg0vvBZIGcfK4mJO3
+7M2CYfE45k+XmCpajQ==
+-----END CERTIFICATE-----
+
+VeriSign Class 3 Public Primary Certification Authority - G4
+============================================================
+
+-----BEGIN CERTIFICATE-----
+MIIDhDCCAwqgAwIBAgIQL4D+I4wOIg9IZxIokYesszAKBggqhkjOPQQDAzCByjEL
+MAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZW
+ZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNyBWZXJpU2ln
+biwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJp
+U2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9y
+aXR5IC0gRzQwHhcNMDcxMTA1MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCByjELMAkG
+A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZWZXJp
+U2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNyBWZXJpU2lnbiwg
+SW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJpU2ln
+biBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5
+IC0gRzQwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAASnVnp8Utpkmw4tXNherJI9/gHm
+GUo9FANL+mAnINmDiWn6VMaaGF5VKmTeBvaNSjutEDxlPZCIBIngMGGzrl0Bp3ve
+fLK+ymVhAIau2o970ImtTR1ZmkGxvEeA3J5iw/mjgbIwga8wDwYDVR0TAQH/BAUw
+AwEB/zAOBgNVHQ8BAf8EBAMCAQYwbQYIKwYBBQUHAQwEYTBfoV2gWzBZMFcwVRYJ
+aW1hZ2UvZ2lmMCEwHzAHBgUrDgMCGgQUj+XTGoasjY5rw8+AatRIGCx7GS4wJRYj
+aHR0cDovL2xvZ28udmVyaXNpZ24uY29tL3ZzbG9nby5naWYwHQYDVR0OBBYEFLMW
+kf3upm7ktS5Jj4d4gYDs5bG1MAoGCCqGSM49BAMDA2gAMGUCMGYhDBgmYFo4e1ZC
+4Kf8NoRRkSAsdk1DPcQdhCPQrNZ8NQbOzWm9kA3bbEhCHQ6qQgIxAJw9SDkjOVga
+FRJZap7v1VmyHVIsmXHNxynfGyphe3HR3vPA5Q06Sqotp9iGKt0uEA==
+-----END CERTIFICATE-----
+
+GlobalSign Root CA - R3
+=======================
+
+-----BEGIN CERTIFICATE-----
+MIIDXzCCAkegAwIBAgILBAAAAAABIVhTCKIwDQYJKoZIhvcNAQELBQAwTDEgMB4G
+A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjMxEzARBgNVBAoTCkdsb2JhbFNp
+Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDkwMzE4MTAwMDAwWhcNMjkwMzE4
+MTAwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMzETMBEG
+A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI
+hvcNAQEBBQADggEPADCCAQoCggEBAMwldpB5BngiFvXAg7aEyiie/QV2EcWtiHL8
+RgJDx7KKnQRfJMsuS+FggkbhUqsMgUdwbN1k0ev1LKMPgj0MK66X17YUhhB5uzsT
+gHeMCOFJ0mpiLx9e+pZo34knlTifBtc+ycsmWQ1z3rDI6SYOgxXG71uL0gRgykmm
+KPZpO/bLyCiR5Z2KYVc3rHQU3HTgOu5yLy6c+9C7v/U9AOEGM+iCK65TpjoWc4zd
+QQ4gOsC0p6Hpsk+QLjJg6VfLuQSSaGjlOCZgdbKfd/+RFO+uIEn8rUAVSNECMWEZ
+XriX7613t2Saer9fwRPvm2L7DWzgVGkWqQPabumDk3F2xmmFghcCAwEAAaNCMEAw
+DgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFI/wS3+o
+LkUkrk1Q+mOai97i3Ru8MA0GCSqGSIb3DQEBCwUAA4IBAQBLQNvAUKr+yAzv95ZU
+RUm7lgAJQayzE4aGKAczymvmdLm6AC2upArT9fHxD4q/c2dKg8dEe3jgr25sbwMp
+jjM5RcOO5LlXbKr8EpbsU8Yt5CRsuZRj+9xTaGdWPoO4zzUhw8lo/s7awlOqzJCK
+6fBdRoyV3XpYKBovHd7NADdBj+1EbddTKJd+82cEHhXXipa0095MJ6RMG3NzdvQX
+mcIfeg7jLQitChws/zyrVQ4PkX4268NXSb7hLi18YIvDQVETI53O9zJrlAGomecs
+Mx86OyXShkDOOyyGeMlhLxS67ttVb9+E7gUJTb0o2HLO02JQZR7rkpeDMdmztcpH
+WD9f
+-----END CERTIFICATE-----
+
+TC TrustCenter Universal CA III
+===============================
+
+-----BEGIN CERTIFICATE-----
+MIID4TCCAsmgAwIBAgIOYyUAAQACFI0zFQLkbPQwDQYJKoZIhvcNAQEFBQAwezEL
+MAkGA1UEBhMCREUxHDAaBgNVBAoTE1RDIFRydXN0Q2VudGVyIEdtYkgxJDAiBgNV
+BAsTG1RDIFRydXN0Q2VudGVyIFVuaXZlcnNhbCBDQTEoMCYGA1UEAxMfVEMgVHJ1
+c3RDZW50ZXIgVW5pdmVyc2FsIENBIElJSTAeFw0wOTA5MDkwODE1MjdaFw0yOTEy
+MzEyMzU5NTlaMHsxCzAJBgNVBAYTAkRFMRwwGgYDVQQKExNUQyBUcnVzdENlbnRl
+ciBHbWJIMSQwIgYDVQQLExtUQyBUcnVzdENlbnRlciBVbml2ZXJzYWwgQ0ExKDAm
+BgNVBAMTH1RDIFRydXN0Q2VudGVyIFVuaXZlcnNhbCBDQSBJSUkwggEiMA0GCSqG
+SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDC2pxisLlxErALyBpXsq6DFJmzNEubkKLF
+5+cvAqBNLaT6hdqbJYUtQCggbergvbFIgyIpRJ9Og+41URNzdNW88jBmlFPAQDYv
+DIRlzg9uwliT6CwLOunBjvvya8o84pxOjuT5fdMnnxvVZ3iHLX8LR7PH6MlIfK8v
+zArZQe+f/prhsq75U7Xl6UafYOPfjdN/+5Z+s7Vy+EutCHnNaYlAJ/Uqwa1D7KRT
+yGG299J5KmcYdkhtWyUB0SbFt1dpIxVbYYqt8Bst2a9c8SaQaanVDED1M4BDj5yj
+dipFtK+/fz6HP3bFzSreIMUWWMv5G/UPyw0RUmS40nZid4PxWJ//AgMBAAGjYzBh
+MB8GA1UdIwQYMBaAFFbn4VslQ4Dg9ozhcbyO5YAvxEjiMA8GA1UdEwEB/wQFMAMB
+Af8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBRW5+FbJUOA4PaM4XG8juWAL8RI
+4jANBgkqhkiG9w0BAQUFAAOCAQEAg8ev6n9NCjw5sWi+e22JLumzCecYV42Fmhfz
+dkJQEw/HkG8zrcVJYCtsSVgZ1OK+t7+rSbyUyKu+KGwWaODIl0YgoGhnYIg5IFHY
+aAERzqf2EQf27OysGh+yZm5WZ2B6dF7AbZc2rrUNXWZzwCUyRdhKBgePxLcHsU0G
+DeGl6/R1yrqc0L2z0zIkTO5+4nYES0lT2PLpVDP85XEfPRRclkvxOvIAu2y0+pZV
+CIgJwcyRGSmwIC3/yzikQOEXvnlhgP8HA4ZMTnsGnxGGjYnuJ8Tb4rwZjgvDwxPH
+LQNjO9Po5KIqwoIIlBZU8O8fJ5AluA0OKBtHd0e9HKgl8ZS0Zg==
+-----END CERTIFICATE-----
+
+Go Daddy Root Certificate Authority - G2
+========================================
+
+-----BEGIN CERTIFICATE-----
+MIIDxTCCAq2gAwIBAgIBADANBgkqhkiG9w0BAQsFADCBgzELMAkGA1UEBhMCVVMx
+EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAYBgNVBAoT
+EUdvRGFkZHkuY29tLCBJbmMuMTEwLwYDVQQDEyhHbyBEYWRkeSBSb290IENlcnRp
+ZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAwMFoXDTM3MTIzMTIz
+NTk1OVowgYMxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMRMwEQYDVQQH
+EwpTY290dHNkYWxlMRowGAYDVQQKExFHb0RhZGR5LmNvbSwgSW5jLjExMC8GA1UE
+AxMoR28gRGFkZHkgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIw
+DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9xYgjx+lk09xvJGKP3gElY6SKD
+E6bFIEMBO4Tx5oVJnyfq9oQbTqC023CYxzIBsQU+B07u9PpPL1kwIuerGVZr4oAH
+/PMWdYA5UXvl+TW2dE6pjYIT5LY/qQOD+qK+ihVqf94Lw7YZFAXK6sOoBJQ7Rnwy
+DfMAZiLIjWltNowRGLfTshxgtDj6AozO091GB94KPutdfMh8+7ArU6SSYmlRJQVh
+GkSBjCypQ5Yj36w6gZoOKcUcqeldHraenjAKOc7xiID7S13MMuyFYkMlNAJWJwGR
+tDtwKj9useiciAF9n9T521NtYJ2/LOdYq7hfRvzOxBsDPAnrSTFcaUaz4EcCAwEA
+AaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE
+FDqahQcQZyi27/a9BUFuIMGU2g/eMA0GCSqGSIb3DQEBCwUAA4IBAQCZ21151fmX
+WWcDYfF+OwYxdS2hII5PZYe096acvNjpL9DbWu7PdIxztDhC2gV7+AJ1uP2lsdeu
+9tfeE8tTEH6KRtGX+rcuKxGrkLAngPnon1rpN5+r5N9ss4UXnT3ZJE95kTXWXwTr
+gIOrmgIttRD02JDHBHNA7XIloKmf7J6raBKZV8aPEjoJpL1E/QYVN8Gb5DKj7Tjo
+2GTzLH4U/ALqn83/B2gX2yKQOC16jdFU8WnjXzPKej17CuPKf1855eJ1usV2GDPO
+LPAvTK33sefOT6jEm0pUBsV/fdUID+Ic/n4XuKxe9tQWskMJDE32p2u0mYRlynqI
+4uJEvlz36hz1
+-----END CERTIFICATE-----
+
+Starfield Root Certificate Authority - G2
+=========================================
+
+-----BEGIN CERTIFICATE-----
+MIID3TCCAsWgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBjzELMAkGA1UEBhMCVVMx
+EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT
+HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAMTKVN0YXJmaWVs
+ZCBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAw
+MFoXDTM3MTIzMTIzNTk1OVowgY8xCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6
+b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFyZmllbGQgVGVj
+aG5vbG9naWVzLCBJbmMuMTIwMAYDVQQDEylTdGFyZmllbGQgUm9vdCBDZXJ0aWZp
+Y2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
+ggEBAL3twQP89o/8ArFvW59I2Z154qK3A2FWGMNHttfKPTUuiUP3oWmb3ooa/RMg
+nLRJdzIpVv257IzdIvpy3Cdhl+72WoTsbhm5iSzchFvVdPtrX8WJpRBSiUZV9Lh1
+HOZ/5FSuS/hVclcCGfgXcVnrHigHdMWdSL5stPSksPNkN3mSwOxGXn/hbVNMYq/N
+Hwtjuzqd+/x5AJhhdM8mgkBj87JyahkNmcrUDnXMN/uLicFZ8WJ/X7NfZTD4p7dN
+dloedl40wOiWVpmKs/B/pM293DIxfJHP4F8R+GuqSVzRmZTRouNjWwl2tVZi4Ut0
+HZbUJtQIBFnQmA4O5t78w+wfkPECAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAO
+BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFHwMMh+n2TB/xH1oo2Kooc6rB1snMA0G
+CSqGSIb3DQEBCwUAA4IBAQARWfolTwNvlJk7mh+ChTnUdgWUXuEok21iXQnCoKjU
+sHU48TRqneSfioYmUeYs0cYtbpUgSpIB7LiKZ3sx4mcujJUDJi5DnUox9g61DLu3
+4jd/IroAow57UvtruzvE03lRTs2Q9GcHGcg8RnoNAX3FWOdt5oUwF5okxBDgBPfg
+8n/Uqgr/Qh037ZTlZFkSIHc40zI+OIF1lnP6aI+xy84fxez6nH7PfrHxBy22/L/K
+pL/QlwVKvOoYKAKQvVR4CSFx09F9HdkWsKlhPdAKACL8x3vLCWRFCztAgfd9fDL1
+mMpYjn0q7pBZc2T5NnReJaH1ZgUufzkVqSr7UIuOhWn0
+-----END CERTIFICATE-----
+
+Starfield Services Root Certificate Authority - G2
+==================================================
+
+-----BEGIN CERTIFICATE-----
+MIID7zCCAtegAwIBAgIBADANBgkqhkiG9w0BAQsFADCBmDELMAkGA1UEBhMCVVMx
+EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT
+HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xOzA5BgNVBAMTMlN0YXJmaWVs
+ZCBTZXJ2aWNlcyBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5
+MDkwMTAwMDAwMFoXDTM3MTIzMTIzNTk1OVowgZgxCzAJBgNVBAYTAlVTMRAwDgYD
+VQQIEwdBcml6b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFy
+ZmllbGQgVGVjaG5vbG9naWVzLCBJbmMuMTswOQYDVQQDEzJTdGFyZmllbGQgU2Vy
+dmljZXMgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZI
+hvcNAQEBBQADggEPADCCAQoCggEBANUMOsQq+U7i9b4Zl1+OiFOxHz/Lz58gE20p
+OsgPfTz3a3Y4Y9k2YKibXlwAgLIvWX/2h/klQ4bnaRtSmpDhcePYLQ1Ob/bISdm2
+8xpWriu2dBTrz/sm4xq6HZYuajtYlIlHVv8loJNwU4PahHQUw2eeBGg6345AWh1K
+Ts9DkTvnVtYAcMtS7nt9rjrnvDH5RfbCYM8TWQIrgMw0R9+53pBlbQLPLJGmpufe
+hRhJfGZOozptqbXuNC66DQO4M99H67FrjSXZm86B0UVGMpZwh94CDklDhbZsc7tk
+6mFBrMnUVN+HL8cisibMn1lUaJ/8viovxFUcdUBgF4UCVTmLfwUCAwEAAaNCMEAw
+DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFJxfAN+q
+AdcwKziIorhtSpzyEZGDMA0GCSqGSIb3DQEBCwUAA4IBAQBLNqaEd2ndOxmfZyMI
+bw5hyf2E3F/YNoHN2BtBLZ9g3ccaaNnRbobhiCPPE95Dz+I0swSdHynVv/heyNXB
+ve6SbzJ08pGCL72CQnqtKrcgfU28elUSwhXqvfdqlS5sdJ/PHLTyxQGjhdByPq1z
+qwubdQxtRbeOlKyWN7Wg0I8VRw7j6IPdj/3vQQF3zCepYoUz8jcI73HPdwbeyBkd
+iEDPfUYd/x7H4c7/I9vG+o1VTqkC50cRRj70/b17KSa7qWFiNyi2LSr2EIZkyXCn
+0q23KXB56jzaYyWf/Wi3MOxw+3WKt21gZ7IeyLnp2KhvAotnDU0mV3HaIPzBSlCN
+sSi6
+-----END CERTIFICATE-----
+
+AffirmTrust Commercial
+======================
+
+-----BEGIN CERTIFICATE-----
+MIIDTDCCAjSgAwIBAgIId3cGJyapsXwwDQYJKoZIhvcNAQELBQAwRDELMAkGA1UE
+BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz
+dCBDb21tZXJjaWFsMB4XDTEwMDEyOTE0MDYwNloXDTMwMTIzMTE0MDYwNlowRDEL
+MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp
+cm1UcnVzdCBDb21tZXJjaWFsMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
+AQEA9htPZwcroRX1BiLLHwGy43NFBkRJLLtJJRTWzsO3qyxPxkEylFf6EqdbDuKP
+Hx6GGaeqtS25Xw2Kwq+FNXkyLbscYjfysVtKPcrNcV/pQr6U6Mje+SJIZMblq8Yr
+ba0F8PrVC8+a5fBQpIs7R6UjW3p6+DM/uO+Zl+MgwdYoic+U+7lF7eNAFxHUdPAL
+MeIrJmqbTFeurCA+ukV6BfO9m2kVrn1OIGPENXY6BwLJN/3HR+7o8XYdcxXyl6S1
+yHp52UKqK39c/s4mT6NmgTWvRLpUHhwwMmWd5jyTXlBOeuM61G7MGvv50jeuJCqr
+VwMiKA1JdX+3KNp1v47j3A55MQIDAQABo0IwQDAdBgNVHQ4EFgQUnZPGU4teyq8/
+nx4P5ZmVvCT2lI8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ
+KoZIhvcNAQELBQADggEBAFis9AQOzcAN/wr91LoWXym9e2iZWEnStB03TX8nfUYG
+XUPGhi4+c7ImfU+TqbbEKpqrIZcUsd6M06uJFdhrJNTxFq7YpFzUf1GO7RgBsZNj
+vbz4YYCanrHOQnDiqX0GJX0nof5v7LMeJNrjS1UaADs1tDvZ110w/YETifLCBivt
+Z8SOyUOyXGsViQK8YvxO8rUzqrJv0wqiUOP2O+guRMLbZjipM1ZI8W0bM40NjD9g
+N53Tym1+NH4Nn3J2ixufcv1SNUFFApYvHLKac0khsUlHRUe072o0EclNmsxZt9YC
+nlpOZbWUrhvfKbAW8b8Angc6F2S1BLUjIZkKlTuXfO8=
+-----END CERTIFICATE-----
+
+AffirmTrust Networking
+======================
+
+-----BEGIN CERTIFICATE-----
+MIIDTDCCAjSgAwIBAgIIfE8EORzUmS0wDQYJKoZIhvcNAQEFBQAwRDELMAkGA1UE
+BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz
+dCBOZXR3b3JraW5nMB4XDTEwMDEyOTE0MDgyNFoXDTMwMTIzMTE0MDgyNFowRDEL
+MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp
+cm1UcnVzdCBOZXR3b3JraW5nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
+AQEAtITMMxcua5Rsa2FSoOujz3mUTOWUgJnLVWREZY9nZOIG41w3SfYvm4SEHi3y
+YJ0wTsyEheIszx6e/jarM3c1RNg1lho9Nuh6DtjVR6FqaYvZ/Ls6rnla1fTWcbua
+kCNrmreIdIcMHl+5ni36q1Mr3Lt2PpNMCAiMHqIjHNRqrSK6mQEubWXLviRmVSRL
+QESxG9fhwoXA3hA/Pe24/PHxI1Pcv2WXb9n5QHGNfb2V1M6+oF4nI979ptAmDgAp
+6zxG8D1gvz9Q0twmQVGeFDdCBKNwV6gbh+0t+nvujArjqWaJGctB+d1ENmHP4ndG
+yH329JKBNv3bNPFyfvMMFr20FQIDAQABo0IwQDAdBgNVHQ4EFgQUBx/S55zawm6i
+QLSwelAQUHTEyL0wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ
+KoZIhvcNAQEFBQADggEBAIlXshZ6qML91tmbmzTCnLQyFE2npN/svqe++EPbkTfO
+tDIuUFUaNU52Q3Eg75N3ThVwLofDwR1t3Mu1J9QsVtFSUzpE0nPIxBsFZVpikpzu
+QY0x2+c06lkh1QF612S4ZDnNye2v7UsDSKegmQGA3GWjNq5lWUhPgkvIZfFXHeVZ
+Lgo/bNjR9eUJtGxUAArgFU2HdW23WJZa3W3SAKD0m0i+wzekujbgfIeFlxoVot4u
+olu9rxj5kFDNcFn4J2dHy8egBzp90SxdbBk6ZrV9/ZFvgrG+CJPbFEfxojfHRZ48
+x3evZKiT3/Zpg4Jg8klCNO1aAFSFHBY2kgxc+qatv9s=
+-----END CERTIFICATE-----
+
+AffirmTrust Premium
+===================
+
+-----BEGIN CERTIFICATE-----
+MIIFRjCCAy6gAwIBAgIIbYwURrGmCu4wDQYJKoZIhvcNAQEMBQAwQTELMAkGA1UE
+BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1UcnVz
+dCBQcmVtaXVtMB4XDTEwMDEyOTE0MTAzNloXDTQwMTIzMTE0MTAzNlowQTELMAkG
+A1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1U
+cnVzdCBQcmVtaXVtMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAxBLf
+qV/+Qd3d9Z+K4/as4Tx4mrzY8H96oDMq3I0gW64tb+eT2TZwamjPjlGjhVtnBKAQ
+JG9dKILBl1fYSCkTtuG+kU3fhQxTGJoeJKJPj/CihQvL9Cl/0qRY7iZNyaqoe5rZ
++jjeRFcV5fiMyNlI4g0WJx0eyIOFJbe6qlVBzAMiSy2RjYvmia9mx+n/K+k8rNrS
+s8PhaJyJ+HoAVt70VZVs+7pk3WKL3wt3MutizCaam7uqYoNMtAZ6MMgpv+0GTZe5
+HMQxK9VfvFMSF5yZVylmd2EhMQcuJUmdGPLu8ytxjLW6OQdJd/zvLpKQBY0tL3d7
+70O/Nbua2Plzpyzy0FfuKE4mX4+QaAkvuPjcBukumj5Rp9EixAqnOEhss/n/fauG
+V+O61oV4d7pD6kh/9ti+I20ev9E2bFhc8e6kGVQa9QPSdubhjL08s9NIS+LI+H+S
+qHZGnEJlPqQewQcDWkYtuJfzt9WyVSHvutxMAJf7FJUnM7/oQ0dG0giZFmA7mn7S
+5u046uwBHjxIVkkJx0w3AJ6IDsBz4W9m6XJHMD4Q5QsDyZpCAGzFlH5hxIrff4Ia
+C1nEWTJ3s7xgaVY5/bQGeyzWZDbZvUjthB9+pSKPKrhC9IK31FOQeE4tGv2Bb0TX
+OwF0lkLgAOIua+rF7nKsu7/+6qqo+Nz2snmKtmcCAwEAAaNCMEAwHQYDVR0OBBYE
+FJ3AZ6YMItkm9UWrpmVSESfYRaxjMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/
+BAQDAgEGMA0GCSqGSIb3DQEBDAUAA4ICAQCzV00QYk465KzquByvMiPIs0laUZx2
+KI15qldGF9X1Uva3ROgIRL8YhNILgM3FEv0AVQVhh0HctSSePMTYyPtwni94loMg
+Nt58D2kTiKV1NpgIpsbfrM7jWNa3Pt668+s0QNiigfV4Py/VpfzZotReBA4Xrf5B
+8OWycvpEgjNC6C1Y91aMYj+6QrCcDFx+LmUmXFNPALJ4fqENmS2NuB2OosSw/WDQ
+MKSOyARiqcTtNd56l+0OOF6SL5Nwpamcb6d9Ex1+xghIsV5n61EIJenmJWtSKZGc
+0jlzCFfemQa0W50QBuHCAKi4HEoCChTQwUHK+4w1IX2COPKpVJEZNZOUbWo6xbLQ
+u4mGk+ibyQ86p3q4ofB4Rvr8Ny/lioTz3/4E2aFooC8k4gmVBtWVyuEklut89pMF
+u+1z6S3RdTnX5yTb2E5fQ4+e0BQ5v1VwSJlXMbSc7kqYA5YwH2AG7hsj/oFgIxpH
+YoWlzBk0gG+zrBrjn/B7SK3VAdlntqlyk+otZrWyuOQ9PLLvTIzq6we/qzWaVYa8
+GKa1qF60g2xraUDTn9zxw2lrueFtCfTxqlB2Cnp9ehehVZZCmTEJ3WARjQUwfuaO
+RtGdFNrHF+QFlozEJLUbzxQHskD4o55BhrwE0GuWyCqANP2/7waj3VjFhT0+j/6e
+KeC2uAloGRwYQw==
+-----END CERTIFICATE-----
+
+AffirmTrust Premium ECC
+=======================
+
+-----BEGIN CERTIFICATE-----
+MIIB/jCCAYWgAwIBAgIIdJclisc/elQwCgYIKoZIzj0EAwMwRTELMAkGA1UEBhMC
+VVMxFDASBgNVBAoMC0FmZmlybVRydXN0MSAwHgYDVQQDDBdBZmZpcm1UcnVzdCBQ
+cmVtaXVtIEVDQzAeFw0xMDAxMjkxNDIwMjRaFw00MDEyMzExNDIwMjRaMEUxCzAJ
+BgNVBAYTAlVTMRQwEgYDVQQKDAtBZmZpcm1UcnVzdDEgMB4GA1UEAwwXQWZmaXJt
+VHJ1c3QgUHJlbWl1bSBFQ0MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQNMF4bFZ0D
+0KF5Nbc6PJJ6yhUczWLznCZcBz3lVPqj1swS6vQUX+iOGasvLkjmrBhDeKzQN8O9
+ss0s5kfiGuZjuD0uL3jET9v0D6RoTFVya5UdThhClXjMNzyR4ptlKymjQjBAMB0G
+A1UdDgQWBBSaryl6wBE1NSZRMADDav5A1a7WPDAPBgNVHRMBAf8EBTADAQH/MA4G
+A1UdDwEB/wQEAwIBBjAKBggqhkjOPQQDAwNnADBkAjAXCfOHiFBar8jAQr9HX/Vs
+aobgxCd05DhT1wV/GzTjxi+zygk8N53X57hG8f2h4nECMEJZh0PUUd+60wkyWs6I
+flc9nF9Ca/UHLbXwgpP5WW+uZPpY5Yse42O+tYHNbwKMeQ==
+-----END CERTIFICATE-----
+
+StartCom Certification Authority
+================================
+
+-----BEGIN CERTIFICATE-----
+MIIHhzCCBW+gAwIBAgIBLTANBgkqhkiG9w0BAQsFADB9MQswCQYDVQQGEwJJTDEW
+MBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMiU2VjdXJlIERpZ2l0YWwg
+Q2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3RhcnRDb20gQ2VydGlmaWNh
+dGlvbiBBdXRob3JpdHkwHhcNMDYwOTE3MTk0NjM3WhcNMzYwOTE3MTk0NjM2WjB9
+MQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20gTHRkLjErMCkGA1UECxMi
+U2VjdXJlIERpZ2l0YWwgQ2VydGlmaWNhdGUgU2lnbmluZzEpMCcGA1UEAxMgU3Rh
+cnRDb20gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUA
+A4ICDwAwggIKAoICAQDBiNsJvGxGfHiflXu1M5DycmLWwTYgIiRezul38kMKogZk
+pMyONvg45iPwbm2xPN1yo4UcodM9tDMr0y+v/uqwQVlntsQGfQqedIXWeUyAN3rf
+OQVSWff0G0ZDpNKFhdLDcfN1YjS6LIp/Ho/u7TTQEceWzVI9ujPW3U3eCztKS5/C
+Ji/6tRYccjV3yjxd5srhJosaNnZcAdt0FCX+7bWgiA/deMotHweXMAEtcnn6RtYT
+Kqi5pquDSR3l8u/d5AGOGAqPY1MWhWKpDhk6zLVmpsJrdAfkK+F2PrRt2PZE4XNi
+HzvEvqBTViVsUQn3qqvKv3b9bZvzndu/PWa8DFaqr5hIlTpL36dYUNk4dalb6kMM
+Av+Z6+hsTXBbKWWc3apdzK8BMewM69KN6Oqce+Zu9ydmDBpI125C4z/eIT574Q1w
++2OqqGwaVLRcJXrJosmLFqa7LH4XXgVNWG4SHQHuEhANxjJ/GP/89PrNbpHoNkm+
+Gkhpi8KWTRoSsmkXwQqQ1vp5Iki/untp+HDH+no32NgN0nZPV/+Qt+OR0t3vwmC3
+Zzrd/qqc8NSLf3Iizsafl7b4r4qgEKjZ+xjGtrVcUjyJthkqcwEKDwOzEmDyei+B
+26Nu/yYwl/WL3YlXtq09s68rxbd2AvCl1iuahhQqcvbjM4xdCUsT37uMdBNSSwID
+AQABo4ICEDCCAgwwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYD
+VR0OBBYEFE4L7xqkQFulF2mHMMo0aEPQQa7yMB8GA1UdIwQYMBaAFE4L7xqkQFul
+F2mHMMo0aEPQQa7yMIIBWgYDVR0gBIIBUTCCAU0wggFJBgsrBgEEAYG1NwEBATCC
+ATgwLgYIKwYBBQUHAgEWImh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL3BvbGljeS5w
+ZGYwNAYIKwYBBQUHAgEWKGh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL2ludGVybWVk
+aWF0ZS5wZGYwgc8GCCsGAQUFBwICMIHCMCcWIFN0YXJ0IENvbW1lcmNpYWwgKFN0
+YXJ0Q29tKSBMdGQuMAMCAQEagZZMaW1pdGVkIExpYWJpbGl0eSwgcmVhZCB0aGUg
+c2VjdGlvbiAqTGVnYWwgTGltaXRhdGlvbnMqIG9mIHRoZSBTdGFydENvbSBDZXJ0
+aWZpY2F0aW9uIEF1dGhvcml0eSBQb2xpY3kgYXZhaWxhYmxlIGF0IGh0dHA6Ly93
+d3cuc3RhcnRzc2wuY29tL3BvbGljeS5wZGYwEQYJYIZIAYb4QgEBBAQDAgAHMDgG
+CWCGSAGG+EIBDQQrFilTdGFydENvbSBGcmVlIFNTTCBDZXJ0aWZpY2F0aW9uIEF1
+dGhvcml0eTANBgkqhkiG9w0BAQsFAAOCAgEAjo/n3JR5fPGFf59Jb2vKXfuM/gTF
+wWLRfUKKvFO3lANmMD+x5wqnUCBVJX92ehQN6wQOQOY+2IirByeDqXWmN3PH/UvS
+Ta0XQMhGvjt/UfzDtgUx3M2FIk5xt/JxXrAaxrqTi3iSSoX4eA+D/i+tLPfkpLst
+0OcNOrg+zvZ49q5HJMqjNTbOx8aHmNrs++myziebiMMEofYLWWivydsQD032ZGNc
+pRJvkrKTlMeIFw6Ttn5ii5B/q06f/ON1FE8qMt9bDeD1e5MNq6HPh+GlBEXoPBKl
+CcWw0bdT82AUuoVpaiF8H3VhFyAXe2w7QSlc4axa0c2Mm+tgHRns9+Ww2vl5GKVF
+P0lDV9LdJNUso/2RjSe15esUBppMeyG7Oq0wBhjA2MFrLH9ZXF2RsXAiV+uKa0hK
+1Q8p7MZAwC+ITGgBF3f0JBlPvfrhsiAhS90a2Cl9qrjeVOwhVYBsHvUwyKMQ5bLm
+KhQxw4UtjJixhlpPiVktucf3HMiKf8CdBUrmQk9io20ppB+Fq9vlgcitKj1MXVuE
+JnHEhV5xJMqlG2zYYdMa4FTbzrqpMrUi9nNBCV24F10OD5mQ1kfabwo6YigUZ4LZ
+8dCAWZvLMdibD4x3TrVoivJs9iQOLWxwxXPR3hTQcY+203sC9uO41Alua551hDnm
+fyWl8kgAwKQB2j8=
+-----END CERTIFICATE-----
+
+StartCom Certification Authority G2
+===================================
+
+-----BEGIN CERTIFICATE-----
+MIIFYzCCA0ugAwIBAgIBOzANBgkqhkiG9w0BAQsFADBTMQswCQYDVQQGEwJJTDEW
+MBQGA1UEChMNU3RhcnRDb20gTHRkLjEsMCoGA1UEAxMjU3RhcnRDb20gQ2VydGlm
+aWNhdGlvbiBBdXRob3JpdHkgRzIwHhcNMTAwMTAxMDEwMDAxWhcNMzkxMjMxMjM1
+OTAxWjBTMQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20gTHRkLjEsMCoG
+A1UEAxMjU3RhcnRDb20gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgRzIwggIiMA0G
+CSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC2iTZbB7cgNr2Cu+EWIAOVeq8Oo1XJ
+JZlKxdBWQYeQTSFgpBSHO839sj60ZwNq7eEPS8CRhXBF4EKe3ikj1AENoBB5uNsD
+vfOpL9HG4A/LnooUCri99lZi8cVytjIl2bLzvWXFDSxu1ZJvGIsAQRSCb0AgJnoo
+D/Uefyf3lLE3PbfHkffiAez9lInhzG7TNtYKGXmu1zSCZf98Qru23QumNK9LYP5/
+Q0kGi4xDuFby2X8hQxfqp0iVAXV16iulQ5XqFYSdCI0mblWbq9zSOdIxHWDirMxW
+RST1HFSr7obdljKF+ExP6JV2tgXdNiNnvP8V4so75qbsO+wmETRIjfaAKxojAuuK
+HDp2KntWFhxyKrOq42ClAJ8Em+JvHhRYW6Vsi1g8w7pOOlz34ZYrPu8HvKTlXcxN
+nw3h3Kq74W4a7I/htkxNeXJdFzULHdfBR9qWJODQcqhaX2YtENwvKhOuJv4KHBnM
+0D4LnMgJLvlblnpHnOl68wVQdJVznjAJ85eCXuaPOQgeWeU1FEIT/wCc976qUM/i
+UUjXuG+v+E5+M5iSFGI6dWPPe/regjupuznixL0sAA7IF6wT700ljtizkC+p2il9
+Ha90OrInwMEePnWjFqmveiJdnxMaz6eg6+OGCtP95paV1yPIN93EfKo2rJgaErHg
+TuixO/XWb/Ew1wIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQE
+AwIBBjAdBgNVHQ4EFgQUS8W0QGutHLOlHGVuRjaJhwUMDrYwDQYJKoZIhvcNAQEL
+BQADggIBAHNXPyzVlTJ+N9uWkusZXn5T50HsEbZH77Xe7XRcxfGOSeD8bpkTzZ+K
+2s06Ctg6Wgk/XzTQLwPSZh0avZyQN8gMjgdalEVGKua+etqhqaRpEpKwfTbURIfX
+UfEpY9Z1zRbkJ4kd+MIySP3bmdCPX1R0zKxnNBFi2QwKN4fRoxdIjtIXHfbX/dtl
+6/2o1PXWT6RbdejF0mCy2wl+JYt7ulKSnj7oxXehPOBKc2thz4bcQ///If4jXSRK
+9dNtD2IEBVeC2m6kMyV5Sy5UGYvMLD0w6dEG/+gyRr61M3Z3qAFdlsHB1b6uJcDJ
+HgoJIIihDsnzb02CVAAgp9KP5DlUFy6NHrgbuxu9mk47EDTcnIhT76IxW1hPkWLI
+wpqazRVdOKnWvvgTtZ8SafJQYqz7Fzf07rh1Z2AQ+4NQ+US1dZxAF7L+/XldblhY
+XzD8AK6vM8EOTmy6p6ahfzLbOOCxchcKK5HsamMm7YnUeMx0HgX4a/6ManY5Ka5l
+IxKVCCIcl85bBu4M4ru8H0ST9tg4RQUh7eStqxK2A6RCLi3ECToDZ2mEmuFZkIoo
+hdVddLHRDiBYmxOlsGOm7XtH/UVVMKTumtTm4ofvmMkyghEpIrwACjFeLQ/Ajulr
+so8uBtjRkcfGEvRM/TAXw8HaOFvjqermobp573PYtlNXLfbQ4ddI
+-----END CERTIFICATE-----
diff --git a/boto/cloudformation/__init__.py b/boto/cloudformation/__init__.py
index 53a02e5..d5de73e 100644
--- a/boto/cloudformation/__init__.py
+++ b/boto/cloudformation/__init__.py
@@ -30,7 +30,9 @@
     'sa-east-1': 'cloudformation.sa-east-1.amazonaws.com',
     'eu-west-1': 'cloudformation.eu-west-1.amazonaws.com',
     'ap-northeast-1': 'cloudformation.ap-northeast-1.amazonaws.com',
-    'ap-southeast-1': 'cloudformation.ap-southeast-1.amazonaws.com'}
+    'ap-southeast-1': 'cloudformation.ap-southeast-1.amazonaws.com',
+    'ap-southeast-2': 'cloudformation.ap-southeast-2.amazonaws.com',
+}
 
 
 def regions():
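
The new ap-southeast-2 entry makes the Sydney endpoint resolvable by name. A
minimal sketch of looking it up (assuming AWS credentials are already
configured for boto):

    import boto.cloudformation

    # connect_to_region resolves the endpoint from the region table above.
    conn = boto.cloudformation.connect_to_region('ap-southeast-2')
    print([r.name for r in boto.cloudformation.regions()])
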
diff --git a/boto/cloudformation/connection.py b/boto/cloudformation/connection.py
index 816066c..84c7680 100644
--- a/boto/cloudformation/connection.py
+++ b/boto/cloudformation/connection.py
@@ -19,17 +19,13 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
-try:
-    import simplejson as json
-except:
-    import json
-
 import boto
 from boto.cloudformation.stack import Stack, StackSummary, StackEvent
 from boto.cloudformation.stack import StackResource, StackResourceSummary
 from boto.cloudformation.template import Template
 from boto.connection import AWSQueryConnection
 from boto.regioninfo import RegionInfo
+from boto.compat import json
 
 
 class CloudFormationConnection(AWSQueryConnection):
@@ -42,9 +38,15 @@
     DefaultRegionEndpoint = boto.config.get('Boto', 'cfn_region_endpoint',
                                             'cloudformation.us-east-1.amazonaws.com')
 
-    valid_states = ("CREATE_IN_PROGRESS", "CREATE_FAILED", "CREATE_COMPLETE",
-            "ROLLBACK_IN_PROGRESS", "ROLLBACK_FAILED", "ROLLBACK_COMPLETE",
-            "DELETE_IN_PROGRESS", "DELETE_FAILED", "DELETE_COMPLETE")
+    valid_states = (
+        'CREATE_IN_PROGRESS', 'CREATE_FAILED', 'CREATE_COMPLETE',
+        'ROLLBACK_IN_PROGRESS', 'ROLLBACK_FAILED', 'ROLLBACK_COMPLETE',
+        'DELETE_IN_PROGRESS', 'DELETE_FAILED', 'DELETE_COMPLETE',
+        'UPDATE_IN_PROGRESS', 'UPDATE_COMPLETE_CLEANUP_IN_PROGRESS',
+        'UPDATE_COMPLETE', 'UPDATE_ROLLBACK_IN_PROGRESS',
+        'UPDATE_ROLLBACK_FAILED',
+        'UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS',
+        'UPDATE_ROLLBACK_COMPLETE')
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
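
The expanded valid_states tuple covers the UPDATE_* stack lifecycle. A hedged
sketch of filtering on the new states (list_stacks and its
stack_status_filters parameter are assumed from the boto 2.x API):

    from boto.cloudformation.connection import CloudFormationConnection

    conn = CloudFormationConnection()  # credentials come from config/env
    # Pick out the update-related states added to valid_states.
    update_states = [s for s in conn.valid_states if s.startswith('UPDATE_')]
    for summary in conn.list_stacks(stack_status_filters=update_states):
        print('%s %s' % (summary.stack_name, summary.stack_status))
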
diff --git a/boto/cloudformation/stack.py b/boto/cloudformation/stack.py
index 9a9d63b..289e18f 100644
--- a/boto/cloudformation/stack.py
+++ b/boto/cloudformation/stack.py
@@ -41,7 +41,10 @@
 
     def endElement(self, name, value, connection):
         if name == 'CreationTime':
-            self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+            try:
+                self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+            except ValueError:
+                self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
         elif name == "Description":
             self.description = value
         elif name == "DisableRollback":
@@ -200,6 +203,7 @@
         dict.__init__(self)
         self.connection = connection
         self._current_key = None
+        self._current_value = None
 
     def startElement(self, name, attrs, connection):
         return None
@@ -208,10 +212,15 @@
         if name == "Key":
             self._current_key = value
         elif name == "Value":
-            self[self._current_key] = value
+            self._current_value = value
         else:
             setattr(self, name, value)
 
+        if self._current_key and self._current_value:
+            self[self._current_key] = self._current_value
+            self._current_key = None
+            self._current_value = None
+
 
 class NotificationARN(object):
     def __init__(self, connection=None):
@@ -345,7 +354,10 @@
         elif name == "StackName":
             self.stack_name = value
         elif name == "Timestamp":
-            self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+            try:
+                self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+            except ValueError:
+                self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
         else:
             setattr(self, name, value)
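
Both hunks above apply the same fallback: CloudFormation timestamps may or may
not carry fractional seconds, so the strict format is tried first. A
standalone sketch of the pattern:

    from datetime import datetime

    def parse_cfn_timestamp(value):
        # Try the whole-second format first, then fall back to microseconds.
        try:
            return datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
        except ValueError:
            return datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')

    parse_cfn_timestamp('2013-05-21T17:30:00Z')      # whole seconds
    parse_cfn_timestamp('2013-05-21T17:30:00.123Z')  # fractional seconds
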
 
diff --git a/boto/cloudfront/distribution.py b/boto/cloudfront/distribution.py
index 718f2c2..78b2624 100644
--- a/boto/cloudfront/distribution.py
+++ b/boto/cloudfront/distribution.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -22,10 +22,7 @@
 import uuid
 import base64
 import time
-try:
-    import simplejson as json
-except ImportError:
-    import json
+from boto.compat import json
 from boto.cloudfront.identity import OriginAccessIdentity
 from boto.cloudfront.object import Object, StreamingObject
 from boto.cloudfront.signers import ActiveTrustedSigners, TrustedSigners
@@ -52,23 +49,23 @@
         :param enabled: Whether the distribution is enabled to accept
                         end user requests for content.
         :type enabled: bool
-        
+
         :param caller_reference: A unique number that ensures the
                                  request can't be replayed.  If no
                                  caller_reference is provided, boto
                                  will generate a type 4 UUID for use
                                  as the caller reference.
         :type enabled: str
-        
+
         :param cnames: A CNAME alias you want to associate with this
                        distribution. You can have up to 10 CNAME aliases
                        per distribution.
         :type enabled: array of str
-        
+
         :param comment: Any comments you want to include about the
                         distribution.
         :type comment: str
-        
+
         :param trusted_signers: Specifies any AWS accounts you want to
                                 permit to create signed URLs for private
                                 content. If you want the distribution to
@@ -77,7 +74,7 @@
                                 distribution to use basic URLs, leave
                                 this None.
         :type trusted_signers: :class`boto.cloudfront.signers.TrustedSigners`
-        
+
         :param default_root_object: Designates a default root object.
                                     Only include a DefaultRootObject value
                                     if you are going to assign a default
@@ -89,7 +86,7 @@
                         this should contain a LoggingInfo object; otherwise
                         it should contain None.
         :type logging: :class`boto.cloudfront.logging.LoggingInfo`
-        
+
         """
         self.connection = connection
         self.origin = origin
@@ -281,7 +278,7 @@
 
     def get_distribution(self):
         return self.connection.get_streaming_distribution_info(self.id)
-    
+
 class Distribution:
 
     def __init__(self, connection=None, config=None, domain_name='',
@@ -403,11 +400,11 @@
             return self._bucket
         else:
             raise NotImplementedError('Unable to get_objects on CustomOrigin')
-    
+
     def get_objects(self):
         """
         Return a list of all content objects in this distribution.
-        
+
         :rtype: list of :class:`boto.cloudfront.object.Object`
         :return: The content objects
         """
@@ -643,13 +640,13 @@
     @staticmethod
     def _sign_string(message, private_key_file=None, private_key_string=None):
         """
-        Signs a string for use with Amazon CloudFront.  Requires the M2Crypto
-        library be installed.
+        Signs a string for use with Amazon CloudFront.
+        Requires the rsa library be installed.
         """
         try:
-            from M2Crypto import EVP
+            import rsa
         except ImportError:
-            raise NotImplementedError("Boto depends on the python M2Crypto "
+            raise NotImplementedError("Boto depends on the python rsa "
                                       "library to generate signed URLs for "
                                       "CloudFront")
         # Make sure only one of private_key_file and private_key_string is set
@@ -657,18 +654,16 @@
             raise ValueError("Only specify the private_key_file or the private_key_string not both")
         if not private_key_file and not private_key_string:
             raise ValueError("You must specify one of private_key_file or private_key_string")
-        # if private_key_file is a file object read the key string from there
+        # If private_key_file is a file object, read from it directly; otherwise open the path and read it.
         if isinstance(private_key_file, file):
             private_key_string = private_key_file.read()
-        # Now load key and calculate signature
-        if private_key_string:
-            key = EVP.load_key_string(private_key_string)
-        else:
-            key = EVP.load_key(private_key_file)
-        key.reset_context(md='sha1')
-        key.sign_init()
-        key.sign_update(str(message))
-        signature = key.sign_final()
+        elif private_key_file:
+            with open(private_key_file, 'r') as file_handle:
+                private_key_string = file_handle.read()
+
+        # Sign it!
+        private_key = rsa.PrivateKey.load_pkcs1(private_key_string)
+        signature = rsa.sign(str(message), private_key, 'SHA-1')
         return signature
 
     @staticmethod
@@ -746,5 +741,5 @@
 
     def delete(self):
         self.connection.delete_streaming_distribution(self.id, self.etag)
-            
-        
+
+
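
With M2Crypto replaced by the pure-Python rsa package, signing reduces to a
key load and one sign call. A minimal sketch of the new path (the key file
name is a placeholder for a CloudFront key pair in PKCS#1 PEM format):

    import rsa

    # Placeholder path; substitute your downloaded CloudFront private key.
    with open('pk-APKAEXAMPLE.pem') as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())

    # CloudFront signed URLs use SHA-1, matching _sign_string above.
    signature = rsa.sign('policy-to-sign', private_key, 'SHA-1')
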
diff --git a/boto/cloudsearch/__init__.py b/boto/cloudsearch/__init__.py
index 9c8157a..5ba1060 100644
--- a/boto/cloudsearch/__init__.py
+++ b/boto/cloudsearch/__init__.py
@@ -35,6 +35,9 @@
     return [RegionInfo(name='us-east-1',
                        endpoint='cloudsearch.us-east-1.amazonaws.com',
                        connection_cls=boto.cloudsearch.layer1.Layer1),
+            RegionInfo(name='eu-west-1',
+                       endpoint='cloudsearch.eu-west-1.amazonaws.com',
+                       connection_cls=boto.cloudsearch.layer1.Layer1),
             ]
 
 
diff --git a/boto/cloudsearch/document.py b/boto/cloudsearch/document.py
index 64a11e0..c799d70 100644
--- a/boto/cloudsearch/document.py
+++ b/boto/cloudsearch/document.py
@@ -21,12 +21,9 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-try:
-    import simplejson as json
-except ImportError:
-    import json
 
 import boto.exception
+from boto.compat import json
 import requests
 import boto
 
@@ -37,8 +34,50 @@
 class CommitMismatchError(Exception):
     pass
 
+class EncodingError(Exception):
+    """
+    Content sent for Cloud Search indexing was incorrectly encoded.
+
+    This usually happens when a document is marked as unicode but non-unicode
+    characters are present.
+    """
+    pass
+
+class ContentTooLongError(Exception):
+    """
+    Content sent for Cloud Search indexing was too long
+
+    This will usually happen when documents queued for indexing add up to more
+    than the limit allowed per upload batch (5MB)
+
+    """
+    pass
 
 class DocumentServiceConnection(object):
+    """
+    A CloudSearch document service.
+
+    The DocumentServiceConection is used to add, remove and update documents in
+    CloudSearch. Commands are uploaded to CloudSearch in SDF (Search Document Format).
+
+    To generate an appropriate SDF, use :func:`add` to add or update documents,
+    as well as :func:`delete` to remove documents.
+
+    Once the set of documents is ready to be index, use :func:`commit` to send the
+    commands to CloudSearch.
+
+    If there are a lot of documents to index, it may be preferable to split the
+    generation of SDF data and the actual uploading into CloudSearch. Retrieve
+    the current SDF with :func:`get_sdf`. If this file is the uploaded into S3,
+    it can be retrieved back afterwards for upload into CloudSearch using
+    :func:`add_sdf_from_s3`.
+
+    The SDF is not cleared after a :func:`commit`. If you wish to continue
+    using the DocumentServiceConnection for another batch upload of commands,
+    you will need to :func:`clear_sdf` first to stop the previous batch of
+    commands from being uploaded again.
+
+    """
 
     def __init__(self, domain=None, endpoint=None):
         self.domain = domain
@@ -49,26 +88,95 @@
         self._sdf = None
 
     def add(self, _id, version, fields, lang='en'):
+        """
+        Add a document to be processed by the DocumentService
+
+        The document will not actually be added until :func:`commit` is called
+
+        :type _id: string
+        :param _id: A unique ID used to refer to this document.
+
+        :type version: int
+        :param version: Version of the document being indexed. If a file is
+            being reindexed, the version should be higher than the existing one
+            in CloudSearch.
+
+        :type fields: dict
+        :param fields: A dictionary of key-value pairs to be uploaded .
+
+        :type lang: string
+        :param lang: The language code the data is in. Only 'en' is currently
+            supported
+        """
+
         d = {'type': 'add', 'id': _id, 'version': version, 'lang': lang,
             'fields': fields}
         self.documents_batch.append(d)
 
     def delete(self, _id, version):
+        """
+        Schedule a document to be removed from the CloudSearch service
+
+        The document will not actually be scheduled for removal until :func:`commit` is called
+
+        :type _id: string
+        :param _id: The unique ID of this document.
+
+        :type version: int
+        :param version: Version of the document to remove. The delete will only
+            occur if this version number is higher than the version currently
+            in the index.
+        """
+
         d = {'type': 'delete', 'id': _id, 'version': version}
         self.documents_batch.append(d)
 
     def get_sdf(self):
+        """
+        Generate the working set of documents in Search Data Format (SDF)
+
+        :rtype: string
+        :returns: JSON-formatted string of the documents in SDF
+        """
+
         return self._sdf if self._sdf else json.dumps(self.documents_batch)
 
     def clear_sdf(self):
+        """
+        Clear the working documents from this DocumentServiceConnection
+
+        This should be used after :func:`commit` if the connection will be reused
+        for another set of documents.
+        """
+
         self._sdf = None
         self.documents_batch = []
 
     def add_sdf_from_s3(self, key_obj):
-        """@todo (lucas) would be nice if this could just take an s3://uri..."""
+        """
+        Load an SDF from S3
+
+        Using this method will result in documents added through
+        :func:`add` and :func:`delete` being ignored.
+
+        :type key_obj: :class:`boto.s3.key.Key`
+        :param key_obj: An S3 key which contains an SDF
+        """
+        #@todo:: (lucas) would be nice if this could just take an s3://uri..."
+
         self._sdf = key_obj.get_contents_as_string()
 
     def commit(self):
+        """
+        Actually send an SDF to CloudSearch for processing
+
+        If an SDF file has been explicitly loaded it will be used. Otherwise,
+        documents added through :func:`add` and :func:`delete` will be used.
+
+        :rtype: :class:`CommitResponse`
+        :returns: A summary of documents added and deleted
+        """
+
         sdf = self.get_sdf()
 
         if ': null' in sdf:
@@ -79,15 +187,19 @@
 
         url = "http://%s/2011-02-01/documents/batch" % (self.endpoint)
 
-        request_config = {
-            'pool_connections': 20,
-            'keep_alive': True,
-            'max_retries': 5,
-            'pool_maxsize': 50
-        }
-
-        r = requests.post(url, data=sdf, config=request_config,
-            headers={'Content-Type': 'application/json'})
+        # Keep-alive is automatic in a post-1.0 requests world.
+        session = requests.Session()
+        adapter = requests.adapters.HTTPAdapter(
+            pool_connections=20,
+            pool_maxsize=50
+        )
+        # Now kludge in the right number of retries.
+        # Once we're requiring ``requests>=1.2.1``, this can become an
+        # initialization parameter above.
+        adapter.max_retries = 5
+        session.mount('http://', adapter)
+        session.mount('https://', adapter)
+        r = session.post(url, data=sdf, headers={'Content-Type': 'application/json'})
 
         return CommitResponse(r, self, sdf)
 
@@ -98,12 +210,14 @@
     :type response: :class:`requests.models.Response`
     :param response: Response from Cloudsearch /documents/batch API
 
-    :type doc_service: :class:`exfm.cloudsearch.DocumentServiceConnection`
+    :type doc_service: :class:`boto.cloudsearch.document.DocumentServiceConnection`
     :param doc_service: Object containing the documents posted and methods to
         retry
 
     :raises: :class:`boto.exception.BotoServerError`
-    :raises: :class:`exfm.cloudsearch.SearchServiceException`
+    :raises: :class:`boto.cloudsearch.document.SearchServiceException`
+    :raises: :class:`boto.cloudsearch.document.EncodingError`
+    :raises: :class:`boto.cloudsearch.document.ContentTooLongError`
     """
     def __init__(self, response, doc_service, sdf):
         self.response = response
@@ -113,8 +227,8 @@
         try:
             self.content = json.loads(response.content)
         except:
-            boto.log.error('Error indexing documents.\nResponse Content:\n{}\n\n'
-                'SDF:\n{}'.format(response.content, self.sdf))
+            boto.log.error('Error indexing documents.\nResponse Content:\n{0}\n\n'
+                'SDF:\n{1}'.format(response.content, self.sdf))
             raise boto.exception.BotoServerError(self.response.status_code, '',
                 body=response.content)
 
@@ -122,6 +236,11 @@
         if self.status == 'error':
             self.errors = [e.get('message') for e in self.content.get('errors',
                 [])]
+            for e in self.errors:
+                if "Illegal Unicode character" in e:
+                    raise EncodingError("Illegal Unicode character in document")
+                elif e == "The Content-Length is too long":
+                    raise ContentTooLongError("Content was too long")
         else:
             self.errors = []
 
@@ -139,12 +258,12 @@
         :type response_num: int
         :param response_num: Number of adds or deletes in the response.
 
-        :raises: :class:`exfm.cloudsearch.SearchServiceException`
+        :raises: :class:`boto.cloudsearch.document.CommitMismatchError`
         """
         commit_num = len([d for d in self.doc_service.documents_batch
             if d['type'] == type_])
 
         if response_num != commit_num:
             raise CommitMismatchError(
-                'Incorrect number of {}s returned. Commit: {} Respose: {}'\
+                'Incorrect number of {0}s returned. Commit: {1} Response: {2}'\
                 .format(type_, commit_num, response_num))
diff --git a/boto/cloudsearch/domain.py b/boto/cloudsearch/domain.py
index 43fcac8..9497325 100644
--- a/boto/cloudsearch/domain.py
+++ b/boto/cloudsearch/domain.py
@@ -23,10 +23,7 @@
 #
 
 import boto
-try:
-    import simplejson as json
-except ImportError:
-    import json
+from boto.compat import json
 from .optionstatus import OptionStatus
 from .optionstatus import IndexFieldStatus
 from .optionstatus import ServicePoliciesStatus
diff --git a/boto/cloudsearch/layer1.py b/boto/cloudsearch/layer1.py
index 054fc32..ff71293 100644
--- a/boto/cloudsearch/layer1.py
+++ b/boto/cloudsearch/layer1.py
@@ -82,7 +82,7 @@
             if not inner:
                 return None if list_marker == None else []
             if isinstance(inner, list):
-                return [dict(**i) for i in inner]
+                return inner
             else:
                 return dict(**inner)
         else:
diff --git a/boto/cloudsearch/optionstatus.py b/boto/cloudsearch/optionstatus.py
index 869d82f..dddda76 100644
--- a/boto/cloudsearch/optionstatus.py
+++ b/boto/cloudsearch/optionstatus.py
@@ -22,10 +22,9 @@
 # IN THE SOFTWARE.
 #
 
-try:
-    import simplejson as json
-except ImportError:
-    import json
+import time
+from boto.compat import json
+
 
 class OptionStatus(dict):
     """
diff --git a/boto/cloudsearch/search.py b/boto/cloudsearch/search.py
index f1b16e4..69a1981 100644
--- a/boto/cloudsearch/search.py
+++ b/boto/cloudsearch/search.py
@@ -23,8 +23,8 @@
 #
 from math import ceil
 import time
-import json
 import boto
+from boto.compat import json
 import requests
 
 
@@ -51,6 +51,12 @@
         self.query = attrs['query']
         self.search_service = attrs['search_service']
 
+        self.facets = {}
+        if 'facets' in attrs:
+            for (facet, values) in attrs['facets'].iteritems():
+                if 'constraints' in values:
+                    self.facets[facet] = dict((k, v) for (k, v) in map(lambda x: (x['value'], x['count']), values['constraints']))
+
         self.num_pages_needed = ceil(self.hits / self.query.real_size)
 
     def __len__(self):
@@ -62,8 +68,8 @@
     def next_page(self):
         """Call Cloudsearch to get the next page of search results
 
-        :rtype: :class:`exfm.cloudsearch.SearchResults`
-        :return: A cloudsearch SearchResults object
+        :rtype: :class:`boto.cloudsearch.search.SearchResults`
+        :return: the following page of search results
         """
         if self.query.page <= self.num_pages_needed:
             self.query.start += self.query.real_size
@@ -161,43 +167,105 @@
                size=10, start=0, facet=None, facet_constraints=None,
                facet_sort=None, facet_top_n=None, t=None):
         """
-        Query Cloudsearch
+        Send a query to CloudSearch
 
-        :type q:
-        :param q:
+        Each search query should use at least the q or bq argument to specify
+        the search parameter. The other options are used to specify the
+        criteria of the search.
 
-        :type bq:
-        :param bq:
+        :type q: string
+        :param q: A string to search the default search fields for.
 
-        :type rank:
-        :param rank:
+        :type bq: string
+        :param bq: A string to perform a Boolean search. This can be used to
+            create advanced searches.
 
-        :type return_fields:
-        :param return_fields:
+        :type rank: List of strings
+        :param rank: A list of fields or rank expressions used to order the
+            search results. A field can be reversed by using the - operator.
+            ``['-year', 'author']``
 
-        :type size:
-        :param size:
+        :type return_fields: List of strings
+        :param return_fields: A list of fields which should be returned by the
+            search. If this field is not specified, only IDs will be returned.
+            ``['headline']``
 
-        :type start:
-        :param start:
+        :type size: int
+        :param size: Number of search results to specify
 
-        :type facet:
-        :param facet:
+        :type start: int
+        :param start: Offset of the first search result to return (can be used
+            for paging)
 
-        :type facet_constraints:
-        :param facet_constraints:
+        :type facet: list
+        :param facet: List of fields for which facets should be returned
+            ``['colour', 'size']``
 
-        :type facet_sort:
-        :param facet_sort:
+        :type facet_constraints: dict
+        :param facet_constraints: Use to limit facets to specific values
+            specified as comma-delimited strings in a Dictionary of facets
+            ``{'colour': "'blue','white','red'", 'size': "big"}``
 
-        :type facet_top_n:
-        :param facet_top_n:
+        :type facet_sort: dict
+        :param facet_sort: Rules used to specify the order in which facet
+            values should be returned. Allowed values are *alpha*, *count*,
+            *max*, *sum*. Use *alpha* to sort alphabetical, and *count* to sort
+            the facet by number of available result. 
+            ``{'color': 'alpha', 'size': 'count'}``
 
-        :type t:
-        :param t:
+        :type facet_top_n: dict
+        :param facet_top_n: Dictionary of facets and number of facets to
+            return.
+            ``{'colour': 2}``
 
-        :rtype: :class:`exfm.cloudsearch.SearchResults`
-        :return: A cloudsearch SearchResults object
+        :type t: dict
+        :param t: Specify ranges for specific fields
+            ``{'year': '2000..2005'}``
+
+        :rtype: :class:`boto.cloudsearch.search.SearchResults`
+        :return: Returns the results of this search
+
+        The following examples all assume we have indexed a set of documents
+        with fields: *author*, *date*, *headline*
+
+        A simple search will look for documents whose default text search
+        fields will contain the search word exactly:
+
+        >>> search(q='Tim') # Return documents with the word Tim in them (but not Timothy)
+
+        A simple search with more keywords will return documents whose default
+        text search fields contain the search strings together or separately.
+
+        >>> search(q='Tim apple') # Will match "tim" and "apple"
+
+        More complex searches require the boolean search operator.
+
+        Wildcard searches can be used to search for any words that start with
+        the search string.
+
+        >>> search(bq="'Tim*'") # Return documents with words like Tim or Timothy)
+        
+        Search terms can also be combined. Allowed operators are "and", "or",
+        "not", "field", "optional", "token", "phrase", or "filter"
+        
+        >>> search(bq="(and 'Tim' (field author 'John Smith'))")
+
+        Facets allow you to show classification information about the search
+        results. For example, you can retrieve the authors who have written
+        about Tim:
+
+        >>> search(q='Tim', facet=['Author'])
+
+        With facet_constraints, facet_top_n and facet_sort more complicated
+        constraints can be specified such as returning the top author out of
+        John Smith and Mark Smith who have a document with the word Tim in it.
+        
+        >>> search(q='Tim', 
+        ...     facet=['Author'], 
+        ...     facet_constraints={'author': "'John Smith','Mark Smith'"}, 
+        ...     facet=['author'], 
+        ...     facet_top_n={'author': 1}, 
+        ...     facet_sort={'author': 'count'})
         """
 
         query = self.build_query(q=q, bq=bq, rank=rank,
@@ -211,11 +279,11 @@
     def __call__(self, query):
         """Make a call to CloudSearch
 
-        :type query: :class:`exfm.cloudsearch.Query`
-        :param query: A fully specified Query instance
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: A group of search criteria
 
-        :rtype: :class:`exfm.cloudsearch.SearchResults`
-        :return: A cloudsearch SearchResults object
+        :rtype: :class:`boto.cloudsearch.search.SearchResults`
+        :return: search results
         """
         url = "http://%s/2011-02-01/search" % (self.endpoint)
         params = query.to_params()
@@ -239,14 +307,14 @@
     def get_all_paged(self, query, per_page):
         """Get a generator to iterate over all pages of search results
 
-        :type query: :class:`exfm.cloudsearch.Query`
-        :param query: A fully specified Query instance
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: A group of search criteria
 
         :type per_page: int
-        :param per_page: Number of docs in each SearchResults object.
+        :param per_page: Number of docs in each :class:`boto.cloudsearch.search.SearchResults` object.
 
         :rtype: generator
-        :return: Generator containing :class:`exfm.cloudsearch.SearchResults`
+        :return: Generator containing :class:`boto.cloudsearch.search.SearchResults`
         """
         query.update_size(per_page)
         page = 0
@@ -266,8 +334,8 @@
         you can iterate over all results in a reasonably efficient
         manner.
 
-        :type query: :class:`exfm.cloudsearch.Query`
-        :param query: A fully specified Query instance
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: A group of search criteria
 
         :rtype: generator
         :return: All docs matching query
@@ -285,8 +353,8 @@
     def get_num_hits(self, query):
         """Return the total number of hits for query
 
-        :type query: :class:`exfm.cloudsearch.Query`
-        :param query: A fully specified Query instance
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: a group of search criteria
 
         :rtype: int
         :return: Total number of hits for query
diff --git a/boto/compat.py b/boto/compat.py
new file mode 100644
index 0000000..44fbc3b
--- /dev/null
+++ b/boto/compat.py
@@ -0,0 +1,28 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+# This allows boto modules to say "from boto.compat import json".  This is
+# preferred so that all modules don't have to repeat this idiom.
+try:
+    import simplejson as json
+except ImportError:
+    import json
diff --git a/boto/connection.py b/boto/connection.py
index 080ff5e..1f7392c 100644
--- a/boto/connection.py
+++ b/boto/connection.py
@@ -67,8 +67,10 @@
 import boto.cacerts
 
 from boto import config, UserAgent
-from boto.exception import AWSConnectionError, BotoClientError
+from boto.exception import AWSConnectionError
+from boto.exception import BotoClientError
 from boto.exception import BotoServerError
+from boto.exception import PleaseRetryException
 from boto.provider import Provider
 from boto.resultset import ResultSet
 
@@ -487,7 +489,7 @@
         self.handle_proxy(proxy, proxy_port, proxy_user, proxy_pass)
         # define exceptions from httplib that we want to catch and retry
         self.http_exceptions = (httplib.HTTPException, socket.error,
-                                socket.gaierror)
+                                socket.gaierror, httplib.BadStatusLine)
         # define subclasses of the above that are not retryable.
         self.http_unretryable_exceptions = []
         if HAVE_HTTPS_CONNECTION:
@@ -523,9 +525,11 @@
         # timeouts will only be applied if Python is 2.6 or greater.
         self.http_connection_kwargs = {}
         if (sys.version_info[0], sys.version_info[1]) >= (2, 6):
-            if config.has_option('Boto', 'http_socket_timeout'):
-                timeout = config.getint('Boto', 'http_socket_timeout')
-                self.http_connection_kwargs['timeout'] = timeout
+            # If timeout isn't defined in boto config file, use 70 second
+            # default as recommended by
+            # http://docs.aws.amazon.com/amazonswf/latest/apireference/API_PollForActivityTask.html
+            self.http_connection_kwargs['timeout'] = config.getint(
+                'Boto', 'http_socket_timeout', 70)
 
         if isinstance(provider, Provider):
             # Allow overriding Provider
@@ -537,15 +541,19 @@
                                      aws_secret_access_key,
                                      security_token)
 
-        # allow config file to override default host
+        # Allow config file to override default host and port.
         if self.provider.host:
             self.host = self.provider.host
+        if self.provider.port:
+            self.port = self.provider.port
 
         self._pool = ConnectionPool()
         self._connection = (self.server_name(), self.is_secure)
         self._last_rs = None
         self._auth_handler = auth.get_auth_handler(
               host, config, self.provider, self._required_auth_capability())
+        if getattr(self, 'AuthServiceName', None) is not None:
+            self.auth_service_name = self.AuthServiceName
 
     def __repr__(self):
         return '%s:%s' % (self.__class__.__name__, self.host)
@@ -553,6 +561,23 @@
     def _required_auth_capability(self):
         return []
 
+    def _get_auth_service_name(self):
+        return getattr(self._auth_handler, 'service_name')
+
+    # For Sigv4, the auth_service_name/auth_region_name properties allow
+    # the service_name/region_name to be explicitly set instead of being
+    # derived from the endpoint url.
+    def _set_auth_service_name(self, value):
+        self._auth_handler.service_name = value
+    auth_service_name = property(_get_auth_service_name, _set_auth_service_name)
+
+    def _get_auth_region_name(self):
+        return getattr(self._auth_handler, 'region_name')
+
+    def _set_auth_region_name(self, value):
+        self._auth_handler.region_name = value
+    auth_region_name = property(_get_auth_region_name, _set_auth_region_name)
+
     def connection(self):
         return self.get_http_connection(*self._connection)
     connection = property(connection)
@@ -575,7 +600,7 @@
         # https://groups.google.com/forum/#!topic/boto-dev/-ft0XPUy0y8
         # You can override that behavior with the suppress_consec_slashes param.
         if not self.suppress_consec_slashes:
-            return self.path + re.sub('^/*', "", path)
+            return self.path + re.sub('^(/*)/', "\\1", path)
         pos = path.find('?')
         if pos >= 0:
             params = path[pos:]
@@ -658,7 +683,7 @@
             return self.new_http_connection(host, is_secure)
 
     def new_http_connection(self, host, is_secure):
-        if self.use_proxy:
+        if self.use_proxy and not is_secure:
             host = '%s:%d' % (self.proxy, int(self.proxy_port))
         if host is None:
             host = self.server_name()
@@ -667,7 +692,7 @@
                     'establishing HTTPS connection: host=%s, kwargs=%s',
                     host, self.http_connection_kwargs)
             if self.use_proxy:
-                connection = self.proxy_ssl()
+                connection = self.proxy_ssl(host, is_secure and 443 or 80)
             elif self.https_connection_factory:
                 connection = self.https_connection_factory(host)
             elif self.https_validate_certificates and HAVE_HTTPS_CONNECTION:
@@ -703,11 +728,16 @@
     def put_http_connection(self, host, is_secure, connection):
         self._pool.put_http_connection(host, is_secure, connection)
 
-    def proxy_ssl(self):
-        host = '%s:%d' % (self.host, self.port)
+    def proxy_ssl(self, host=None, port=None):
+        if host and port:
+            host = '%s:%d' % (host, port)
+        else:
+            host = '%s:%d' % (self.host, self.port)
         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         try:
             sock.connect((self.proxy, int(self.proxy_port)))
+            if "timeout" in self.http_connection_kwargs:
+                sock.settimeout(self.http_connection_kwargs["timeout"])
         except:
             raise
         boto.log.debug("Proxy connection: CONNECT %s HTTP/1.0\r\n", host)
@@ -789,6 +819,7 @@
         boto.log.debug('Data: %s' % request.body)
         boto.log.debug('Headers: %s' % request.headers)
         boto.log.debug('Host: %s' % request.host)
+        boto.log.debug('Params: %s' % request.params)
         response = None
         body = None
         e = None
@@ -799,7 +830,7 @@
         i = 0
         connection = self.get_http_connection(request.host, self.is_secure)
         while i <= num_retries:
-            # Use binary exponential backoff to desynchronize client requests
+            # Use binary exponential backoff to desynchronize client requests.
             next_sleep = random.random() * (2 ** i)
             try:
                 # we now re-sign each request before it is retried
@@ -849,6 +880,11 @@
                                                           scheme == 'https')
                     response = None
                     continue
+            except PleaseRetryException, e:
+                boto.log.debug('encountered a retry exception: %s' % e)
+                connection = self.new_http_connection(request.host,
+                                                      self.is_secure)
+                response = e.response
             except self.http_exceptions, e:
                 for unretryable in self.http_unretryable_exceptions:
                     if isinstance(e, unretryable):
@@ -865,7 +901,7 @@
         # If we made it here, it's because we have exhausted our retries
         # and stil haven't succeeded.  So, if we have a response object,
         # use it to raise an exception.
-        # Otherwise, raise the exception that must have already h#appened.
+        # Otherwise, raise the exception that must have already happened.
         if response:
             raise BotoServerError(response.status, response.reason, body)
         elif e:
@@ -901,13 +937,14 @@
 
     def make_request(self, method, path, headers=None, data='', host=None,
                      auth_path=None, sender=None, override_num_retries=None,
-                     params=None):
+                     params=None, retry_handler=None):
         """Makes a request to the server, with stock multiple-retry logic."""
         if params is None:
             params = {}
         http_request = self.build_base_http_request(method, path, auth_path,
                                                     params, headers, data, host)
-        return self._mexe(http_request, sender, override_num_retries)
+        return self._mexe(http_request, sender, override_num_retries,
+                          retry_handler=retry_handler)
 
     def close(self):
         """(Optional) Close any open HTTP connections.  This is non-destructive,
@@ -952,11 +989,51 @@
         return self._mexe(http_request)
 
     def build_list_params(self, params, items, label):
-        if isinstance(items, str):
+        if isinstance(items, basestring):
             items = [items]
         for i in range(1, len(items) + 1):
             params['%s.%d' % (label, i)] = items[i - 1]
 
+    def build_complex_list_params(self, params, items, label, names):
+        """Serialize a list of structures.
+
+        For example::
+
+            items = [('foo', 'bar', 'baz'), ('foo2', 'bar2', 'baz2')]
+            label = 'ParamName.member'
+            names = ('One', 'Two', 'Three')
+            self.build_complex_list_params(params, items, label, names)
+
+        would result in the params dict being updated with these params::
+
+            ParamName.member.1.One = foo
+            ParamName.member.1.Two = bar
+            ParamName.member.1.Three = baz
+
+            ParamName.member.2.One = foo2
+            ParamName.member.2.Two = bar2
+            ParamName.member.2.Three = baz2
+
+        :type params: dict
+        :param params: The params dict.  The complex list params
+            will be added to this dict.
+
+        :type items: list of tuples
+        :param items: The list to serialize.
+
+        :type label: string
+        :param label: The prefix to apply to the parameter.
+
+        :type names: tuple of strings
+        :param names: The names associated with each tuple element.
+
+        """
+        for i, item in enumerate(items, 1):
+            current_prefix = '%s.%s' % (label, i)
+            for key, value in zip(names, item):
+                full_key = '%s.%s' % (current_prefix, key)
+                params[full_key] = value
+
     # generics
 
     def get_list(self, action, params, markers, path='/',
diff --git a/boto/core/credentials.py b/boto/core/credentials.py
index 1f315a3..b4b35b5 100644
--- a/boto/core/credentials.py
+++ b/boto/core/credentials.py
@@ -23,8 +23,8 @@
 #
 import os
 from six.moves import configparser
+from boto.compat import json
 import requests
-import json
 
 
 class Credentials(object):
diff --git a/boto/datapipeline/__init__.py b/boto/datapipeline/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/boto/datapipeline/__init__.py
diff --git a/boto/datapipeline/exceptions.py b/boto/datapipeline/exceptions.py
new file mode 100644
index 0000000..c2761e2
--- /dev/null
+++ b/boto/datapipeline/exceptions.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.exception import JSONResponseError
+
+
+class PipelineDeletedException(JSONResponseError):
+    pass
+
+
+class InvalidRequestException(JSONResponseError):
+    pass
+
+
+class TaskNotFoundException(JSONResponseError):
+    pass
+
+
+class PipelineNotFoundException(JSONResponseError):
+    pass
+
+
+class InternalServiceError(JSONResponseError):
+    pass
diff --git a/boto/datapipeline/layer1.py b/boto/datapipeline/layer1.py
new file mode 100644
index 0000000..ff0c400
--- /dev/null
+++ b/boto/datapipeline/layer1.py
@@ -0,0 +1,641 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+import boto
+from boto.compat import json
+from boto.connection import AWSQueryConnection
+from boto.regioninfo import RegionInfo
+from boto.exception import JSONResponseError
+from boto.datapipeline import exceptions
+
+
+class DataPipelineConnection(AWSQueryConnection):
+    """
+    This is the AWS Data Pipeline API Reference . This guide provides
+    descriptions and samples of the AWS Data Pipeline API.
+
+    AWS Data Pipeline is a web service that configures and manages a
+    data-driven workflow called a pipeline. AWS Data Pipeline handles
+    the details of scheduling and ensuring that data dependencies are
+    met so your application can focus on processing the data.
+
+    The AWS Data Pipeline API implements two main sets of
+    functionality. The first set of actions configure the pipeline in
+    the web service. You call these actions to create a pipeline and
+    define data sources, schedules, dependencies, and the transforms
+    to be performed on the data.
+
+    The second set of actions are used by a task runner application
+    that calls the AWS Data Pipeline API to receive the next task
+    ready for processing. The logic for performing the task, such as
+    querying the data, running data analysis, or converting the data
+    from one format to another, is contained within the task runner.
+    The task runner performs the task assigned to it by the web
+    service, reporting progress to the web service as it does so. When
+    the task is done, the task runner reports the final success or
+    failure of the task to the web service.
+
+    AWS Data Pipeline provides an open-source implementation of a task
+    runner called AWS Data Pipeline Task Runner. AWS Data Pipeline
+    Task Runner provides logic for common data management scenarios,
+    such as performing database queries and running data analysis
+    using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data
+    Pipeline Task Runner as your task runner, or you can write your
+    own task runner to provide custom data management.
+
+    The AWS Data Pipeline API uses the Signature Version 4 protocol
+    for signing requests. For more information about how to sign a
+    request with this protocol, see `Signature Version 4 Signing
+    Process`_. In the code examples in this reference, the Signature
+    Version 4 Request parameters are represented as AuthParams.
+    """
+    APIVersion = "2012-10-29"
+    DefaultRegionName = "us-east-1"
+    DefaultRegionEndpoint = "datapipeline.us-east-1.amazonaws.com"
+    ServiceName = "DataPipeline"
+    TargetPrefix = "DataPipeline"
+    ResponseError = JSONResponseError
+
+    _faults = {
+        "PipelineDeletedException": exceptions.PipelineDeletedException,
+        "InvalidRequestException": exceptions.InvalidRequestException,
+        "TaskNotFoundException": exceptions.TaskNotFoundException,
+        "PipelineNotFoundException": exceptions.PipelineNotFoundException,
+        "InternalServiceError": exceptions.InternalServiceError,
+    }
+
+
+    def __init__(self, **kwargs):
+        region = kwargs.get('region')
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint)
+        kwargs['host'] = region.endpoint
+        AWSQueryConnection.__init__(self, **kwargs)
+        self.region = region
+
+    def _required_auth_capability(self):
+        return ['hmac-v4']
+
+    def activate_pipeline(self, pipeline_id):
+        """
+        Validates a pipeline and initiates processing. If the pipeline
+        does not pass validation, activation fails.
+
+        Call this action to start processing pipeline tasks of a
+        pipeline you've created using the CreatePipeline and
+        PutPipelineDefinition actions. A pipeline cannot be modified
+        after it has been successfully activated.
+
+        :type pipeline_id: string
+        :param pipeline_id: The identifier of the pipeline to activate.
+
+        """
+        params = {'pipelineId': pipeline_id, }
+        return self.make_request(action='ActivatePipeline',
+                                 body=json.dumps(params))
+
+    def create_pipeline(self, name, unique_id, description=None):
+        """
+        Creates a new empty pipeline. When this action succeeds, you
+        can then use the PutPipelineDefinition action to populate the
+        pipeline.
+
+        :type name: string
+        :param name: The name of the new pipeline. You can use the same name
+            for multiple pipelines associated with your AWS account, because
+            AWS Data Pipeline assigns each new pipeline a unique pipeline
+            identifier.
+
+        :type unique_id: string
+        :param unique_id: A unique identifier that you specify. This identifier
+            is not the same as the pipeline identifier assigned by AWS Data
+            Pipeline. You are responsible for defining the format and ensuring
+            the uniqueness of this identifier. You use this parameter to ensure
+            idempotency during repeated calls to CreatePipeline. For example,
+            if the first call to CreatePipeline does not return a clear
+            success, you can pass in the same unique identifier and pipeline
+            name combination on a subsequent call to CreatePipeline.
+            CreatePipeline ensures that if a pipeline already exists with the
+            same name and unique identifier, a new pipeline will not be
+            created. Instead, you'll receive the pipeline identifier from the
+            previous attempt. The uniqueness of the name and unique identifier
+            combination is scoped to the AWS account or IAM user credentials.
+
+        :type description: string
+        :param description: The description of the new pipeline.
+
+        """
+        params = {'name': name, 'uniqueId': unique_id, }
+        if description is not None:
+            params['description'] = description
+        return self.make_request(action='CreatePipeline',
+                                 body=json.dumps(params))
+
+    def delete_pipeline(self, pipeline_id):
+        """
+        Permanently deletes a pipeline, its pipeline definition and
+        its run history. You cannot query or restore a deleted
+        pipeline. AWS Data Pipeline will attempt to cancel instances
+        associated with the pipeline that are currently being
+        processed by task runners. Deleting a pipeline cannot be
+        undone.
+
+        To temporarily pause a pipeline instead of deleting it, call
+        SetStatus with the status set to Pause on individual
+        components. Components that are paused by SetStatus can be
+        resumed.
+
+        :type pipeline_id: string
+        :param pipeline_id: The identifier of the pipeline to be deleted.
+
+        """
+        params = {'pipelineId': pipeline_id, }
+        return self.make_request(action='DeletePipeline',
+                                 body=json.dumps(params))
+
+    def describe_objects(self, object_ids, pipeline_id, marker=None,
+                         evaluate_expressions=None):
+        """
+        Returns the object definitions for a set of objects associated
+        with the pipeline. Object definitions are composed of a set of
+        fields that define the properties of the object.
+
+        :type pipeline_id: string
+        :param pipeline_id: Identifier of the pipeline that contains the object
+            definitions.
+
+        :type object_ids: list
+        :param object_ids: Identifiers of the pipeline objects that contain the
+            definitions to be described. You can pass as many as 25 identifiers
+            in a single call to DescribeObjects.
+
+        :type evaluate_expressions: boolean
+        :param evaluate_expressions: Indicates whether any expressions in the
+            object should be evaluated when the object descriptions are
+            returned.
+
+        :type marker: string
+        :param marker: The starting point for the results to be returned. The
+            first time you call DescribeObjects, this value should be empty. As
+            long as the action returns `HasMoreResults` as `True`, you can call
+            DescribeObjects again and pass the marker value from the response
+            to retrieve the next set of results.
+
+        """
+        params = {
+            'pipelineId': pipeline_id,
+            'objectIds': object_ids,
+        }
+        if evaluate_expressions is not None:
+            params['evaluateExpressions'] = evaluate_expressions
+        if marker is not None:
+            params['marker'] = marker
+        return self.make_request(action='DescribeObjects',
+                                 body=json.dumps(params))
+
+    def describe_pipelines(self, pipeline_ids):
+        """
+        Retrieve metadata about one or more pipelines. The information
+        retrieved includes the name of the pipeline, the pipeline
+        identifier, its current state, and the user account that owns
+        the pipeline. Using account credentials, you can retrieve
+        metadata about pipelines that you or your IAM users have
+        created. If you are using an IAM user account, you can
+        retrieve metadata about only those pipelines you have read
+        permission for.
+
+        To retrieve the full pipeline definition instead of metadata
+        about the pipeline, call the GetPipelineDefinition action.
+
+        :type pipeline_ids: list
+        :param pipeline_ids: Identifiers of the pipelines to describe. You can
+            pass as many as 25 identifiers in a single call to
+            DescribePipelines. You can obtain pipeline identifiers by calling
+            ListPipelines.
+
+        """
+        params = {'pipelineIds': pipeline_ids, }
+        return self.make_request(action='DescribePipelines',
+                                 body=json.dumps(params))
+
+    def evaluate_expression(self, pipeline_id, expression, object_id):
+        """
+        Evaluates a string in the context of a specified object. A
+        task runner can use this action to evaluate SQL queries stored
+        in Amazon S3.
+
+        :type pipeline_id: string
+        :param pipeline_id: The identifier of the pipeline.
+
+        :type object_id: string
+        :param object_id: The identifier of the object.
+
+        :type expression: string
+        :param expression: The expression to evaluate.
+
+        """
+        params = {
+            'pipelineId': pipeline_id,
+            'objectId': object_id,
+            'expression': expression,
+        }
+        return self.make_request(action='EvaluateExpression',
+                                 body=json.dumps(params))
+
+    def get_pipeline_definition(self, pipeline_id, version=None):
+        """
+        Returns the definition of the specified pipeline. You can call
+        GetPipelineDefinition to retrieve the pipeline definition you
+        provided using PutPipelineDefinition.
+
+        :type pipeline_id: string
+        :param pipeline_id: The identifier of the pipeline.
+
+        :type version: string
+        :param version: The version of the pipeline definition to retrieve.
+            This parameter accepts the values `latest` (default) and `active`.
+            Where `latest` indicates the last definition saved to the pipeline
+            and `active` indicates the last definition of the pipeline that was
+            activated.
+
+        """
+        params = {'pipelineId': pipeline_id, }
+        if version is not None:
+            params['version'] = version
+        return self.make_request(action='GetPipelineDefinition',
+                                 body=json.dumps(params))
+
+    def list_pipelines(self, marker=None):
+        """
+        Returns a list of pipeline identifiers for all active
+        pipelines. Identifiers are returned only for pipelines you
+        have permission to access.
+
+        :type marker: string
+        :param marker: The starting point for the results to be returned. The
+            first time you call ListPipelines, this value should be empty. As
+            long as the action returns `HasMoreResults` as `True`, you can call
+            ListPipelines again and pass the marker value from the response to
+            retrieve the next set of results.
+
+        """
+        params = {}
+        if marker is not None:
+            params['marker'] = marker
+        return self.make_request(action='ListPipelines',
+                                 body=json.dumps(params))
+
+    def poll_for_task(self, worker_group, hostname=None,
+                      instance_identity=None):
+        """
+        Task runners call this action to receive a task to perform
+        from AWS Data Pipeline. The task runner specifies which tasks
+        it can perform by setting a value for the workerGroup
+        parameter of the PollForTask call. The task returned by
+        PollForTask may come from any of the pipelines that match the
+        workerGroup value passed in by the task runner and that was
+        launched using the IAM user credentials specified by the task
+        runner.
+
+        If tasks are ready in the work queue, PollForTask returns a
+        response immediately. If no tasks are available in the queue,
+        PollForTask uses long-polling and holds on to a poll
+        connection for up to a 90 seconds during which time the first
+        newly scheduled task is handed to the task runner. To
+        accomodate this, set the socket timeout in your task runner to
+        90 seconds. The task runner should not call PollForTask again
+        on the same `workerGroup` until it receives a response, and
+        this may take up to 90 seconds.
+
+        :type worker_group: string
+        :param worker_group: Indicates the type of task the task runner is
+            configured to accept and process. The worker group is set as a
+            field on objects in the pipeline when they are created. You can
+            only specify a single value for `workerGroup` in the call to
+            PollForTask. There are no wildcard values permitted in
+            `workerGroup`, the string must be an exact, case-sensitive, match.
+
+        :type hostname: string
+        :param hostname: The public DNS name of the calling task runner.
+
+        :type instance_identity: dict
+        :param instance_identity: Identity information for the Amazon EC2
+            instance that is hosting the task runner. You can get this value by
+            calling the URI, `http://169.254.169.254/latest/meta-data/instance-
+            id`, from the EC2 instance. For more information, go to `Instance
+            Metadata`_ in the Amazon Elastic Compute Cloud User Guide. Passing
+            in this value proves that your task runner is running on an EC2
+            instance, and ensures the proper AWS Data Pipeline service charges
+            are applied to your pipeline.
+
+        """
+        params = {'workerGroup': worker_group, }
+        if hostname is not None:
+            params['hostname'] = hostname
+        if instance_identity is not None:
+            params['instanceIdentity'] = instance_identity
+        return self.make_request(action='PollForTask',
+                                 body=json.dumps(params))
+
+    def put_pipeline_definition(self, pipeline_objects, pipeline_id):
+        """
+        Adds tasks, schedules, and preconditions that control the
+        behavior of the pipeline. You can use PutPipelineDefinition to
+        populate a new pipeline or to update an existing pipeline that
+        has not yet been activated.
+
+        PutPipelineDefinition also validates the configuration as it
+        adds it to the pipeline. Changes to the pipeline are saved
+        unless one of the following three validation errors exists in
+        the pipeline.
+
+        #. An object is missing a name or identifier field.
+        #. A string or reference field is empty.
+        #. The number of objects in the pipeline exceeds the maximum
+           allowed objects.
+
+
+
+        Pipeline object definitions are passed to the
+        PutPipelineDefinition action and returned by the
+        GetPipelineDefinition action.
+
+        :type pipeline_id: string
+        :param pipeline_id: The identifier of the pipeline to be configured.
+
+        :type pipeline_objects: list
+        :param pipeline_objects: The objects that define the pipeline. These
+            will overwrite the existing pipeline definition.
+
+        """
+        params = {
+            'pipelineId': pipeline_id,
+            'pipelineObjects': pipeline_objects,
+        }
+        return self.make_request(action='PutPipelineDefinition',
+                                 body=json.dumps(params))
+
+    def query_objects(self, pipeline_id, sphere, marker=None, query=None,
+                      limit=None):
+        """
+        Queries a pipeline for the names of objects that match a
+        specified set of conditions.
+
+        The objects returned by QueryObjects are paginated and then
+        filtered by the value you set for query. This means the action
+        may return an empty result set with a value set for marker. If
+        `HasMoreResults` is set to `True`, you should continue to call
+        QueryObjects, passing in the returned value for marker, until
+        `HasMoreResults` returns `False`.
+
+        :type pipeline_id: string
+        :param pipeline_id: Identifier of the pipeline to be queried for object
+            names.
+
+        :type query: dict
+        :param query: Query that defines the objects to be returned. The Query
+            object can contain a maximum of ten selectors. The conditions in
+            the query are limited to top-level String fields in the object.
+            These filters can be applied to components, instances, and
+            attempts.
+
+        :type sphere: string
+        :param sphere: Specifies whether the query applies to components or
+            instances. Allowable values: `COMPONENT`, `INSTANCE`, `ATTEMPT`.
+
+        :type marker: string
+        :param marker: The starting point for the results to be returned. The
+            first time you call QueryObjects, this value should be empty. As
+            long as the action returns `HasMoreResults` as `True`, you can call
+            QueryObjects again and pass the marker value from the response to
+            retrieve the next set of results.
+
+        :type limit: integer
+        :param limit: Specifies the maximum number of object names that
+            QueryObjects will return in a single call. The default value is
+            100.
+
+        """
+        params = {'pipelineId': pipeline_id, 'sphere': sphere, }
+        if query is not None:
+            params['query'] = query
+        if marker is not None:
+            params['marker'] = marker
+        if limit is not None:
+            params['limit'] = limit
+        return self.make_request(action='QueryObjects',
+                                 body=json.dumps(params))
+
+    def report_task_progress(self, task_id):
+        """
+        Updates the AWS Data Pipeline service on the progress of the
+        calling task runner. When the task runner is assigned a task,
+        it should call ReportTaskProgress to acknowledge that it has
+        the task within 2 minutes. If the web service does not recieve
+        this acknowledgement within the 2 minute window, it will
+        assign the task in a subsequent PollForTask call. After this
+        initial acknowledgement, the task runner only needs to report
+        progress every 15 minutes to maintain its ownership of the
+        task. You can change this reporting time from 15 minutes by
+        specifying a `reportProgressTimeout` field in your pipeline.
+        If a task runner does not report its status after 5 minutes,
+        AWS Data Pipeline will assume that the task runner is unable
+        to process the task and will reassign the task in a subsequent
+        response to PollForTask. task runners should call
+        ReportTaskProgress every 60 seconds.
+
+        :type task_id: string
+        :param task_id: Identifier of the task assigned to the task runner.
+            This value is provided in the TaskObject that the service returns
+            with the response for the PollForTask action.
+
+        """
+        params = {'taskId': task_id, }
+        return self.make_request(action='ReportTaskProgress',
+                                 body=json.dumps(params))
+
+    def report_task_runner_heartbeat(self, taskrunner_id, worker_group=None,
+                                     hostname=None):
+        """
+        Task runners call ReportTaskRunnerHeartbeat every 15 minutes
+        to indicate that they are operational. In the case of AWS Data
+        Pipeline Task Runner launched on a resource managed by AWS
+        Data Pipeline, the web service can use this call to detect
+        when the task runner application has failed and restart a new
+        instance.
+
+        :type taskrunner_id: string
+        :param taskrunner_id: The identifier of the task runner. This value
+            should be unique across your AWS account. In the case of AWS Data
+            Pipeline Task Runner launched on a resource managed by AWS Data
+            Pipeline, the web service provides a unique identifier when it
+            launches the application. If you have written a custom task runner,
+            you should assign a unique identifier for the task runner.
+
+        :type worker_group: string
+        :param worker_group: Indicates the type of task the task runner is
+            configured to accept and process. The worker group is set as a
+            field on objects in the pipeline when they are created. You can
+            only specify a single value for `workerGroup` in the call to
+            ReportTaskRunnerHeartbeat. There are no wildcard values permitted
+            in `workerGroup`, the string must be an exact, case-sensitive,
+            match.
+
+        :type hostname: string
+        :param hostname: The public DNS name of the calling task runner.
+
+        """
+        params = {'taskrunnerId': taskrunner_id, }
+        if worker_group is not None:
+            params['workerGroup'] = worker_group
+        if hostname is not None:
+            params['hostname'] = hostname
+        return self.make_request(action='ReportTaskRunnerHeartbeat',
+                                 body=json.dumps(params))
+
+    def set_status(self, object_ids, status, pipeline_id):
+        """
+        Requests that the status of an array of physical or logical
+        pipeline objects be updated in the pipeline. This update may
+        not occur immediately, but is eventually consistent. The
+        status that can be set depends on the type of object.
+
+        :type pipeline_id: string
+        :param pipeline_id: Identifies the pipeline that contains the objects.
+
+        :type object_ids: list
+        :param object_ids: Identifies an array of objects. The corresponding
+            objects can be either physical (instances) or logical
+            (components), but not a mix of both types.
+
+        :type status: string
+        :param status: Specifies the status to be set on all the objects in
+            `objectIds`. For components, this can be either `PAUSE` or
+            `RESUME`. For instances, this can be either `CANCEL`, `RERUN`, or
+            `MARK_FINISHED`.
+
+        """
+        params = {
+            'pipelineId': pipeline_id,
+            'objectIds': object_ids,
+            'status': status,
+        }
+        return self.make_request(action='SetStatus',
+                                 body=json.dumps(params))
+
+    def set_task_status(self, task_id, task_status, error_id=None,
+                        error_message=None, error_stack_trace=None):
+        """
+        Notifies AWS Data Pipeline that a task is completed and
+        provides information about the final status. The task runner
+        calls this action regardless of whether the task was
+        successful. The task runner does not need to call SetTaskStatus
+        for tasks that are canceled by the web service during a call
+        to ReportTaskProgress.
+
+        :type task_id: string
+        :param task_id: Identifies the task assigned to the task runner. This
+            value is set in the TaskObject that is returned by the PollForTask
+            action.
+
+        :type task_status: string
+        :param task_status: If `FINISHED`, the task successfully completed.
+            If `FAILED`, the task ended unsuccessfully. The `FALSE` value is
+            used by preconditions.
+
+        :type error_id: string
+        :param error_id: If an error occurred during the task, this value
+            specifies an id value that represents the error. This value is set
+            on the physical attempt object. It is used to display error
+            information to the user. It should not start with the string
+            "Service_", which is reserved by the system.
+
+        :type error_message: string
+        :param error_message: If an error occurred during the task, this value
+            specifies a text description of the error. This value is set on the
+            physical attempt object. It is used to display error information to
+            the user. The web service does not parse this value.
+
+        :type error_stack_trace: string
+        :param error_stack_trace: If an error occurred during the task, this
+            value specifies the stack trace associated with the error. This
+            value is set on the physical attempt object. It is used to display
+            error information to the user. The web service does not parse this
+            value.
+
+        """
+        params = {'taskId': task_id, 'taskStatus': task_status, }
+        if error_id is not None:
+            params['errorId'] = error_id
+        if error_message is not None:
+            params['errorMessage'] = error_message
+        if error_stack_trace is not None:
+            params['errorStackTrace'] = error_stack_trace
+        return self.make_request(action='SetTaskStatus',
+                                 body=json.dumps(params))
+
+    def validate_pipeline_definition(self, pipeline_objects, pipeline_id):
+        """
+        Tests the pipeline definition with a set of validation checks
+        to ensure that it is well formed and can run without error.
+
+        :type pipeline_id: string
+        :param pipeline_id: Identifies the pipeline whose definition is to be
+            validated.
+
+        :type pipeline_objects: list
+        :param pipeline_objects: A list of objects that define the pipeline
+            changes to validate against the pipeline.
+
+        """
+        params = {
+            'pipelineId': pipeline_id,
+            'pipelineObjects': pipeline_objects,
+        }
+        return self.make_request(action='ValidatePipelineDefinition',
+                                 body=json.dumps(params))
+
+    def make_request(self, action, body):
+        headers = {
+            'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action),
+            'Host': self.region.endpoint,
+            'Content-Type': 'application/x-amz-json-1.1',
+            'Content-Length': str(len(body)),
+        }
+        http_request = self.build_base_http_request(
+            method='POST', path='/', auth_path='/', params={},
+            headers=headers, data=body)
+        response = self._mexe(http_request, sender=None,
+                              override_num_retries=10)
+        response_body = response.read()
+        boto.log.debug(response_body)
+        if response.status == 200:
+            if response_body:
+                return json.loads(response_body)
+        else:
+            json_body = json.loads(response_body)
+            fault_name = json_body.get('__type', None)
+            exception_class = self._faults.get(fault_name, self.ResponseError)
+            raise exception_class(response.status, response.reason,
+                                  body=json_body)
+
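
Taken together, these methods implement the task-runner protocol the
docstrings describe. A minimal sketch, assuming `conn` is an
already-constructed Data Pipeline connection exposing the methods above,
`task_id` came from a PollForTask response, and `do_work` is a
hypothetical work function::

    def run_task(conn, task_id, do_work):
        # Acknowledge the assignment within 2 minutes to keep ownership.
        conn.report_task_progress(task_id)
        try:
            do_work()
        except Exception, e:  # Python 2 syntax, matching this codebase
            # Error ids must not start with "Service_" (reserved).
            conn.set_task_status(task_id, 'FAILED',
                                 error_id='TaskRunnerError',
                                 error_message=str(e))
            return
        conn.set_task_status(task_id, 'FINISHED')
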
diff --git a/boto/dynamodb/__init__.py b/boto/dynamodb/__init__.py
index c60b5c3..1220436 100644
--- a/boto/dynamodb/__init__.py
+++ b/boto/dynamodb/__init__.py
@@ -47,9 +47,15 @@
             RegionInfo(name='ap-southeast-1',
                        endpoint='dynamodb.ap-southeast-1.amazonaws.com',
                        connection_cls=boto.dynamodb.layer2.Layer2),
+            RegionInfo(name='ap-southeast-2',
+                       endpoint='dynamodb.ap-southeast-2.amazonaws.com',
+                       connection_cls=boto.dynamodb.layer2.Layer2),
             RegionInfo(name='eu-west-1',
                        endpoint='dynamodb.eu-west-1.amazonaws.com',
                        connection_cls=boto.dynamodb.layer2.Layer2),
+            RegionInfo(name='sa-east-1',
+                       endpoint='dynamodb.sa-east-1.amazonaws.com',
+                       connection_cls=boto.dynamodb.layer2.Layer2),
             ]
 
 
diff --git a/boto/dynamodb/batch.py b/boto/dynamodb/batch.py
index 87c84fc..6a755a9 100644
--- a/boto/dynamodb/batch.py
+++ b/boto/dynamodb/batch.py
@@ -41,12 +41,18 @@
     :ivar attributes_to_get: A list of attribute names.
         If supplied, only the specified attribute names will
         be returned.  Otherwise, all attributes will be returned.
+
+    :ivar consistent_read: Specify whether or not to use a
+        consistent read. Defaults to False.
+
     """
 
-    def __init__(self, table, keys, attributes_to_get=None):
+    def __init__(self, table, keys, attributes_to_get=None,
+                 consistent_read=False):
         self.table = table
         self.keys = keys
         self.attributes_to_get = attributes_to_get
+        self.consistent_read = consistent_read
 
     def to_dict(self):
         """
@@ -66,8 +72,13 @@
         batch_dict['Keys'] = key_list
         if self.attributes_to_get:
             batch_dict['AttributesToGet'] = self.attributes_to_get
+        batch_dict['ConsistentRead'] = bool(self.consistent_read)
         return batch_dict
 
+
 class BatchWrite(object):
     """
     Used to construct a BatchWrite request.  Each BatchWrite object
@@ -126,7 +137,8 @@
         self.unprocessed = None
         self.layer2 = layer2
 
-    def add_batch(self, table, keys, attributes_to_get=None):
+    def add_batch(self, table, keys, attributes_to_get=None,
+                  consistent_read=False):
         """
         Add a Batch to this BatchList.
 
@@ -149,7 +161,7 @@
             If supplied, only the specified attribute names will
             be returned.  Otherwise, all attributes will be returned.
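+
+        :type consistent_read: bool
+        :param consistent_read: If True, a consistent read
+            request is issued.  Otherwise, an eventually consistent
+            request is issued.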
         """
-        self.append(Batch(table, keys, attributes_to_get))
+        self.append(Batch(table, keys, attributes_to_get, consistent_read))
 
     def resubmit(self):
         """
@@ -202,6 +214,7 @@
                 d[batch.table.name] = b
         return d
 
+
 class BatchWriteList(list):
     """
     A subclass of a list object that contains a collection of
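
The new `consistent_read` flag flows from `add_batch` into the request
body. A minimal usage sketch, assuming an existing `table` object and a
list of `keys`::

    from boto.dynamodb.batch import BatchList

    batch_list = BatchList(table.layer2)
    batch_list.add_batch(table, keys, consistent_read=True)
    result = batch_list.submit()
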
diff --git a/boto/dynamodb/condition.py b/boto/dynamodb/condition.py
index 0b76790..f5db538 100644
--- a/boto/dynamodb/condition.py
+++ b/boto/dynamodb/condition.py
@@ -92,7 +92,7 @@
         self.values = values
 
     def __repr__(self):
-        return '{}({})'.format(self.__class__.__name__,
+        return '{0}({1})'.format(self.__class__.__name__,
                                ', '.join(self.values))
 
     def to_dict(self):
diff --git a/boto/dynamodb/exceptions.py b/boto/dynamodb/exceptions.py
index b60d5aa..12be2d7 100644
--- a/boto/dynamodb/exceptions.py
+++ b/boto/dynamodb/exceptions.py
@@ -4,6 +4,7 @@
 from boto.exception import BotoServerError, BotoClientError
 from boto.exception import DynamoDBResponseError
 
+
 class DynamoDBExpiredTokenError(BotoServerError):
     """
     Raised when a DynamoDB security token expires. This is generally boto's
@@ -28,6 +29,13 @@
     pass
 
 
+class DynamoDBNumberError(BotoClientError):
+    """
+    Raised in the event of incompatible numeric type casting.
+    """
+    pass
+
+
 class DynamoDBConditionalCheckFailedError(DynamoDBResponseError):
     """
     Raised when a ConditionalCheckFailedException response is received.
@@ -36,6 +44,7 @@
     """
     pass
 
+
 class DynamoDBValidationError(DynamoDBResponseError):
     """
     Raised when a ValidationException response is received. This happens
@@ -43,3 +52,13 @@
     has exceeded the 64Kb size limit.
     """
     pass
+
+
+class DynamoDBThroughputExceededError(DynamoDBResponseError):
+    """
+    Raised when the provisioned throughput has been exceeded.
+    Normally, when provisioned throughput is exceeded the operation
+    is retried.  If the retries are exhausted then this exception
+    will be raised.
+    """
+    pass
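
The new exception lets callers distinguish exhausted-retry throughput
failures from other response errors. A sketch, assuming an existing
`table` object whose hash key is a string::

    from boto.dynamodb.exceptions import DynamoDBThroughputExceededError

    try:
        item = table.get_item(hash_key='some-key')
    except DynamoDBThroughputExceededError:
        # All automatic retries were consumed; back off at the
        # application level instead.
        raise
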
diff --git a/boto/dynamodb/item.py b/boto/dynamodb/item.py
index 4d4abda..b2b444d 100644
--- a/boto/dynamodb/item.py
+++ b/boto/dynamodb/item.py
@@ -194,3 +194,9 @@
         if self._updates is not None:
             self.delete_attribute(key)
         dict.__delitem__(self, key)
+
+    # Allow this item to still be pickled
+    def __getstate__(self):
+        return self.__dict__
+    def __setstate__(self, d):
+        self.__dict__.update(d)
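
With `__getstate__`/`__setstate__` defined, `Item` instances can survive
a pickle round trip, e.g. for caching. A sketch, assuming `item` is an
`Item` whose attached table/connection state is itself picklable::

    import pickle

    data = pickle.dumps(item)
    restored = pickle.loads(data)
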
diff --git a/boto/dynamodb/layer1.py b/boto/dynamodb/layer1.py
index 40dac5c..95c96a7 100644
--- a/boto/dynamodb/layer1.py
+++ b/boto/dynamodb/layer1.py
@@ -20,25 +20,15 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
+import time
+from binascii import crc32
 
 import boto
 from boto.connection import AWSAuthConnection
 from boto.exception import DynamoDBResponseError
 from boto.provider import Provider
 from boto.dynamodb import exceptions as dynamodb_exceptions
-
-import time
-try:
-    import simplejson as json
-except ImportError:
-    import json
-
-#
-# To get full debug output, uncomment the following line and set the
-# value of Debug to be 2
-#
-#boto.set_stream_logger('dynamodb')
-Debug = 0
+from boto.compat import json
 
 
 class Layer1(AWSAuthConnection):
@@ -78,10 +68,13 @@
 
     ResponseError = DynamoDBResponseError
 
+    NumberRetries = 10
+    """The number of times an error is retried."""
+
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  debug=0, security_token=None, region=None,
-                 validate_certs=True):
+                 validate_certs=True, validate_checksums=True):
         if not region:
             region_name = boto.config.get('DynamoDB', 'region',
                                           self.DefaultRegionName)
@@ -98,6 +91,8 @@
                                    debug=debug, security_token=security_token,
                                    validate_certs=validate_certs)
         self.throughput_exceeded_events = 0
+        self._validate_checksums = boto.config.getbool(
+            'DynamoDB', 'validate_checksums', validate_checksums)
 
     def _get_session_token(self):
         self.provider = Provider(self._provider_type)
@@ -119,13 +114,13 @@
                                                     {}, headers, body, None)
         start = time.time()
         response = self._mexe(http_request, sender=None,
-                              override_num_retries=10,
+                              override_num_retries=self.NumberRetries,
                               retry_handler=self._retry_handler)
-        elapsed = (time.time() - start)*1000
+        elapsed = (time.time() - start) * 1000
         request_id = response.getheader('x-amzn-RequestId')
         boto.log.debug('RequestId: %s' % request_id)
-        boto.perflog.info('%s: id=%s time=%sms',
-                          headers['X-Amz-Target'], request_id, int(elapsed))
+        boto.perflog.debug('%s: id=%s time=%sms',
+                           headers['X-Amz-Target'], request_id, int(elapsed))
         response_body = response.read()
         boto.log.debug(response_body)
         return json.loads(response_body, object_hook=object_hook)
@@ -139,12 +134,15 @@
             if self.ThruputError in data.get('__type'):
                 self.throughput_exceeded_events += 1
                 msg = "%s, retry attempt %s" % (self.ThruputError, i)
-                if i == 0:
-                    next_sleep = 0
-                else:
-                    next_sleep = 0.05 * (2 ** i)
+                next_sleep = self._exponential_time(i)
                 i += 1
                 status = (msg, i, next_sleep)
+                if i == self.NumberRetries:
+                    # If this was our last retry attempt, raise
+                    # a specific error saying that the throughput
+                    # was exceeded.
+                    raise dynamodb_exceptions.DynamoDBThroughputExceededError(
+                        response.status, response.reason, data)
             elif self.SessionExpiredError in data.get('__type'):
                 msg = 'Renewing Session Token'
                 self._get_session_token()
@@ -158,8 +156,25 @@
             else:
                 raise self.ResponseError(response.status, response.reason,
                                          data)
+        expected_crc32 = response.getheader('x-amz-crc32')
+        if self._validate_checksums and expected_crc32 is not None:
+            response_body = response.read()
+            boto.log.debug('Validating crc32 checksum for body: %s',
+                           response_body)
+            actual_crc32 = crc32(response_body) & 0xffffffff
+            expected_crc32 = int(expected_crc32)
+            if actual_crc32 != expected_crc32:
+                msg = ("The calculated checksum %s did not match the expected "
+                       "checksum %s" % (actual_crc32, expected_crc32))
+                status = (msg, i + 1, self._exponential_time(i))
         return status
 
+    def _exponential_time(self, i):
+        if i == 0:
+            next_sleep = 0
+        else:
+            next_sleep = 0.05 * (2 ** i)
+        return next_sleep
+
     def list_tables(self, limit=None, start_table=None):
         """
         Returns a dictionary of results.  The dictionary contains
@@ -447,7 +462,7 @@
     def query(self, table_name, hash_key_value, range_key_conditions=None,
               attributes_to_get=None, limit=None, consistent_read=False,
               scan_index_forward=True, exclusive_start_key=None,
-              object_hook=None):
+              object_hook=None, count=False):
         """
         Perform a query of DynamoDB.  This version is currently punting
         and expecting you to provide a full and correct JSON body
@@ -471,6 +486,11 @@
         :type limit: int
         :param limit: The maximum number of items to return.
 
+        :type count: bool
+        :param count: If True, Amazon DynamoDB returns a total
+            number of items for the Query operation, even if the
+            operation has no matching items for the assigned filter.
+
         :type consistent_read: bool
         :param consistent_read: If True, a consistent read
             request is issued.  Otherwise, an eventually consistent
@@ -493,6 +513,8 @@
             data['AttributesToGet'] = attributes_to_get
         if limit:
             data['Limit'] = limit
+        if count:
+            data['Count'] = True
         if consistent_read:
             data['ConsistentRead'] = True
         if scan_index_forward:
@@ -507,8 +529,7 @@
 
     def scan(self, table_name, scan_filter=None,
              attributes_to_get=None, limit=None,
-             count=False, exclusive_start_key=None,
-             object_hook=None):
+             exclusive_start_key=None, object_hook=None, count=False):
         """
         Perform a scan of DynamoDB.  This version is currently punting
         and expecting you to provide a full and correct JSON body
@@ -527,7 +548,7 @@
             be returned.  Otherwise, all attributes will be returned.
 
         :type limit: int
-        :param limit: The maximum number of items to return.
+        :param limit: The maximum number of items to evaluate.
 
         :type count: bool
         :param count: If True, Amazon DynamoDB returns a total
diff --git a/boto/dynamodb/layer2.py b/boto/dynamodb/layer2.py
index 45fd069..16fcdbb 100644
--- a/boto/dynamodb/layer2.py
+++ b/boto/dynamodb/layer2.py
@@ -20,92 +20,124 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-import base64
-
 from boto.dynamodb.layer1 import Layer1
 from boto.dynamodb.table import Table
 from boto.dynamodb.schema import Schema
 from boto.dynamodb.item import Item
 from boto.dynamodb.batch import BatchList, BatchWriteList
-from boto.dynamodb.types import get_dynamodb_type, dynamize_value, \
-        convert_num, convert_binary
+from boto.dynamodb.types import get_dynamodb_type, Dynamizer, \
+        LossyFloatDynamizer
 
 
-def item_object_hook(dct):
-    """
-    A custom object hook for use when decoding JSON item bodys.
-    This hook will transform Amazon DynamoDB JSON responses to something
-    that maps directly to native Python types.
-    """
-    if len(dct.keys()) > 1:
-        return dct
-    if 'S' in dct:
-        return dct['S']
-    if 'N' in dct:
-        return convert_num(dct['N'])
-    if 'SS' in dct:
-        return set(dct['SS'])
-    if 'NS' in dct:
-        return set(map(convert_num, dct['NS']))
-    if 'B' in dct:
-        return base64.b64decode(dct['B'])
-    if 'BS' in dct:
-        return set(map(convert_binary, dct['BS']))
-    return dct
-
-
-def table_generator(tgen):
-    """
-    A low-level generator used to page through results from
-    query and scan operations.  This is used by
-    :class:`boto.dynamodb.layer2.TableGenerator` and is not intended
-    to be used outside of that context.
-    """
-    response = True
-    n = 0
-    while response:
-        if tgen.max_results and n == tgen.max_results:
-            break
-        if response is True:
-            pass
-        elif 'LastEvaluatedKey' in response:
-            lek = response['LastEvaluatedKey']
-            esk = tgen.table.layer2.dynamize_last_evaluated_key(lek)
-            tgen.kwargs['exclusive_start_key'] = esk
-        else:
-            break
-        response = tgen.callable(**tgen.kwargs)
-        if 'ConsumedCapacityUnits' in response:
-            tgen.consumed_units += response['ConsumedCapacityUnits']
-        for item in response['Items']:
-            if tgen.max_results and n == tgen.max_results:
-                break
-            yield tgen.item_class(tgen.table, attrs=item)
-            n += 1
-
-
-class TableGenerator:
+class TableGenerator(object):
     """
-    This is an object that wraps up the table_generator function.
-    The only real reason to have this is that we want to be able
-    to accumulate and return the ConsumedCapacityUnits element that
-    is part of each response.
+    A generator-like object that wraps paged calls to query and
+    scan, accumulating response metadata (such as
+    ConsumedCapacityUnits) across pages.
 
-    :ivar consumed_units: An integer that holds the number of
-        ConsumedCapacityUnits accumulated thus far for this
-        generator.
+    :ivar last_evaluated_key: A sequence representing the key(s)
+        of the item last evaluated, or None if no additional
+        results are available.
+
+    :ivar remaining: The remaining quantity of results requested.
+
+    :ivar table: The table to which the call was made.
     """
 
-    def __init__(self, table, callable, max_results, item_class, kwargs):
+    def __init__(self, table, callable, remaining, item_class, kwargs):
         self.table = table
         self.callable = callable
-        self.max_results = max_results
+        self.remaining = -1 if remaining is None else remaining
         self.item_class = item_class
         self.kwargs = kwargs
-        self.consumed_units = 0
+        self._consumed_units = 0.0
+        self.last_evaluated_key = None
+        self._count = 0
+        self._scanned_count = 0
+        self._response = None
+
+    @property
+    def count(self):
+        """
+        The total number of items retrieved thus far.  This value changes
+        with iteration; even when issuing a call with count=True, the
+        iteration must be completed to obtain an accurate count value.
+        """
+        self.response
+        return self._count
+
+    @property
+    def scanned_count(self):
+        """
+        As above, but representing the total number of items scanned by
+        DynamoDB, without regard to any filters.
+        """
+        self.response
+        return self._scanned_count
+
+    @property
+    def consumed_units(self):
+        """
+        Returns a float representing the ConsumedCapacityUnits accumulated.
+        """
+        self.response
+        return self._consumed_units
+
+    @property
+    def response(self):
+        """
+        The current response to the call from DynamoDB.
+        """
+        return self.next_response() if self._response is None else self._response
+
+    def next_response(self):
+        """
+        Issue a call and return the result.  You can invoke this method
+        while iterating over the TableGenerator in order to skip to the
+        next "page" of results.
+        """
+        # preserve any existing limit in case the user alters self.remaining
+        limit = self.kwargs.get('limit')
+        if (self.remaining > 0 and (limit is None or limit > self.remaining)):
+            self.kwargs['limit'] = self.remaining
+        self._response = self.callable(**self.kwargs)
+        self.kwargs['limit'] = limit
+        self._consumed_units += self._response.get('ConsumedCapacityUnits', 0.0)
+        self._count += self._response.get('Count', 0)
+        self._scanned_count += self._response.get('ScannedCount', 0)
+        # at the expense of a possibly gratuitous dynamize, ensure that
+        # early generator termination won't result in bad LEK values
+        if 'LastEvaluatedKey' in self._response:
+            lek = self._response['LastEvaluatedKey']
+            esk = self.table.layer2.dynamize_last_evaluated_key(lek)
+            self.kwargs['exclusive_start_key'] = esk
+            lektuple = (lek['HashKeyElement'],)
+            if 'RangeKeyElement' in lek:
+                lektuple += (lek['RangeKeyElement'],)
+            self.last_evaluated_key = lektuple
+        else:
+            self.last_evaluated_key = None
+        return self._response
 
     def __iter__(self):
-        return table_generator(self)
+        while self.remaining != 0:
+            response = self.response
+            for item in response.get('Items', []):
+                self.remaining -= 1
+                yield self.item_class(self.table, attrs=item)
+                if self.remaining == 0:
+                    break
+                if response is not self._response:
+                    # The consumer called next_response() mid-iteration;
+                    # abandon the now-stale page.
+                    break
+            else:
+                # The page was exhausted without interruption: fetch the
+                # next page if one exists, otherwise stop.
+                if self.last_evaluated_key is not None:
+                    self.next_response()
+                    continue
+                break
+            if response is not self._response:
+                continue
+            break
 
 
 class Layer2(object):
@@ -113,11 +145,24 @@
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  debug=0, security_token=None, region=None,
-                 validate_certs=True):
+                 validate_certs=True, dynamizer=LossyFloatDynamizer):
         self.layer1 = Layer1(aws_access_key_id, aws_secret_access_key,
                              is_secure, port, proxy, proxy_port,
                              debug, security_token, region,
                              validate_certs=validate_certs)
+        self.dynamizer = dynamizer()
+
+    def use_decimals(self):
+        """
+        Use the ``decimal.Decimal`` type for encoding/decoding numeric types.
+
+        By default, ints/floats are used to represent numeric types
+        ('N', 'NS') received from DynamoDB.  Using the ``Decimal``
+        type is recommended to prevent loss of precision.
+
+        """
+        # Eventually this should be made the default dynamizer.
+        self.dynamizer = Dynamizer()
 
     def dynamize_attribute_updates(self, pending_updates):
         """
@@ -132,13 +177,13 @@
                 d[attr_name] = {"Action": action}
             else:
                 d[attr_name] = {"Action": action,
-                                "Value": dynamize_value(value)}
+                                "Value": self.dynamizer.encode(value)}
         return d
 
     def dynamize_item(self, item):
         d = {}
         for attr_name in item:
-            d[attr_name] = dynamize_value(item[attr_name])
+            d[attr_name] = self.dynamizer.encode(item[attr_name])
         return d
 
     def dynamize_range_key_condition(self, range_key_condition):
@@ -176,7 +221,7 @@
                 elif attr_value is False:
                     attr_value = {'Exists': False}
                 else:
-                    val = dynamize_value(expected_value[attr_name])
+                    val = self.dynamizer.encode(expected_value[attr_name])
                     attr_value = {'Value': val}
                 d[attr_name] = attr_value
         return d
@@ -189,10 +234,10 @@
         d = None
         if last_evaluated_key:
             hash_key = last_evaluated_key['HashKeyElement']
-            d = {'HashKeyElement': dynamize_value(hash_key)}
+            d = {'HashKeyElement': self.dynamizer.encode(hash_key)}
             if 'RangeKeyElement' in last_evaluated_key:
                 range_key = last_evaluated_key['RangeKeyElement']
-                d['RangeKeyElement'] = dynamize_value(range_key)
+                d['RangeKeyElement'] = self.dynamizer.encode(range_key)
         return d
 
     def build_key_from_values(self, schema, hash_key, range_key=None):
@@ -204,25 +249,25 @@
         Otherwise, a Python dict version of a Amazon DynamoDB Key
         data structure is returned.
 
-        :type hash_key: int, float, str, or unicode
+        :type hash_key: int|float|str|unicode|Binary
         :param hash_key: The hash key of the item you are looking for.
             The type of the hash key should match the type defined in
             the schema.
 
-        :type range_key: int, float, str or unicode
+        :type range_key: int|float|str|unicode|Binary
         :param range_key: The range key of the item your are looking for.
             This should be supplied only if the schema requires a
             range key.  The type of the range key should match the
             type defined in the schema.
         """
         dynamodb_key = {}
-        dynamodb_value = dynamize_value(hash_key)
+        dynamodb_value = self.dynamizer.encode(hash_key)
         if dynamodb_value.keys()[0] != schema.hash_key_type:
             msg = 'Hashkey must be of type: %s' % schema.hash_key_type
             raise TypeError(msg)
         dynamodb_key['HashKeyElement'] = dynamodb_value
         if range_key is not None:
-            dynamodb_value = dynamize_value(range_key)
+            dynamodb_value = self.dynamizer.encode(range_key)
             if dynamodb_value.keys()[0] != schema.range_key_type:
                 msg = 'RangeKey must be of type: %s' % schema.range_key_type
                 raise TypeError(msg)
@@ -275,6 +320,33 @@
         """
         return self.layer1.describe_table(name)
 
+    def table_from_schema(self, name, schema):
+        """
+        Create a Table object from a schema.
+
+        This method will create a Table object without
+        making any API calls.  If you know the name and schema
+        of the table, you can use this method instead of
+        ``get_table``.
+
+        Example usage::
+
+            table = layer2.table_from_schema(
+                'tablename',
+                Schema.create(hash_key=('foo', 'N')))
+
+        :type name: str
+        :param name: The name of the table.
+
+        :type schema: :class:`boto.dynamodb.schema.Schema`
+        :param schema: The schema associated with the table.
+
+        :rtype: :class:`boto.dynamodb.table.Table`
+        :return: A Table object representing the table.
+
+        """
+        return Table.create_from_schema(self, name, schema)
+
     def get_table(self, name):
         """
         Retrieve the Table object for an existing table.
@@ -286,7 +358,7 @@
         :return: A Table object representing the table.
         """
         response = self.layer1.describe_table(name)
-        return Table(self,  response)
+        return Table(self, response)
 
     lookup = get_table
 
@@ -352,7 +424,7 @@
         :type hash_key_name: str
         :param hash_key_name: The name of the HashKey for the schema.
 
-        :type hash_key_proto_value: int|long|float|str|unicode
+        :type hash_key_proto_value: int|long|float|str|unicode|Binary
         :param hash_key_proto_value: A sample or prototype of the type
             of value you want to use for the HashKey.  Alternatively,
             you can also just pass in the Python type (e.g. int, float, etc.).
@@ -361,25 +433,19 @@
         :param range_key_name: The name of the RangeKey for the schema.
             This parameter is optional.
 
-        :type range_key_proto_value: int|long|float|str|unicode
+        :type range_key_proto_value: int|long|float|str|unicode|Binary
         :param range_key_proto_value: A sample or prototype of the type
             of value you want to use for the RangeKey.  Alternatively,
             you can also pass in the Python type (e.g. int, float, etc.)
             This parameter is optional.
         """
-        schema = {}
-        hash_key = {}
-        hash_key['AttributeName'] = hash_key_name
-        hash_key_type = get_dynamodb_type(hash_key_proto_value)
-        hash_key['AttributeType'] = hash_key_type
-        schema['HashKeyElement'] = hash_key
+        hash_key = (hash_key_name, get_dynamodb_type(hash_key_proto_value))
         if range_key_name and range_key_proto_value is not None:
-            range_key = {}
-            range_key['AttributeName'] = range_key_name
-            range_key_type = get_dynamodb_type(range_key_proto_value)
-            range_key['AttributeType'] = range_key_type
-            schema['RangeKeyElement'] = range_key
-        return Schema(schema)
+            range_key = (range_key_name,
+                         get_dynamodb_type(range_key_proto_value))
+        else:
+            range_key = None
+        return Schema.create(hash_key, range_key)
 
     def get_item(self, table, hash_key, range_key=None,
                  attributes_to_get=None, consistent_read=False,
@@ -390,12 +456,12 @@
         :type table: :class:`boto.dynamodb.table.Table`
         :param table: The Table object from which the item is retrieved.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the requested item.  The
             type of the value must match the type defined in the
             schema for the table.
 
-        :type range_key: int|long|float|str|unicode
+        :type range_key: int|long|float|str|unicode|Binary
         :param range_key: The optional RangeKey of the requested item.
             The type of the value must match the type defined in the
             schema for the table.
@@ -418,7 +484,7 @@
         key = self.build_key_from_values(table.schema, hash_key, range_key)
         response = self.layer1.get_item(table.name, key,
                                         attributes_to_get, consistent_read,
-                                        object_hook=item_object_hook)
+                                        object_hook=self.dynamizer.decode)
         item = item_class(table, hash_key, range_key, response['Item'])
         if 'ConsumedCapacityUnits' in response:
             item.consumed_units = response['ConsumedCapacityUnits']
@@ -438,7 +504,7 @@
         """
         request_items = batch_list.to_dict()
         return self.layer1.batch_get_item(request_items,
-                                          object_hook=item_object_hook)
+                                          object_hook=self.dynamizer.decode)
 
     def batch_write_item(self, batch_list):
         """
@@ -452,7 +518,7 @@
         """
         request_items = batch_list.to_dict()
         return self.layer1.batch_write_item(request_items,
-                                            object_hook=item_object_hook)
+                                            object_hook=self.dynamizer.decode)
 
     def put_item(self, item, expected_value=None, return_values=None):
         """
@@ -480,7 +546,7 @@
         response = self.layer1.put_item(item.table.name,
                                         self.dynamize_item(item),
                                         expected_value, return_values,
-                                        object_hook=item_object_hook)
+                                        object_hook=self.dynamizer.decode)
         if 'ConsumedCapacityUnits' in response:
             item.consumed_units = response['ConsumedCapacityUnits']
         return response
@@ -521,7 +587,7 @@
         response = self.layer1.update_item(item.table.name, key,
                                            attr_updates,
                                            expected_value, return_values,
-                                           object_hook=item_object_hook)
+                                           object_hook=self.dynamizer.decode)
         item._updates.clear()
         if 'ConsumedCapacityUnits' in response:
             item.consumed_units = response['ConsumedCapacityUnits']
@@ -554,20 +620,20 @@
         return self.layer1.delete_item(item.table.name, key,
                                        expected=expected_value,
                                        return_values=return_values,
-                                       object_hook=item_object_hook)
+                                       object_hook=self.dynamizer.decode)
 
     def query(self, table, hash_key, range_key_condition=None,
               attributes_to_get=None, request_limit=None,
               max_results=None, consistent_read=False,
               scan_index_forward=True, exclusive_start_key=None,
-              item_class=Item):
+              item_class=Item, count=False):
         """
         Perform a query on the table.
 
         :type table: :class:`boto.dynamodb.table.Table`
         :param table: The Table object that is being queried.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the requested item.  The
             type of the value must match the type defined in the
             schema for the table.
@@ -611,6 +677,14 @@
         :param scan_index_forward: Specified forward or backward
             traversal of the index.  Default is forward (True).
 
+        :type count: bool
+        :param count: If True, Amazon DynamoDB returns a total
+            number of items for the Query operation, even if the
+            operation has no matching items for the assigned filter.
+            If count is True, the actual items are not returned and
+            the count is accessible as the ``count`` attribute of
+            the returned object.
+
         :type exclusive_start_key: list or tuple
         :param exclusive_start_key: Primary key of the item from
             which to continue an earlier query.  This would be
@@ -633,20 +707,21 @@
         else:
             esk = None
         kwargs = {'table_name': table.name,
-                  'hash_key_value': dynamize_value(hash_key),
+                  'hash_key_value': self.dynamizer.encode(hash_key),
                   'range_key_conditions': rkc,
                   'attributes_to_get': attributes_to_get,
                   'limit': request_limit,
+                  'count': count,
                   'consistent_read': consistent_read,
                   'scan_index_forward': scan_index_forward,
                   'exclusive_start_key': esk,
-                  'object_hook': item_object_hook}
+                  'object_hook': self.dynamizer.decode}
         return TableGenerator(table, self.layer1.query,
                               max_results, item_class, kwargs)
 
     def scan(self, table, scan_filter=None,
              attributes_to_get=None, request_limit=None, max_results=None,
-             count=False, exclusive_start_key=None, item_class=Item):
+             exclusive_start_key=None, item_class=Item, count=False):
         """
         Perform a scan of DynamoDB.
 
@@ -697,6 +772,9 @@
         :param count: If True, Amazon DynamoDB returns a total
             number of items for the Scan operation, even if the
             operation has no matching items for the assigned filter.
+            If count is True, the actual items are not returned and
+            the count is accessible as the ``count`` attribute of
+            the returned object.
 
         :type exclusive_start_key: list or tuple
         :param exclusive_start_key: Primary key of the item from
@@ -721,6 +799,6 @@
                   'limit': request_limit,
                   'count': count,
                   'exclusive_start_key': esk,
-                  'object_hook': item_object_hook}
+                  'object_hook': self.dynamizer.decode}
         return TableGenerator(table, self.layer1.scan,
                               max_results, item_class, kwargs)
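
The reworked `TableGenerator` is iterated the same way as before, but
now exposes paging state as it goes. A sketch, assuming an existing
`table`::

    results = table.scan()            # returns a TableGenerator
    for item in results:
        pass                          # process each Item here
    print results.consumed_units      # accumulated ConsumedCapacityUnits
    print results.last_evaluated_key  # None once results are exhausted
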
diff --git a/boto/dynamodb/schema.py b/boto/dynamodb/schema.py
index 34ff212..4a697a8 100644
--- a/boto/dynamodb/schema.py
+++ b/boto/dynamodb/schema.py
@@ -47,6 +47,38 @@
             s = 'Schema(%s)' % self.hash_key_name
         return s
 
+    @classmethod
+    def create(cls, hash_key, range_key=None):
+        """Convenience method to create a schema object.
+
+        Example usage::
+
+            schema = Schema.create(hash_key=('foo', 'N'))
+            schema2 = Schema.create(hash_key=('foo', 'N'),
+                                    range_key=('bar', 'S'))
+
+        :type hash_key: tuple
+        :param hash_key: A tuple of (hash_key_name, hash_key_type)
+
+        :type range_key: tuple
+        :param range_key: A tuple of (range_key_name, range_key_type)
+
+        """
+        reconstructed = {
+            'HashKeyElement': {
+                'AttributeName': hash_key[0],
+                'AttributeType': hash_key[1],
+            }
+        }
+        if range_key is not None:
+            reconstructed['RangeKeyElement'] = {
+                'AttributeName': range_key[0],
+                'AttributeType': range_key[1],
+            }
+        instance = cls(None)
+        instance._dict = reconstructed
+        return instance
+
     @property
     def dict(self):
         return self._dict
@@ -72,3 +104,9 @@
         if 'RangeKeyElement' in self._dict:
             type = self._dict['RangeKeyElement']['AttributeType']
         return type
+
+    def __eq__(self, other):
+        return (self.hash_key_name == other.hash_key_name and
+                self.hash_key_type == other.hash_key_type and
+                self.range_key_name == other.range_key_name and
+                self.range_key_type == other.range_key_type)
diff --git a/boto/dynamodb/table.py b/boto/dynamodb/table.py
index ee73b1a..129b079 100644
--- a/boto/dynamodb/table.py
+++ b/boto/dynamodb/table.py
@@ -27,6 +27,7 @@
 from boto.dynamodb import exceptions as dynamodb_exceptions
 import time
 
+
 class TableBatchGenerator(object):
     """
     A low-level generator used to page through results from
@@ -37,11 +38,13 @@
         generator.
     """
 
-    def __init__(self, table, keys, attributes_to_get=None):
+    def __init__(self, table, keys, attributes_to_get=None,
+                 consistent_read=False):
         self.table = table
         self.keys = keys
         self.consumed_units = 0
         self.attributes_to_get = attributes_to_get
+        self.consistent_read = consistent_read
 
     def _queue_unprocessed(self, res):
         if not u'UnprocessedKeys' in res:
@@ -60,7 +63,8 @@
         while self.keys:
             # Build the next batch
             batch = BatchList(self.table.layer2)
-            batch.add_batch(self.table, self.keys[:100], self.attributes_to_get)
+            batch.add_batch(self.table, self.keys[:100],
+                            self.attributes_to_get,
+                            consistent_read=self.consistent_read)
             res = batch.submit()
 
             # parse the results
@@ -99,10 +103,53 @@
     """
 
     def __init__(self, layer2, response):
+        """
+
+        :type layer2: :class:`boto.dynamodb.layer2.Layer2`
+        :param layer2: A `Layer2` api object.
+
+        :type response: dict
+        :param response: The output of
+            `boto.dynamodb.layer1.Layer1.describe_table`.
+
+        """
         self.layer2 = layer2
         self._dict = {}
         self.update_from_response(response)
 
+    @classmethod
+    def create_from_schema(cls, layer2, name, schema):
+        """Create a Table object.
+
+        If you know the name and schema of your table, you can
+        create a ``Table`` object without having to make any
+        API calls (normally an API call is made to retrieve
+        the schema of a table).
+
+        Example usage::
+
+            table = Table.create_from_schema(
+                boto.connect_dynamodb(),
+                'tablename',
+                Schema.create(hash_key=('keyname', 'N')))
+
+        :type layer2: :class:`boto.dynamodb.layer2.Layer2`
+        :param layer2: A ``Layer2`` api object.
+
+        :type name: str
+        :param name: The name of the table.
+
+        :type schema: :class:`boto.dynamodb.schema.Schema`
+        :param schema: The schema associated with the table.
+
+        :rtype: :class:`boto.dynamodb.table.Table`
+        :return: A Table object representing the table.
+
+        """
+        table = cls(layer2, {'Table': {'TableName': name}})
+        table._schema = schema
+        return table
+
     def __repr__(self):
         return 'Table(%s)' % self.name
 
@@ -112,11 +159,11 @@
 
     @property
     def create_time(self):
-        return self._dict['CreationDateTime']
+        return self._dict.get('CreationDateTime', None)
 
     @property
     def status(self):
-        return self._dict['TableStatus']
+        return self._dict.get('TableStatus', None)
 
     @property
     def item_count(self):
@@ -132,19 +179,27 @@
 
     @property
     def read_units(self):
-        return self._dict['ProvisionedThroughput']['ReadCapacityUnits']
+        try:
+            return self._dict['ProvisionedThroughput']['ReadCapacityUnits']
+        except KeyError:
+            return None
 
     @property
     def write_units(self):
-        return self._dict['ProvisionedThroughput']['WriteCapacityUnits']
+        try:
+            return self._dict['ProvisionedThroughput']['WriteCapacityUnits']
+        except KeyError:
+            return None
 
     def update_from_response(self, response):
         """
         Update the state of the Table object based on the response
         data received from Amazon DynamoDB.
         """
+        # 'Table' is from a describe_table call.
         if 'Table' in response:
             self._dict.update(response['Table'])
+        # 'TableDescription' is from a create_table call.
         elif 'TableDescription' in response:
             self._dict.update(response['TableDescription'])
         if 'KeySchema' in self._dict:
@@ -202,12 +257,12 @@
         """
         Retrieve an existing item from the table.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the requested item.  The
             type of the value must match the type defined in the
             schema for the table.
 
-        :type range_key: int|long|float|str|unicode
+        :type range_key: int|long|float|str|unicode|Binary
         :param range_key: The optional RangeKey of the requested item.
             The type of the value must match the type defined in the
             schema for the table.
@@ -240,12 +295,12 @@
         the data that is returned, since this method specifically tells
         Amazon not to return anything but the Item's key.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the requested item.  The
             type of the value must match the type defined in the
             schema for the table.
 
-        :type range_key: int|long|float|str|unicode
+        :type range_key: int|long|float|str|unicode|Binary
         :param range_key: The optional RangeKey of the requested item.
             The type of the value must match the type defined in the
             schema for the table.
@@ -280,7 +335,7 @@
         the hash_key and range_key values of the item.  You can use
         these explicit parameters when calling the method, such as::
 
-        >>> my_item = my_table.new_item(hash_key='a', range_key=1,
+            >>> my_item = my_table.new_item(hash_key='a', range_key=1,
                                         attrs={'key1': 'val1', 'key2': 'val2'})
             >>> my_item
             {u'bar': 1, u'foo': 'a', 'key1': 'val1', 'key2': 'val2'}
@@ -302,12 +357,12 @@
            the explicit parameters, the values in the attrs will be
            ignored.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the new item.  The
             type of the value must match the type defined in the
             schema for the table.
 
-        :type range_key: int|long|float|str|unicode
+        :type range_key: int|long|float|str|unicode|Binary
         :param range_key: The optional RangeKey of the new item.
             The type of the value must match the type defined in the
             schema for the table.
@@ -323,15 +378,11 @@
         """
         return item_class(self, hash_key, range_key, attrs)
 
-    def query(self, hash_key, range_key_condition=None,
-              attributes_to_get=None, request_limit=None,
-              max_results=None, consistent_read=False,
-              scan_index_forward=True, exclusive_start_key=None,
-              item_class=Item):
+    def query(self, hash_key, *args, **kw):
         """
         Perform a query on the table.
 
-        :type hash_key: int|long|float|str|unicode
+        :type hash_key: int|long|float|str|unicode|Binary
         :param hash_key: The HashKey of the requested item.  The
             type of the value must match the type defined in the
             schema for the table.
@@ -380,31 +431,33 @@
             which to continue an earlier query.  This would be
             provided as the LastEvaluatedKey in that query.
 
+        :type count: bool
+        :param count: If True, Amazon DynamoDB returns a total
+            number of items for the Query operation, even if the
+            operation has no matching items for the assigned filter.
+            If count is True, the actual items are not returned and
+            the count is accessible as the ``count`` attribute of
+            the returned object.
+
         :type item_class: Class
         :param item_class: Allows you to override the class used
             to generate the items. This should be a subclass of
             :class:`boto.dynamodb.item.Item`
         """
-        return self.layer2.query(self, hash_key, range_key_condition,
-                                 attributes_to_get, request_limit,
-                                 max_results, consistent_read,
-                                 scan_index_forward, exclusive_start_key,
-                                 item_class=item_class)
+        return self.layer2.query(self, hash_key, *args, **kw)
 
-    def scan(self, scan_filter=None,
-             attributes_to_get=None, request_limit=None, max_results=None,
-             count=False, exclusive_start_key=None, item_class=Item):
+    def scan(self, *args, **kw):
         """
         Scan through this table, this is a very long
         and expensive operation, and should be avoided if
         at all possible.
 
-        :type scan_filter: A list of tuples
-        :param scan_filter: A list of tuples where each tuple consists
-            of an attribute name, a comparison operator, and either
-            a scalar or tuple consisting of the values to compare
-            the attribute to.  Valid comparison operators are shown below
-            along with the expected number of values that should be supplied.
+        :type scan_filter: A dict
+        :param scan_filter: A dictionary where the key is the
+            attribute name and the value is a
+            :class:`boto.dynamodb.condition.Condition` object.
+            Valid Condition objects include:
 
              * EQ - equal (1)
              * NE - not equal (1)
@@ -444,6 +497,9 @@
         :param count: If True, Amazon DynamoDB returns a total
             number of items for the Scan operation, even if the
             operation has no matching items for the assigned filter.
+            If count is True, the actual items are not returned and
+            the count is accessible as the ``count`` attribute of
+            the returned object.
 
         :type exclusive_start_key: list or tuple
         :param exclusive_start_key: Primary key of the item from
@@ -455,12 +511,11 @@
             to generate the items. This should be a subclass of
             :class:`boto.dynamodb.item.Item`
 
-        :return: A TableGenerator (generator) object which will iterate over all results
+        :return: A TableGenerator (generator) object which will iterate
+            over all results
         :rtype: :class:`boto.dynamodb.layer2.TableGenerator`
         """
-        return self.layer2.scan(self, scan_filter, attributes_to_get,
-                                request_limit, max_results, count,
-                                exclusive_start_key, item_class=item_class)
+        return self.layer2.scan(self, *args, **kw)
 
     def batch_get_item(self, keys, attributes_to_get=None):
         """
@@ -484,7 +539,8 @@
             If supplied, only the specified attribute names will
             be returned.  Otherwise, all attributes will be returned.
 
-        :return: A TableBatchGenerator (generator) object which will iterate over all results
+        :return: A TableBatchGenerator (generator) object which will
+            iterate over all results
         :rtype: :class:`boto.dynamodb.table.TableBatchGenerator`
         """
         return TableBatchGenerator(self, keys, attributes_to_get)
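
With the new `count` flag, query and scan return no items; iterating
pages through the responses and accumulates the total on the generator.
A sketch, assuming an existing `table` whose hash key type is 'S'::

    gen = table.query(hash_key='a', count=True)
    for _ in gen:   # yields nothing when count=True
        pass
    print gen.count
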
diff --git a/boto/dynamodb/types.py b/boto/dynamodb/types.py
index 5b33076..e3b4958 100644
--- a/boto/dynamodb/types.py
+++ b/boto/dynamodb/types.py
@@ -25,10 +25,33 @@
 Python types and vice-versa.
 """
 import base64
+from decimal import (Decimal, DecimalException, Context,
+                     Clamped, Overflow, Inexact, Underflow, Rounded)
+from boto.dynamodb.exceptions import DynamoDBNumberError
+
+
+DYNAMODB_CONTEXT = Context(
+    Emin=-128, Emax=126, rounding=None, prec=38,
+    traps=[Clamped, Overflow, Inexact, Rounded, Underflow])
+
+
+# python2.6 cannot convert floats directly to
+# Decimals.  This is taken from:
+# http://docs.python.org/release/2.6.7/library/decimal.html#decimal-faq
+def float_to_decimal(f):
+    n, d = f.as_integer_ratio()
+    numerator, denominator = Decimal(n), Decimal(d)
+    ctx = DYNAMODB_CONTEXT
+    result = ctx.divide(numerator, denominator)
+    while ctx.flags[Inexact]:
+        ctx.flags[Inexact] = False
+        ctx.prec *= 2
+        result = ctx.divide(numerator, denominator)
+    return result
 
 
 def is_num(n):
-    types = (int, long, float, bool)
+    types = (int, long, float, bool, Decimal)
     return isinstance(n, types) or n in types
 
 
@@ -41,6 +64,15 @@
     return isinstance(n, Binary)
 
 
+def serialize_num(val):
+    """Cast a number to a string.  DynamoDB stores booleans
+       as numbers, so True serializes to '1' and False to '0'.
+    """
+    if isinstance(val, bool):
+        return str(int(val))
+    return str(val)
+
+
 def convert_num(s):
     if '.' in s:
         n = float(s)
@@ -86,23 +118,13 @@
     needs to be sent to Amazon DynamoDB.  If the type of the value
     is not supported, raise a TypeError
     """
-    def _str(val):
-        """
-        DynamoDB stores booleans as numbers. True is 1, False is 0.
-        This function converts Python booleans into DynamoDB friendly
-        representation.
-        """
-        if isinstance(val, bool):
-            return str(int(val))
-        return str(val)
-
     dynamodb_type = get_dynamodb_type(val)
     if dynamodb_type == 'N':
-        val = {dynamodb_type: _str(val)}
+        val = {dynamodb_type: serialize_num(val)}
     elif dynamodb_type == 'S':
         val = {dynamodb_type: val}
     elif dynamodb_type == 'NS':
-        val = {dynamodb_type: [str(n) for n in val]}
+        val = {dynamodb_type: map(serialize_num, val)}
     elif dynamodb_type == 'SS':
         val = {dynamodb_type: [n for n in val]}
     elif dynamodb_type == 'B':
@@ -136,3 +158,169 @@
 
     def __hash__(self):
         return hash(self.value)
+
+
+def item_object_hook(dct):
+    """
+    A custom object hook for use when decoding JSON item bodies.
+    This hook will transform Amazon DynamoDB JSON responses to something
+    that maps directly to native Python types.
+    """
+    if len(dct.keys()) > 1:
+        return dct
+    if 'S' in dct:
+        return dct['S']
+    if 'N' in dct:
+        return convert_num(dct['N'])
+    if 'SS' in dct:
+        return set(dct['SS'])
+    if 'NS' in dct:
+        return set(map(convert_num, dct['NS']))
+    if 'B' in dct:
+        return convert_binary(dct['B'])
+    if 'BS' in dct:
+        return set(map(convert_binary, dct['BS']))
+    return dct
+
+
+class Dynamizer(object):
+    """Control serialization/deserialization of types.
+
+    This class controls the encoding of python types to the
+    format that is expected by the DynamoDB API, as well as
+    taking DynamoDB types and constructing the appropriate
+    python types.
+
+    If you want to customize this process, you can subclass
+    this class and override the encoding/decoding of
+    specific types.  For example::
+
+        'foo'      (Python type)
+            |
+            v
+        encode('foo')
+            |
+            v
+        _encode_s('foo')
+            |
+            v
+        {'S': 'foo'}  (Encoding sent to/received from DynamoDB)
+            |
+            V
+        decode({'S': 'foo'})
+            |
+            v
+        _decode_s({'S': 'foo'})
+            |
+            v
+        'foo'     (Python type)
+
+    """
+    def _get_dynamodb_type(self, attr):
+        return get_dynamodb_type(attr)
+
+    def encode(self, attr):
+        """
+        Encodes a python type to the format expected
+        by DynamoDB.
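+
+        Example (illustrative)::
+
+            >>> Dynamizer().encode('foo')
+            {'S': 'foo'}
+            >>> Dynamizer().encode(54)
+            {'N': '54'}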
+
+        """
+        dynamodb_type = self._get_dynamodb_type(attr)
+        try:
+            encoder = getattr(self, '_encode_%s' % dynamodb_type.lower())
+        except AttributeError:
+            raise ValueError("Unable to encode dynamodb type: %s" %
+                             dynamodb_type)
+        return {dynamodb_type: encoder(attr)}
+
+    def _encode_n(self, attr):
+        try:
+            if isinstance(attr, float) and not hasattr(Decimal, 'from_float'):
+                # python2.6 does not support creating Decimals directly
+                # from floats, so we have to do this ourselves.
+                n = str(float_to_decimal(attr))
+            else:
+                n = str(DYNAMODB_CONTEXT.create_decimal(attr))
+            if filter(lambda x: x in n, ('Infinity', 'NaN')):
+                raise TypeError('Infinity and NaN not supported')
+            return n
+        except (TypeError, DecimalException), e:
+            msg = '{0} numeric for `{1}`\n{2}'.format(
+                e.__class__.__name__, attr, str(e) or '')
+            raise DynamoDBNumberError(msg)
+
+    def _encode_s(self, attr):
+        if isinstance(attr, unicode):
+            attr = attr.encode('utf-8')
+        elif not isinstance(attr, str):
+            attr = str(attr)
+        return attr
+
+    def _encode_ns(self, attr):
+        return map(self._encode_n, attr)
+
+    def _encode_ss(self, attr):
+        return [self._encode_s(n) for n in attr]
+
+    def _encode_b(self, attr):
+        return attr.encode()
+
+    def _encode_bs(self, attr):
+        return [self._encode_b(n) for n in attr]
+
+    def decode(self, attr):
+        """
+        Takes the format returned by DynamoDB and constructs
+        the appropriate python type.
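+
+        Example (illustrative)::
+
+            >>> Dynamizer().decode({'S': 'foo'})
+            'foo'
+            >>> Dynamizer().decode({'N': '54'})
+            Decimal('54')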
+
+        """
+        if len(attr) > 1 or not attr:
+            return attr
+        dynamodb_type = attr.keys()[0]
+        try:
+            decoder = getattr(self, '_decode_%s' % dynamodb_type.lower())
+        except AttributeError:
+            return attr
+        return decoder(attr[dynamodb_type])
+
+    def _decode_n(self, attr):
+        return DYNAMODB_CONTEXT.create_decimal(attr)
+
+    def _decode_s(self, attr):
+        return attr
+
+    def _decode_ns(self, attr):
+        return set(map(self._decode_n, attr))
+
+    def _decode_ss(self, attr):
+        return set(map(self._decode_s, attr))
+
+    def _decode_b(self, attr):
+        return convert_binary(attr)
+
+    def _decode_bs(self, attr):
+        return set(map(self._decode_b, attr))
+
+
+class LossyFloatDynamizer(Dynamizer):
+    """Use float/int instead of Decimal for numeric types.
+
+    This class is provided for backwards compatibility.  Instead of
+    using Decimals for the 'N', 'NS' types it uses ints/floats.
+
+    This class is deprecated and its usage is not encouraged,
+    as doing so may result in loss of precision.  Use the
+    `Dynamizer` class instead.
+
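+    Example (illustrative)::
+
+        >>> LossyFloatDynamizer().decode({'N': '1.5'})
+        1.5
+        >>> LossyFloatDynamizer().encode(1.5)
+        {'N': '1.5'}
+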
+    """
+    def _encode_n(self, attr):
+        return serialize_num(attr)
+
+    def _encode_ns(self, attr):
+        return [str(i) for i in attr]
+
+    def _decode_n(self, attr):
+        return convert_num(attr)
+
+    def _decode_ns(self, attr):
+        return set(map(self._decode_n, attr))
diff --git a/boto/dynamodb2/__init__.py b/boto/dynamodb2/__init__.py
new file mode 100644
index 0000000..8cdfcac
--- /dev/null
+++ b/boto/dynamodb2/__init__.py
@@ -0,0 +1,63 @@
+# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from boto.regioninfo import RegionInfo
+
+
+def regions():
+    """
+    Get all available regions for the Amazon DynamoDB service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo`
+    """
+    from boto.dynamodb2.layer1 import DynamoDBConnection
+    return [RegionInfo(name='us-east-1',
+                       endpoint='dynamodb.us-east-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='us-west-1',
+                       endpoint='dynamodb.us-west-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='us-west-2',
+                       endpoint='dynamodb.us-west-2.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='eu-west-1',
+                       endpoint='dynamodb.eu-west-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='ap-northeast-1',
+                       endpoint='dynamodb.ap-northeast-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='ap-southeast-1',
+                       endpoint='dynamodb.ap-southeast-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            RegionInfo(name='sa-east-1',
+                       endpoint='dynamodb.sa-east-1.amazonaws.com',
+                       connection_cls=DynamoDBConnection),
+            ]
+
+
+def connect_to_region(region_name, **kw_params):
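+    """
+    Given a valid region name, return a
+    :class:`boto.dynamodb2.layer1.DynamoDBConnection` connected to it, or
+    ``None`` if the name is not recognized.
+
+    Example (credentials are placeholders)::
+
+        >>> conn = connect_to_region('us-west-2',
+        ...                          aws_access_key_id='<key>',
+        ...                          aws_secret_access_key='<secret>')
+    """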
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
diff --git a/boto/dynamodb2/exceptions.py b/boto/dynamodb2/exceptions.py
new file mode 100644
index 0000000..a9fcf75
--- /dev/null
+++ b/boto/dynamodb2/exceptions.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.exception import JSONResponseError
+
+
+class ProvisionedThroughputExceededException(JSONResponseError):
+    pass
+
+
+class LimitExceededException(JSONResponseError):
+    pass
+
+
+class ConditionalCheckFailedException(JSONResponseError):
+    pass
+
+
+class ResourceInUseException(JSONResponseError):
+    pass
+
+
+class ResourceNotFoundException(JSONResponseError):
+    pass
+
+
+class InternalServerError(JSONResponseError):
+    pass
+
+
+class ValidationException(JSONResponseError):
+    pass
+
+
+class ItemCollectionSizeLimitExceededException(JSONResponseError):
+    pass
+
+
+class DynamoDBError(Exception):
+    pass
+
+
+class UnknownSchemaFieldError(DynamoDBError):
+    pass
+
+
+class UnknownIndexFieldError(DynamoDBError):
+    pass
+
+
+class UnknownFilterTypeError(DynamoDBError):
+    pass
+
+
+class QueryError(DynamoDBError):
+    pass
diff --git a/boto/dynamodb2/fields.py b/boto/dynamodb2/fields.py
new file mode 100644
index 0000000..25abffd
--- /dev/null
+++ b/boto/dynamodb2/fields.py
@@ -0,0 +1,212 @@
+from boto.dynamodb2.types import STRING
+
+
+class BaseSchemaField(object):
+    """
+    An abstract class for defining schema fields.
+
+    Contains most of the core functionality for the field. Subclasses must
+    define an ``attr_type`` to pass to DynamoDB.
+    """
+    attr_type = None
+
+    def __init__(self, name, data_type=STRING):
+        """
+        Creates a Python schema field, to represent the data to pass to
+        DynamoDB.
+
+        Requires a ``name`` parameter, which should be a string name of the
+        field.
+
+        Optionally accepts a ``data_type`` parameter, which should be a
+        constant from ``boto.dynamodb2.types``. (Default: ``STRING``)
+        """
+        self.name = name
+        self.data_type = data_type
+
+    def definition(self):
+        """
+        Returns the attribute definition structure DynamoDB expects.
+
+        Example::
+
+            >>> field.definition()
+            {
+                'AttributeName': 'username',
+                'AttributeType': 'S',
+            }
+
+        """
+        return {
+            'AttributeName': self.name,
+            'AttributeType': self.data_type,
+        }
+
+    def schema(self):
+        """
+        Returns the schema structure DynamoDB expects.
+
+        Example::
+
+            >>> field.schema()
+            {
+                'AttributeName': 'username',
+                'KeyType': 'HASH',
+            }
+
+        """
+        return {
+            'AttributeName': self.name,
+            'KeyType': self.attr_type,
+        }
+
+
+class HashKey(BaseSchemaField):
+    """
+    A field representing a hash key.
+
+    Example::
+
+        >>> from boto.dynamodb2.types import NUMBER
+        >>> HashKey('username')
+        >>> HashKey('date_joined', data_type=NUMBER)
+
+    """
+    attr_type = 'HASH'
+
+
+class RangeKey(BaseSchemaField):
+    """
+    A field representing a range key.
+
+    Example::
+
+        >>> from boto.dynamodb2.types import NUMBER
+        >>> RangeKey('date_joined')
+        >>> RangeKey('date_joined', data_type=NUMBER)
+
+    """
+    attr_type = 'RANGE'
+
+
+class BaseIndexField(object):
+    """
+    An abstract class for defining schema indexes.
+
+    Contains most of the core functionality for the index. Subclasses must
+    define a ``projection_type`` to pass to DynamoDB.
+    """
+    def __init__(self, name, parts):
+        self.name = name
+        self.parts = parts
+
+    def definition(self):
+        """
+        Returns the attribute definition structure DynamoDB expects.
+
+        Example::
+
+            >>> index.definition()
+            [
+                {
+                    'AttributeName': 'username',
+                    'AttributeType': 'S',
+                },
+            ]
+
+        """
+        definition = []
+
+        for part in self.parts:
+            definition.append({
+                'AttributeName': part.name,
+                'AttributeType': part.data_type,
+            })
+
+        return definition
+
+    def schema(self):
+        """
+        Returns the schema structure DynamoDB expects.
+
+        Example::
+
+            >>> index.schema()
+            {
+                'IndexName': 'LastNameIndex',
+                'KeySchema': [
+                    {
+                        'AttributeName': 'username',
+                        'KeyType': 'HASH',
+                    },
+                ],
+                'Projection': {
+                    'ProjectionType': 'KEYS_ONLY',
+                }
+            }
+
+        """
+        key_schema = []
+
+        for part in self.parts:
+            key_schema.append(part.schema())
+
+        return {
+            'IndexName': self.name,
+            'KeySchema': key_schema,
+            'Projection': {
+                'ProjectionType': self.projection_type,
+            }
+        }
+
+
+class AllIndex(BaseIndexField):
+    """
+    An index signifying all fields should be in the index.
+
+    Example::
+
+        >>> AllIndex('MostRecentlyJoined', parts=[
+        ...     HashKey('username'),
+        ...     RangeKey('date_joined')
+        ... ])
+
+    """
+    projection_type = 'ALL'
+
+
+class KeysOnlyIndex(BaseIndexField):
+    """
+    An index signifying only key fields should be in the index.
+
+    Example::
+
+        >>> KeysOnlyIndex('MostRecentlyJoined', parts=[
+        ...     HashKey('username'),
+        ...     RangeKey('date_joined')
+        ... ])
+
+    """
+    projection_type = 'KEYS_ONLY'
+
+
+class IncludeIndex(BaseIndexField):
+    """
+    An index signifying only certain fields should be in the index.
+
+    Example::
+
+        >>> IncludeIndex('GenderIndex', parts=[
+        ...     HashKey('username'),
+        ...     RangeKey('date_joined')
+        ... ], includes=['gender'])
+
+    """
+    projection_type = 'INCLUDE'
+
+    def __init__(self, *args, **kwargs):
+        self.includes_fields = kwargs.pop('includes', [])
+        super(IncludeIndex, self).__init__(*args, **kwargs)
+
+    def schema(self):
+        schema_data = super(IncludeIndex, self).schema()
+        schema_data['Projection']['NonKeyAttributes'] = self.includes_fields
+        return schema_data
diff --git a/boto/dynamodb2/items.py b/boto/dynamodb2/items.py
new file mode 100644
index 0000000..8df5102
--- /dev/null
+++ b/boto/dynamodb2/items.py
@@ -0,0 +1,390 @@
+from boto.dynamodb2.types import Dynamizer
+
+
+class NEWVALUE(object):
+    # A marker for new data added.
+    pass
+
+
+class Item(object):
+    """
+    An object representing the item data within a DynamoDB table.
+
+    An item is largely schema-free, meaning it can contain any data. The only
+    limitation is that it must have data for the fields in the ``Table``'s
+    schema.
+
+    This object presents a dictionary-like interface for accessing/storing
+    data. It also tries to intelligently track how data has changed throughout
+    the life of the instance, to be as efficient as possible about updates.
+    """
+    def __init__(self, table, data=None):
+        """
+        Constructs an (unsaved) ``Item`` instance.
+
+        To persist the data in DynamoDB, you'll need to call the ``Item.save``
+        (or ``Item.partial_save``) on the instance.
+
+        Requires a ``table`` parameter, which should be a ``Table`` instance.
+        This is required, as DynamoDB's API is focused around all operations
+        being table-level. It also allows the schema to be persisted across
+        many objects.
+
+        Optionally accepts a ``data`` parameter, which should be a dictionary
+        of the fields & values of the item.
+
+        Example::
+
+            >>> users = Table('users')
+            >>> user = Item(users, data={
+            ...     'username': 'johndoe',
+            ...     'first_name': 'John',
+            ...     'date_joined': 1248061592,
+            ... })
+
+            # Change existing data.
+            >>> user['first_name'] = 'Johann'
+            # Add more data.
+            >>> user['last_name'] = 'Doe'
+            # Delete data.
+            >>> del user['date_joined']
+
+            # Iterate over all the data.
+            >>> for field, val in user.items():
+            ...     print "%s: %s" % (field, val)
+            username: johndoe
+            first_name: Johann
+            last_name: Doe
+
+        """
+        self.table = table
+        self._data = {}
+        self._orig_data = {}
+        self._is_dirty = False
+        self._dynamizer = Dynamizer()
+
+        if data:
+            self._data = data
+            self._is_dirty = True
+
+            for key in data.keys():
+                self._orig_data[key] = NEWVALUE
+
+    def __getitem__(self, key):
+        return self._data.get(key, None)
+
+    def __setitem__(self, key, value):
+        # Stow the original value if present, so we can track what's changed.
+        if key in self._data:
+            self._orig_data[key] = self._data[key]
+        else:
+            # Use a marker to indicate we've never seen a value for this key.
+            self._orig_data[key] = NEWVALUE
+
+        self._data[key] = value
+        self._is_dirty = True
+
+    def __delitem__(self, key):
+        if key not in self._data:
+            return
+
+        # Stow the original value, so we can track what's changed.
+        value = self._data[key]
+        del self._data[key]
+        self._orig_data[key] = value
+        self._is_dirty = True
+
+    def keys(self):
+        return self._data.keys()
+
+    def values(self):
+        return self._data.values()
+
+    def items(self):
+        return self._data.items()
+
+    def get(self, key, default=None):
+        return self._data.get(key, default)
+
+    def __iter__(self):
+        for key in self._data:
+            yield self._data[key]
+
+    def __contains__(self, key):
+        return key in self._data
+
+    def needs_save(self):
+        """
+        Returns whether or not the data has changed on the ``Item``.
+
+        Example:
+
+            >>> user.needs_save()
+            False
+            >>> user['first_name'] = 'Johann'
+            >>> user.needs_save()
+            True
+
+        """
+        return self._is_dirty
+
+    def mark_clean(self):
+        """
+        Marks an ``Item`` instance as no longer needing to be saved.
+
+        Example:
+
+            >>> user.needs_save()
+            False
+            >>> user['first_name'] = 'Johann'
+            >>> user.needs_save()
+            True
+            >>> user.mark_clean()
+            >>> user.needs_save()
+            False
+
+        """
+        self._orig_data = {}
+        self._is_dirty = False
+
+    def mark_dirty(self):
+        """
+        Marks an ``Item`` instance as needing to be saved.
+
+        Example:
+
+            >>> user.needs_save()
+            False
+            >>> user.mark_dirty()
+            >>> user.needs_save()
+            True
+
+        """
+        self._is_dirty = True
+
+    def load(self, data):
+        """
+        This is only useful when being handed raw data from DynamoDB directly.
+        If you have a Python datastructure already, use the ``__init__`` or
+        manually set the data instead.
+
+        Largely internal, unless you know what you're doing or are trying to
+        mix the low-level & high-level APIs.
+        """
+        self._data = {}
+
+        for field_name, field_value in data.get('Item', {}).items():
+            self[field_name] = self._dynamizer.decode(field_value)
+
+        self.mark_clean()
+
+    def get_keys(self):
+        """
+        Returns a Python-style dict of the keys/values.
+
+        Largely internal.
+        """
+        key_fields = self.table.get_key_fields()
+        key_data = {}
+
+        for key in key_fields:
+            key_data[key] = self[key]
+
+        return key_data
+
+    def get_raw_keys(self):
+        """
+        Returns a DynamoDB-style dict of the keys/values.
+
+        Largely internal.
+        """
+        raw_key_data = {}
+
+        for key, value in self.get_keys().items():
+            raw_key_data[key] = self._dynamizer.encode(value)
+
+        return raw_key_data
+
+    def build_expects(self, fields=None):
+        """
+        Builds up a mapping of expectations to hand off to DynamoDB on save.
+
+        Largely internal.
+        """
+        expects = {}
+
+        if fields is None:
+            fields = self._data.keys() + self._orig_data.keys()
+
+        # Only uniques.
+        fields = set(fields)
+
+        for key in fields:
+            expects[key] = {
+                'Exists': True,
+            }
+            value = None
+
+            # Check for invalid keys.
+            if key not in self._orig_data and key not in self._data:
+                raise ValueError("Unknown key %s provided." % key)
+
+            # States:
+            # * New field (_data & _orig_data w/ marker)
+            # * Unchanged field (only _data)
+            # * Modified field (_data & _orig_data)
+            # * Deleted field (only _orig_data)
+            if key not in self._orig_data:
+                # Existing field unchanged.
+                value = self._data[key]
+            else:
+                if key in self._data:
+                    if self._orig_data[key] is NEWVALUE:
+                        # New field.
+                        expects[key]['Exists'] = False
+                    else:
+                        # Existing field modified.
+                        value = self._orig_data[key]
+                else:
+                    # Existing field deleted.
+                    value = self._orig_data[key]
+
+            if value is not None:
+                expects[key]['Value'] = self._dynamizer.encode(value)
+
+        return expects
+
+    def prepare_full(self):
+        """
+        Runs through all fields & encodes them to be handed off to DynamoDB
+        as part of a ``save`` (``put_item``) call.
+
+        Largely internal.
+        """
+        # This doesn't save on its own. Rather, we prepare the datastructure
+        # and hand-off to the table to handle creation/update.
+        final_data = {}
+
+        for key, value in self._data.items():
+            final_data[key] = self._dynamizer.encode(value)
+
+        return final_data
+
+    def prepare_partial(self):
+        """
+        Runs through **ONLY** the changed/deleted fields & encodes them to be
+        handed off to DynamoDB as part of a ``partial_save`` (``update_item``)
+        call.
+
+        Largely internal.
+        """
+        # This doesn't save on its own. Rather, we prepare the datastructure
+        # and hand-off to the table to handle creation/update.
+        final_data = {}
+
+        # Loop over ``_orig_data`` so that we only build up data that's changed.
+        for key, value in self._orig_data.items():
+            if key in self._data:
+                # It changed.
+                final_data[key] = {
+                    'Action': 'PUT',
+                    'Value': self._dynamizer.encode(self._data[key])
+                }
+            else:
+                # It was deleted.
+                final_data[key] = {
+                    'Action': 'DELETE',
+                }
+
+        return final_data
+
+    def partial_save(self):
+        """
+        Saves only the changed data to DynamoDB.
+
+        Extremely useful for high-volume/high-write data sets, this allows
+        you to update only a handful of fields rather than having to push
+        entire items. This prevents many accidental overwrite situations
+        and saves on the amount of data to transfer over the wire.
+
+        Returns ``True`` on success, ``False`` if no save was performed or
+        the write failed.
+
+        Example::
+
+            >>> user['last_name'] = 'Doh!'
+            # Only the last name field will be sent to DynamoDB.
+            >>> user.partial_save()
+
+        """
+        if not self.needs_save():
+            return False
+
+        key = self.get_keys()
+        # Build a new dict of only the data we're changing.
+        final_data = self.prepare_partial()
+        # Build expectations of only the fields we're planning to update.
+        expects = self.build_expects(fields=self._orig_data.keys())
+        returned = self.table._update_item(key, final_data, expects=expects)
+        # Mark the object as clean.
+        self.mark_clean()
+        return returned
+
+    def save(self, overwrite=False):
+        """
+        Saves all data to DynamoDB.
+
+        By default, this attempts to ensure that none of the underlying
+        data has changed. If any fields have changed in between when the
+        ``Item`` was constructed & when it is saved, this call will fail so
+        as not to cause any data loss.
+
+        If you're sure that potentially overwriting data is acceptable,
+        you can pass ``overwrite=True``. If not, you may be able to use
+        ``Item.partial_save`` to only write the changed field data.
+
+        Optionally accepts an ``overwrite`` parameter, which should be a
+        boolean. If you provide ``True``, the item will be forcibly overwritten
+        within DynamoDB, even if another process changed the data in the
+        meantime. (Default: ``False``)
+
+        Returns ``True`` on success, ``False`` if no save was performed.
+
+        Example::
+
+            >>> user['last_name'] = 'Doh!'
+            # All data on the Item is sent to DynamoDB.
+            >>> user.save()
+
+            # If it fails, you can overwrite.
+            >>> user.save(overwrite=True)
+
+        """
+        if not self.needs_save():
+            return False
+
+        final_data = self.prepare_full()
+        expects = None
+
+        if overwrite is False:
+            # Build expectations about *all* of the data.
+            expects = self.build_expects()
+
+        returned = self.table._put_item(final_data, expects=expects)
+        # Mark the object as clean.
+        self.mark_clean()
+        return returned
+
+    def delete(self):
+        """
+        Deletes the item's data from DynamoDB.
+
+        Returns ``True`` on success.
+
+        Example::
+
+            # Buh-bye now.
+            >>> user.delete()
+
+        """
+        key_data = self.get_keys()
+        return self.table.delete_item(**key_data)
diff --git a/boto/dynamodb2/layer1.py b/boto/dynamodb2/layer1.py
new file mode 100644
index 0000000..532e2f6
--- /dev/null
+++ b/boto/dynamodb2/layer1.py
@@ -0,0 +1,1539 @@
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from binascii import crc32
+
+import json
+import boto
+from boto.connection import AWSQueryConnection
+from boto.regioninfo import RegionInfo
+from boto.exception import JSONResponseError
+from boto.dynamodb2 import exceptions
+
+
+class DynamoDBConnection(AWSQueryConnection):
+    """
+    Amazon DynamoDB is a fast, highly scalable, highly available,
+    cost-effective non-relational database service. Amazon DynamoDB
+    removes traditional scalability limitations on data storage while
+    maintaining low latency and predictable performance.
+    """
+    APIVersion = "2012-08-10"
+    DefaultRegionName = "us-east-1"
+    DefaultRegionEndpoint = "dynamodb.us-east-1.amazonaws.com"
+    ServiceName = "DynamoDB"
+    TargetPrefix = "DynamoDB_20120810"
+    ResponseError = JSONResponseError
+
+    _faults = {
+        "ProvisionedThroughputExceededException": exceptions.ProvisionedThroughputExceededException,
+        "LimitExceededException": exceptions.LimitExceededException,
+        "ConditionalCheckFailedException": exceptions.ConditionalCheckFailedException,
+        "ResourceInUseException": exceptions.ResourceInUseException,
+        "ResourceNotFoundException": exceptions.ResourceNotFoundException,
+        "InternalServerError": exceptions.InternalServerError,
+        "ItemCollectionSizeLimitExceededException": exceptions.ItemCollectionSizeLimitExceededException,
+        "ValidationException": exceptions.ValidationException,
+    }
+
+    NumberRetries = 10
+
+    def __init__(self, **kwargs):
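+        """
+        :type region: :class:`boto.regioninfo.RegionInfo`
+        :param region: The region to connect to. If not supplied, the
+            ``[DynamoDB] region`` config value is consulted, falling back
+            to ``us-east-1``.
+
+        :type validate_checksums: boolean
+        :param validate_checksums: Whether to validate the CRC32 checksum
+            DynamoDB attaches to each response. May be overridden by the
+            ``[DynamoDB] validate_checksums`` config value.
+            (Default: ``True``)
+        """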
+        region = kwargs.pop('region', None)
+        validate_checksums = kwargs.pop('validate_checksums', True)
+        if not region:
+            region_name = boto.config.get('DynamoDB', 'region',
+                                          self.DefaultRegionName)
+            for reg in boto.dynamodb2.regions():
+                if reg.name == region_name:
+                    region = reg
+                    break
+        kwargs['host'] = region.endpoint
+        AWSQueryConnection.__init__(self, **kwargs)
+        self.region = region
+        self._validate_checksums = boto.config.getbool(
+            'DynamoDB', 'validate_checksums', validate_checksums)
+        self.throughput_exceeded_events = 0
+
+    def _required_auth_capability(self):
+        return ['hmac-v4']
+
+    def batch_get_item(self, request_items, return_consumed_capacity=None):
+        """
+        The BatchGetItem operation returns the attributes of one or
+        more items from one or more tables. You identify requested
+        items by primary key.
+
+        A single operation can retrieve up to 1 MB of data, which can
+        comprise as many as 100 items. BatchGetItem will return a
+        partial result if the response size limit is exceeded, the
+        table's provisioned throughput is exceeded, or an internal
+        processing failure occurs. If a partial result is returned,
+        the operation returns a value for UnprocessedKeys . You can
+        use this value to retry the operation starting with the next
+        item to get.
+
+        For example, if you ask to retrieve 100 items, but each
+        individual item is 50 KB in size, the system returns 20 items
+        (1 MB) and an appropriate UnprocessedKeys value so you can get
+        the next page of results. If desired, your application can
+        include its own logic to assemble the pages of results into
+        one dataset.
+
+        If no items can be processed because of insufficient
+        provisioned throughput on each of the tables involved in the
+        request, BatchGetItem throws
+        ProvisionedThroughputExceededException .
+
+        By default, BatchGetItem performs eventually consistent reads
+        on every table in the request. If you want strongly consistent
+        reads instead, you can set ConsistentRead to `True` for any or
+        all tables.
+
+        In order to minimize response latency, BatchGetItem fetches
+        items in parallel.
+
+        When designing your application, keep in mind that Amazon
+        DynamoDB does not return attributes in any particular order.
+        To help parse the response by item, include the primary key
+        values for the items in your request in the AttributesToGet
+        parameter.
+
+        If a requested item does not exist, it is not returned in the
+        result. Requests for nonexistent items consume the minimum
+        read capacity units according to the type of read. For more
+        information, see `Capacity Units Calculations`_ in the Amazon
+        DynamoDB Developer Guide .
+
+        :type request_items: map
+        :param request_items:
+        A map of one or more table names and, for each table, the corresponding
+            primary keys for the items to retrieve. Each table name can be
+            invoked only once.
+
+        Each element in the map consists of the following:
+
+
+        + Keys - An array of primary key attribute values that define specific
+              items in the table.
+        + AttributesToGet - One or more attributes to be retrieved from the
+              table or index. By default, all attributes are returned. If a
+              specified attribute is not found, it does not appear in the result.
+        + ConsistentRead - If `True`, a strongly consistent read is used; if
+              `False` (the default), an eventually consistent read is used.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity:
+
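+        A sketch of the expected shape (table, key names and values
+        assumed)::
+
+            {
+                'users': {
+                    'Keys': [
+                        {'username': {'S': 'johndoe'}},
+                        {'username': {'S': 'janedoe'}},
+                    ],
+                    'ConsistentRead': True,
+                }
+            }
+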
+        """
+        params = {'RequestItems': request_items, }
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        return self.make_request(action='BatchGetItem',
+                                 body=json.dumps(params))
+
+    def batch_write_item(self, request_items, return_consumed_capacity=None,
+                         return_item_collection_metrics=None):
+        """
+        The BatchWriteItem operation puts or deletes multiple items in
+        one or more tables. A single call to BatchWriteItem can write
+        up to 1 MB of data, which can comprise as many as 25 put or
+        delete requests. Individual items to be written can be as
+        large as 64 KB.
+
+        BatchWriteItem cannot update items. To update items, use the
+        UpdateItem API.
+
+        The individual PutItem and DeleteItem operations specified in
+        BatchWriteItem are atomic; however BatchWriteItem as a whole
+        is not. If any requested operations fail because the table's
+        provisioned throughput is exceeded or an internal processing
+        failure occurs, the failed operations are returned in the
+        UnprocessedItems response parameter. You can investigate and
+        optionally resend the requests. Typically, you would call
+        BatchWriteItem in a loop. Each iteration would check for
+        unprocessed items and submit a new BatchWriteItem request with
+        those unprocessed items until all items have been processed.
+
+        To write one item, you can use the PutItem operation; to
+        delete one item, you can use the DeleteItem operation.
+
+        With BatchWriteItem , you can efficiently write or delete
+        large amounts of data, such as from Amazon Elastic MapReduce
+        (EMR), or copy data from another database into Amazon
+        DynamoDB. In order to improve performance with these large-
+        scale operations, BatchWriteItem does not behave in the same
+        way as individual PutItem and DeleteItem calls would. For
+        example, you cannot specify conditions on individual put and
+        delete requests, and BatchWriteItem does not return deleted
+        items in the response.
+
+        If you use a programming language that supports concurrency,
+        such as Java, you can use threads to write items in parallel.
+        Your application must include the necessary logic to manage
+        the threads.
+
+        With languages that don't support threading, such as PHP,
+        BatchWriteItem will write or delete the specified items one at
+        a time. In both situations, BatchWriteItem provides an
+        alternative where the API performs the specified put and
+        delete operations in parallel, giving you the power of the
+        thread pool approach without having to introduce complexity
+        into your application.
+
+        Parallel processing reduces latency, but each specified put
+        and delete request consumes the same number of write capacity
+        units whether it is processed in parallel or not. Delete
+        operations on nonexistent items consume one write capacity
+        unit.
+
+        If one or more of the following is true, Amazon DynamoDB
+        rejects the entire batch write operation:
+
+
+        + One or more tables specified in the BatchWriteItem request
+          does not exist.
+        + Primary key attributes specified on an item in the request
+          do not match those in the corresponding table's primary key
+          schema.
+        + You try to perform multiple operations on the same item in
+          the same BatchWriteItem request. For example, you cannot put
+          and delete the same item in the same BatchWriteItem request.
+        + The total request size exceeds 1 MB.
+        + Any individual item in a batch exceeds 64 KB.
+
+        :type request_items: map
+        :param request_items:
+        A map of one or more table names and, for each table, a list of
+            operations to be performed ( DeleteRequest or PutRequest ). Each
+            element in the map consists of the following:
+
+
+        + DeleteRequest - Perform a DeleteItem operation on the specified item.
+              The item to be deleted is identified by a Key subelement:
+
+            + Key - A map of primary key attribute values that uniquely identify
+                  the item. Each entry in this map consists of an attribute name and
+                  an attribute value.
+
+        + PutRequest - Perform a PutItem operation on the specified item. The
+              item to be put is identified by an Item subelement:
+
+            + Item - A map of attributes and their values. Each entry in this map
+                  consists of an attribute name and an attribute value. Attribute
+                  values must not be null; string and binary type attributes must
+                  have lengths greater than zero; and set type attributes must not be
+                  empty. Requests that contain empty values will be rejected with a
+                  ValidationException . If you specify any attributes that are part
+                  of an index key, then the data types for those attributes must
+                  match those of the schema in the table's attribute definition.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity:
+
+        :type return_item_collection_metrics: string
+        :param return_item_collection_metrics: If set to `SIZE`, statistics
+            about item collections, if any, that were modified during the
+            operation are returned in the response. If set to `NONE` (the
+            default), no statistics are returned.
+
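+        A sketch of the expected shape (table, key and attribute names
+        assumed)::
+
+            {
+                'users': [
+                    {'PutRequest': {
+                        'Item': {'username': {'S': 'johndoe'}}}},
+                    {'DeleteRequest': {
+                        'Key': {'username': {'S': 'janedoe'}}}},
+                ]
+            }
+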
+        """
+        params = {'RequestItems': request_items, }
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        if return_item_collection_metrics is not None:
+            params['ReturnItemCollectionMetrics'] = return_item_collection_metrics
+        return self.make_request(action='BatchWriteItem',
+                                 body=json.dumps(params))
+
+    def create_table(self, attribute_definitions, table_name, key_schema,
+                     provisioned_throughput, local_secondary_indexes=None):
+        """
+        The CreateTable operation adds a new table to your account. In
+        an AWS account, table names must be unique within each region.
+        That is, you can have two tables with same name if you create
+        the tables in different regions.
+
+        CreateTable is an asynchronous operation. Upon receiving a
+        CreateTable request, Amazon DynamoDB immediately returns a
+        response with a TableStatus of `CREATING`. After the table is
+        created, Amazon DynamoDB sets the TableStatus to `ACTIVE`. You
+        can perform read and write operations only on an `ACTIVE`
+        table.
+
+        If you want to create multiple tables with local secondary
+        indexes on them, you must create them sequentially. Only one
+        table with local secondary indexes can be in the `CREATING`
+        state at any given time.
+
+        You can use the DescribeTable API to check the table status.
+
+        :type attribute_definitions: list
+        :param attribute_definitions: An array of attributes that describe the
+            key schema for the table and indexes.
+
+        :type table_name: string
+        :param table_name: The name of the table to create.
+
+        :type key_schema: list
+        :param key_schema: Specifies the attributes that make up the primary
+            key for the table. The attributes in KeySchema must also be defined
+            in the AttributeDefinitions array. For more information, see `Data
+            Model`_ in the Amazon DynamoDB Developer Guide .
+        Each KeySchemaElement in the array is composed of:
+
+
+        + AttributeName - The name of this key attribute.
+        + KeyType - Determines whether the key attribute is `HASH` or `RANGE`.
+
+
+        For a primary key that consists of a hash attribute, you must specify
+            exactly one element with a KeyType of `HASH`.
+
+        For a primary key that consists of hash and range attributes, you must
+            specify exactly two elements, in this order: The first element must
+            have a KeyType of `HASH`, and the second element must have a
+            KeyType of `RANGE`.
+
+        For more information, see `Specifying the Primary Key`_ in the Amazon
+            DynamoDB Developer Guide .
+
+        :type local_secondary_indexes: list
+        :param local_secondary_indexes:
+        One or more secondary indexes (the maximum is five) to be created on
+            the table. Each index is scoped to a given hash key value. There is
+            a 10 gigabyte size limit per hash key; otherwise, the size of a
+            local secondary index is unconstrained.
+
+        Each secondary index in the array includes the following:
+
+
+        + IndexName - The name of the secondary index. Must be unique only for
+              this table.
+        + KeySchema - Specifies the key schema for the index. The key schema
+              must begin with the same hash key attribute as the table.
+        + Projection - Specifies attributes that are copied (projected) from
+              the table into the index. These are in addition to the primary key
+              attributes and index key attributes, which are automatically
+              projected. Each attribute specification is composed of:
+
+            + ProjectionType - One of the following:
+
+                + `KEYS_ONLY` - Only the index and primary keys are projected into the
+                      index.
+                + `INCLUDE` - Only the specified table attributes are projected into
+                      the index. The list of projected attributes are in
+                      NonKeyAttributes.
+                + `ALL` - All of the table attributes are projected into the index.
+
+            + NonKeyAttributes - A list of one or more non-key attribute names that
+                  are projected into the index. The total count of attributes
+                  specified in NonKeyAttributes , summed across all of the local
+                  secondary indexes, must not exceed 20. If you project the same
+                  attribute into two different indexes, this counts as two distinct
+                  attributes when determining the total.
+
+        :type provisioned_throughput: dict
+        :param provisioned_throughput:
+
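+        A sketch of the expected shape (capacity values assumed)::
+
+            {
+                'ReadCapacityUnits': 5,
+                'WriteCapacityUnits': 5,
+            }
+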
+        """
+        params = {
+            'AttributeDefinitions': attribute_definitions,
+            'TableName': table_name,
+            'KeySchema': key_schema,
+            'ProvisionedThroughput': provisioned_throughput,
+        }
+        if local_secondary_indexes is not None:
+            params['LocalSecondaryIndexes'] = local_secondary_indexes
+        return self.make_request(action='CreateTable',
+                                 body=json.dumps(params))
+
+    def delete_item(self, table_name, key, expected=None, return_values=None,
+                    return_consumed_capacity=None,
+                    return_item_collection_metrics=None):
+        """
+        Deletes a single item in a table by primary key. You can
+        perform a conditional delete operation that deletes the item
+        if it exists, or if it has an expected attribute value.
+
+        In addition to deleting an item, you can also return the
+        item's attribute values in the same operation, using the
+        ReturnValues parameter.
+
+        Unless you specify conditions, the DeleteItem is an idempotent
+        operation; running it multiple times on the same item or
+        attribute does not result in an error response.
+
+        Conditional deletes are useful for only deleting items if
+        specific conditions are met. If those conditions are met,
+        Amazon DynamoDB performs the delete. Otherwise, the item is
+        not deleted.
+
+        :type table_name: string
+        :param table_name: The name of the table from which to delete the item.
+
+        :type key: map
+        :param key: A map of attribute names to AttributeValue objects,
+            representing the primary key of the item to delete.
+
+        :type expected: map
+        :param expected: A map of attribute/condition pairs. This is the
+            conditional block for the DeleteItem operation. All the conditions
+            must be met for the operation to succeed.
+        Expected allows you to provide an attribute name, and whether or not
+            Amazon DynamoDB should check to see if the attribute value already
+            exists; or if the attribute value exists and has a particular value
+            before changing it.
+
+        Each item in Expected represents an attribute name for Amazon DynamoDB
+            to check, along with the following:
+
+
+        + Value - The attribute value for Amazon DynamoDB to check.
+        + Exists - Causes Amazon DynamoDB to evaluate the value before
+              attempting a conditional operation:
+
+            + If Exists is `True`, Amazon DynamoDB will check to see if that
+                  attribute value already exists in the table. If it is found, then
+                  the operation succeeds. If it is not found, the operation fails
+                  with a ConditionalCheckFailedException .
+            + If Exists is `False`, Amazon DynamoDB assumes that the attribute
+                  value does not exist in the table. If in fact the value does not
+                  exist, then the assumption is valid and the operation succeeds. If
+                  the value is found, despite the assumption that it does not exist,
+                  the operation fails with a ConditionalCheckFailedException .
+          The default setting for Exists is `True`. If you supply a Value all by
+              itself, Amazon DynamoDB assumes the attribute exists: You don't
+              have to set Exists to `True`, because it is implied. Amazon
+              DynamoDB returns a ValidationException if:
+
+            + Exists is `True` but there is no Value to check. (You expect a value
+                  to exist, but don't specify what that value is.)
+            + Exists is `False` but you also specify a Value . (You cannot expect
+                  an attribute to have a value, while also expecting it not to
+                  exist.)
+
+
+
+        If you specify more than one condition for Exists , then all of the
+            conditions must evaluate to true. (In other words, the conditions
+            are ANDed together.) Otherwise, the conditional operation will
+            fail.
+
+        :type return_values: string
+        :param return_values:
+        Use ReturnValues if you want to get the item attributes as they
+            appeared before they were deleted. For DeleteItem , the valid
+            values are:
+
+
+        + `NONE` - If ReturnValues is not specified, or if its value is `NONE`,
+              then nothing is returned. (This is the default for ReturnValues .)
+        + `ALL_OLD` - The content of the old item is returned.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity:
+
+        :type return_item_collection_metrics: string
+        :param return_item_collection_metrics: If set to `SIZE`, statistics
+            about item collections, if any, that were modified during the
+            operation are returned in the response. If set to `NONE` (the
+            default), no statistics are returned.
+
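+        A sketch of a key (attribute name and value assumed)::
+
+            {'username': {'S': 'johndoe'}}
+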
+        """
+        params = {'TableName': table_name, 'Key': key, }
+        if expected is not None:
+            params['Expected'] = expected
+        if return_values is not None:
+            params['ReturnValues'] = return_values
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        if return_item_collection_metrics is not None:
+            params['ReturnItemCollectionMetrics'] = return_item_collection_metrics
+        return self.make_request(action='DeleteItem',
+                                 body=json.dumps(params))
+
+    def delete_table(self, table_name):
+        """
+        The DeleteTable operation deletes a table and all of its
+        items. After a DeleteTable request, the specified table is in
+        the `DELETING` state until Amazon DynamoDB completes the
+        deletion. If the table is in the `ACTIVE` state, you can
+        delete it. If a table is in `CREATING` or `UPDATING` states,
+        then Amazon DynamoDB returns a ResourceInUseException . If the
+        specified table does not exist, Amazon DynamoDB returns a
+        ResourceNotFoundException . If table is already in the
+        `DELETING` state, no error is returned.
+
+        Amazon DynamoDB might continue to accept data read and write
+        operations, such as GetItem and PutItem , on a table in the
+        `DELETING` state until the table deletion is complete.
+
+        Tables are unique among those associated with the AWS Account
+        issuing the request, and the AWS region that receives the
+        request (such as dynamodb.us-east-1.amazonaws.com). Each
+        Amazon DynamoDB endpoint is entirely independent. For example,
+        if you have two tables called "MyTable," one in dynamodb.us-
+        east-1.amazonaws.com and one in dynamodb.us-
+        west-1.amazonaws.com, they are completely independent and do
+        not share any data; deleting one does not delete the other.
+
+        When you delete a table, any local secondary indexes on that
+        table are also deleted.
+
+        Use the DescribeTable API to check the status of the table.
+
+        :type table_name: string
+        :param table_name: The name of the table to delete.
+
+        """
+        params = {'TableName': table_name, }
+        return self.make_request(action='DeleteTable',
+                                 body=json.dumps(params))
+
+    def describe_table(self, table_name):
+        """
+        Returns information about the table, including the current
+        status of the table, when it was created, the primary key
+        schema, and any indexes on the table.
+
+        :type table_name: string
+        :param table_name: The name of the table to describe.
+
+        """
+        params = {'TableName': table_name, }
+        return self.make_request(action='DescribeTable',
+                                 body=json.dumps(params))
+
+    def get_item(self, table_name, key, attributes_to_get=None,
+                 consistent_read=None, return_consumed_capacity=None):
+        """
+        The GetItem operation returns a set of attributes for the item
+        with the given primary key. If there is no matching item,
+        GetItem does not return any data.
+
+        GetItem provides an eventually consistent read by default. If
+        your application requires a strongly consistent read, set
+        ConsistentRead to `True`. Although a strongly consistent read
+        might take more time than an eventually consistent read, it
+        always returns the last updated value.
+
+        :type table_name: string
+        :param table_name: The name of the table containing the requested item.
+
+        :type key: map
+        :param key: A map of attribute names to AttributeValue objects,
+            representing the primary key of the item to retrieve.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: The names of one or more attributes to
+            retrieve. If no attribute names are specified, then all attributes
+            will be returned. If any of the requested attributes are not found,
+            they will not appear in the result.
+
+        :type consistent_read: boolean
+        :param consistent_read: If set to `True`, then the operation uses
+            strongly consistent reads; otherwise, eventually consistent reads
+            are used.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity:
+
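+        Example call (table and key assumed)::
+
+            >>> conn.get_item('users',
+            ...               key={'username': {'S': 'johndoe'}},
+            ...               consistent_read=True)
+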
+        """
+        params = {'TableName': table_name, 'Key': key, }
+        if attributes_to_get is not None:
+            params['AttributesToGet'] = attributes_to_get
+        if consistent_read is not None:
+            params['ConsistentRead'] = consistent_read
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        return self.make_request(action='GetItem',
+                                 body=json.dumps(params))
+
+    def list_tables(self, exclusive_start_table_name=None, limit=None):
+        """
+        Returns an array of all the tables associated with the current
+        account and endpoint.
+
+        Each Amazon DynamoDB endpoint is entirely independent. For
+        example, if you have two tables called "MyTable," one in
+        dynamodb.us-east-1.amazonaws.com and one in dynamodb.us-
+        west-1.amazonaws.com , they are completely independent and do
+        not share any data. The ListTables operation returns all of
+        the table names associated with the account making the
+        request, for the endpoint that receives the request.
+
+        :type exclusive_start_table_name: string
+        :param exclusive_start_table_name: The name of the table that starts
+            the list. If you already ran a ListTables operation and received a
+            LastEvaluatedTableName value in the response, use that value here
+            to continue the list.
+
+        :type limit: integer
+        :param limit: A maximum number of table names to return.
+
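+        Example (illustrative)::
+
+            >>> conn.list_tables(limit=10)
+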
+        """
+        params = {}
+        if exclusive_start_table_name is not None:
+            params['ExclusiveStartTableName'] = exclusive_start_table_name
+        if limit is not None:
+            params['Limit'] = limit
+        return self.make_request(action='ListTables',
+                                 body=json.dumps(params))
+
+    def put_item(self, table_name, item, expected=None, return_values=None,
+                 return_consumed_capacity=None,
+                 return_item_collection_metrics=None):
+        """
+        Creates a new item, or replaces an old item with a new item.
+        If an item already exists in the specified table with the same
+        primary key, the new item completely replaces the existing
+        item. You can perform a conditional put (insert a new item if
+        one with the specified primary key doesn't exist), or replace
+        an existing item if it has certain attribute values.
+
+        In addition to putting an item, you can also return the item's
+        attribute values in the same operation, using the ReturnValues
+        parameter.
+
+        When you add an item, the primary key attribute(s) are the
+        only required attributes. Attribute values cannot be null.
+        String and binary type attributes must have lengths greater
+        than zero. Set type attributes cannot be empty. Requests with
+        empty values will be rejected with a ValidationException .
+
+        You can request that PutItem return either a copy of the old
+        item (before the update) or a copy of the new item (after the
+        update). For more information, see the ReturnValues
+        description.
+
+        To prevent a new item from replacing an existing item, use a
+        conditional put operation with Exists set to `False` for the
+        primary key attribute, or attributes.
+
+        For more information about using this API, see `Working with
+        Items`_ in the Amazon DynamoDB Developer Guide .
+
+        :type table_name: string
+        :param table_name: The name of the table to contain the item.
+
+        :type item: map
+        :param item: A map of attribute name/value pairs, one for each
+            attribute. Only the primary key attributes are required; you can
+            optionally provide other attribute name-value pairs for the item.
+        If you specify any attributes that are part of an index key, then the
+            data types for those attributes must match those of the schema in
+            the table's attribute definition.
+
+        For more information about primary keys, see `Primary Key`_ in the
+            Amazon DynamoDB Developer Guide .
+
+        Each element in the Item map is an AttributeValue object.
+
+        :type expected: map
+        :param expected: A map of attribute/condition pairs. This is the
+            conditional block for the PutItem operation. All the conditions
+            must be met for the operation to succeed.
+        Expected allows you to provide an attribute name, and whether or not
+            Amazon DynamoDB should check to see if the attribute value already
+            exists; or if the attribute value exists and has a particular value
+            before changing it.
+
+        Each item in Expected represents an attribute name for Amazon DynamoDB
+            to check, along with the following:
+
+
+        + Value - The attribute value for Amazon DynamoDB to check.
+        + Exists - Causes Amazon DynamoDB to evaluate the value before
+              attempting a conditional operation:
+
+            + If Exists is `True`, Amazon DynamoDB will check to see if that
+                  attribute value already exists in the table. If it is found, then
+                  the operation succeeds. If it is not found, the operation fails
+                  with a ConditionalCheckFailedException .
+            + If Exists is `False`, Amazon DynamoDB assumes that the attribute
+                  value does not exist in the table. If in fact the value does not
+                  exist, then the assumption is valid and the operation succeeds. If
+                  the value is found, despite the assumption that it does not exist,
+                  the operation fails with a ConditionalCheckFailedException .
+          The default setting for Exists is `True`. If you supply a Value all by
+              itself, Amazon DynamoDB assumes the attribute exists: You don't
+              have to set Exists to `True`, because it is implied. Amazon
+              DynamoDB returns a ValidationException if:
+
+            + Exists is `True` but there is no Value to check. (You expect a value
+                  to exist, but don't specify what that value is.)
+            + Exists is `False` but you also specify a Value . (You cannot expect
+                  an attribute to have a value, while also expecting it not to
+                  exist.)
+
+
+
+        If you specify more than one condition for Exists , then all of the
+            conditions must evaluate to true. (In other words, the conditions
+            are ANDed together.) Otherwise, the conditional operation will
+            fail.
+
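+        For example, to make the put conditional on no item with this
+        primary key existing yet (an illustrative sketch)::
+
+            expected={'username': {'Exists': False}}
+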
+        :type return_values: string
+        :param return_values:
+        Use ReturnValues if you want to get the item attributes as they
+            appeared before they were updated with the PutItem request. For
+            PutItem , the valid values are:
+
+
+        + `NONE` - If ReturnValues is not specified, or if its value is `NONE`,
+              then nothing is returned. (This is the default for ReturnValues .)
+        + `ALL_OLD` - If PutItem overwrote an attribute name-value pair, then
+              the content of the old item is returned.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity: If set to `TOTAL`, the response
+            includes ConsumedCapacity data for the operation. If set to
+            `NONE` (the default), ConsumedCapacity is not included in the
+            response.
+
+        :type return_item_collection_metrics: string
+        :param return_item_collection_metrics: If set to `SIZE`, statistics
+            about item collections, if any, that were modified during the
+            operation are returned in the response. If set to `NONE` (the
+            default), no statistics are returned.
+
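+        Example (an illustrative sketch; attribute values use the wire
+        format, e.g. `{'S': ...}` for strings and `{'N': ...}` for numbers
+        serialized as strings)::
+
+            >>> conn.put_item('users', item={
+            ...     'username': {'S': 'johndoe'},
+            ...     'date_joined': {'N': '1366056668'},
+            ... })
+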
+        """
+        params = {'TableName': table_name, 'Item': item, }
+        if expected is not None:
+            params['Expected'] = expected
+        if return_values is not None:
+            params['ReturnValues'] = return_values
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        if return_item_collection_metrics is not None:
+            params['ReturnItemCollectionMetrics'] = return_item_collection_metrics
+        return self.make_request(action='PutItem',
+                                 body=json.dumps(params))
+
+    def query(self, table_name, index_name=None, select=None,
+              attributes_to_get=None, limit=None, consistent_read=None,
+              key_conditions=None, scan_index_forward=None,
+              exclusive_start_key=None, return_consumed_capacity=None):
+        """
+        A Query operation directly accesses items from a table using
+        the table primary key, or from an index using the index key.
+        You must provide a specific hash key value. You can narrow the
+        scope of the query by using comparison operators on the range
+        key value, or on the index key. You can use the
+        ScanIndexForward parameter to get results in forward or
+        reverse order, by range key or by index key.
+
+        Queries that do not return results consume the minimum read
+        capacity units according to the type of read.
+
+        If the total number of items meeting the query criteria
+        exceeds the result set size limit of 1 MB, the query stops and
+        results are returned to the user with a LastEvaluatedKey to
+        continue the query in a subsequent operation. Unlike a Scan
+        operation, a Query operation never returns both an empty result
+        set and a LastEvaluatedKey . The LastEvaluatedKey is only provided
+        if the results exceed 1 MB, or if you have used Limit .
+
+        To request a strongly consistent result, set ConsistentRead to
+        true.
+
+        :type table_name: string
+        :param table_name: The name of the table containing the requested
+            items.
+
+        :type index_name: string
+        :param index_name: The name of an index on the table to query.
+
+        :type select: string
+        :param select: The attributes to be returned in the result. You can
+            retrieve all item attributes, specific item attributes, the count
+            of matching items, or in the case of an index, some or all of the
+            attributes projected into the index.
+
+        + `ALL_ATTRIBUTES`: Returns all of the item attributes. For a table,
+              this is the default. For an index, this mode causes Amazon DynamoDB
+              to fetch the full item from the table for each matching item in the
+              index. If the index is configured to project all item attributes,
+              the matching items will not be fetched from the table. Fetching
+              items from the table incurs additional throughput cost and latency.
+        + `ALL_PROJECTED_ATTRIBUTES`: Allowed only when querying an index.
+              Retrieves all attributes which have been projected into the index.
+              If the index is configured to project all attributes, this is
+              equivalent to specifying ALL_ATTRIBUTES .
+        + `COUNT`: Returns the number of matching items, rather than the
+              matching items themselves.
+        + `SPECIFIC_ATTRIBUTES` : Returns only the attributes listed in
+              AttributesToGet . This is equivalent to specifying AttributesToGet
+              without specifying any value for Select . If you are querying an
+              index and request only attributes that are projected into that
+              index, the operation will read only the index and not the table. If
+              any of the requested attributes are not projected into the index,
+              Amazon DynamoDB will need to fetch each matching item from the
+              table. This extra fetching incurs additional throughput cost and
+              latency.
+
+
+        When neither Select nor AttributesToGet are specified, Amazon DynamoDB
+            defaults to `ALL_ATTRIBUTES` when accessing a table, and
+            `ALL_PROJECTED_ATTRIBUTES` when accessing an index. You cannot use
+            both Select and AttributesToGet together in a single request,
+            unless the value for Select is `SPECIFIC_ATTRIBUTES`. (This usage
+            is equivalent to specifying AttributesToGet without any value for
+            Select .)
+
+        :type attributes_to_get: list
+        :param attributes_to_get: The names of one or more attributes to
+            retrieve. If no attribute names are specified, then all attributes
+            will be returned. If any of the requested attributes are not found,
+            they will not appear in the result.
+        If you are querying an index and request only attributes that are
+            projected into that index, the operation will read only the index
+            and not the table. If any of the requested attributes are not
+            projected into the index, Amazon DynamoDB will need to fetch each
+            matching item from the table. This extra fetching incurs additional
+            throughput cost and latency.
+
+        You cannot use both AttributesToGet and Select together in a Query
+            request, unless the value for Select is `SPECIFIC_ATTRIBUTES`.
+            (This usage is equivalent to specifying AttributesToGet without any
+            value for Select .)
+
+        :type limit: integer
+        :param limit: The maximum number of items to evaluate (not necessarily
+            the number of matching items). If Amazon DynamoDB processes the
+            number of items up to the limit while processing the results, it
+            stops the operation and returns the matching values up to that
+            point, and a LastEvaluatedKey to apply in a subsequent operation,
+            so that you can pick up where you left off. Also, if the processed
+            data set size exceeds 1 MB before Amazon DynamoDB reaches this
+            limit, it stops the operation and returns the matching values up to
+            the limit, and a LastEvaluatedKey to apply in a subsequent
+            operation to continue the operation. For more information see
+            `Query and Scan`_ in the Amazon DynamoDB Developer Guide .
+
+        :type consistent_read: boolean
+        :param consistent_read: If set to `True`, then the operation uses
+            strongly consistent reads; otherwise, eventually consistent reads
+            are used.
+
+        :type key_conditions: map
+        :param key_conditions:
+        The selection criteria for the query.
+
+        For a query on a table, you can only have conditions on the table
+            primary key attributes. You must specify the hash key attribute
+            name and value as an `EQ` condition. You can optionally specify a
+            second condition, referring to the range key attribute.
+
+        For a query on a secondary index, you can only have conditions on the
+            index key attributes. You must specify the index hash attribute
+            name and value as an `EQ` condition. You can optionally specify a
+            second condition, referring to the index key range attribute.
+
+        Multiple conditions are evaluated using "AND"; in other words, all of
+            the conditions must be met in order for an item to appear in the
+            results.
+
+        Each KeyConditions element consists of an attribute name to compare,
+            along with the following:
+
+
+        + AttributeValueList - One or more values to evaluate against the
+              supplied attribute. This list contains exactly one value, except
+              for a `BETWEEN` or `IN` comparison, in which case the list contains
+              two values. For type Number, value comparisons are numeric. String
+              value comparisons for greater than, equals, or less than are based
+              on ASCII character code values. For example, `a` is greater than
+              `A`, and `aa` is greater than `B`. For a list of code values, see
+              `http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters`_.
+              For Binary, Amazon DynamoDB treats each byte of the binary data as
+              unsigned when it compares binary values, for example when
+              evaluating query expressions.
+        + ComparisonOperator - A comparator for evaluating attributes. For
+              example, equals, greater than, less than, etc. Valid comparison
+              operators for Query: `EQ | LE | LT | GE | GT | BEGINS_WITH |
+              BETWEEN` For information on specifying data types in JSON, see
+              `JSON Data Format`_ in the Amazon DynamoDB Developer Guide . The
+              following are descriptions of each comparison operator.
+
+            + `EQ` : Equal. AttributeValueList can contain only one AttributeValue
+                  of type String, Number, or Binary (not a set). If an item contains
+                  an AttributeValue of a different type than the one specified in the
+                  request, the value does not match. For example, `{"S":"6"}` does
+                  not equal `{"N":"6"}`. Also, `{"N":"6"}` does not equal
+                  `{"NS":["6", "2", "1"]}`.
+            + `LE` : Less than or equal. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `LT` : Less than. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `GE` : Greater than or equal. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `GT` : Greater than. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `BEGINS_WITH` : checks for a prefix. AttributeValueList can contain
+                  only one AttributeValue of type String or Binary (not a Number or a
+                  set). The target attribute of the comparison must be a String or
+                  Binary (not a Number or a set).
+            + `BETWEEN` : Greater than or equal to the first value, and less than
+                  or equal to the second value. AttributeValueList must contain two
+                  AttributeValue elements of the same type, either String, Number, or
+                  Binary (not a set). A target attribute matches if the target value
+                  is greater than, or equal to, the first element and less than, or
+                  equal to, the second element. If an item contains an AttributeValue
+                  of a different type than the one specified in the request, the
+                  value does not match. For example, `{"S":"6"}` does not compare to
+                  `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6",
+                  "2", "1"]}`
+
+        :type scan_index_forward: boolean
+        :param scan_index_forward: Specifies ascending (true) or descending
+            (false) traversal of the index. Amazon DynamoDB returns results
+            reflecting the requested order determined by the range key. If the
+            data type is Number, the results are returned in numeric order. For
+            String, the results are returned in order of ASCII character code
+            values. For Binary, Amazon DynamoDB treats each byte of the binary
+            data as unsigned when it compares binary values.
+        If ScanIndexForward is not specified, the results are returned in
+            ascending order.
+
+        :type exclusive_start_key: map
+        :param exclusive_start_key: The primary key of the item from which to
+            continue an earlier operation. An earlier operation might provide
+            this value as the LastEvaluatedKey if that operation was
+            interrupted before completion; either because of the result set
+            size or because of the setting for Limit . The LastEvaluatedKey can
+            be passed back in a new request to continue the operation from that
+            point.
+        The data type for ExclusiveStartKey must be String, Number or Binary.
+            No set data types are allowed.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity: If set to `TOTAL`, the response
+            includes ConsumedCapacity data for the operation. If set to
+            `NONE` (the default), ConsumedCapacity is not included in the
+            response.
+
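+        Example (an illustrative sketch; an `EQ` condition on the hash key,
+        in the wire format described above)::
+
+            >>> conn.query('users', key_conditions={
+            ...     'username': {
+            ...         'AttributeValueList': [{'S': 'johndoe'}],
+            ...         'ComparisonOperator': 'EQ',
+            ...     },
+            ... })
+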
+        """
+        params = {'TableName': table_name, }
+        if index_name is not None:
+            params['IndexName'] = index_name
+        if select is not None:
+            params['Select'] = select
+        if attributes_to_get is not None:
+            params['AttributesToGet'] = attributes_to_get
+        if limit is not None:
+            params['Limit'] = limit
+        if consistent_read is not None:
+            params['ConsistentRead'] = consistent_read
+        if key_conditions is not None:
+            params['KeyConditions'] = key_conditions
+        if scan_index_forward is not None:
+            params['ScanIndexForward'] = scan_index_forward
+        if exclusive_start_key is not None:
+            params['ExclusiveStartKey'] = exclusive_start_key
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        return self.make_request(action='Query',
+                                 body=json.dumps(params))
+
+    def scan(self, table_name, attributes_to_get=None, limit=None,
+             select=None, scan_filter=None, exclusive_start_key=None,
+             return_consumed_capacity=None, total_segments=None,
+             segment=None):
+        """
+        The Scan operation returns one or more items and item
+        attributes by accessing every item in the table. To have
+        Amazon DynamoDB return fewer items, you can provide a
+        ScanFilter .
+
+        If the total number of scanned items exceeds the maximum data
+        set size limit of 1 MB, the scan stops and results are
+        returned to the user with a LastEvaluatedKey to continue the
+        scan in a subsequent operation. The results also include the
+        number of items exceeding the limit. A scan can result in no
+        table data meeting the filter criteria.
+
+        The result set is eventually consistent.
+
+        By default, Scan operations proceed sequentially; however, for
+        faster performance on large tables, applications can perform a
+        parallel Scan by specifying the Segment and TotalSegments
+        parameters. For more information, see `Parallel Scan`_ in the
+        Amazon DynamoDB Developer Guide .
+
+        :type table_name: string
+        :param table_name: The name of the table containing the requested
+            items.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: The names of one or more attributes to
+            retrieve. If no attribute names are specified, then all attributes
+            will be returned. If any of the requested attributes are not found,
+            they will not appear in the result.
+
+        :type limit: integer
+        :param limit: The maximum number of items to evaluate (not necessarily
+            the number of matching items). If Amazon DynamoDB processes the
+            number of items up to the limit while processing the results, it
+            stops the operation and returns the matching values up to that
+            point, and a LastEvaluatedKey to apply in a subsequent operation,
+            so that you can pick up where you left off. Also, if the processed
+            data set size exceeds 1 MB before Amazon DynamoDB reaches this
+            limit, it stops the operation and returns the matching values up to
+            the limit, and a LastEvaluatedKey to apply in a subsequent
+            operation to continue the operation. For more information see
+            `Query and Scan`_ in the Amazon DynamoDB Developer Guide .
+
+        :type select: string
+        :param select: The attributes to be returned in the result. You can
+            retrieve all item attributes, specific item attributes, the count
+            of matching items, or in the case of an index, some or all of the
+            attributes projected into the index.
+
+        + `ALL_ATTRIBUTES`: Returns all of the item attributes. For a table,
+              this is the default. For an index, this mode causes Amazon DynamoDB
+              to fetch the full item from the table for each matching item in the
+              index. If the index is configured to project all item attributes,
+              the matching items will not be fetched from the table. Fetching
+              items from the table incurs additional throughput cost and latency.
+        + `ALL_PROJECTED_ATTRIBUTES`: Retrieves all attributes which have been
+              projected into the index. If the index is configured to project all
+              attributes, this is equivalent to specifying ALL_ATTRIBUTES .
+        + `COUNT`: Returns the number of matching items, rather than the
+              matching items themselves.
+        + `SPECIFIC_ATTRIBUTES` : Returns only the attributes listed in
+              AttributesToGet . This is equivalent to specifying AttributesToGet
+              without specifying any value for Select . If you are querying an
+              index and request only attributes that are projected into that
+              index, the operation will read only the index and not the table. If
+              any of the requested attributes are not projected into the index,
+              Amazon DynamoDB will need to fetch each matching item from the
+              table. This extra fetching incurs additional throughput cost and
+              latency.
+
+
+        When neither Select nor AttributesToGet are specified, Amazon DynamoDB
+            defaults to `ALL_ATTRIBUTES` when accessing a table, and
+            `ALL_PROJECTED_ATTRIBUTES` when accessing an index. You cannot use
+            both Select and AttributesToGet together in a single request,
+            unless the value for Select is `SPECIFIC_ATTRIBUTES`. (This usage
+            is equivalent to specifying AttributesToGet without any value for
+            Select .)
+
+        :type scan_filter: map
+        :param scan_filter:
+        Evaluates the scan results and returns only the desired values.
+            Multiple conditions are treated as "AND" operations: all conditions
+            must be met to be included in the results.
+
+        Each ScanConditions element consists of an attribute name to compare,
+            along with the following:
+
+
+        + AttributeValueList - One or more values to evaluate against the
+              supplied attribute. This list contains exactly one value, except
+              for a `BETWEEN` or `IN` comparison, in which case the list contains
+              two values. For type Number, value comparisons are numeric. String
+              value comparisons for greater than, equals, or less than are based
+              on ASCII character code values. For example, `a` is greater than
+              `A`, and `aa` is greater than `B`. For a list of code values, see
+              `http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters`_.
+              For Binary, Amazon DynamoDB treats each byte of the binary data as
+              unsigned when it compares binary values, for example when
+              evaluating query expressions.
+        + ComparisonOperator - A comparator for evaluating attributes. For
+              example, equals, greater than, less than, etc. Valid comparison
+              operators for Scan: `EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL
+              | CONTAINS | NOT_CONTAINS | BEGINS_WITH | IN | BETWEEN` For
+              information on specifying data types in JSON, see `JSON Data
+              Format`_ in the Amazon DynamoDB Developer Guide . The following are
+              descriptions of each comparison operator.
+
+            + `EQ` : Equal. AttributeValueList can contain only one AttributeValue
+                  of type String, Number, or Binary (not a set). If an item contains
+                  an AttributeValue of a different type than the one specified in the
+                  request, the value does not match. For example, `{"S":"6"}` does
+                  not equal `{"N":"6"}`. Also, `{"N":"6"}` does not equal
+                  `{"NS":["6", "2", "1"]}`.
+            + `NE` : Not equal. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  equal `{"NS":["6", "2", "1"]}`.
+            + `LE` : Less than or equal. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `LT` : Less than. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `GE` : Greater than or equal. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `GT` : Greater than. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If an
+                  item contains an AttributeValue of a different type than the one
+                  specified in the request, the value does not match. For example,
+                  `{"S":"6"}` does not equal `{"N":"6"}`. Also, `{"N":"6"}` does not
+                  compare to `{"NS":["6", "2", "1"]}`.
+            + `NOT_NULL` : The attribute exists.
+            + `NULL` : The attribute does not exist.
+            + `CONTAINS` : checks for a subsequence, or value in a set.
+                  AttributeValueList can contain only one AttributeValue of type
+                  String, Number, or Binary (not a set). If the target attribute of
+                  the comparison is a String, then the operation checks for a
+                  substring match. If the target attribute of the comparison is
+                  Binary, then the operation looks for a subsequence of the target
+                  that matches the input. If the target attribute of the comparison
+                  is a set ("SS", "NS", or "BS"), then the operation checks for a
+                  member of the set (not as a substring).
+            + `NOT_CONTAINS` : checks for absence of a subsequence, or absence of a
+                  value in a set. AttributeValueList can contain only one
+                  AttributeValue of type String, Number, or Binary (not a set). If
+                  the target attribute of the comparison is a String, then the
+                  operation checks for the absence of a substring match. If the
+                  target attribute of the comparison is Binary, then the operation
+                  checks for the absence of a subsequence of the target that matches
+                  the input. If the target attribute of the comparison is a set
+                  ("SS", "NS", or "BS"), then the operation checks for the absence of
+                  a member of the set (not as a substring).
+            + `BEGINS_WITH` : checks for a prefix. AttributeValueList can contain
+                  only one AttributeValue of type String or Binary (not a Number or a
+                  set). The target attribute of the comparison must be a String or
+                  Binary (not a Number or a set).
+            + `IN` : checks for exact matches. AttributeValueList can contain more
+                  than one AttributeValue of type String, Number, or Binary (not a
+                  set). The target attribute of the comparison must be of the same
+                  type and exact value to match. A String never matches a String set.
+            + `BETWEEN` : Greater than or equal to the first value, and less than
+                  or equal to the second value. AttributeValueList must contain two
+                  AttributeValue elements of the same type, either String, Number, or
+                  Binary (not a set). A target attribute matches if the target value
+                  is greater than, or equal to, the first element and less than, or
+                  equal to, the second element. If an item contains an AttributeValue
+                  of a different type than the one specified in the request, the
+                  value does not match. For example, `{"S":"6"}` does not compare to
+                  `{"N":"6"}`. Also, `{"N":"6"}` does not compare to `{"NS":["6",
+                  "2", "1"]}`
+
+        :type exclusive_start_key: map
+        :param exclusive_start_key: The primary key of the item from which to
+            continue an earlier operation. An earlier operation might provide
+            this value as the LastEvaluatedKey if that operation was
+            interrupted before completion; either because of the result set
+            size or because of the setting for Limit . The LastEvaluatedKey can
+            be passed back in a new request to continue the operation from that
+            point.
+        The data type for ExclusiveStartKey must be String, Number or Binary.
+            No set data types are allowed.
+
+        If you are performing a parallel scan, the value of ExclusiveStartKey
+            must fall into the key space of the Segment being scanned. For
+            example, suppose that there are two application threads scanning a
+            table using the following Scan parameters:
+
+
+        + Thread 0: Segment = 0; TotalSegments = 2
+        + Thread 1: Segment = 1; TotalSegments = 2
+
+
+        Now suppose that the Scan request for Thread 0 completed and returned a
+            LastEvaluatedKey of "X". Because "X" is part of Segment 0's key
+            space, it cannot be used anywhere else in the table. If Thread 1
+            were to issue another Scan request with an ExclusiveStartKey of
+            "X", Amazon DynamoDB would throw an InputValidationError because
+            hash key "X" cannot be in Segment 1.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity: If set to `TOTAL`, the response
+            includes ConsumedCapacity data for the operation. If set to
+            `NONE` (the default), ConsumedCapacity is not included in the
+            response.
+
+        :type total_segments: integer
+        :param total_segments: For parallel Scan requests, TotalSegments
+            represents the total number of segments for a table that is being
+            scanned. Segments are a way to logically divide a table into
+            equally sized portions, for the duration of the Scan request. The
+            value of TotalSegments corresponds to the number of application
+            "workers" (such as threads or processes) that will perform the
+            parallel Scan . For example, if you want to scan a table using four
+            application threads, you would specify a TotalSegments value of 4.
+        The value for TotalSegments must be greater than or equal to 1, and
+            less than or equal to 4096. If you specify a TotalSegments value of
+            1, the Scan will be sequential rather than parallel.
+
+        If you specify TotalSegments , you must also specify Segment .
+
+        :type segment: integer
+        :param segment: For parallel Scan requests, Segment identifies an
+            individual segment to be scanned by an application "worker" (such
+            as a thread or a process). Each worker issues a Scan request with a
+            distinct value for the segment it will scan.
+        Segment IDs are zero-based, so the first segment is always 0. For
+            example, if you want to scan a table using four application
+            threads, the first thread would specify a Segment value of 0, the
+            second thread would specify 1, and so on.
+
+        The value for Segment must be greater than or equal to 0, and less than
+            the value provided for TotalSegments .
+
+        If you specify Segment , you must also specify TotalSegments .
+
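+        Example (an illustrative sketch)::
+
+            >>> conn.scan('users', scan_filter={
+            ...     'date_joined': {
+            ...         'AttributeValueList': [{'N': '1366056668'}],
+            ...         'ComparisonOperator': 'GT',
+            ...     },
+            ... })
+            # One worker of a two-worker parallel scan:
+            >>> conn.scan('users', segment=0, total_segments=2)
+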
+        """
+        params = {'TableName': table_name, }
+        if attributes_to_get is not None:
+            params['AttributesToGet'] = attributes_to_get
+        if limit is not None:
+            params['Limit'] = limit
+        if select is not None:
+            params['Select'] = select
+        if scan_filter is not None:
+            params['ScanFilter'] = scan_filter
+        if exclusive_start_key is not None:
+            params['ExclusiveStartKey'] = exclusive_start_key
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        if total_segments is not None:
+            params['TotalSegments'] = total_segments
+        if segment is not None:
+            params['Segment'] = segment
+        return self.make_request(action='Scan',
+                                 body=json.dumps(params))
+
+    def update_item(self, table_name, key, attribute_updates=None,
+                    expected=None, return_values=None,
+                    return_consumed_capacity=None,
+                    return_item_collection_metrics=None):
+        """
+        Edits an existing item's attributes, or inserts a new item if
+        it does not already exist. You can put, delete, or add
+        attribute values. You can also perform a conditional update
+        (insert a new attribute name-value pair if it doesn't exist,
+        or replace an existing name-value pair if it has certain
+        expected attribute values).
+
+        In addition to updating an item, you can also return the
+        item's attribute values in the same operation, using the
+        ReturnValues parameter.
+
+        :type table_name: string
+        :param table_name: The name of the table containing the item to update.
+
+        :type key: map
+        :param key: The primary key that defines the item. Each element
+            consists of an attribute name and a value for that attribute.
+
+        :type attribute_updates: map
+        :param attribute_updates: The names of attributes to be modified, the
+            action to perform on each, and the new value for each. If you are
+            updating an attribute that is an index key attribute for any
+            indexes on that table, the attribute type must match the index key
+            type defined in the AttributeDefinitions of the table description.
+            You can use UpdateItem to update any non-key attributes.
+        Attribute values cannot be null. String and binary type attributes must
+            have lengths greater than zero. Set type attributes must not be
+            empty. Requests with empty values will be rejected with a
+            ValidationException .
+
+        Each AttributeUpdates element consists of an attribute name to modify,
+            along with the following:
+
+
+        + Value - The new value, if applicable, for this attribute.
+        + Action - Specifies how to perform the update. Valid values for Action
+              are `PUT`, `DELETE`, and `ADD`. The behavior depends on whether the
+              specified primary key already exists in the table. **If an item
+              with the specified Key is found in the table:**
+
+            + `PUT` - Adds the specified attribute to the item. If the attribute
+                  already exists, it is replaced by the new value.
+            + `DELETE` - If no value is specified, the attribute and its value are
+                  removed from the item. The data type of the specified value must
+                  match the existing value's data type. If a set of values is
+                  specified, then those values are subtracted from the old set. For
+                  example, if the attribute value was the set `[a,b,c]` and the
+                  DELETE action specified `[a,c]`, then the final attribute value
+                  would be `[b]`. Specifying an empty set is an error.
+            + `ADD` - If the attribute does not already exist, then the attribute
+                  and its values are added to the item. If the attribute does exist,
+                  then the behavior of `ADD` depends on the data type of the
+                  attribute:
+
+                + If the existing attribute is a number, and if Value is also a number,
+                      then the Value is mathematically added to the existing attribute.
+                      If Value is a negative number, then it is subtracted from the
+                      existing attribute. If you use `ADD` to increment or decrement a
+                      number value for an item that doesn't exist before the update,
+                      Amazon DynamoDB uses 0 as the initial value. In addition, if you
+                      use `ADD` to update an existing item, and intend to increment or
+                      decrement an attribute value which does not yet exist, Amazon
+                      DynamoDB uses `0` as the initial value. For example, suppose that
+                      the item you want to update does not yet have an attribute named
+                      itemcount , but you decide to `ADD` the number `3` to this
+                      attribute anyway, even though it currently does not exist. Amazon
+                      DynamoDB will create the itemcount attribute, set its initial value
+                      to `0`, and finally add `3` to it. The result will be a new
+                      itemcount attribute in the item, with a value of `3`.
+                + If the existing data type is a set, and if the Value is also a set,
+                      then the Value is added to the existing set. (This is a set
+                      operation, not mathematical addition.) For example, if the
+                      attribute value was the set `[1,2]`, and the `ADD` action specified
+                      `[3]`, then the final attribute value would be `[1,2,3]`. An error
+                      occurs if an Add action is specified for a set attribute and the
+                      attribute type specified does not match the existing set type. Both
+                      sets must have the same primitive data type. For example, if the
+                      existing data type is a set of strings, the Value must also be a
+                      set of strings. The same holds true for number sets and binary
+                      sets.
+              This action is only valid for an existing attribute whose data type is
+                  number or is a set. Do not use `ADD` for any other data types.
+          **If no item with the specified Key is found:**
+
+            + `PUT` - Amazon DynamoDB creates a new item with the specified primary
+                  key, and then adds the attribute.
+            + `DELETE` - Nothing happens; there is no attribute to delete.
+            + `ADD` - Amazon DynamoDB creates an item with the supplied primary key
+                  and number (or set of numbers) for the attribute value. The only
+                  data types allowed are number and number set; no other data types
+                  can be specified.
+
+
+
+        If you specify any attributes that are part of an index key, then the
+            data types for those attributes must match those of the schema in
+            the table's attribute definition.
+
+        :type expected: map
+        :param expected: A map of attribute/condition pairs. This is the
+            conditional block for the UpdateItem operation. All the conditions
+            must be met for the operation to succeed.
+        Expected allows you to provide an attribute name, and whether or not
+            Amazon DynamoDB should check to see if the attribute value already
+            exists; or if the attribute value exists and has a particular value
+            before changing it.
+
+        Each item in Expected represents an attribute name for Amazon DynamoDB
+            to check, along with the following:
+
+
+        + Value - The attribute value for Amazon DynamoDB to check.
+        + Exists - Causes Amazon DynamoDB to evaluate the value before
+              attempting a conditional operation:
+
+            + If Exists is `True`, Amazon DynamoDB will check to see if that
+                  attribute value already exists in the table. If it is found, then
+                  the operation succeeds. If it is not found, the operation fails
+                  with a ConditionalCheckFailedException .
+            + If Exists is `False`, Amazon DynamoDB assumes that the attribute
+                  value does not exist in the table. If in fact the value does not
+                  exist, then the assumption is valid and the operation succeeds. If
+                  the value is found, despite the assumption that it does not exist,
+                  the operation fails with a ConditionalCheckFailedException .
+          The default setting for Exists is `True`. If you supply a Value all by
+              itself, Amazon DynamoDB assumes the attribute exists: You don't
+              have to set Exists to `True`, because it is implied. Amazon
+              DynamoDB returns a ValidationException if:
+
+            + Exists is `True` but there is no Value to check. (You expect a value
+                  to exist, but don't specify what that value is.)
+            + Exists is `False` but you also specify a Value . (You cannot expect
+                  an attribute to have a value, while also expecting it not to
+                  exist.)
+
+
+
+        If you specify more than one condition for Exists , then all of the
+            conditions must evaluate to true. (In other words, the conditions
+            are ANDed together.) Otherwise, the conditional operation will
+            fail.
+
+        :type return_values: string
+        :param return_values:
+        Use ReturnValues if you want to get the item attributes as they
+            appeared either before or after they were updated. For UpdateItem ,
+            the valid values are:
+
+
+        + `NONE` - If ReturnValues is not specified, or if its value is `NONE`,
+              then nothing is returned. (This is the default for ReturnValues .)
+        + `ALL_OLD` - If UpdateItem overwrote an attribute name-value pair,
+              then the content of the old item is returned.
+        + `UPDATED_OLD` - The old versions of only the updated attributes are
+              returned.
+        + `ALL_NEW` - All of the attributes of the new version of the item are
+              returned.
+        + `UPDATED_NEW` - The new versions of only the updated attributes are
+              returned.
+
+        :type return_consumed_capacity: string
+        :param return_consumed_capacity: If set to `TOTAL`, the response
+            includes ConsumedCapacity data for the operation. If set to
+            `NONE` (the default), ConsumedCapacity is not included in the
+            response.
+
+        :type return_item_collection_metrics: string
+        :param return_item_collection_metrics: If set to `SIZE`, statistics
+            about item collections, if any, that were modified during the
+            operation are returned in the response. If set to `NONE` (the
+            default), no statistics are returned.
+
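+        Example (an illustrative sketch; a simple `PUT` of one attribute)::
+
+            >>> conn.update_item('users',
+            ...                  key={'username': {'S': 'johndoe'}},
+            ...                  attribute_updates={
+            ...                      'last_seen': {
+            ...                          'Action': 'PUT',
+            ...                          'Value': {'N': '1366056668'},
+            ...                      },
+            ...                  })
+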
+        """
+        params = {'TableName': table_name, 'Key': key, }
+        if attribute_updates is not None:
+            params['AttributeUpdates'] = attribute_updates
+        if expected is not None:
+            params['Expected'] = expected
+        if return_values is not None:
+            params['ReturnValues'] = return_values
+        if return_consumed_capacity is not None:
+            params['ReturnConsumedCapacity'] = return_consumed_capacity
+        if return_item_collection_metrics is not None:
+            params['ReturnItemCollectionMetrics'] = return_item_collection_metrics
+        return self.make_request(action='UpdateItem',
+                                 body=json.dumps(params))
+
+    def update_table(self, table_name, provisioned_throughput):
+        """
+        Updates the provisioned throughput for the given table.
+        Setting the throughput for a table helps you manage
+        performance and is part of the provisioned throughput feature
+        of Amazon DynamoDB.
+
+        The provisioned throughput values can be upgraded or
+        downgraded based on the maximums and minimums listed in the
+        `Limits`_ section in the Amazon DynamoDB Developer Guide .
+
+        The table must be in the `ACTIVE` state for this operation to
+        succeed. UpdateTable is an asynchronous operation; while
+        executing the operation, the table is in the `UPDATING` state.
+        While the table is in the `UPDATING` state, the table still
+        has the provisioned throughput from before the call. The new
+        provisioned throughput setting is in effect only when the
+        table returns to the `ACTIVE` state after the UpdateTable
+        operation.
+
+        You cannot add, modify or delete local secondary indexes using
+        UpdateTable . Local secondary indexes can only be defined at
+        table creation time.
+
+        :type table_name: string
+        :param table_name: The name of the table to be updated.
+
+        :type provisioned_throughput: dict
+        :param provisioned_throughput: The new throughput settings for the
+            table: a dict containing integer `ReadCapacityUnits` and
+            `WriteCapacityUnits` values.
+
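+        Example (an illustrative sketch)::
+
+            >>> conn.update_table('users', {
+            ...     'ReadCapacityUnits': 10,
+            ...     'WriteCapacityUnits': 5,
+            ... })
+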
+        """
+        params = {
+            'TableName': table_name,
+            'ProvisionedThroughput': provisioned_throughput,
+        }
+        return self.make_request(action='UpdateTable',
+                                 body=json.dumps(params))
+
+    def make_request(self, action, body):
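+        # Every DynamoDB operation is a POST to '/' with a JSON body; the
+        # specific operation is selected via the X-Amz-Target header.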
+        headers = {
+            'X-Amz-Target': '%s.%s' % (self.TargetPrefix, action),
+            'Host': self.region.endpoint,
+            'Content-Type': 'application/x-amz-json-1.0',
+            'Content-Length': str(len(body)),
+        }
+        http_request = self.build_base_http_request(
+            method='POST', path='/', auth_path='/', params={},
+            headers=headers, data=body)
+        response = self._mexe(http_request, sender=None,
+                              override_num_retries=self.NumberRetries,
+                              retry_handler=self._retry_handler)
+        response_body = response.read()
+        boto.log.debug(response_body)
+        if response.status == 200:
+            if response_body:
+                return json.loads(response_body)
+        else:
+            json_body = json.loads(response_body)
+            fault_name = json_body.get('__type', None)
+            exception_class = self._faults.get(fault_name, self.ResponseError)
+            raise exception_class(response.status, response.reason,
+                                  body=json_body)
+
+    def _retry_handler(self, response, i, next_sleep):
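+        # Throughput-exceeded faults (HTTP 400) are retried with exponential
+        # backoff; conditional check and validation failures raise
+        # immediately. A crc32 mismatch on the body also triggers a retry.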
+        status = None
+        if response.status == 400:
+            response_body = response.read()
+            boto.log.debug(response_body)
+            data = json.loads(response_body)
+            if 'ProvisionedThroughputExceededException' in data.get('__type', ''):
+                self.throughput_exceeded_events += 1
+                msg = "%s, retry attempt %s" % (
+                    'ProvisionedThroughputExceededException',
+                    i
+                )
+                next_sleep = self._exponential_time(i)
+                i += 1
+                status = (msg, i, next_sleep)
+                if i == self.NumberRetries:
+                    # If this was our last retry attempt, raise
+                    # a specific error saying that the throughput
+                    # was exceeded.
+                    raise exceptions.ProvisionedThroughputExceededException(
+                        response.status, response.reason, data)
+            elif 'ConditionalCheckFailedException' in data.get('__type', ''):
+                raise exceptions.ConditionalCheckFailedException(
+                    response.status, response.reason, data)
+            elif 'ValidationException' in data.get('__type', ''):
+                raise exceptions.ValidationException(
+                    response.status, response.reason, data)
+            else:
+                raise self.ResponseError(response.status, response.reason,
+                                         data)
+        expected_crc32 = response.getheader('x-amz-crc32')
+        if self._validate_checksums and expected_crc32 is not None:
+            # Read the body once; a second response.read() would return an
+            # empty string and the checksum would never match.
+            response_body = response.read()
+            boto.log.debug('Validating crc32 checksum for body: %s',
+                           response_body)
+            actual_crc32 = crc32(response_body) & 0xffffffff
+            expected_crc32 = int(expected_crc32)
+            if actual_crc32 != expected_crc32:
+                msg = ("The calculated checksum %s did not match the expected "
+                       "checksum %s" % (actual_crc32, expected_crc32))
+                status = (msg, i + 1, self._exponential_time(i))
+        return status
+
+    def _exponential_time(self, i):
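+        # Exponential backoff: no delay before the first attempt, then
+        # 0.05 * 2**i seconds (0.1, 0.2, 0.4, ...) for each retry.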
+        if i == 0:
+            next_sleep = 0
+        else:
+            next_sleep = 0.05 * (2 ** i)
+        return next_sleep
diff --git a/boto/dynamodb2/results.py b/boto/dynamodb2/results.py
new file mode 100644
index 0000000..bcd855c
--- /dev/null
+++ b/boto/dynamodb2/results.py
@@ -0,0 +1,160 @@
+class ResultSet(object):
+    """
+    A class used to lazily handle page-to-page navigation through a set of
+    results.
+
+    It presents a transparent iterator interface, so that all the user has
+    to do is use it in a typical ``for`` loop (or list comprehension, etc.)
+    to fetch results, even if they weren't present in the current page of
+    results.
+
+    This is used by the ``Table.query`` & ``Table.scan`` methods.
+
+    Example::
+
+        >>> users = Table('users')
+        >>> results = ResultSet()
+        >>> results.to_call(users.query, username__gte='johndoe')
+        # Now iterate. When it runs out of results, it'll fetch the next page.
+        >>> for res in results:
+        ...     print res['username']
+
+    """
+    def __init__(self):
+        super(ResultSet, self).__init__()
+        self.the_callable = None
+        self.call_args = []
+        self.call_kwargs = {}
+        self._results = []
+        self._offset = -1
+        self._results_left = True
+        self._last_key_seen = None
+
+    @property
+    def first_key(self):
+        return 'exclusive_start_key'
+
+    def _reset(self):
+        """
+        Resets the internal state of the ``ResultSet``.
+
+        This prevents results from being cached long-term & consuming
+        excess memory.
+
+        Largely internal.
+        """
+        self._results = []
+        self._offset = 0
+
+    def __iter__(self):
+        return self
+
+    def next(self):
+        self._offset += 1
+
+        if self._offset >= len(self._results):
+            if self._results_left is False:
+                raise StopIteration()
+
+            self.fetch_more()
+
+        if self._offset < len(self._results):
+            return self._results[self._offset]
+        else:
+            raise StopIteration()
+
+    def to_call(self, the_callable, *args, **kwargs):
+        """
+        Sets up the callable & any arguments to run it with.
+
+        This is stored for subsequent calls so that those queries can be
+        run without requiring user intervention.
+
+        Example::
+
+            # Just an example callable.
+            >>> def squares_to(y):
+            ...     for x in range(1, y):
+            ...         yield x**2
+            >>> rs = ResultSet()
+            # Set up what to call & arguments.
+            >>> rs.to_call(squares_to, y=3)
+
+        """
+        if not callable(the_callable):
+            raise ValueError(
+                'You must supply an object or function to be called.'
+            )
+
+        self.the_callable = the_callable
+        self.call_args = args
+        self.call_kwargs = kwargs
+
+    def fetch_more(self):
+        """
+        When the iterator runs out of results, this method is run to re-execute
+        the callable (& arguments) to fetch the next page.
+
+        Largely internal.
+        """
+        self._reset()
+
+        args = self.call_args[:]
+        kwargs = self.call_kwargs.copy()
+
+        if self._last_key_seen is not None:
+            kwargs[self.first_key] = self._last_key_seen
+
+        results = self.the_callable(*args, **kwargs)
+
+        if not len(results.get('results', [])):
+            self._results_left = False
+            return
+
+        self._results.extend(results['results'])
+        self._last_key_seen = results.get('last_key', None)
+
+        if self._last_key_seen is None:
+            self._results_left = False
+
+        # Decrease the limit, if it's present.
+        if self.call_kwargs.get('limit'):
+            self.call_kwargs['limit'] -= len(results['results'])
+
+
+class BatchGetResultSet(ResultSet):
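+    """
+    A ``ResultSet`` subclass for paging through ``batch_get`` results.
+
+    Issues requests for at most ``max_batch_get`` (default 100) keys at a
+    time & re-queues any unprocessed keys returned by DynamoDB.
+    """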
+    def __init__(self, *args, **kwargs):
+        self._keys_left = kwargs.pop('keys', [])
+        self._max_batch_get = kwargs.pop('max_batch_get', 100)
+        super(BatchGetResultSet, self).__init__(*args, **kwargs)
+
+    def fetch_more(self):
+        self._reset()
+
+        args = self.call_args[:]
+        kwargs = self.call_kwargs.copy()
+
+        # Slice off the max we can fetch.
+        kwargs['keys'] = self._keys_left[:self._max_batch_get]
+        self._keys_left = self._keys_left[self._max_batch_get:]
+
+        results = self.the_callable(*args, **kwargs)
+
+        if not len(results.get('results', [])):
+            self._results_left = False
+            return
+
+        self._results.extend(results['results'])
+
+        for offset, key_data in enumerate(results.get('unprocessed_keys', [])):
+            # We've got an unprocessed key. Reinsert it into the list.
+            # DynamoDB only returns valid keys, so there should be no risk of
+            # missing keys ever making it here.
+            self._keys_left.insert(offset, key_data)
+
+        if len(self._keys_left) <= 0:
+            self._results_left = False
+
+        # Decrease the limit, if it's present.
+        if self.call_kwargs.get('limit'):
+            self.call_kwargs['limit'] -= len(results['results'])
diff --git a/boto/dynamodb2/table.py b/boto/dynamodb2/table.py
new file mode 100644
index 0000000..36f918e
--- /dev/null
+++ b/boto/dynamodb2/table.py
@@ -0,0 +1,1061 @@
+from boto.dynamodb2 import exceptions
+from boto.dynamodb2.fields import (HashKey, RangeKey,
+                                   AllIndex, KeysOnlyIndex, IncludeIndex)
+from boto.dynamodb2.items import Item
+from boto.dynamodb2.layer1 import DynamoDBConnection
+from boto.dynamodb2.results import ResultSet, BatchGetResultSet
+from boto.dynamodb2.types import Dynamizer, FILTER_OPERATORS, QUERY_OPERATORS
+
+
+class Table(object):
+    """
+    Interacts with & models the behavior of a DynamoDB table.
+
+    The ``Table`` object represents a set (or rough categorization) of
+    records within DynamoDB. The important part is that all records within the
+    table, while largely-schema-free, share the same schema & are essentially
+    namespaced for use in your application. For example, you might have a
+    ``users`` table or a ``forums`` table.
+    """
+    max_batch_get = 100
+
+    def __init__(self, table_name, schema=None, throughput=None, indexes=None,
+                 connection=None):
+        """
+        Sets up a new in-memory ``Table``.
+
+        This is useful if the table already exists within DynamoDB & you simply
+        want to use it for additional interactions. The only required parameter
+        is the ``table_name``. However, under the hood, the object will call
+        ``describe_table`` to determine the schema/indexes/throughput. You
+        can avoid this extra call by passing in ``schema`` & ``indexes``.
+
+        **IMPORTANT** - If you're creating a new ``Table`` for the first time,
+        you should use the ``Table.create`` method instead, as it will
+        persist the table structure to DynamoDB.
+
+        Requires a ``table_name`` parameter, which should be a simple string
+        of the name of the table.
+
+        Optionally accepts a ``schema`` parameter, which should be a list of
+        ``BaseSchemaField`` subclasses representing the desired schema.
+
+        Optionally accepts a ``throughput`` parameter, which should be a
+        dictionary. If provided, it should specify a ``read`` & ``write`` key,
+        both of which should have an integer value associated with them.
+
+        Optionally accepts an ``indexes`` parameter, which should be a list of
+        ``BaseIndexField`` subclasses representing the desired indexes.
+
+        Optionally accepts a ``connection`` parameter, which should be a
+        ``DynamoDBConnection`` instance (or subclass). This is primarily useful
+        for specifying alternate connection parameters.
+
+        Example::
+
+            # The simple, it-already-exists case.
+            >>> users = Table('users')
+
+            # The full, minimum-extra-calls case.
+            >>> from boto.dynamodb2.layer1 import DynamoDBConnection
+            >>> users = Table('users', schema=[
+            ...     HashKey('username'),
+            ...     RangeKey('date_joined', data_type=NUMBER)
+            ... ], throughput={
+            ...     'read': 20,
+            ...     'write': 10,
+            ... }, indexes=[
+            ...     KeysOnlyIndex('MostRecentlyJoined', parts=[
+            ...         RangeKey('date_joined')
+            ...     ]),
+            ... ],
+            ... connection=DynamoDBConnection(
+            ...     aws_access_key_id='key',
+            ...     aws_secret_access_key='key',
+            ...     region='us-west-2'
+            ... ))
+
+        """
+        self.table_name = table_name
+        self.connection = connection
+        self.throughput = {
+            'read': 5,
+            'write': 5,
+        }
+        self.schema = schema
+        self.indexes = indexes
+
+        if self.connection is None:
+            self.connection = DynamoDBConnection()
+
+        if throughput is not None:
+            self.throughput = throughput
+
+        self._dynamizer = Dynamizer()
+
+    @classmethod
+    def create(cls, table_name, schema, throughput=None, indexes=None,
+               connection=None):
+        """
+        Creates a new table in DynamoDB & returns an in-memory ``Table`` object.
+
+        This will set up a brand new table within DynamoDB. The ``table_name``
+        must be unique for your AWS account. The ``schema`` is also required
+        to define the key structure of the table.
+
+        **IMPORTANT** - You should consider the usage pattern of your table
+        up-front, as the schema & indexes can **NOT** be modified once the
+        table is created, requiring the creation of a new table & migrating
+        the data should you wish to revise it.
+
+        **IMPORTANT** - If the table already exists in DynamoDB, additional
+        calls to this method will result in an error. If you just need
+        a ``Table`` object to interact with the existing table, you should
+        just initialize a new ``Table`` object, which requires only the
+        ``table_name``.
+
+        Requires a ``table_name`` parameter, which should be a simple string
+        of the name of the table.
+
+        Requires a ``schema`` parameter, which should be a list of
+        ``BaseSchemaField`` subclasses representing the desired schema.
+
+        Optionally accepts a ``throughput`` parameter, which should be a
+        dictionary. If provided, it should specify a ``read`` & ``write`` key,
+        both of which should have an integer value associated with them.
+
+        Optionally accepts an ``indexes`` parameter, which should be a list of
+        ``BaseIndexField`` subclasses representing the desired indexes.
+
+        Optionally accepts a ``connection`` parameter, which should be a
+        ``DynamoDBConnection`` instance (or subclass). This is primarily useful
+        for specifying alternate connection parameters.
+
+        Example::
+
+            >>> users = Table.create('users', schema=[
+            ...     HashKey('username'),
+            ...     RangeKey('date_joined', data_type=NUMBER)
+            ... ], throughput={
+            ...     'read': 20,
+            ...     'write': 10,
+            ... }, indexes=[
+            ...     KeysOnlyIndex('MostRecentlyJoined', parts=[
+            ...         RangeKey('date_joined')
+            ...     ]),
+            ... ])
+
+        """
+        table = cls(table_name=table_name, connection=connection)
+        table.schema = schema
+
+        if throughput is not None:
+            table.throughput = throughput
+
+        if indexes is not None:
+            table.indexes = indexes
+
+        # Prep the schema.
+        raw_schema = []
+        attr_defs = []
+
+        for field in table.schema:
+            raw_schema.append(field.schema())
+            # Build the attributes off what we know.
+            attr_defs.append(field.definition())
+
+        raw_throughput = {
+            'ReadCapacityUnits': int(table.throughput['read']),
+            'WriteCapacityUnits': int(table.throughput['write']),
+        }
+        kwargs = {}
+
+        if table.indexes:
+            # Prep the LSIs.
+            raw_lsi = []
+
+            for index_field in table.indexes:
+                raw_lsi.append(index_field.schema())
+                # Again, build the attributes off what we know.
+                # HOWEVER, only add attributes *NOT* already seen.
+                attr_define = index_field.definition()
+
+                for part in attr_define:
+                    attr_names = [attr['AttributeName'] for attr in attr_defs]
+
+                    if not part['AttributeName'] in attr_names:
+                        attr_defs.append(part)
+
+            kwargs['local_secondary_indexes'] = raw_lsi
+
+        table.connection.create_table(
+            table_name=table.table_name,
+            attribute_definitions=attr_defs,
+            key_schema=raw_schema,
+            provisioned_throughput=raw_throughput,
+            **kwargs
+        )
+        return table
+
+    def _introspect_schema(self, raw_schema):
+        """
+        Given a raw schema structure back from a DynamoDB response, parse
+        out & build the high-level Python objects that represent them.
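+
+        A sketch of the expected input/output (the input mirrors the
+        ``KeySchema`` portion of a ``DescribeTable`` response)::
+
+            >>> schema = table._introspect_schema([
+            ...     {'KeyType': 'HASH', 'AttributeName': 'username'},
+            ...     {'KeyType': 'RANGE', 'AttributeName': 'date_joined'},
+            ... ])
+            >>> [field.name for field in schema]
+            ['username', 'date_joined']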
+        """
+        schema = []
+
+        for field in raw_schema:
+            if field['KeyType'] == 'HASH':
+                schema.append(HashKey(field['AttributeName']))
+            elif field['KeyType'] == 'RANGE':
+                schema.append(RangeKey(field['AttributeName']))
+            else:
+                raise exceptions.UnknownSchemaFieldError(
+                    "%s was seen, but is unknown. Please report this at "
+                    "https://github.com/boto/boto/issues." % field['KeyType']
+                )
+
+        return schema
+
+    def _introspect_indexes(self, raw_indexes):
+        """
+        Given a raw index structure back from a DynamoDB response, parse
+        out & build the high-level Python objects that represent them.
+        """
+        indexes = []
+
+        for field in raw_indexes:
+            index_klass = AllIndex
+            kwargs = {
+                'parts': []
+            }
+
+            if field['Projection']['ProjectionType'] == 'ALL':
+                index_klass = AllIndex
+            elif field['Projection']['ProjectionType'] == 'KEYS_ONLY':
+                index_klass = KeysOnlyIndex
+            elif field['Projection']['ProjectionType'] == 'INCLUDE':
+                index_klass = IncludeIndex
+                kwargs['includes'] = field['Projection']['NonKeyAttributes']
+            else:
+                raise exceptions.UnknownIndexFieldError(
+                    "%s was seen, but is unknown. Please report this at "
+                    "https://github.com/boto/boto/issues." % \
+                    field['Projection']['ProjectionType']
+                )
+
+            name = field['IndexName']
+            kwargs['parts'] = self._introspect_schema(field['KeySchema'])
+            indexes.append(index_klass(name, **kwargs))
+
+        return indexes
+
+    def describe(self):
+        """
+        Describes the current structure of the table in DynamoDB.
+
+        This information will be used to update the ``schema``, ``indexes``
+        and ``throughput`` information on the ``Table``. Some calls, such as
+        those involving creating keys or querying, will require this
+        information to be populated.
+
+        It also returns the full raw datastructure from DynamoDB, in the
+        event you'd like to parse out additional information (such as the
+        ``ItemCount`` or usage information).
+
+        Example::
+
+            >>> users.describe()
+            {
+                # Lots of keys here...
+            }
+            >>> len(users.schema)
+            2
+
+        """
+        result = self.connection.describe_table(self.table_name)
+
+        # Blindly update throughput, since what's on DynamoDB's end is likely
+        # more correct.
+        raw_throughput = result['Table']['ProvisionedThroughput']
+        self.throughput['read'] = int(raw_throughput['ReadCapacityUnits'])
+        self.throughput['write'] = int(raw_throughput['WriteCapacityUnits'])
+
+        if not self.schema:
+            # Since we have the data, build the schema.
+            raw_schema = result['Table'].get('KeySchema', [])
+            self.schema = self._introspect_schema(raw_schema)
+
+        if not self.indexes:
+            # Build the index information as well.
+            raw_indexes = result['Table'].get('LocalSecondaryIndexes', [])
+            self.indexes = self._introspect_indexes(raw_indexes)
+
+        # This is leaky.
+        return result
+
+    def update(self, throughput):
+        """
+        Updates table attributes in DynamoDB.
+
+        Currently, the only thing you can modify about a table after it has
+        been created is the throughput.
+
+        Requires a ``throughput`` parameter, which should be a
+        dictionary. If provided, it should specify a ``read`` & ``write`` key,
+        both of which should have an integer value associated with them.
+
+        Returns ``True`` on success.
+
+        Example::
+
+            # For a read-heavier application...
+            >>> users.update(throughput={
+            ...     'read': 20,
+            ...     'write': 10,
+            ... })
+            True
+
+        """
+        self.throughput = throughput
+        self.connection.update_table(self.table_name, {
+            'ReadCapacityUnits': int(self.throughput['read']),
+            'WriteCapacityUnits': int(self.throughput['write']),
+        })
+        return True
+
+    def delete(self):
+        """
+        Deletes a table in DynamoDB.
+
+        **IMPORTANT** - Be careful when using this method, there is no undo.
+
+        Returns ``True`` on success.
+
+        Example::
+
+            >>> users.delete()
+            True
+
+        """
+        self.connection.delete_table(self.table_name)
+        return True
+
+    def _encode_keys(self, keys):
+        """
+        Given a flat Python dictionary of keys/values, converts it into the
+        nested dictionary DynamoDB expects.
+
+        Converts::
+
+            {
+                'username': 'john',
+                'tags': [1, 2, 5],
+            }
+
+        ...to...::
+
+            {
+                'username': {'S': 'john'},
+                'tags': {'NS': ['1', '2', '5']},
+            }
+
+        """
+        raw_key = {}
+
+        for key, value in keys.items():
+            raw_key[key] = self._dynamizer.encode(value)
+
+        return raw_key
+
+    def get_item(self, consistent=False, **kwargs):
+        """
+        Fetches an item (record) from a table in DynamoDB.
+
+        To specify the key of the item you'd like to get, you can specify the
+        key attributes as kwargs.
+
+        Optionally accepts a ``consistent`` parameter, which should be a
+        boolean. If you provide ``True``, it will perform
+        a consistent (but more expensive) read from DynamoDB.
+        (Default: ``False``)
+
+        Returns an ``Item`` instance containing all the data for that record.
+
+        Example::
+
+            # A simple hash key.
+            >>> john = users.get_item(username='johndoe')
+            >>> john['first_name']
+            'John'
+
+            # A complex hash+range key.
+            >>> john = users.get_item(username='johndoe', last_name='Doe')
+            >>> john['first_name']
+            'John'
+
+            # A consistent read (assuming the data might have just changed).
+            >>> john = users.get_item(username='johndoe', consistent=True)
+            >>> john['first_name']
+            'Johann'
+
+            # With a key that is an invalid variable name in Python.
+            # Also, assumes a different schema than previous examples.
+            >>> john = users.get_item(**{
+            ...     'date-joined': 127549192,
+            ... })
+            >>> john['first_name']
+            'John'
+
+        """
+        raw_key = self._encode_keys(kwargs)
+        item_data = self.connection.get_item(
+            self.table_name,
+            raw_key,
+            consistent_read=consistent
+        )
+        item = Item(self)
+        item.load(item_data)
+        return item
+
+    def put_item(self, data, overwrite=False):
+        """
+        Saves an entire item to DynamoDB.
+
+        By default, if any part of the ``Item``'s original data doesn't match
+        what's currently in DynamoDB, this request will fail. This prevents
+        other processes from updating the data in between when you read the
+        item & when your request to update the item's data is processed, which
+        would typically result in some data loss.
+
+        Requires a ``data`` parameter, which should be a dictionary of the data
+        you'd like to store in DynamoDB.
+
+        Optionally accepts an ``overwrite`` parameter, which should be a
+        boolean. If you provide ``True``, this will tell DynamoDB to blindly
+        overwrite whatever data is present, if any.
+
+        Returns ``True`` on success.
+
+        Example::
+
+            >>> users.put_item(data={
+            ...     'username': 'jane',
+            ...     'first_name': 'Jane',
+            ...     'last_name': 'Doe',
+            ...     'date_joined': 126478915,
+            ... })
+            True
+
+        """
+        item = Item(self, data=data)
+        return item.save(overwrite=overwrite)
+
+    def _put_item(self, item_data, expects=None):
+        """
+        The internal variant of ``put_item`` (full data). This is used by the
+        ``Item`` objects, since that operation is represented at the
+        table-level by the API, but conceptually maps better to telling an
+        individual ``Item`` to save itself.
+        """
+        kwargs = {}
+
+        if expects is not None:
+            kwargs['expected'] = expects
+
+        self.connection.put_item(self.table_name, item_data, **kwargs)
+        return True
+
+    def _update_item(self, key, item_data, expects=None):
+        """
+        The internal variant of ``put_item`` (partial data). This is used by the
+        ``Item`` objects, since that operation is represented at the
+        table-level by the API, but conceptually maps better to telling an
+        individual ``Item`` to save itself.
+        """
+        raw_key = self._encode_keys(key)
+        kwargs = {}
+
+        if expects is not None:
+            kwargs['expected'] = expects
+
+        self.connection.update_item(self.table_name, raw_key, item_data, **kwargs)
+        return True
+
+    def delete_item(self, **kwargs):
+        """
+        Deletes an item in DynamoDB.
+
+        **IMPORTANT** - Be careful when using this method, there is no undo.
+
+        To specify the key of the item you'd like to delete, you can specify
+        key attributes as kwargs.
+
+        Returns ``True`` on success.
+
+        Example::
+
+            # A simple hash key.
+            >>> users.delete_item(username='johndoe')
+            True
+
+            # A complex hash+range key.
+            >>> users.delete_item(username='jane', last_name='Doe')
+            True
+
+            # With a key that is an invalid variable name in Python.
+            # Also, assumes a different schema than previous examples.
+            >>> users.delete_item(**{
+            ...     'date-joined': 127549192,
+            ... })
+            True
+
+        """
+        raw_key = self._encode_keys(kwargs)
+        self.connection.delete_item(self.table_name, raw_key)
+        return True
+
+    def get_key_fields(self):
+        """
+        Returns the fields necessary to make a key for a table.
+
+        If the ``Table`` does not already have a populated ``schema``,
+        this will request it via a ``Table.describe`` call.
+
+        Returns a list of fieldnames (strings).
+
+        Example::
+
+            # A simple hash key.
+            >>> users.get_key_fields()
+            ['username']
+
+            # A complex hash+range key.
+            >>> users.get_key_fields()
+            ['username', 'last_name']
+
+        """
+        if not self.schema:
+            # We don't know the structure of the table. Get a description to
+            # populate the schema.
+            self.describe()
+
+        return [field.name for field in self.schema]
+
+    def batch_write(self):
+        """
+        Allows the batching of writes to DynamoDB.
+
+        Since each write/delete call to DynamoDB has a cost associated with it,
+        when loading lots of data, it makes sense to batch them, creating as
+        few calls as possible.
+
+        This returns a context manager that will transparently handle creating
+        these batches. The object you get back lightly resembles a ``Table``
+        object, sharing just the ``put_item`` & ``delete_item`` methods
+        (which are all that DynamoDB can batch in terms of writing data).
+
+        DynamoDB's maximum batch size is 25 items per request. If you attempt
+        to put/delete more than that, the context manager will batch as many
+        as it can up to that number, then flush them to DynamoDB & continue
+        batching as more calls come in.
+
+        Example::
+
+            # Assuming a table with one record...
+            >>> with users.batch_write() as batch:
+            ...     batch.put_item(data={
+            ...         'username': 'johndoe',
+            ...         'first_name': 'John',
+            ...         'last_name': 'Doe',
+            ...         'owner': 1,
+            ...     })
+            ...     # Nothing across the wire yet.
+            ...     batch.delete_item(username='bob')
+            ...     # Still no requests sent.
+            ...     batch.put_item(data={
+            ...         'username': 'jane',
+            ...         'first_name': 'Jane',
+            ...         'last_name': 'Doe',
+            ...         'date_joined': 127436192,
+            ...     })
+            ...     # Nothing yet, but once we leave the context, the
+            ...     # put/deletes will be sent.
+
+        """
+        # PHENOMENAL COSMIC DOCS!!! itty-bitty code.
+        return BatchTable(self)
+
+    def _build_filters(self, filter_kwargs, using=QUERY_OPERATORS):
+        """
+        An internal method for taking query/scan-style ``**kwargs`` & turning
+        them into the raw structure DynamoDB expects for filtering.
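+
+        A rough sketch of the mapping (exact dict ordering may vary)::
+
+            >>> table._build_filters({'age__gte': 18})
+            {'age': {'AttributeValueList': [{'N': '18'}],
+                     'ComparisonOperator': 'GE'}}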
+        """
+        filters = {}
+
+        for field_and_op, value in filter_kwargs.items():
+            field_bits = field_and_op.split('__')
+            fieldname = '__'.join(field_bits[:-1])
+
+            try:
+                op = using[field_bits[-1]]
+            except KeyError:
+                raise exceptions.UnknownFilterTypeError(
+                    "Operator '%s' from '%s' is not recognized." % (
+                        field_bits[-1],
+                        field_and_op
+                    )
+                )
+
+            lookup = {
+                'AttributeValueList': [],
+                'ComparisonOperator': op,
+            }
+
+            # Special-case the ``NULL/NOT_NULL`` case.
+            if field_bits[-1] == 'null':
+                del lookup['AttributeValueList']
+
+                if value is False:
+                    lookup['ComparisonOperator'] = 'NOT_NULL'
+                else:
+                    lookup['ComparisonOperator'] = 'NULL'
+            # Special-case the ``BETWEEN`` case.
+            elif field_bits[-1] == 'between':
+                if isinstance(value, (list, tuple)) and len(value) == 2:
+                    lookup['AttributeValueList'].append(
+                        self._dynamizer.encode(value[0])
+                    )
+                    lookup['AttributeValueList'].append(
+                        self._dynamizer.encode(value[1])
+                    )
+            else:
+                # Fix up the value for encoding, because the ``Dynamizer``
+                # was built to only work with ``set``s, not ``list``/``tuple``.
+                if isinstance(value, (list, tuple)):
+                    value = set(value)
+                lookup['AttributeValueList'].append(
+                    self._dynamizer.encode(value)
+                )
+
+            # Finally, insert it into the filters.
+            filters[fieldname] = lookup
+
+        return filters
+
+    def query(self, limit=None, index=None, reverse=False, consistent=False,
+              **filter_kwargs):
+        """
+        Queries for a set of matching items in a DynamoDB table.
+
+        Queries can be performed against a hash key, a hash+range key or
+        against any data stored in your local secondary indexes.
+
+        **Note** - You cannot query against arbitrary fields within the data
+        stored in DynamoDB.
+
+        To specify the filters of the items you'd like to get, you can specify
+        the filters as kwargs. Each filter kwarg should follow the pattern
+        ``<fieldname>__<filter_operation>=<value_to_look_for>``.
+
+        Optionally accepts a ``limit`` parameter, which should be an integer
+        count of the total number of items to return. (Default: ``None`` -
+        all results)
+
+        Optionally accepts an ``index`` parameter, which should be a string
+        of the name of the local secondary index you want to query against.
+        (Default: ``None``)
+
+        Optionally accepts a ``reverse`` parameter, which will present the
+        results in reverse order. (Default: ``False`` - normal order)
+
+        Optionally accepts a ``consistent`` parameter, which should be a
+        boolean. If you provide ``True``, it will force a consistent read of
+        the data (more expensive). (Default: ``False`` - use eventually
+        consistent reads)
+
+        Returns a ``ResultSet``, which transparently handles the pagination of
+        results you get back.
+
+        Example::
+
+            # Look for last names equal to "Doe".
+            >>> results = users.query(last_name__eq='Doe')
+            >>> for res in results:
+            ...     print res['first_name']
+            'John'
+            'Jane'
+
+            # Look for last names beginning with "D", in reverse order, limit 3.
+            >>> results = users.query(
+            ...     last_name__beginswith='D',
+            ...     reverse=True,
+            ...     limit=3
+            ... )
+            >>> for res in results:
+            ...     print res['first_name']
+            'Alice'
+            'Jane'
+            'John'
+
+            # Use an LSI & a consistent read.
+            >>> results = users.query(
+            ...     date_joined__gte=1236451000,
+            ...     owner__eq=1,
+            ...     index='DateJoinedIndex',
+            ...     consistent=True
+            ... )
+            >>> for res in results:
+            ...     print res['first_name']
+            'Alice'
+            'Bob'
+            'John'
+            'Fred'
+
+        """
+        if self.schema:
+            if len(self.schema) == 1 and len(filter_kwargs) <= 1:
+                raise exceptions.QueryError(
+                    "You must specify more than one key to filter on."
+                )
+
+        results = ResultSet()
+        kwargs = filter_kwargs.copy()
+        kwargs.update({
+            'limit': limit,
+            'index': index,
+            'reverse': reverse,
+            'consistent': consistent,
+        })
+        results.to_call(self._query, **kwargs)
+        return results
+
+    def _query(self, limit=None, index=None, reverse=False, consistent=False,
+               exclusive_start_key=None, **filter_kwargs):
+        """
+        The internal method that performs the actual queries. Used extensively
+        by ``ResultSet`` to perform each (paginated) request.
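+
+        Returns a dictionary shaped like ``{'results': [...],
+        'last_key': ...}``, which is the contract ``ResultSet.fetch_more``
+        relies on.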
+        """
+        kwargs = {
+            'limit': limit,
+            'index_name': index,
+            'scan_index_forward': reverse,
+            'consistent_read': consistent,
+        }
+
+        if exclusive_start_key:
+            kwargs['exclusive_start_key'] = {}
+
+            for key, value in exclusive_start_key.items():
+                kwargs['exclusive_start_key'][key] = \
+                    self._dynamizer.encode(value)
+
+        # Convert the filters into something we can actually use.
+        kwargs['key_conditions'] = self._build_filters(
+            filter_kwargs,
+            using=QUERY_OPERATORS
+        )
+
+        raw_results = self.connection.query(
+            self.table_name,
+            **kwargs
+        )
+        results = []
+        last_key = None
+
+        for raw_item in raw_results.get('Items', []):
+            item = Item(self)
+            item.load({
+                'Item': raw_item,
+            })
+            results.append(item)
+
+        if raw_results.get('LastEvaluatedKey', None):
+            last_key = {}
+
+            for key, value in raw_results['LastEvaluatedKey'].items():
+                last_key[key] = self._dynamizer.decode(value)
+
+        return {
+            'results': results,
+            'last_key': last_key,
+        }
+
+    def scan(self, limit=None, segment=None, total_segments=None,
+             **filter_kwargs):
+        """
+        Scans across all items within a DynamoDB table.
+
+        Scans read every item in the table. You can additionally filter the
+        results after the table has been read but before the response is
+        returned.
+
+        To specify the filters of the items you'd like to get, you can specify
+        the filters as kwargs. Each filter kwarg should follow the pattern
+        ``<fieldname>__<filter_operation>=<value_to_look_for>``.
+
+        Optionally accepts a ``limit`` parameter, which should be an integer
+        count of the total number of items to return. (Default: ``None`` -
+        all results)
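+
+        Optionally accepts ``segment`` & ``total_segments`` parameters, which
+        are used for parallel scans: ``total_segments`` is the total number
+        of workers splitting the scan & ``segment`` is this worker's
+        zero-based segment number. (Default: ``None`` for both - no parallel
+        scan)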
+
+        Returns a ``ResultSet``, which transparently handles the pagination of
+        results you get back.
+
+        Example::
+
+            # All results.
+            >>> everything = users.scan()
+
+            # Look for last names beginning with "D".
+            >>> results = users.scan(last_name__beginswith='D')
+            >>> for res in results:
+            ...     print res['first_name']
+            'Alice'
+            'John'
+            'Jane'
+
+            # Use an ``IN`` filter & limit.
+            >>> results = users.scan(
+            ...     age__in=[25, 26, 27, 28, 29],
+            ...     limit=1
+            ... )
+            >>> for res in results:
+            ...     print res['first_name']
+            'Alice'
+
+        """
+        results = ResultSet()
+        kwargs = filter_kwargs.copy()
+        kwargs.update({
+            'limit': limit,
+            'segment': segment,
+            'total_segments': total_segments,
+        })
+        results.to_call(self._scan, **kwargs)
+        return results
+
+    def _scan(self, limit=None, exclusive_start_key=None, segment=None,
+              total_segments=None, **filter_kwargs):
+        """
+        The internal method that performs the actual scan. Used extensively
+        by ``ResultSet`` to perform each (paginated) request.
+        """
+        kwargs = {
+            'limit': limit,
+            'segment': segment,
+            'total_segments': total_segments,
+        }
+
+        if exclusive_start_key:
+            kwargs['exclusive_start_key'] = {}
+
+            for key, value in exclusive_start_key.items():
+                kwargs['exclusive_start_key'][key] = \
+                    self._dynamizer.encode(value)
+
+        # Convert the filters into something we can actually use.
+        kwargs['scan_filter'] = self._build_filters(
+            filter_kwargs,
+            using=FILTER_OPERATORS
+        )
+
+        raw_results = self.connection.scan(
+            self.table_name,
+            **kwargs
+        )
+        results = []
+        last_key = None
+
+        for raw_item in raw_results.get('Items', []):
+            item = Item(self)
+            item.load({
+                'Item': raw_item,
+            })
+            results.append(item)
+
+        if raw_results.get('LastEvaluatedKey', None):
+            last_key = {}
+
+            for key, value in raw_results['LastEvaluatedKey'].items():
+                last_key[key] = self._dynamizer.decode(value)
+
+        return {
+            'results': results,
+            'last_key': last_key,
+        }
+
+    def batch_get(self, keys, consistent=False):
+        """
+        Fetches many specific items in batch from a table.
+
+        Requires a ``keys`` parameter, which should be a list of dictionaries.
+        Each dictionary should consist of the key attributes & values that
+        identify the item to fetch.
+
+        Optionally accepts a ``consistent`` parameter, which should be a
+        boolean. If you provide ``True``, a strongly consistent read will be
+        used. (Default: ``False``)
+
+        Returns a ``ResultSet``, which transparently handles the pagination of
+        results you get back.
+
+        Example::
+
+            >>> results = users.batch_get(keys=[
+            ...     {
+            ...         'username': 'johndoe',
+            ...     },
+            ...     {
+            ...         'username': 'jane',
+            ...     },
+            ...     {
+            ...         'username': 'fred',
+            ...     },
+            ... ])
+            >>> for res in results:
+            ...     print res['first_name']
+            'John'
+            'Jane'
+            'Fred'
+
+        """
+        # We pass the keys to the constructor instead, so it can maintain its
+        # own internal state as to what keys have been processed.
+        results = BatchGetResultSet(keys=keys, max_batch_get=self.max_batch_get)
+        results.to_call(self._batch_get, consistent=consistent)
+        return results
+
+    def _batch_get(self, keys, consistent=False):
+        """
+        The internal method that performs the actual batch get. Used extensively
+        by ``BatchGetResultSet`` to perform each (paginated) request.
+        """
+        items = {
+            self.table_name: {
+                'Keys': [],
+            },
+        }
+
+        if consistent:
+            items[self.table_name]['ConsistentRead'] = True
+
+        for key_data in keys:
+            raw_key = {}
+
+            for key, value in key_data.items():
+                raw_key[key] = self._dynamizer.encode(value)
+
+            items[self.table_name]['Keys'].append(raw_key)
+
+        raw_results = self.connection.batch_get_item(request_items=items)
+        results = []
+        unprocessed_keys = []
+
+        for raw_item in raw_results['Responses'].get(self.table_name, []):
+            item = Item(self)
+            item.load({
+                'Item': raw_item,
+            })
+            results.append(item)
+
+        raw_unprocessed = raw_results.get('UnprocessedKeys', {})
+
+        for raw_key in raw_unprocessed.get('Keys', []):
+            py_key = {}
+
+            for key, value in raw_key.items():
+                py_key[key] = self._dynamizer.decode(value)
+
+            unprocessed_keys.append(py_key)
+
+        return {
+            'results': results,
+            # NEVER return a ``last_key``. Just in case any part of
+            # ``ResultSet`` peeks through, since much of the
+            # original underlying implementation is based on this key.
+            'last_key': None,
+            'unprocessed_keys': unprocessed_keys,
+        }
+
+    def count(self):
+        """
+        Returns a (very) eventually consistent count of the number of items
+        in a table.
+
+        Lag time is about 6 hours, so don't expect a high degree of accuracy.
+
+        Example::
+
+            >>> users.count()
+            6
+
+        """
+        info = self.describe()
+        return info['Table'].get('ItemCount', 0)
+
+
+class BatchTable(object):
+    """
+    Used by ``Table`` as the context manager for batch writes.
+
+    You likely don't want to try to use this object directly.
+    """
+    def __init__(self, table):
+        self.table = table
+        self._to_put = []
+        self._to_delete = []
+
+    def __enter__(self):
+        return self
+
+    def __exit__(self, type, value, traceback):
+        if self._to_put or self._to_delete:
+            # Flush anything that's left.
+            self.flush()
+
+        # Return ``False`` so that any exception raised within the ``with``
+        # block propagates rather than being silently swallowed.
+        return False
+
+    def put_item(self, data, overwrite=False):
+        self._to_put.append(data)
+
+        if self.should_flush():
+            self.flush()
+
+    def delete_item(self, **kwargs):
+        self._to_delete.append(kwargs)
+
+        if self.should_flush():
+            self.flush()
+
+    def should_flush(self):
+        if len(self._to_put) + len(self._to_delete) == 25:
+            return True
+
+        return False
+
+    def flush(self):
+        batch_data = {
+            self.table.table_name: [
+                # We'll insert data here shortly.
+            ],
+        }
+
+        for put in self._to_put:
+            item = Item(self.table, data=put)
+            batch_data[self.table.table_name].append({
+                'PutRequest': {
+                    'Item': item.prepare_full(),
+                }
+            })
+
+        for delete in self._to_delete:
+            batch_data[self.table.table_name].append({
+                'DeleteRequest': {
+                    'Key': self.table._encode_keys(delete),
+                }
+            })
+
+        self.table.connection.batch_write_item(batch_data)
+        self._to_put = []
+        self._to_delete = []
+        return True
diff --git a/boto/dynamodb2/types.py b/boto/dynamodb2/types.py
new file mode 100644
index 0000000..fc67aa0
--- /dev/null
+++ b/boto/dynamodb2/types.py
@@ -0,0 +1,40 @@
+# Shadow the DynamoDB v1 bits.
+# This way, no end user should have to cross-import between versions & we
+# reserve the namespace to extend v2 if it's ever needed.
+from boto.dynamodb.types import Dynamizer
+
+
+# Some constants for our use.
+STRING = 'S'
+NUMBER = 'N'
+BINARY = 'B'
+STRING_SET = 'SS'
+NUMBER_SET = 'NS'
+BINARY_SET = 'BS'
+
+QUERY_OPERATORS = {
+    'eq': 'EQ',
+    'lte': 'LE',
+    'lt': 'LT',
+    'gte': 'GE',
+    'gt': 'GT',
+    'beginswith': 'BEGINS_WITH',
+    'between': 'BETWEEN',
+}
+
+FILTER_OPERATORS = {
+    'eq': 'EQ',
+    'ne': 'NE',
+    'lte': 'LE',
+    'lt': 'LT',
+    'gte': 'GE',
+    'gt': 'GT',
+    # FIXME: Is this necessary? i.e. ``whatever__null=False``
+    'nnull': 'NOT_NULL',
+    'null': 'NULL',
+    'contains': 'CONTAINS',
+    'ncontains': 'NOT_CONTAINS',
+    'beginswith': 'BEGINS_WITH',
+    'in': 'IN',
+    'between': 'BETWEEN',
+}
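+
+# A sketch of how these tables are consumed (see ``Table._build_filters``):
+# a filter kwarg such as ``last_name__beginswith='D'`` is split on ``__`` &
+# the ``beginswith`` suffix is looked up here, yielding the wire-level
+# ``BEGINS_WITH`` comparison operator.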
diff --git a/boto/ec2/__init__.py b/boto/ec2/__init__.py
index 963b6d9..cc1e582 100644
--- a/boto/ec2/__init__.py
+++ b/boto/ec2/__init__.py
@@ -24,6 +24,19 @@
 service from AWS.
 """
 from boto.ec2.connection import EC2Connection
+from boto.regioninfo import RegionInfo
+
+
+RegionData = {
+    'us-east-1': 'ec2.us-east-1.amazonaws.com',
+    'us-west-1': 'ec2.us-west-1.amazonaws.com',
+    'us-west-2': 'ec2.us-west-2.amazonaws.com',
+    'sa-east-1': 'ec2.sa-east-1.amazonaws.com',
+    'eu-west-1': 'ec2.eu-west-1.amazonaws.com',
+    'ap-northeast-1': 'ec2.ap-northeast-1.amazonaws.com',
+    'ap-southeast-1': 'ec2.ap-southeast-1.amazonaws.com',
+    'ap-southeast-2': 'ec2.ap-southeast-2.amazonaws.com',
+}
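+
+# NOTE: a static endpoint map lets ``regions()`` below be answered locally,
+# avoiding the network round-trip the old ``get_all_regions()`` call made.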
 
 
 def regions(**kw_params):
@@ -36,8 +49,13 @@
     :rtype: list
     :return: A list of :class:`boto.ec2.regioninfo.RegionInfo`
     """
-    c = EC2Connection(**kw_params)
-    return c.get_all_regions()
+    regions = []
+    for region_name in RegionData:
+        region = RegionInfo(name=region_name,
+                            endpoint=RegionData[region_name],
+                            connection_cls=EC2Connection)
+        regions.append(region)
+    return regions
 
 
 def connect_to_region(region_name, **kw_params):
diff --git a/boto/ec2/attributes.py b/boto/ec2/attributes.py
new file mode 100644
index 0000000..d76e5c5
--- /dev/null
+++ b/boto/ec2/attributes.py
@@ -0,0 +1,71 @@
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+
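+# These classes follow boto's SAX-style XML parsing protocol:
+# ``startElement`` may return a child object to receive nested events &
+# ``endElement`` records character data once a tag closes.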
+class AccountAttribute(object):
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.attribute_name = None
+        self.attribute_values = None
+
+    def startElement(self, name, attrs, connection):
+        if name == 'attributeValueSet':
+            self.attribute_values = AttributeValues()
+            return self.attribute_values
+
+    def endElement(self, name, value, connection):
+        if name == 'attributeName':
+            self.attribute_name = value
+
+
+class AttributeValues(list):
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'attributeValue':
+            self.append(value)
+
+
+class VPCAttribute(object):
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.vpc_id = None
+        self.enable_dns_hostnames = None
+        self.enable_dns_support = None
+        self._current_attr = None
+
+    def startElement(self, name, attrs, connection):
+        if name in ('enableDnsHostnames', 'enableDnsSupport'):
+            self._current_attr = name
+
+    def endElement(self, name, value, connection):
+        if name == 'vpcId':
+            self.vpc_id = value
+        elif name == 'value':
+            if value == 'true':
+                value = True
+            else:
+                value = False
+            if self._current_attr == 'enableDnsHostnames':
+                self.enable_dns_hostnames = value
+            elif self._current_attr == 'enableDnsSupport':
+                self.enable_dns_support = value
diff --git a/boto/ec2/autoscale/__init__.py b/boto/ec2/autoscale/__init__.py
index 80c3c85..17a89e1 100644
--- a/boto/ec2/autoscale/__init__.py
+++ b/boto/ec2/autoscale/__init__.py
@@ -40,6 +40,7 @@
 from boto.ec2.autoscale.policy import AdjustmentType
 from boto.ec2.autoscale.policy import MetricCollectionTypes
 from boto.ec2.autoscale.policy import ScalingPolicy
+from boto.ec2.autoscale.policy import TerminationPolicies
 from boto.ec2.autoscale.instance import Instance
 from boto.ec2.autoscale.scheduled import ScheduledUpdateGroupAction
 from boto.ec2.autoscale.tag import Tag
@@ -51,7 +52,9 @@
     'sa-east-1': 'autoscaling.sa-east-1.amazonaws.com',
     'eu-west-1': 'autoscaling.eu-west-1.amazonaws.com',
     'ap-northeast-1': 'autoscaling.ap-northeast-1.amazonaws.com',
-    'ap-southeast-1': 'autoscaling.ap-southeast-1.amazonaws.com'}
+    'ap-southeast-1': 'autoscaling.ap-southeast-1.amazonaws.com',
+    'ap-southeast-2': 'autoscaling.ap-southeast-2.amazonaws.com',
+}
 
 
 def regions():
@@ -171,6 +174,9 @@
             params['DefaultCooldown'] = as_group.default_cooldown
         if as_group.placement_group:
             params['PlacementGroup'] = as_group.placement_group
+        if as_group.termination_policies:
+            self.build_list_params(params, as_group.termination_policies,
+                                   'TerminationPolicies')
         if op.startswith('Create'):
             # you can only associate load balancers with an autoscale
             # group at creation time
@@ -231,6 +237,10 @@
             params['SpotPrice'] = str(launch_config.spot_price)
         if launch_config.instance_profile_name is not None:
             params['IamInstanceProfile'] = launch_config.instance_profile_name
+        if launch_config.ebs_optimized:
+            params['EbsOptimized'] = 'true'
+        else:
+            params['EbsOptimized'] = 'false'
         return self.get_object('CreateLaunchConfiguration', params,
                                Request, verb='POST')
 
@@ -361,6 +371,15 @@
         return self.get_list('DescribeScalingActivities',
                              params, [('member', Activity)])
 
+    def get_termination_policies(self):
+        """Gets all valid termination policies.
+
+        These values can then be used as the termination_policies arg
+        when creating and updating autoscale groups.
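+
+        A sketch of expected usage (the exact values come back from the
+        service)::
+
+            >>> conn.get_termination_policies()
+            ['ClosestToNextInstanceHour', 'Default', 'NewestInstance',
+             'OldestInstance', 'OldestLaunchConfiguration']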
+        """
+        return self.get_object('DescribeTerminationPolicyTypes',
+                               {}, TerminationPolicies)
+
     def delete_scheduled_action(self, scheduled_action_name,
                                 autoscale_group=None):
         """
@@ -533,9 +552,11 @@
                                    'ScalingProcesses')
         return self.get_status('ResumeProcesses', params)
 
-    def create_scheduled_group_action(self, as_group, name, time,
+    def create_scheduled_group_action(self, as_group, name, time=None,
                                       desired_capacity=None,
-                                      min_size=None, max_size=None):
+                                      min_size=None, max_size=None,
+                                      start_time=None, end_time=None,
+                                      recurrence=None):
         """
         Creates a scheduled scaling action for a Auto Scaling group. If you
         leave a parameter unspecified, the corresponding value remains
@@ -548,7 +569,7 @@
         :param name: Scheduled action name.
 
         :type time: datetime.datetime
-        :param time: The time for this action to start.
+        :param time: The time for this action to start. (Deprecated)
 
         :type desired_capacity: int
         :param desired_capacity: The number of EC2 instances that should
@@ -559,10 +580,26 @@
 
         :type max_size: int
         :param max_size: The minimum size for the new auto scaling group.
+
+        :type start_time: datetime.datetime
+        :param start_time: The time for this action to start. When
+            ``StartTime`` and ``EndTime`` are specified with ``Recurrence``,
+            they form the boundaries of when the recurring action will start
+            and stop.
+
+        :type end_time: datetime.datetime
+        :param end_time: The time for this action to end. When ``StartTime``
+            and ``EndTime`` are specified with ``Recurrence``, they form the
+            boundaries of when the recurring action will start and stop.
+
+        :type recurrence: string
+        :param recurrence: The time when recurring future actions will start,
+            specified by the user following the Unix cron syntax format.
+            EXAMPLE: '0 10 * * *'
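+
+        A hedged usage sketch (assumes an existing connection & group)::
+
+            >>> conn.create_scheduled_group_action('my-group', 'scale-up',
+            ...     desired_capacity=4, recurrence='0 10 * * *')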
         """
         params = {'AutoScalingGroupName': as_group,
-                  'ScheduledActionName': name,
-                  'Time': time.isoformat()}
+                  'ScheduledActionName': name}
+        if start_time is not None:
+            params['StartTime'] = start_time.isoformat()
+        if end_time is not None:
+            params['EndTime'] = end_time.isoformat()
+        if recurrence is not None:
+            params['Recurrence'] = recurrence
+        if time:
+            params['Time'] = time.isoformat()
         if desired_capacity is not None:
             params['DesiredCapacity'] = desired_capacity
         if min_size is not None:
@@ -688,6 +725,29 @@
             params['ShouldRespectGracePeriod'] = 'false'
         return self.get_status('SetInstanceHealth', params)
 
+    def set_desired_capacity(self, group_name, desired_capacity,
+                             honor_cooldown=False):
+        """
+        Adjusts the desired size of the AutoScalingGroup by initiating
+        scaling activities. When reducing the size of the group, it is not
+        possible to define which Amazon EC2 instances will be terminated.
+        This applies to any Auto Scaling decisions that might result in
+        terminating instances.
+
+        :type group_name: string
+        :param group_name: name of the auto scaling group
+
+        :type desired_capacity: integer
+        :param desired_capacity: new capacity setting for auto scaling group
+
+        :type honor_cooldown: boolean
+        :param honor_cooldown: If ``True``, respect any cooldown period
+            associated with the group; by default the cooldown period is
+            overridden.
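+
+        A minimal sketch (assumes an existing connection & group)::
+
+            >>> conn.set_desired_capacity('my-group', 4)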
+        """
+        params = {'AutoScalingGroupName': group_name,
+                  'DesiredCapacity': desired_capacity}
+        if honor_cooldown:
+            params['HonorCooldown'] = 'true'
+
+        return self.get_status('SetDesiredCapacity', params)
+
     # Tag methods
 
     def get_all_tags(self, filters=None, max_records=None, next_token=None):
diff --git a/boto/ec2/autoscale/group.py b/boto/ec2/autoscale/group.py
index eb72f6f..e9fadce 100644
--- a/boto/ec2/autoscale/group.py
+++ b/boto/ec2/autoscale/group.py
@@ -98,7 +98,7 @@
                  health_check_type=None, health_check_period=None,
                  placement_group=None, vpc_zone_identifier=None,
                  desired_capacity=None, min_size=None, max_size=None,
-                 tags=None, **kwargs):
+                 tags=None, termination_policies=None, **kwargs):
         """
         Creates a new AutoScalingGroup with the specified name.
 
@@ -136,10 +136,10 @@
         :param load_balancers: List of load balancers.
 
         :type max_size: int
-        :param maxsize: Maximum size of group (required).
+        :param max_size: Maximum size of group (required).
 
         :type min_size: int
-        :param minsize: Minimum size of group (required).
+        :param min_size: Minimum size of group (required).
 
         :type placement_group: str
         :param placement_group: Physical location of your cluster placement
@@ -149,6 +149,12 @@
         :param vpc_zone_identifier: The subnet identifier of the Virtual
             Private Cloud.
 
+        :type termination_policies: list
+        :param termination_policies: A list of termination policies. Valid values
+            are: "OldestInstance", "NewestInstance", "OldestLaunchConfiguration",
+            "ClosestToNextInstanceHour", "Default".  If no value is specified,
+            the "Default" value is used.
+
         :rtype: :class:`boto.ec2.autoscale.group.AutoScalingGroup`
         :return: An autoscale group.
         """
@@ -177,7 +183,8 @@
         self.vpc_zone_identifier = vpc_zone_identifier
         self.instances = None
         self.tags = tags or None
-        self.termination_policies = TerminationPolicies()
+        termination_policies = termination_policies or []
+        self.termination_policies = ListElement(termination_policies)
 
     # backwards compatible access to 'cooldown' param
     def _get_cooldown(self):
diff --git a/boto/ec2/autoscale/launchconfig.py b/boto/ec2/autoscale/launchconfig.py
index e6e38fd..f558041 100644
--- a/boto/ec2/autoscale/launchconfig.py
+++ b/boto/ec2/autoscale/launchconfig.py
@@ -94,7 +94,7 @@
                  instance_type='m1.small', kernel_id=None,
                  ramdisk_id=None, block_device_mappings=None,
                  instance_monitoring=False, spot_price=None,
-                 instance_profile_name=None):
+                 instance_profile_name=None, ebs_optimized=False):
         """
         A launch configuration.
 
@@ -140,6 +140,10 @@
         :param instance_profile_name: The name or the Amazon Resource
             Name (ARN) of the instance profile associated with the IAM
             role for the instance.
+
+        :type ebs_optimized: bool
+        :param ebs_optimized: Specifies whether the instance is optimized
+            for EBS I/O (true) or not (false).
         """
         self.connection = connection
         self.name = name
@@ -158,6 +162,7 @@
         self.spot_price = spot_price
         self.instance_profile_name = instance_profile_name
         self.launch_configuration_arn = None
+        self.ebs_optimized = ebs_optimized
 
     def __repr__(self):
         return 'LaunchConfiguration:%s' % self.name
@@ -201,6 +206,8 @@
             self.spot_price = float(value)
         elif name == 'IamInstanceProfile':
             self.instance_profile_name = value
+        elif name == 'EbsOptimized':
+            self.ebs_optimized = value.lower() == 'true'
         else:
             setattr(self, name, value)
 
diff --git a/boto/ec2/autoscale/policy.py b/boto/ec2/autoscale/policy.py
index d9d1ac6..adcdbdc 100644
--- a/boto/ec2/autoscale/policy.py
+++ b/boto/ec2/autoscale/policy.py
@@ -153,3 +153,14 @@
     def delete(self):
         return self.connection.delete_policy(self.name, self.as_name)
 
+
+class TerminationPolicies(list):
+    def __init__(self, connection=None, **kwargs):
+        pass
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'member':
+            self.append(value)
diff --git a/boto/ec2/autoscale/scheduled.py b/boto/ec2/autoscale/scheduled.py
index d8f051c..8e307c2 100644
--- a/boto/ec2/autoscale/scheduled.py
+++ b/boto/ec2/autoscale/scheduled.py
@@ -28,7 +28,11 @@
         self.connection = connection
         self.name = None
         self.action_arn = None
+        self.as_group = None
         self.time = None
+        self.start_time = None
+        self.end_time = None
+        self.recurrence = None
         self.desired_capacity = None
         self.max_size = None
         self.min_size = None
@@ -44,17 +48,31 @@
             self.desired_capacity = value
         elif name == 'ScheduledActionName':
             self.name = value
+        elif name == 'AutoScalingGroupName':
+            self.as_group = value
         elif name == 'MaxSize':
             self.max_size = int(value)
         elif name == 'MinSize':
             self.min_size = int(value)
         elif name == 'ScheduledActionARN':
             self.action_arn = value
+        elif name == 'Recurrence':
+            self.recurrence = value
         elif name == 'Time':
             try:
                 self.time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
             except ValueError:
                 self.time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+        elif name == 'StartTime':
+            try:
+                self.start_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
+            except ValueError:
+                self.start_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+        elif name == 'EndTime':
+            try:
+                self.end_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
+            except ValueError:
+                self.end_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
         else:
             setattr(self, name, value)
 
diff --git a/boto/ec2/blockdevicemapping.py b/boto/ec2/blockdevicemapping.py
index ca0e937..df774ae 100644
--- a/boto/ec2/blockdevicemapping.py
+++ b/boto/ec2/blockdevicemapping.py
@@ -125,17 +125,18 @@
                 params['%s.VirtualName' % pre] = block_dev.ephemeral_name
             else:
                 if block_dev.no_device:
-                    params['%s.Ebs.NoDevice' % pre] = 'true'
-                if block_dev.snapshot_id:
-                    params['%s.Ebs.SnapshotId' % pre] = block_dev.snapshot_id
-                if block_dev.size:
-                    params['%s.Ebs.VolumeSize' % pre] = block_dev.size
-                if block_dev.delete_on_termination:
-                    params['%s.Ebs.DeleteOnTermination' % pre] = 'true'
+                    params['%s.NoDevice' % pre] = ''
                 else:
-                    params['%s.Ebs.DeleteOnTermination' % pre] = 'false'
-                if block_dev.volume_type:
-                    params['%s.Ebs.VolumeType' % pre] = block_dev.volume_type
-                if block_dev.iops is not None:
-                    params['%s.Ebs.Iops' % pre] = block_dev.iops
+                    if block_dev.snapshot_id:
+                        params['%s.Ebs.SnapshotId' % pre] = block_dev.snapshot_id
+                    if block_dev.size:
+                        params['%s.Ebs.VolumeSize' % pre] = block_dev.size
+                    if block_dev.delete_on_termination:
+                        params['%s.Ebs.DeleteOnTermination' % pre] = 'true'
+                    else:
+                        params['%s.Ebs.DeleteOnTermination' % pre] = 'false'
+                    if block_dev.volume_type:
+                        params['%s.Ebs.VolumeType' % pre] = block_dev.volume_type
+                    if block_dev.iops is not None:
+                        params['%s.Ebs.Iops' % pre] = block_dev.iops
             i += 1
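
The restructured branch makes NoDevice mutually exclusive with the Ebs.*
keys: a mapping flagged no_device now contributes only an empty NoDevice
parameter, and any snapshot/size/termination settings on it are ignored
rather than sent alongside, as before. A hedged illustration of the
resulting query parameters (device names are examples)::

    from boto.ec2.blockdevicemapping import (BlockDeviceMapping,
                                             BlockDeviceType)

    bdm = BlockDeviceMapping()
    bdm['/dev/sdb'] = BlockDeviceType(no_device=True)  # suppress mapping
    bdm['/dev/sdc'] = BlockDeviceType(size=100,
                                      delete_on_termination=True)

    # When a request such as run_instances serializes bdm, the entries
    # now come out roughly as:
    #   BlockDeviceMapping.1.DeviceName = /dev/sdb
    #   BlockDeviceMapping.1.NoDevice = ''            (no Ebs.* keys)
    #   BlockDeviceMapping.2.DeviceName = /dev/sdc
    #   BlockDeviceMapping.2.Ebs.VolumeSize = 100
    #   BlockDeviceMapping.2.Ebs.DeleteOnTermination = true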
diff --git a/boto/ec2/cloudwatch/__init__.py b/boto/ec2/cloudwatch/__init__.py
index 5b8db5b..dd7b681 100644
--- a/boto/ec2/cloudwatch/__init__.py
+++ b/boto/ec2/cloudwatch/__init__.py
@@ -23,11 +23,7 @@
 This module provides an interface to the Elastic Compute Cloud (EC2)
 CloudWatch service from AWS.
 """
-try:
-    import simplejson as json
-except ImportError:
-    import json
-
+from boto.compat import json
 from boto.connection import AWSQueryConnection
 from boto.ec2.cloudwatch.metric import Metric
 from boto.ec2.cloudwatch.alarm import MetricAlarm, MetricAlarms, AlarmHistoryItem
@@ -42,7 +38,9 @@
     'sa-east-1': 'monitoring.sa-east-1.amazonaws.com',
     'eu-west-1': 'monitoring.eu-west-1.amazonaws.com',
     'ap-northeast-1': 'monitoring.ap-northeast-1.amazonaws.com',
-    'ap-southeast-1': 'monitoring.ap-southeast-1.amazonaws.com'}
+    'ap-southeast-1': 'monitoring.ap-southeast-1.amazonaws.com',
+    'ap-southeast-2': 'monitoring.ap-southeast-2.amazonaws.com',
+}
 
 
 def regions():
@@ -179,16 +177,16 @@
                 metric_data['StatisticValues.SampleCount'] = s['samplecount']
                 metric_data['StatisticValues.Sum'] = s['sum']
                 if value != None:
-                    msg = 'You supplied a value and statistics for a metric.'
-                    msg += 'Posting statistics and not value.'
+                    msg = 'You supplied a value and statistics for a ' + \
+                          'metric. Posting statistics and not value.'
                     boto.log.warn(msg)
             elif value != None:
                 metric_data['Value'] = v
             else:
                 raise Exception('Must specify a value or statistics to put.')
 
-            for key, value in metric_data.iteritems():
-                params['MetricData.member.%d.%s' % (index + 1, key)] = value
+            for key, val in metric_data.iteritems():
+                params['MetricData.member.%d.%s' % (index + 1, key)] = val
 
     def get_metric_statistics(self, period, start_time, end_time, metric_name,
                               namespace, statistics, dimensions=None,
@@ -390,8 +388,12 @@
             params['NextToken'] = next_token
         if state_value:
             params['StateValue'] = state_value
-        return self.get_list('DescribeAlarms', params,
-                             [('MetricAlarms', MetricAlarms)])[0]
+
+        result = self.get_list('DescribeAlarms', params,
+                               [('MetricAlarms', MetricAlarms)])
+        ret = result[0]
+        ret.next_token = result.next_token
+        return ret
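
With the token copied onto the returned list, describe_alarms results can
now be paged. A usage sketch (region choice is arbitrary)::

    import boto.ec2.cloudwatch

    conn = boto.ec2.cloudwatch.connect_to_region('us-east-1')
    alarms = conn.describe_alarms(max_records=100)
    while True:
        for alarm in alarms:
            print alarm.name
        if not alarms.next_token:
            break
        alarms = conn.describe_alarms(max_records=100,
                                      next_token=alarms.next_token)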
 
     def describe_alarm_history(self, alarm_name=None,
                                start_date=None, end_date=None,
diff --git a/boto/ec2/cloudwatch/alarm.py b/boto/ec2/cloudwatch/alarm.py
index b0b9fd0..e0f7242 100644
--- a/boto/ec2/cloudwatch/alarm.py
+++ b/boto/ec2/cloudwatch/alarm.py
@@ -24,11 +24,7 @@
 from boto.resultset import ResultSet
 from boto.ec2.cloudwatch.listelement import ListElement
 from boto.ec2.cloudwatch.dimension import Dimension
-
-try:
-    import simplejson as json
-except ImportError:
-    import json
+from boto.compat import json
 
 
 class MetricAlarms(list):
@@ -312,5 +308,9 @@
         elif name == 'HistorySummary':
             self.summary = value
         elif name == 'Timestamp':
-            self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
+            try:
+                self.timestamp = datetime.strptime(value,
+                                                   '%Y-%m-%dT%H:%M:%S.%fZ')
+            except ValueError:
+                self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
 
diff --git a/boto/ec2/connection.py b/boto/ec2/connection.py
index 029c796..7752d23 100644
--- a/boto/ec2/connection.py
+++ b/boto/ec2/connection.py
@@ -1,6 +1,6 @@
 # Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
-# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+# Copyright (c) 2013 Amazon.com, Inc. or its affiliates.  All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -33,7 +33,7 @@
 import boto
 from boto.connection import AWSQueryConnection
 from boto.resultset import ResultSet
-from boto.ec2.image import Image, ImageAttribute
+from boto.ec2.image import Image, ImageAttribute, CopyImage
 from boto.ec2.instance import Reservation, Instance
 from boto.ec2.instance import ConsoleOutput, InstanceAttribute
 from boto.ec2.keypair import KeyPair
@@ -54,9 +54,11 @@
 from boto.ec2.bundleinstance import BundleInstanceTask
 from boto.ec2.placementgroup import PlacementGroup
 from boto.ec2.tag import Tag
+from boto.ec2.vmtype import VmType
 from boto.ec2.instancestatus import InstanceStatusSet
 from boto.ec2.volumestatus import VolumeStatusSet
 from boto.ec2.networkinterface import NetworkInterface
+from boto.ec2.attributes import AccountAttribute, VPCAttribute
 from boto.exception import EC2ResponseError
 
 #boto.set_stream_logger('ec2')
@@ -64,7 +66,7 @@
 
 class EC2Connection(AWSQueryConnection):
 
-    APIVersion = boto.config.get('Boto', 'ec2_version', '2012-08-15')
+    APIVersion = boto.config.get('Boto', 'ec2_version', '2013-02-01')
     DefaultRegionName = boto.config.get('Boto', 'ec2_region_name', 'us-east-1')
     DefaultRegionEndpoint = boto.config.get('Boto', 'ec2_region_endpoint',
                                             'ec2.us-east-1.amazonaws.com')
@@ -146,14 +148,12 @@
                               user ID has explicit launch permissions
 
         :type filters: dict
-        :param filters: Optional filters that can be used to limit
-                        the results returned.  Filters are provided
-                        in the form of a dictionary consisting of
-                        filter names as the key and filter values
-                        as the value.  The set of allowable filter
-                        names/values is dependent on the request
-                        being performed.  Check the EC2 API guide
-                        for details.
+        :param filters: Optional filters that can be used to limit the
+            results returned.  Filters are provided in the form of a
+            dictionary consisting of filter names as the key and
+            filter values as the value.  The set of allowable filter
+            names/values is dependent on the request being performed.
+            Check the EC2 API guide for details.
 
         :rtype: list
         :return: A list of :class:`boto.ec2.image.Image`
@@ -298,8 +298,7 @@
 
         :type delete_snapshot: bool
         :param delete_snapshot: Set to True if we should delete the
-                                snapshot associated with an EBS volume
-                                mounted at /dev/sda1
+            snapshot associated with an EBS volume mounted at /dev/sda1
 
         :rtype: bool
         :return: True if successful
@@ -332,14 +331,14 @@
 
         :type description: string
         :param description: An optional human-readable string describing
-                            the contents and purpose of the AMI.
+            the contents and purpose of the AMI.
 
         :type no_reboot: bool
-        :param no_reboot: An optional flag indicating that the bundling process
-                          should not attempt to shutdown the instance before
-                          bundling.  If this flag is True, the responsibility
-                          of maintaining file system integrity is left to the
-                          owner of the instance.
+        :param no_reboot: An optional flag indicating that the
+            bundling process should not attempt to shutdown the
+            instance before bundling.  If this flag is True, the
+            responsibility of maintaining file system integrity is
+            left to the owner of the instance.
 
         :rtype: string
         :return: The new image id
@@ -364,10 +363,10 @@
 
         :type attribute: string
         :param attribute: The attribute you need information about.
-                          Valid choices are:
-                          * launchPermission
-                          * productCodes
-                          * blockDeviceMapping
+            Valid choices are:
+            * launchPermission
+            * productCodes
+            * blockDeviceMapping
 
         :rtype: :class:`boto.ec2.image.ImageAttribute`
         :return: An ImageAttribute object representing the value of the
@@ -392,7 +391,7 @@
 
         :type operation: string
         :param operation: Either add or remove (this is required for changing
-                          launchPermissions)
+            launchPermissions)
 
         :type user_ids: list
         :param user_ids: The Amazon IDs of users to add/remove attributes
@@ -402,8 +401,8 @@
 
         :type product_codes: list
         :param product_codes: Amazon DevPay product code. Currently only one
-                              product code can be associated with an AMI. Once
-                              set, the product code cannot be changed or reset.
+            product code can be associated with an AMI. Once
+            set, the product code cannot be changed or reset.
         """
         params = {'ImageId': image_id,
                   'Attribute': attribute,
@@ -525,7 +524,7 @@
                       security_group_ids=None,
                       additional_info=None, instance_profile_name=None,
                       instance_profile_arn=None, tenancy=None,
-                      ebs_optimized=False):
+                      ebs_optimized=False, network_interfaces=None):
         """
         Runs an image on EC2.
 
@@ -544,10 +543,11 @@
 
         :type security_groups: list of strings
         :param security_groups: The names of the security groups with which to
-            associate instances
+            associate instances.
 
         :type user_data: string
-        :param user_data: The user data passed to the launched instances
+        :param user_data: The Base64-encoded MIME user data to be made
+            available to the instance(s) in this reservation.
 
         :type instance_type: string
         :param instance_type: The type of instance to run:
@@ -557,18 +557,22 @@
             * m1.medium
             * m1.large
             * m1.xlarge
+            * m3.xlarge
+            * m3.2xlarge
             * c1.medium
             * c1.xlarge
             * m2.xlarge
             * m2.2xlarge
             * m2.4xlarge
+            * cr1.8xlarge
+            * hi1.4xlarge
+            * hs1.8xlarge
             * cc1.4xlarge
             * cg1.4xlarge
             * cc2.8xlarge
 
         :type placement: string
-        :param placement: The availability zone in which to launch
-            the instances.
+        :param placement: The Availability Zone to launch the instance into.
 
         :type kernel_id: string
         :param kernel_id: The ID of the kernel with which to launch the
@@ -594,7 +598,7 @@
 
         :type block_device_map: :class:`boto.ec2.blockdevicemapping.BlockDeviceMapping`
         :param block_device_map: A BlockDeviceMapping data structure
-            describing the EBS volumes associated  with the Image.
+            describing the EBS volumes associated with the Image.
 
         :type disable_api_termination: bool
         :param disable_api_termination: If True, the instances will be locked
@@ -614,7 +618,7 @@
 
         :type client_token: string
         :param client_token: Unique, case-sensitive identifier you provide
-            to ensure idempotency of the request.  Maximum 64 ASCII characters.
+            to ensure idempotency of the request. Maximum 64 ASCII characters.
 
         :type security_group_ids: list of strings
         :param security_group_ids: The ID of the VPC security groups with
@@ -647,6 +651,10 @@
             provide optimal EBS I/O performance.  This optimization
             isn't available with all instance types.
 
+        :type network_interfaces: list
+        :param network_interfaces: A list of
+            :class:`boto.ec2.networkinterface.NetworkInterfaceSpecification`
+
         :rtype: Reservation
         :return: The :class:`boto.ec2.instance.Reservation` associated with
                  the request for machines
@@ -711,6 +719,8 @@
             params['IamInstanceProfile.Arn'] = instance_profile_arn
         if ebs_optimized:
             params['EbsOptimized'] = 'true'
+        if network_interfaces:
+            network_interfaces.build_list_params(params)
         return self.get_object('RunInstances', params, Reservation,
                                verb='POST')
 
@@ -853,6 +863,7 @@
             * userData - Base64 encoded String (None)
             * disableApiTermination - Boolean (true)
             * instanceInitiatedShutdownBehavior - stop|terminate
+            * blockDeviceMapping - List of strings - e.g. ['/dev/sda=false']
             * sourceDestCheck - Boolean (true)
             * groupSet - Set of Security Groups or IDs
             * ebsOptimized - Boolean (false)
@@ -882,6 +893,12 @@
                 if isinstance(sg, SecurityGroup):
                     sg = sg.id
                 params['GroupId.%s' % (idx + 1)] = sg
+        elif attribute.lower() == 'blockdevicemapping':
+            for idx, kv in enumerate(value):
+                dev_name, _, flag = kv.partition('=')
+                pre = 'BlockDeviceMapping.%d' % (idx + 1)
+                params['%s.DeviceName' % pre] = dev_name
+                params['%s.Ebs.DeleteOnTermination' % pre] = flag or 'true'
         else:
             # for backwards compatibility handle lowercase first letter
             attribute = attribute[0].upper() + attribute[1:]
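
The new blockdevicemapping branch accepts 'device=flag' strings, with the
flag defaulting to true when omitted, and expands each into a
DeviceName/DeleteOnTermination parameter pair. A sketch, assuming conn is
an EC2Connection and the instance id is a placeholder::

    conn.modify_instance_attribute('i-12345678', 'blockDeviceMapping',
                                   ['/dev/sda1=false', '/dev/sdb'])
    # Sends:
    #   BlockDeviceMapping.1.DeviceName = /dev/sda1
    #   BlockDeviceMapping.1.Ebs.DeleteOnTermination = false
    #   BlockDeviceMapping.2.DeviceName = /dev/sdb
    #   BlockDeviceMapping.2.Ebs.DeleteOnTermination = true   (default)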
@@ -1008,7 +1025,8 @@
                                instance_profile_arn=None,
                                instance_profile_name=None,
                                security_group_ids=None,
-                               ebs_optimized=False):
+                               ebs_optimized=False,
+                               network_interfaces=None):
         """
         Request instances on the spot market at a particular price.
 
@@ -1111,6 +1129,10 @@
             provide optimal EBS I/O performance.  This optimization
             isn't available with all instance types.
 
+        :type network_interfaces: list
+        :param network_interfaces: A list of
+            :class:`boto.ec2.networkinterface.NetworkInterfaceSpecification`
+
         :rtype: Reservation
         :return: The :class:`boto.ec2.spotinstancerequest.SpotInstanceRequest`
                  associated with the request for machines
@@ -1174,6 +1196,8 @@
             params['%s.IamInstanceProfile.Arn' % ls] = instance_profile_arn
         if ebs_optimized:
             params['%s.EbsOptimized' % ls] = 'true'
+        if network_interfaces:
+            network_interfaces.build_list_params(params, prefix=ls + '.')
         return self.get_list('RequestSpotInstances', params,
                              [('item', SpotInstanceRequest)],
                              verb='POST')
@@ -1313,6 +1337,11 @@
         """
         Allocate a new Elastic IP address and associate it with your account.
 
+        :type domain: string
+        :param domain: Optional string. If domain is set to "vpc", the
+            address will be allocated to the VPC.  The returned Address
+            object will include an allocation_id attribute.
+
         :rtype: :class:`boto.ec2.address.Address`
         :return: The newly allocated Address
         """
@@ -1809,8 +1838,8 @@
         :param description: A description of the snapshot.
                             Limited to 255 characters.
 
-        :rtype: bool
-        :return: True if successful
+        :rtype: :class:`boto.ec2.snapshot.Snapshot`
+        :return: The created Snapshot object
         """
         params = {'VolumeId': volume_id}
         if description:
@@ -1827,6 +1856,40 @@
         params = {'SnapshotId': snapshot_id}
         return self.get_status('DeleteSnapshot', params, verb='POST')
 
+    def copy_snapshot(self, source_region, source_snapshot_id,
+                      description=None):
+        """
+        Copies a point-in-time snapshot of an Amazon Elastic Block Store
+        (Amazon EBS) volume and stores it in Amazon Simple Storage Service
+        (Amazon S3). You can copy the snapshot within the same region or from
+        one region to another. You can use the snapshot to create new Amazon
+        EBS volumes or Amazon Machine Images (AMIs).
+
+        :type source_region: str
+        :param source_region: The ID of the AWS region that contains the
+            snapshot to be copied (e.g. 'us-east-1', 'us-west-2', etc.).
+
+        :type source_snapshot_id: str
+        :param source_snapshot_id: The ID of the Amazon EBS snapshot to copy
+
+        :type description: str
+        :param description: A description of the new Amazon EBS snapshot.
+
+        :rtype: str
+        :return: The snapshot ID
+
+        """
+        params = {
+            'SourceRegion': source_region,
+            'SourceSnapshotId': source_snapshot_id,
+        }
+        if description is not None:
+            params['Description'] = description
+        snapshot = self.get_object('CopySnapshot', params, Snapshot,
+                                   verb='POST')
+        return snapshot.id
+
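
A usage sketch for the new copy_snapshot call: it is issued against the
destination region's connection, and only the new snapshot's id is
returned (the snapshot id below is a placeholder)::

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-west-2')
    new_id = conn.copy_snapshot(source_region='us-east-1',
                                source_snapshot_id='snap-1a2b3c4d',
                                description='cross-region copy')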
     def trim_snapshots(self, hourly_backups=8, daily_backups=7,
                        weekly_backups=4):
         """
@@ -2150,9 +2213,9 @@
                                     it to AWS.
 
         :rtype: :class:`boto.ec2.keypair.KeyPair`
-        :return: The newly created :class:`boto.ec2.keypair.KeyPair`.
-                 The material attribute of the new KeyPair object
-                 will contain the the unencrypted PEM encoded RSA private key.
+        :return: A :class:`boto.ec2.keypair.KeyPair` object representing
+            the newly imported key pair.  This object will contain only
+            the key name and the fingerprint.
         """
         public_key_material = base64.b64encode(public_key_material)
         params = {'KeyName': key_name,
@@ -2216,7 +2279,7 @@
                        if any.
 
         :rtype: :class:`boto.ec2.securitygroup.SecurityGroup`
-        :return: The newly created :class:`boto.ec2.keypair.KeyPair`.
+        :return: The newly created :class:`boto.ec2.securitygroup.SecurityGroup`.
         """
         params = {'GroupName': name,
                   'GroupDescription': description}
@@ -2713,11 +2776,9 @@
             single-tenant hardware and can only be launched within a VPC.
 
         :type offering_type: string
-        :param offering_type: The Reserved Instance offering type.
-            Valid Values:
-                * Heavy Utilization
-                * Medium Utilization
-                * Light Utilization
+        :param offering_type: The Reserved Instance offering type.  Valid
+            Values: `"Heavy Utilization" | "Medium Utilization" | "Light
+            Utilization"`
 
         :type include_marketplace: bool
         :param include_marketplace: Include Marketplace offerings in the
@@ -2785,22 +2846,20 @@
     def get_all_reserved_instances(self, reserved_instances_id=None,
                                    filters=None):
         """
-        Describes Reserved Instance offerings that are available for purchase.
+        Describes one or more of the Reserved Instances that you purchased.
 
         :type reserved_instance_ids: list
         :param reserved_instance_ids: A list of the reserved instance ids that
-                                      will be returned. If not provided, all
-                                      reserved instances will be returned.
+            will be returned. If not provided, all reserved instances
+            will be returned.
 
         :type filters: dict
-        :param filters: Optional filters that can be used to limit
-                        the results returned.  Filters are provided
-                        in the form of a dictionary consisting of
-                        filter names as the key and filter values
-                        as the value.  The set of allowable filter
-                        names/values is dependent on the request
-                        being performed.  Check the EC2 API guide
-                        for details.
+        :param filters: Optional filters that can be used to limit the
+            results returned.  Filters are provided in the form of a
+            dictionary consisting of filter names as the key and
+            filter values as the value.  The set of allowable filter
+            names/values is dependent on the request being performed.
+            Check the EC2 API guide for details.
 
         :rtype: list
         :return: A list of :class:`boto.ec2.reservedinstance.ReservedInstance`
@@ -2825,16 +2884,16 @@
 
         :type reserved_instances_offering_id: string
         :param reserved_instances_offering_id: The offering ID of the Reserved
-                                               Instance to purchase
+            Instance to purchase
 
         :type instance_count: int
         :param instance_count: The number of Reserved Instances to purchase.
-                               Default value is 1.
+            Default value is 1.
 
         :type limit_price: tuple
         :param limit_price: Limit the price on the total order.
-                               Must be a tuple of (amount, currency_code), for example:
-                                   (100.0, 'USD').
+            Must be a tuple of (amount, currency_code), for example:
+            (100.0, 'USD').
 
         :rtype: :class:`boto.ec2.reservedinstance.ReservedInstance`
         :return: The newly created Reserved Instance
@@ -3314,7 +3373,7 @@
                   'DeviceIndex': device_index}
         return self.get_status('AttachNetworkInterface', params, verb='POST')
 
-    def detach_network_interface(self, attachement_id, force=False):
+    def detach_network_interface(self, attachment_id, force=False):
         """
         Detaches a network interface from an instance.
 
@@ -3325,7 +3384,7 @@
         :param force: Set to true to force a detachment.
 
         """
-        params = {'AttachmentId': network_interface_id}
+        params = {'AttachmentId': attachment_id}
         if force:
             params['Force'] = 'true'
         return self.get_status('DetachNetworkInterface', params, verb='POST')
@@ -3340,3 +3399,59 @@
         """
         params = {'NetworkInterfaceId': network_interface_id}
         return self.get_status('DeleteNetworkInterface', params, verb='POST')
+
+    def get_all_vmtypes(self):
+        """
+        Get all vmtypes available on this cloud (eucalyptus specific)
+
+        :rtype: list of :class:`boto.ec2.vmtype.VmType`
+        :return: The requested VmType objects
+        """
+        params = {}
+        return self.get_list('DescribeVmTypes', params,
+                             [('euca:item', VmType)], verb='POST')
+
+    def copy_image(self, source_region, source_image_id, name,
+                   description=None, client_token=None):
+        params = {
+            'SourceRegion': source_region,
+            'SourceImageId': source_image_id,
+            'Name': name
+        }
+        if description is not None:
+            params['Description'] = description
+        if client_token is not None:
+            params['ClientToken'] = client_token
+        image = self.get_object('CopyImage', params, CopyImage,
+                                verb='POST')
+        return image
+
+    def describe_account_attributes(self, attribute_names=None):
+        params = {}
+        if attribute_names is not None:
+            self.build_list_params(params, attribute_names, 'AttributeName')
+        return self.get_list('DescribeAccountAttributes', params,
+                             [('item', AccountAttribute)], verb='POST')
+
+    def describe_vpc_attribute(self, vpc_id, attribute=None):
+        params = {
+            'VpcId': vpc_id
+        }
+        if attribute is not None:
+            params['Attribute'] = attribute
+        attr = self.get_object('DescribeVpcAttribute', params,
+                               VPCAttribute, verb='POST')
+        return attr
+
+    def modify_vpc_attribute(self, vpc_id, enable_dns_support=None,
+                             enable_dns_hostnames=None):
+        params = {
+            'VpcId': vpc_id
+        }
+        if enable_dns_support is not None:
+            params['EnableDnsSupport.Value'] = (
+                'true' if enable_dns_support else 'false')
+        if enable_dns_hostnames is not None:
+            params['EnableDnsHostnames.Value'] = (
+                'true' if enable_dns_hostnames else 'false')
+        result = self.get_status('ModifyVpcAttribute', params, verb='POST')
+        return result
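
A hedged sketch of the new attribute helpers, assuming conn is an
EC2Connection; the vpc id is a placeholder, and the attribute names follow
the EC2 API's supported-platforms and enableDnsSupport/enableDnsHostnames
values::

    attrs = conn.describe_account_attributes(
        attribute_names=['supported-platforms'])
    for attr in attrs:
        print attr.attribute_name, attr.attribute_values

    conn.describe_vpc_attribute('vpc-12345678',
                                attribute='enableDnsSupport')
    conn.modify_vpc_attribute('vpc-12345678', enable_dns_hostnames=True)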
diff --git a/boto/ec2/elb/__init__.py b/boto/ec2/elb/__init__.py
index 9a5e324..c5e71b9 100644
--- a/boto/ec2/elb/__init__.py
+++ b/boto/ec2/elb/__init__.py
@@ -1,4 +1,6 @@
-# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -25,9 +27,10 @@
 """
 from boto.connection import AWSQueryConnection
 from boto.ec2.instanceinfo import InstanceInfo
-from boto.ec2.elb.loadbalancer import LoadBalancer
+from boto.ec2.elb.loadbalancer import LoadBalancer, LoadBalancerZones
 from boto.ec2.elb.instancestate import InstanceState
 from boto.ec2.elb.healthcheck import HealthCheck
+from boto.ec2.elb.listelement import ListElement
 from boto.regioninfo import RegionInfo
 import boto
 
@@ -38,7 +41,9 @@
     'sa-east-1': 'elasticloadbalancing.sa-east-1.amazonaws.com',
     'eu-west-1': 'elasticloadbalancing.eu-west-1.amazonaws.com',
     'ap-northeast-1': 'elasticloadbalancing.ap-northeast-1.amazonaws.com',
-    'ap-southeast-1': 'elasticloadbalancing.ap-southeast-1.amazonaws.com'}
+    'ap-southeast-1': 'elasticloadbalancing.ap-southeast-1.amazonaws.com',
+    'ap-southeast-2': 'elasticloadbalancing.ap-southeast-2.amazonaws.com',
+}
 
 
 def regions():
@@ -76,7 +81,7 @@
 
 class ELBConnection(AWSQueryConnection):
 
-    APIVersion = boto.config.get('Boto', 'elb_version', '2011-11-15')
+    APIVersion = boto.config.get('Boto', 'elb_version', '2012-06-01')
     DefaultRegionName = boto.config.get('Boto', 'elb_region_name', 'us-east-1')
     DefaultRegionEndpoint = boto.config.get('Boto', 'elb_region_endpoint',
                                             'elasticloadbalancing.us-east-1.amazonaws.com')
@@ -178,9 +183,11 @@
             to an Amazon VPC.
 
         :rtype: :class:`boto.ec2.elb.loadbalancer.LoadBalancer`
-        :return: The newly created :class:`boto.ec2.elb.loadbalancer.LoadBalancer`
+        :return: The newly created
+            :class:`boto.ec2.elb.loadbalancer.LoadBalancer`
         """
-        params = {'LoadBalancerName': name}
+        params = {'LoadBalancerName': name,
+                  'Scheme': scheme}
         for index, listener in enumerate(listeners):
             i = index + 1
             protocol = listener[2].upper()
@@ -286,8 +293,9 @@
         params = {'LoadBalancerName': load_balancer_name}
         self.build_list_params(params, zones_to_add,
                                'AvailabilityZones.member.%d')
-        return self.get_list('EnableAvailabilityZonesForLoadBalancer',
-                             params, None)
+        obj = self.get_object('EnableAvailabilityZonesForLoadBalancer',
+                              params, LoadBalancerZones)
+        return obj.zones
 
     def disable_availability_zones(self, load_balancer_name, zones_to_remove):
         """
@@ -310,8 +318,9 @@
         params = {'LoadBalancerName': load_balancer_name}
         self.build_list_params(params, zones_to_remove,
                                'AvailabilityZones.member.%d')
-        return self.get_list('DisableAvailabilityZonesForLoadBalancer',
-                             params, None)
+        obj = self.get_object('DisableAvailabilityZonesForLoadBalancer',
+                              params, LoadBalancerZones)
+        return obj.zones
 
     def register_instances(self, load_balancer_name, instances):
         """
@@ -450,10 +459,13 @@
         from the same user to that server. The validity of the cookie is based
         on the cookie expiration time, which is specified in the policy
         configuration.
+
+        If cookie_expiration_period is None, the stickiness cookie
+        lasts for the duration of the browser session.
         """
-        params = {'CookieExpirationPeriod': cookie_expiration_period,
-                  'LoadBalancerName': lb_name,
+        params = {'LoadBalancerName': lb_name,
                   'PolicyName': policy_name}
+        if cookie_expiration_period is not None:
+            params['CookieExpirationPeriod'] = cookie_expiration_period
         return self.get_status('CreateLBCookieStickinessPolicy', params)
 
     def delete_lb_policy(self, lb_name, policy_name):
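
Two behavior changes in this file, sketched with a placeholder load
balancer name: enable_availability_zones and disable_availability_zones
now parse the response through LoadBalancerZones and return the resulting
zone list, and create_lb_cookie_stickiness_policy omits
CookieExpirationPeriod when None is passed, which the ELB API treats as a
session-lifetime cookie::

    import boto.ec2.elb

    elb = boto.ec2.elb.connect_to_region('us-east-1')

    zones = elb.enable_availability_zones('my-lb', ['us-east-1d'])
    print zones  # the balancer's updated zone list

    # None -> no CookieExpirationPeriod parameter -> session cookie
    elb.create_lb_cookie_stickiness_policy(None, 'my-lb', 'sticky-policy')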
diff --git a/boto/ec2/elb/healthcheck.py b/boto/ec2/elb/healthcheck.py
index 6661ea1..040f962 100644
--- a/boto/ec2/elb/healthcheck.py
+++ b/boto/ec2/elb/healthcheck.py
@@ -1,4 +1,6 @@
-# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -19,6 +21,7 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+
 class HealthCheck(object):
     """
     Represents an EC2 Access Point Health Check. See
@@ -77,11 +80,10 @@
         if not self.access_point:
             return
 
-        new_hc = self.connection.configure_health_check(self.access_point, self)
+        new_hc = self.connection.configure_health_check(self.access_point,
+                                                        self)
         self.interval = new_hc.interval
         self.target = new_hc.target
         self.healthy_threshold = new_hc.healthy_threshold
         self.unhealthy_threshold = new_hc.unhealthy_threshold
         self.timeout = new_hc.timeout
-
-
diff --git a/boto/ec2/elb/instancestate.py b/boto/ec2/elb/instancestate.py
index 37a4727..40f4cbe 100644
--- a/boto/ec2/elb/instancestate.py
+++ b/boto/ec2/elb/instancestate.py
@@ -60,6 +60,3 @@
             self.reason_code = value
         else:
             setattr(self, name, value)
-
-
-
diff --git a/boto/ec2/elb/listelement.py b/boto/ec2/elb/listelement.py
index 3529041..0fe3a1e 100644
--- a/boto/ec2/elb/listelement.py
+++ b/boto/ec2/elb/listelement.py
@@ -1,4 +1,6 @@
-# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -14,15 +16,16 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+
 class ListElement(list):
     """
-    A :py:class:`list` subclass that has some additional methods for interacting
-    with Amazon's XML API.
+    A :py:class:`list` subclass that has some additional methods
+    for interacting with Amazon's XML API.
     """
 
     def startElement(self, name, attrs, connection):
@@ -31,5 +34,3 @@
     def endElement(self, name, value, connection):
         if name == 'member':
             self.append(value)
-    
-    
diff --git a/boto/ec2/elb/listener.py b/boto/ec2/elb/listener.py
index bbb49d0..a50b02c 100644
--- a/boto/ec2/elb/listener.py
+++ b/boto/ec2/elb/listener.py
@@ -1,4 +1,6 @@
-# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -21,6 +23,7 @@
 
 from boto.ec2.elb.listelement import ListElement
 
+
 class Listener(object):
     """
     Represents an EC2 Load Balancer Listener tuple
@@ -70,7 +73,3 @@
         if key == 2:
             return self.protocol
         raise KeyError
-
-
-
-
diff --git a/boto/ec2/elb/loadbalancer.py b/boto/ec2/elb/loadbalancer.py
index efb7151..7b6afc7 100644
--- a/boto/ec2/elb/loadbalancer.py
+++ b/boto/ec2/elb/loadbalancer.py
@@ -29,6 +29,22 @@
 from boto.resultset import ResultSet
 
 
+class LoadBalancerZones(object):
+    """
+    Used to collect the zones for a Load Balancer when enable_zones
+    or disable_zones are called.
+    """
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.zones = ListElement()
+
+    def startElement(self, name, attrs, connection):
+        if name == 'AvailabilityZones':
+            return self.zones
+
+    def endElement(self, name, value, connection):
+        pass
+
+
 class LoadBalancer(object):
     """
     Represents an EC2 Load Balancer.
diff --git a/boto/ec2/image.py b/boto/ec2/image.py
index f00e55a..376fc86 100644
--- a/boto/ec2/image.py
+++ b/boto/ec2/image.py
@@ -15,7 +15,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -31,12 +31,12 @@
     def endElement(self, name, value, connection):
         if name == 'productCode':
             self.append(value)
-    
+
 class Image(TaggedEC2Object):
     """
     Represents an EC2 Image
     """
-    
+
     def __init__(self, connection=None):
         TaggedEC2Object.__init__(self, connection)
         self.id = None
@@ -94,7 +94,7 @@
             else:
                 raise Exception(
                     'Unexpected value of isPublic %s for image %s'%(
-                        value, 
+                        value,
                         self.id
                     )
                 )
@@ -151,7 +151,7 @@
             raise ValueError('%s is not a valid Image ID' % self.id)
         return self.state
 
-    def run(self, min_count=1, max_count=1, key_name=None, 
+    def run(self, min_count=1, max_count=1, key_name=None,
             security_groups=None, user_data=None,
             addressing_type=None, instance_type='m1.small', placement=None,
             kernel_id=None, ramdisk_id=None,
@@ -166,95 +166,119 @@
 
         """
         Runs this instance.
-        
+
         :type min_count: int
         :param min_count: The minimum number of instances to start
-        
+
         :type max_count: int
         :param max_count: The maximum number of instances to start
-        
+
         :type key_name: string
-        :param key_name: The name of the keypair to run this instance with.
-        
-        :type security_groups: 
-        :param security_groups:
-        
-        :type user_data: 
-        :param user_data:
-        
-        :type addressing_type: 
-        :param daddressing_type:
-        
+        :param key_name: The name of the key pair with which to
+            launch instances.
+
+        :type security_groups: list of strings
+        :param security_groups: The names of the security groups with which to
+            associate instances.
+
+        :type user_data: string
+        :param user_data: The Base64-encoded MIME user data to be made
+            available to the instance(s) in this reservation.
+
         :type instance_type: string
-        :param instance_type: The type of instance to run.  Current choices are:
-                              m1.small | m1.large | m1.xlarge | c1.medium |
-                              c1.xlarge | m2.xlarge | m2.2xlarge |
-                              m2.4xlarge | cc1.4xlarge
-        
+        :param instance_type: The type of instance to run:
+
+            * t1.micro
+            * m1.small
+            * m1.medium
+            * m1.large
+            * m1.xlarge
+            * m3.xlarge
+            * m3.2xlarge
+            * c1.medium
+            * c1.xlarge
+            * m2.xlarge
+            * m2.2xlarge
+            * m2.4xlarge
+            * cr1.8xlarge
+            * hi1.4xlarge
+            * hs1.8xlarge
+            * cc1.4xlarge
+            * cg1.4xlarge
+            * cc2.8xlarge
+
         :type placement: string
-        :param placement: The availability zone in which to launch the instances
+        :param placement: The Availability Zone to launch the instance into.
 
         :type kernel_id: string
-        :param kernel_id: The ID of the kernel with which to launch the instances
-        
+        :param kernel_id: The ID of the kernel with which to launch the
+            instances.
+
         :type ramdisk_id: string
-        :param ramdisk_id: The ID of the RAM disk with which to launch the instances
-        
+        :param ramdisk_id: The ID of the RAM disk with which to launch the
+            instances.
+
         :type monitoring_enabled: bool
-        :param monitoring_enabled: Enable CloudWatch monitoring on the instance.
-        
-        :type subnet_id: string
-        :param subnet_id: The subnet ID within which to launch the instances for VPC.
-        
+        :param monitoring_enabled: Enable CloudWatch monitoring on
+            the instance.
+
+        :type subnet_id: string
+        :param subnet_id: The subnet ID within which to launch the instances
+            for VPC.
+
         :type private_ip_address: string
-        :param private_ip_address: If you're using VPC, you can optionally use
-                                   this parameter to assign the instance a
-                                   specific available IP address from the
-                                   subnet (e.g., 10.0.0.25).
+        :param private_ip_address: If you're using VPC, you can
+            optionally use this parameter to assign the instance a
+            specific available IP address from the subnet (e.g.,
+            10.0.0.25).
 
         :type block_device_map: :class:`boto.ec2.blockdevicemapping.BlockDeviceMapping`
         :param block_device_map: A BlockDeviceMapping data structure
-                                 describing the EBS volumes associated
-                                 with the Image.
+            describing the EBS volumes associated with the Image.
 
         :type disable_api_termination: bool
         :param disable_api_termination: If True, the instances will be locked
-                                        and will not be able to be terminated
-                                        via the API.
+            and will not be able to be terminated via the API.
 
         :type instance_initiated_shutdown_behavior: string
-        :param instance_initiated_shutdown_behavior: Specifies whether the instance
-                                                     stops or terminates on instance-initiated
-                                                     shutdown. Valid values are:
-                                                     stop | terminate
+        :param instance_initiated_shutdown_behavior: Specifies whether the
+            instance stops or terminates on instance-initiated shutdown.
+            Valid values are:
+
+            * stop
+            * terminate
 
         :type placement_group: string
         :param placement_group: If specified, this is the name of the placement
-                                group in which the instance(s) will be launched.
+            group in which the instance(s) will be launched.
 
         :type additional_info: string
-        :param additional_info:  Specifies additional information to make
-            available to the instance(s)
+        :param additional_info: Specifies additional information to make
+            available to the instance(s).
 
-        :type security_group_ids: 
-        :param security_group_ids:
+        :type security_group_ids: list of strings
+        :param security_group_ids: The ID of the VPC security groups with
+            which to associate instances.
 
         :type instance_profile_name: string
-        :param instance_profile_name: The name of an IAM instance profile to use.
+        :param instance_profile_name: The name of
+            the IAM Instance Profile (IIP) to associate with the instances.
 
         :type instance_profile_arn: string
-        :param instance_profile_arn: The ARN of an IAM instance profile to use.
-        
+        :param instance_profile_arn: The Amazon resource name (ARN) of
+            the IAM Instance Profile (IIP) to associate with the instances.
+
         :type tenancy: string
-        :param tenancy: The tenancy of the instance you want to launch. An
-                        instance with a tenancy of 'dedicated' runs on
-                        single-tenant hardware and can only be launched into a
-                        VPC. Valid values are: "default" or "dedicated".
-                        NOTE: To use dedicated tenancy you MUST specify a VPC
-                        subnet-ID as well.
+        :param tenancy: The tenancy of the instance you want to
+            launch. An instance with a tenancy of 'dedicated' runs on
+            single-tenant hardware and can only be launched into a
+            VPC. Valid values are: "default" or "dedicated".
+            NOTE: To use dedicated tenancy you MUST specify a VPC
+            subnet-ID as well.
 
         :rtype: Reservation
-        :return: The :class:`boto.ec2.instance.Reservation` associated with the request for machines
+        :return: The :class:`boto.ec2.instance.Reservation` associated with
+                 the request for machines
 
         """
 
@@ -266,9 +290,9 @@
                                              monitoring_enabled, subnet_id,
                                              block_device_map, disable_api_termination,
                                              instance_initiated_shutdown_behavior,
-                                             private_ip_address, placement_group, 
+                                             private_ip_address, placement_group,
                                              security_group_ids=security_group_ids,
-                                             additional_info=additional_info, 
+                                             additional_info=additional_info,
                                              instance_profile_name=instance_profile_name,
                                              instance_profile_arn=instance_profile_arn,
                                              tenancy=tenancy)
@@ -348,3 +372,16 @@
             self.ramdisk = value
         else:
             setattr(self, name, value)
+
+
+class CopyImage(object):
+    def __init__(self, parent=None):
+        self._parent = parent
+        self.image_id = None
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'imageId':
+            self.image_id = value
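
A sketch of the connection-side call that returns this wrapper (ids and
name are placeholders; conn is an EC2Connection in the destination
region)::

    copy = conn.copy_image('us-east-1', 'ami-12345678',
                           name='copied-image',
                           description='copied from us-east-1')
    print copy.image_id  # id of the new AMI in conn's region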
diff --git a/boto/ec2/instance.py b/boto/ec2/instance.py
index 7435788..5be701f 100644
--- a/boto/ec2/instance.py
+++ b/boto/ec2/instance.py
@@ -188,6 +188,8 @@
     :ivar product_codes: A list of product codes associated with this instance.
     :ivar ami_launch_index: This instances position within it's launch group.
     :ivar monitored: A boolean indicating whether monitoring is enabled or not.
+    :ivar monitoring_state: A string value that contains the actual value
+        of the monitoring element returned by EC2.
     :ivar spot_instance_request_id: The ID of the spot instance request
         if this is a spot instance.
     :ivar subnet_id: The VPC Subnet ID, if running in VPC.
@@ -223,6 +225,7 @@
         self.product_codes = ProductCodes()
         self.ami_launch_index = None
         self.monitored = False
+        self.monitoring_state = None
         self.spot_instance_request_id = None
         self.subnet_id = None
         self.vpc_id = None
@@ -273,10 +276,6 @@
         return 0
 
     @property
-    def state(self):
-        return self._state.name
-
-    @property
     def placement(self):
         return self._placement.zone
 
@@ -310,6 +309,7 @@
             return self.eventsSet
         elif name == 'networkInterfaceSet':
             self.interfaces = ResultSet([('item', NetworkInterface)])
+            return self.interfaces
         elif name == 'iamInstanceProfile':
             self.instance_profile = SubParse('iamInstanceProfile')
             return self.instance_profile
@@ -364,6 +364,7 @@
             self.ramdisk = value
         elif name == 'state':
             if self._in_monitoring_element:
+                self.monitoring_state = value
                 if value == 'enabled':
                     self.monitored = True
                 self._in_monitoring_element = False
@@ -473,6 +474,18 @@
         return self.connection.confirm_product_instance(self.id, product_code)
 
     def use_ip(self, ip_address):
+        """
+        Associates an Elastic IP to the instance.
+
+        :type ip_address: Either an instance of
+            :class:`boto.ec2.address.Address` or a string.
+        :param ip_address: The IP address to associate
+            with the instance.
+
+        :rtype: bool
+        :return: True if successful
+        """
+
         if isinstance(ip_address, Address):
             ip_address = ip_address.public_ip
         return self.connection.associate_address(self.id, ip_address)
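
use_ip accepts either an Address object or a bare IP string, as the new
docstring notes. A short sketch, assuming instance is an existing
boto.ec2.instance.Instance and the literal address is a placeholder::

    addr = instance.connection.allocate_address()
    instance.use_ip(addr)            # Address object
    instance.use_ip('192.0.2.10')    # or a plain IP string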
@@ -549,6 +562,33 @@
         """
         return self.connection.reset_instance_attribute(self.id, attribute)
 
+    def create_image(self, name, description=None, no_reboot=False):
+        """
+        Will create an AMI from the instance in the running or stopped
+        state.
+
+        :type name: string
+        :param name: The name of the new image
+
+        :type description: string
+        :param description: An optional human-readable string describing
+                            the contents and purpose of the AMI.
+
+        :type no_reboot: bool
+        :param no_reboot: An optional flag indicating that the bundling process
+                          should not attempt to shutdown the instance before
+                          bundling.  If this flag is True, the responsibility
+                          of maintaining file system integrity is left to the
+                          owner of the instance.
+
+        :rtype: string
+        :return: The new image id
+        """
+        return self.connection.create_image(self.id, name, description,
+                                            no_reboot)
+
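
A usage sketch for the new convenience wrapper, assuming instance is a
running boto.ec2.instance.Instance::

    ami_id = instance.create_image('pre-upgrade-backup',
                                   description='taken before upgrade',
                                   no_reboot=True)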
 
 class ConsoleOutput:
 
diff --git a/boto/ec2/instancestatus.py b/boto/ec2/instancestatus.py
index 3a9b543..b09b55e 100644
--- a/boto/ec2/instancestatus.py
+++ b/boto/ec2/instancestatus.py
@@ -16,11 +16,12 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+
 class Details(dict):
     """
     A dict object that contains name/value pairs which provide
@@ -38,7 +39,8 @@
             self[self._name] = value
         else:
             setattr(self, name, value)
-    
+
+
 class Event(object):
     """
     A status event for an instance.
@@ -57,7 +59,7 @@
         self.description = description
         self.not_before = not_before
         self.not_after = not_after
-        
+
     def __repr__(self):
         return 'Event:%s' % self.code
 
@@ -76,6 +78,7 @@
         else:
             setattr(self, name, value)
 
+
 class Status(object):
     """
     A generic Status object used for system status and instance status.
@@ -90,7 +93,7 @@
         if not details:
             details = Details()
         self.details = details
-        
+
     def __repr__(self):
         return 'Status:%s' % self.status
 
@@ -105,8 +108,9 @@
         else:
             setattr(self, name, value)
 
+
 class EventSet(list):
-    
+
     def startElement(self, name, attrs, connection):
         if name == 'item':
             event = Event()
@@ -118,6 +122,7 @@
     def endElement(self, name, value, connection):
         setattr(self, name, value)
 
+
 class InstanceStatus(object):
     """
     Represents an EC2 Instance status as reported by
@@ -137,7 +142,7 @@
     :ivar instance_status: A Status object that reports impaired
         functionality that arises from problems internal to the instance.
     """
-    
+
     def __init__(self, id=None, zone=None, events=None,
                  state_code=None, state_name=None):
         self.id = id
@@ -174,6 +179,7 @@
         else:
             setattr(self, name, value)
 
+
 class InstanceStatusSet(list):
     """
     A list object that contains the results of a call to
@@ -191,7 +197,7 @@
         list.__init__(self)
         self.connection = connection
         self.next_token = None
-    
+
     def startElement(self, name, attrs, connection):
         if name == 'item':
             status = InstanceStatus()
@@ -201,7 +207,6 @@
             return None
 
     def endElement(self, name, value, connection):
-        if name == 'NextToken':
+        if name == 'nextToken':
             self.next_token = value
         setattr(self, name, value)
-
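
The nextToken fix matters because the element name in the response is
lower camel case, so next_token was never populated before. With it set,
get_all_instance_status can be paged (conn is an EC2Connection)::

    stats = conn.get_all_instance_status(max_results=50)
    while True:
        for status in stats:
            print status.id, status.state_name
        if not stats.next_token:
            break
        stats = conn.get_all_instance_status(max_results=50,
                                             next_token=stats.next_token)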
diff --git a/boto/ec2/networkinterface.py b/boto/ec2/networkinterface.py
index 2658e3f..5c6088f 100644
--- a/boto/ec2/networkinterface.py
+++ b/boto/ec2/networkinterface.py
@@ -1,4 +1,5 @@
 # Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -14,7 +15,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -26,6 +27,7 @@
 from boto.resultset import ResultSet
 from boto.ec2.group import Group
 
+
 class Attachment(object):
     """
     :ivar id: The ID of the attachment.
@@ -45,13 +47,13 @@
         self.status = None
         self.attach_time = None
         self.delete_on_termination = False
-        
+
     def __repr__(self):
         return 'Attachment:%s' % self.id
 
     def startElement(self, name, attrs, connection):
         return None
-        
+
     def endElement(self, name, value, connection):
         if name == 'attachmentId':
             self.id = value
@@ -71,6 +73,7 @@
         else:
             setattr(self, name, value)
 
+
 class NetworkInterface(TaggedEC2Object):
     """
     An Elastic Network Interface.
@@ -80,7 +83,7 @@
     :ivar vpc_id: The ID of the VPC.
     :ivar description: The description.
     :ivar owner_id: The ID of the owner of the ENI.
-    :ivar requester_managed: 
+    :ivar requester_managed:
     :ivar status: The interface's status (available|in-use).
     :ivar mac_address: The MAC address of the interface.
     :ivar private_ip_address: The IP address of the interface within
@@ -89,8 +92,9 @@
         network traffic to or from this network interface.
     :ivar groups: List of security groups associated with the interface.
     :ivar attachment: The attachment object.
+    :ivar private_ip_addresses: A list of PrivateIPAddress objects.
     """
-    
+
     def __init__(self, connection=None):
         TaggedEC2Object.__init__(self, connection)
         self.id = None
@@ -106,6 +110,7 @@
         self.source_dest_check = None
         self.groups = []
         self.attachment = None
+        self.private_ip_addresses = []
 
     def __repr__(self):
         return 'NetworkInterface:%s' % self.id
@@ -120,9 +125,12 @@
         elif name == 'attachment':
             self.attachment = Attachment()
             return self.attachment
+        elif name == 'privateIpAddressesSet':
+            self.private_ip_addresses = ResultSet([('item', PrivateIPAddress)])
+            return self.private_ip_addresses
         else:
             return None
-        
+
     def endElement(self, name, value, connection):
         if name == 'networkInterfaceId':
             self.id = value
@@ -159,5 +167,81 @@
         return self.connection.delete_network_interface(self.id)
 
 
+class PrivateIPAddress(object):
+    def __init__(self, connection=None, private_ip_address=None,
+                 primary=None):
+        self.connection = connection
+        self.private_ip_address = private_ip_address
+        self.primary = primary
 
-            
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'privateIpAddress':
+            self.private_ip_address = value
+        elif name == 'primary':
+            self.primary = True if value.lower() == 'true' else False
+
+    def __repr__(self):
+        return "PrivateIPAddress(%s, primary=%s)" % (self.private_ip_address,
+                                                     self.primary)
+
+
+class NetworkInterfaceCollection(list):
+    def __init__(self, *interfaces):
+        self.extend(interfaces)
+
+    def build_list_params(self, params, prefix=''):
+        for i, spec in enumerate(self):
+            full_prefix = '%sNetworkInterface.%s.' % (prefix, i+1)
+            if spec.network_interface_id is not None:
+                params[full_prefix + 'NetworkInterfaceId'] = \
+                        str(spec.network_interface_id)
+            if spec.device_index is not None:
+                params[full_prefix + 'DeviceIndex'] = \
+                        str(spec.device_index)
+            if spec.subnet_id is not None:
+                params[full_prefix + 'SubnetId'] = str(spec.subnet_id)
+            if spec.description is not None:
+                params[full_prefix + 'Description'] = str(spec.description)
+            if spec.delete_on_termination is not None:
+                params[full_prefix + 'DeleteOnTermination'] = \
+                        'true' if spec.delete_on_termination else 'false'
+            if spec.secondary_private_ip_address_count is not None:
+                params[full_prefix + 'SecondaryPrivateIpAddressCount'] = \
+                        str(spec.secondary_private_ip_address_count)
+            if spec.private_ip_address is not None:
+                params[full_prefix + 'PrivateIpAddress'] = \
+                        str(spec.private_ip_address)
+            if spec.groups is not None:
+                for j, group_id in enumerate(spec.groups):
+                    query_param_key = '%sSecurityGroupId.%s' % (full_prefix, j+1)
+                    params[query_param_key] = str(group_id)
+            if spec.private_ip_addresses is not None:
+                for k, ip_addr in enumerate(spec.private_ip_addresses):
+                    query_param_key_prefix = (
+                        '%sPrivateIpAddresses.%s' % (full_prefix, k+1))
+                    params[query_param_key_prefix + '.PrivateIpAddress'] = \
+                            str(ip_addr.private_ip_address)
+                    if ip_addr.primary is not None:
+                        params[query_param_key_prefix + '.Primary'] = \
+                                'true' if ip_addr.primary else 'false'
+
+
+class NetworkInterfaceSpecification(object):
+    def __init__(self, network_interface_id=None, device_index=None,
+                 subnet_id=None, description=None, private_ip_address=None,
+                 groups=None, delete_on_termination=None,
+                 private_ip_addresses=None,
+                 secondary_private_ip_address_count=None):
+        self.network_interface_id = network_interface_id
+        self.device_index = device_index
+        self.subnet_id = subnet_id
+        self.description = description
+        self.private_ip_address = private_ip_address
+        self.groups = groups
+        self.delete_on_termination = delete_on_termination
+        self.private_ip_addresses = private_ip_addresses
+        self.secondary_private_ip_address_count = \
+                secondary_private_ip_address_count
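A minimal usage sketch of the multi-IP plumbing added above, relying only on
the classes in this hunk; the subnet, security group, and address values are
placeholders::

    from boto.ec2.networkinterface import (NetworkInterfaceCollection,
        NetworkInterfaceSpecification, PrivateIPAddress)

    # One interface at device index 0 with a primary and a secondary IP.
    spec = NetworkInterfaceSpecification(
        device_index=0,
        subnet_id='subnet-11111111',
        groups=['sg-22222222'],
        delete_on_termination=True,
        private_ip_addresses=[
            PrivateIPAddress(private_ip_address='10.0.0.10', primary=True),
            PrivateIPAddress(private_ip_address='10.0.0.11', primary=False),
        ])

    params = {}
    NetworkInterfaceCollection(spec).build_list_params(params)
    # params now holds the flat EC2 query arguments, e.g.:
    #   NetworkInterface.1.DeviceIndex = '0'
    #   NetworkInterface.1.SubnetId = 'subnet-11111111'
    #   NetworkInterface.1.DeleteOnTermination = 'true'
    #   NetworkInterface.1.SecurityGroupId.1 = 'sg-22222222'
    #   NetworkInterface.1.PrivateIpAddresses.1.PrivateIpAddress = '10.0.0.10'
    #   NetworkInterface.1.PrivateIpAddresses.1.Primary = 'true'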
diff --git a/boto/ec2/reservedinstance.py b/boto/ec2/reservedinstance.py
index e71c1ad..d92f168 100644
--- a/boto/ec2/reservedinstance.py
+++ b/boto/ec2/reservedinstance.py
@@ -128,6 +128,7 @@
                                            usage_price, description)
         self.instance_count = instance_count
         self.state = state
+        self.start = None
 
     def __repr__(self):
         return 'ReservedInstance:%s' % self.id
@@ -139,6 +140,8 @@
             self.instance_count = int(value)
         elif name == 'state':
             self.state = value
+        elif name == 'start':
+            self.start = value
         else:
             ReservedInstancesOffering.endElement(self, name, value, connection)
 
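A quick sketch of what the new ``start`` field surfaces;
``connect_to_region`` and ``get_all_reserved_instances`` are existing
``boto.ec2`` calls, and the region is a placeholder::

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')  # placeholder region
    for ri in conn.get_all_reserved_instances():
        # 'start' is the start timestamp newly parsed from <start> above.
        print ri.id, ri.state, ri.start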
diff --git a/boto/ec2/spotinstancerequest.py b/boto/ec2/spotinstancerequest.py
index a3562ac..54fba1d 100644
--- a/boto/ec2/spotinstancerequest.py
+++ b/boto/ec2/spotinstancerequest.py
@@ -29,6 +29,12 @@
 
 
 class SpotInstanceStateFault(object):
+    """
+    The fault codes for the Spot Instance request, if any.
+
+    :ivar code: The reason code for the Spot Instance state change.
+    :ivar message: The message for the Spot Instance state change.
+    """
 
     def __init__(self, code=None, message=None):
         self.code = code
@@ -48,7 +54,70 @@
         setattr(self, name, value)
 
 
+class SpotInstanceStatus(object):
+    """
+    Contains the status of a Spot Instance Request.
+
+    :ivar code: Status code of the request.
+    :ivar message: The description for the status code for the Spot request.
+    :ivar update_time: Time the status was last updated.
+    """
+
+    def __init__(self, code=None, update_time=None, message=None):
+        self.code = code
+        self.update_time = update_time
+        self.message = message
+
+    def __repr__(self):
+        return '<Status: %s>' % self.code
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'code':
+            self.code = value
+        elif name == 'message':
+            self.message = value
+        elif name == 'updateTime':
+            self.update_time = value
+
+
 class SpotInstanceRequest(TaggedEC2Object):
+    """
+    A request for a Spot Instance.
+
+    :ivar id: The ID of the Spot Instance Request.
+    :ivar price: The maximum hourly price for any Spot Instance launched to
+        fulfill the request.
+    :ivar type: The Spot Instance request type.
+    :ivar state: The state of the Spot Instance request.
+    :ivar fault: The fault codes for the Spot Instance request, if any.
+    :ivar valid_from: The start date of the request. If this is a one-time
+        request, the request becomes active at this date and time and remains
+        active until all instances launch, the request expires, or the request is
+        canceled. If the request is persistent, the request becomes active at this
+        date and time and remains active until it expires or is canceled.
+    :ivar valid_until: The end date of the request. If this is a one-time
+        request, the request remains active until all instances launch, the request
+        is canceled, or this date is reached. If the request is persistent, it
+        remains active until it is canceled or this date is reached.
+    :ivar launch_group: The instance launch group. Launch groups are Spot
+        Instances that launch together and terminate together.
+    :ivar launched_availability_zone: The Availability Zone in which the
+        request is launched.
+    :ivar product_description: The product description associated with the
+        Spot Instance.
+    :ivar availability_zone_group: The Availability Zone group. If you specify
+        the same Availability Zone group for all Spot Instance requests, all Spot
+        Instances are launched in the same Availability Zone.
+    :ivar create_time: The time stamp when the Spot Instance request was
+        created.
+    :ivar launch_specification: Additional information for launching instances.
+    :ivar instance_id: The instance ID, if an instance has been launched to
+        fulfill the Spot Instance request.
+    :ivar status: The status code and status message describing the Spot
+        Instance request.
+    """
 
     def __init__(self, connection=None):
         TaggedEC2Object.__init__(self, connection)