Merging in latest boto.

BUG=http://code.google.com/p/chromium/issues/detail?id=102454
TEST=None
R=nsylvain@google.com

Review URL: http://codereview.chromium.org/8386013

git-svn-id: svn://svn.chromium.org/boto@2 4f2e627c-b00b-48dd-b1fb-2c643665b734
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..31b12a1
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,7 @@
+*.pyc
+.*.swp
+*.log
+boto.egg-info
+build/
+dist/
+MANIFEST
diff --git a/Changelog.rst b/Changelog.rst
new file mode 100644
index 0000000..46317b9
--- /dev/null
+++ b/Changelog.rst
@@ -0,0 +1,35 @@
+==============
+Change history
+==============
+
+.. contents::
+    :local:
+
+.. _version-2.0:
+
+2.0
+===
+:release-date: 2011-07-14
+
+.. _v20-important:
+
+Important Notes
+---------------
+
+* Backwards-incompatible filter changes in the latest 2011 EC2 APIs
+
+    In the latest 2011 EC2 APIs all security groups are assigned a unique
+    identifier (sg-\*).  As a consequence, some existing filters which used to take
+    the group name now require the group *id* instead:
+
+    1. *group-id* filter in DescribeInstances (i.e. get_all_instances())
+
+        To filter by group name you must instead use the *group-name* filter
+
+    2. *launch.group-id* filter in DescribeSpotInstanceRequests (i.e. get_all_spot_instance_requests())
+
+        Unfortunately for now, it is *not* possible to filter spot instance
+        requests by group name; the security group id *must* be used instead.
+
+    This new security group id can be found in the *id* attribute of a boto
+    SecurityGroup instance.
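The filter-key rule described in the note above can be captured in a small helper. This is a hypothetical illustration, not part of boto or this patch:

```python
def group_filter(group):
    """Build a DescribeInstances filter dict for a security group.

    Under the 2011 EC2 API change described above, 'group-id' only
    accepts the new sg-* identifiers; plain names must use the
    'group-name' filter instead.
    """
    if group.startswith('sg-'):
        return {'group-id': group}
    return {'group-name': group}
```

With a boto `SecurityGroup` instance, the identifier to pass is its `id` attribute.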
diff --git a/MANIFEST.in b/MANIFEST.in
new file mode 100644
index 0000000..fceffb7
--- /dev/null
+++ b/MANIFEST.in
@@ -0,0 +1 @@
+include boto/cacerts/cacerts.txt
diff --git a/README b/README
deleted file mode 100644
index 35b7232..0000000
--- a/README
+++ /dev/null
@@ -1,65 +0,0 @@
-boto 2.0b4
-13-Feb-2011
-
-Copyright (c) 2006-2011 Mitch Garnaat <mitch@garnaat.org>
-Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
-All rights reserved.
-
-http://code.google.com/p/boto
-
-Boto is a Python package that provides interfaces to Amazon Web Services.
-At the moment, boto supports:
-
- * Simple Storage Service (S3)
- * SimpleQueue Service (SQS)
- * Elastic Compute Cloud (EC2)
- * Mechanical Turk
- * SimpleDB
- * CloudFront
- * CloudWatch
- * AutoScale
- * Elastic Load Balancer (ELB)
- * Virtual Private Cloud (VPC)
- * Elastic Map Reduce (EMR)
- * Relational Data Service (RDS) 
- * Simple Notification Server (SNS)
- * Google Storage
- * Identity and Access Management (IAM)
- * Route53 DNS Service (route53)
- * Simple Email Service (SES)
-
-The intent is to support additional services in the future.
-
-The goal of boto is to provide a very simple, easy to use, lightweight
-wrapper around the Amazon services.  Not all features supported by the
-Amazon Web Services will be supported in boto.  Basically, those
-features I need to do what I want to do are supported first.  Other
-features and requests are welcome and will be accomodated to the best
-of my ability.  Patches and contributions are welcome!
-
-Boto was written using Python 2.6.5 on Mac OSX.  It has also been tested
-on Linux Ubuntu using Python 2.6.5.  Boto requires no additional
-libraries or packages other than those that are distributed with Python 2.6.5.
-Efforts are made to keep boto compatible with Python 2.4.x but no
-guarantees are made.
-
-Documentation for boto can be found at:
-
-http://boto.cloudhackers.com/
-
-Join our `IRC channel`_ (#boto on FreeNode).
-    IRC channel: http://webchat.freenode.net/?channels=boto
- 
-Your credentials can be passed into the methods that create 
-connections.  Alternatively, boto will check for the existance of the
-following environment variables to ascertain your credentials:
-
-AWS_ACCESS_KEY_ID - Your AWS Access Key ID
-AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key
-
-Credentials and other boto-related settings can also be stored in a boto config
-file.  See:
-
-http://code.google.com/p/boto/wiki/BotoConfig
-
-for details.
\ No newline at end of file
diff --git a/README.markdown b/README.markdown
new file mode 100644
index 0000000..95d32f0
--- /dev/null
+++ b/README.markdown
@@ -0,0 +1,72 @@
+# boto
+boto 2.1.1
+31-Oct-2011
+
+## Introduction
+
+Boto is a Python package that provides interfaces to Amazon Web Services.
+At the moment, boto supports:
+
+ * Simple Storage Service (S3)
+ * SimpleQueue Service (SQS)
+ * Elastic Compute Cloud (EC2)
+ * Mechanical Turk
+ * SimpleDB
+ * CloudFront
+ * CloudWatch
+ * AutoScale
+ * Elastic Load Balancer (ELB)
+ * Virtual Private Cloud (VPC)
+ * Elastic Map Reduce (EMR)
+ * Relational Database Service (RDS)
+ * Simple Notification Service (SNS)
+ * Google Storage
+ * Identity and Access Management (IAM)
+ * Route53 DNS Service (route53)
+ * Simple Email Service (SES)
+ * Flexible Payment Service (FPS)
+ * CloudFormation
+
+The goal of boto is to support the full breadth and depth of Amazon
+Web Services.  In addition, boto provides support for other public
+services such as Google Storage, as well as private cloud systems
+like Eucalyptus, OpenStack and Open Nebula.
+
+Boto is developed mainly using Python 2.6.6 and Python 2.7.1 on Mac OSX
+and Ubuntu Maverick.  It is known to work on other Linux distributions
+and on Windows.  Boto requires no additional libraries or packages
+other than those that are distributed with Python.  Efforts are made
+to keep boto compatible with Python 2.5.x but no guarantees are made.
+
+## Finding Out More About Boto
+
+The main source code repository for boto can be found on
+[github.com](http://github.com/boto/boto)
+
+[Online documentation](http://readthedocs.org/docs/boto/) is also
+available.  The online documentation includes full API documentation
+as well as Getting Started Guides for many of the boto modules.
+
+Boto releases can be found on the [Google Project
+page](http://code.google.com/p/boto/downloads/list) or on the [Python
+Cheese Shop](http://pypi.python.org/).
+
+Join our [IRC channel](http://webchat.freenode.net/?channels=boto)
+(#boto on FreeNode).
+
+## Getting Started with Boto
+
+Your credentials can be passed into the methods that create 
+connections.  Alternatively, boto will check for the existence of the
+following environment variables to ascertain your credentials:
+
+AWS_ACCESS_KEY_ID - Your AWS Access Key ID
+AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key
+
+Credentials and other boto-related settings can also be stored in a
+boto config file.  See
+[this](http://code.google.com/p/boto/wiki/BotoConfig) for details.
+
+Copyright (c) 2006-2011 Mitch Garnaat <mitch@garnaat.com>
+Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
+All rights reserved.
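The credential lookup order the README describes (explicit arguments first, then the two environment variables) can be sketched as a simplified model. This is illustrative only, not boto's actual implementation:

```python
import os

def resolve_credentials(access_key=None, secret_key=None, environ=None):
    """Explicit arguments win; otherwise fall back to the environment
    variables named in the README above."""
    environ = os.environ if environ is None else environ
    access_key = access_key or environ.get('AWS_ACCESS_KEY_ID')
    secret_key = secret_key or environ.get('AWS_SECRET_ACCESS_KEY')
    return access_key, secret_key
```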
diff --git a/bin/asadmin b/bin/asadmin
new file mode 100755
index 0000000..a8a38f1
--- /dev/null
+++ b/bin/asadmin
@@ -0,0 +1,290 @@
+#!/usr/bin/env python
+# Copyright (c) 2011 Joel Barciauskas http://joel.barciausk.as/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+#
+# Auto Scaling Groups Tool
+#
+VERSION="0.1"
+usage = """%prog [options] [command]
+Commands:
+    list|ls                           List all Auto Scaling Groups
+    list-lc|ls-lc                     List all Launch Configurations
+    delete    <name>                  Delete ASG <name>
+    delete-lc <name>                  Delete Launch Configuration <name>
+    get       <name>                  Get details of ASG <name>
+    create    <name>                  Create an ASG
+    create-lc <name>                  Create a Launch Configuration
+    update    <name> <prop> <value>   Update a property of an ASG
+    update-image <asg-name> <lc-name> Update image ID for ASG by creating a new LC
+    migrate-instances <name>          Shut down current instances one by one and wait for ASG to start up a new instance with the current AMI (useful in conjunction with update-image)
+
+Examples:
+
+    1) Create launch configuration
+        bin/asadmin create-lc my-lc-1 -i ami-1234abcd -t c1.xlarge -k my-key -s web-group -m
+
+    2) Create auto scaling group in us-east-1a and us-east-1c with a load balancer and min size of 2 and max size of 6
+        bin/asadmin create my-asg -z us-east-1a -z us-east-1c -l my-lc-1 -b my-lb -H ELB -p 180 -x 2 -X 6
+"""
+
+def get_group(autoscale, name):
+    g = autoscale.get_all_groups(names=[name])
+    if len(g) < 1:
+        print "No auto scaling groups by the name of %s found" % name
+        return sys.exit(1)
+    return g[0]
+
+def get_lc(autoscale, name):
+    l = autoscale.get_all_launch_configurations(names=[name])
+    if len(l) < 1:
+        print "No launch configurations by the name of %s found" % name
+        sys.exit(1)
+    return l[0]
+
+def list(autoscale):
+    """List all ASGs"""
+    print "%-20s %s" %  ("Name", "LC Name")
+    print "-"*80
+    groups = autoscale.get_all_groups()
+    for g in groups:
+        print "%-20s %s" % (g.name, g.launch_config_name)
+
+def list_lc(autoscale):
+    """List all LCs"""
+    print "%-30s %-20s %s" %  ("Name", "Image ID", "Instance Type")
+    print "-"*80
+    for l in autoscale.get_all_launch_configurations():
+        print "%-30s %-20s %s" % (l.name, l.image_id, l.instance_type)
+
+def get(autoscale, name):
+    """Get details about ASG <name>"""
+    g = get_group(autoscale, name)
+    print "="*80
+    print "%-30s %s" % ('Name:', g.name)
+    print "%-30s %s" % ('Launch configuration:', g.launch_config_name)
+    print "%-30s %s" % ('Minimum size:', g.min_size)
+    print "%-30s %s" % ('Maximum size:', g.max_size)
+    print "%-30s %s" % ('Desired capacity:', g.desired_capacity)
+    print "%-30s %s" % ('Load balancers:', ','.join(g.load_balancers))
+
+    print
+
+    print "Instances"
+    print "---------"
+    print "%-20s %-20s %-20s %s" % ("ID", "Status", "Health", "AZ")
+    for i in g.instances:
+        print "%-20s %-20s %-20s %s" % \
+        (i.instance_id, i.lifecycle_state, i.health_status, i.availability_zone)
+
+    print
+
+def create(autoscale, name, zones, lc_name, load_balancers, hc_type, hc_period,
+        min_size, max_size, cooldown, capacity):
+    """Create an ASG named <name>"""
+    g = AutoScalingGroup(name=name, launch_config=lc_name,
+            availability_zones=zones, load_balancers=load_balancers,
+            default_cooldown=cooldown, health_check_type=hc_type,
+            health_check_period=hc_period, desired_capacity=capacity,
+            min_size=min_size, max_size=max_size)
+    g = autoscale.create_auto_scaling_group(g)
+    return list(autoscale)
+
+def create_lc(autoscale, name, image_id, instance_type, key_name,
+        security_groups, instance_monitoring):
+    l = LaunchConfiguration(name=name, image_id=image_id,
+            instance_type=instance_type,key_name=key_name,
+            security_groups=security_groups,
+            instance_monitoring=instance_monitoring)
+    l = autoscale.create_launch_configuration(l)
+    return list_lc(autoscale)
+
+def update(autoscale, name, prop, value):
+    g = get_group(autoscale, name)
+    setattr(g, prop, value)
+    g.update()
+    return get(autoscale, name)
+
+def delete(autoscale, name, force_delete=False):
+    """Delete this ASG"""
+    g = get_group(autoscale, name)
+    autoscale.delete_auto_scaling_group(g.name, force_delete)
+    print "Auto scaling group %s deleted" % name
+    return list(autoscale)
+
+def delete_lc(autoscale, name):
+    """Delete this LC"""
+    l = get_lc(autoscale, name)
+    autoscale.delete_launch_configuration(name)
+    print "Launch configuration %s deleted" % name
+    return list_lc(autoscale)
+
+def update_image(autoscale, name, lc_name, image_id, is_migrate_instances=False):
+    """ Get the current launch config,
+        Update its name and image id
+        Re-create it as a new launch config
+        Update the ASG with the new LC
+        Delete the old LC """
+
+    g = get_group(autoscale, name)
+    l = get_lc(autoscale, g.launch_config_name)
+
+    old_lc_name = l.name
+    l.name = lc_name
+    l.image_id = image_id
+    autoscale.create_launch_configuration(l)
+    g.launch_config_name = l.name
+    g.update()
+
+    if is_migrate_instances:
+        migrate_instances(autoscale, name)
+    else:
+        return get(autoscale, name)
+
+def migrate_instances(autoscale, name):
+    """ Shut down instances of the old image type one by one
+        and let the ASG start up instances with the new image """
+    g = get_group(autoscale, name)
+
+    old_instances = g.instances
+    ec2 = boto.connect_ec2()
+    for old_instance in old_instances:
+        print "Terminating instance " + old_instance.instance_id
+        ec2.terminate_instances([old_instance.instance_id])
+        while True:
+            g = get_group(autoscale, name)
+            new_instances = g.instances
+            # Initialize the flags before the loop so they are defined
+            # even when the group currently reports no instances.
+            hasOldInstance = False
+            instancesReady = True
+            for new_instance in new_instances:
+                if old_instance.instance_id == new_instance.instance_id:
+                    hasOldInstance = True
+                    print "Waiting for old instance to shut down..."
+                    break
+                elif new_instance.lifecycle_state != 'InService':
+                    instancesReady = False
+                    print "Waiting for instances to be ready..."
+                    break
+            if not hasOldInstance and instancesReady:
+                break
+            else:
+                time.sleep(20)
+    return get(autoscale, name)
+
+if __name__ == "__main__":
+    try:
+        import readline
+    except ImportError:
+        pass
+    import boto
+    import sys
+    import time
+    from optparse import OptionParser
+    from boto.mashups.iobject import IObject
+    from boto.ec2.autoscale import AutoScalingGroup
+    from boto.ec2.autoscale import LaunchConfiguration
+    parser = OptionParser(version=VERSION, usage=usage)
+    """ Create launch config options """
+    parser.add_option("-i", "--image-id",
+            help="Image (AMI) ID", action="store",
+            type="string", default=None, dest="image_id")
+    parser.add_option("-t", "--instance-type",
+            help="EC2 Instance Type (e.g., m1.large, c1.xlarge), default is m1.large",
+            action="store", type="string", default="m1.large", dest="instance_type")
+    parser.add_option("-k", "--key-name",
+            help="EC2 Key Name",
+            action="store", type="string", dest="key_name")
+    parser.add_option("-s", "--security-group",
+            help="EC2 Security Group",
+            action="append", default=[], dest="security_groups")
+    parser.add_option("-m", "--monitoring",
+            help="Enable instance monitoring",
+            action="store_true", default=False, dest="instance_monitoring")
+
+    """ Create auto scaling group options """
+    parser.add_option("-z", "--zone", help="Add availability zone", action="append", default=[], dest="zones")
+    parser.add_option("-l", "--lc-name",
+            help="Launch configuration name",
+            action="store", default=None, type="string", dest="lc_name")
+    parser.add_option("-b", "--load-balancer",
+            help="Load balancer name",
+            action="append", default=[], dest="load_balancers")
+    parser.add_option("-H", "--health-check-type",
+            help="Health check type (EC2 or ELB)",
+            action="store", default="EC2", type="string", dest="hc_type")
+    parser.add_option("-p", "--health-check-period",
+            help="Health check period in seconds (default 300s)",
+            action="store", default=300, type="int", dest="hc_period")
+    parser.add_option("-X", "--max-size",
+            help="Max size of ASG (default 10)",
+            action="store", default=10, type="int", dest="max_size")
+    parser.add_option("-x", "--min-size",
+            help="Min size of ASG (default 2)",
+            action="store", default=2, type="int", dest="min_size")
+    parser.add_option("-c", "--cooldown",
+            help="Cooldown time after a scaling activity in seconds (default 300s)",
+            action="store", default=300, type="int", dest="cooldown")
+    parser.add_option("-C", "--desired-capacity",
+            help="Desired capacity of the ASG",
+            action="store", default=None, type="int", dest="capacity")
+    parser.add_option("-f", "--force",
+            help="Force delete ASG",
+            action="store_true", default=False, dest="force")
+    parser.add_option("-y", "--migrate-instances",
+            help="Automatically migrate instances to new image when running update-image",
+            action="store_true", default=False, dest="migrate_instances")
+
+    (options, args) = parser.parse_args()
+
+    if len(args) < 1:
+        parser.print_help()
+        sys.exit(1)
+
+    autoscale = boto.connect_autoscale()
+
+    print "%s" % (autoscale.region.endpoint)
+
+    command = args[0].lower()
+    if command in ("ls", "list"):
+        list(autoscale)
+    elif command in ("ls-lc", "list-lc"):
+        list_lc(autoscale)
+    elif command == "get":
+        get(autoscale, args[1])
+    elif command == "create":
+        create(autoscale, args[1], options.zones, options.lc_name,
+                options.load_balancers, options.hc_type,
+                options.hc_period, options.min_size, options.max_size,
+                options.cooldown, options.capacity)
+    elif command == "create-lc":
+        create_lc(autoscale, args[1], options.image_id, options.instance_type,
+                options.key_name, options.security_groups,
+                options.instance_monitoring)
+    elif command == "update":
+        update(autoscale, args[1], args[2], args[3])
+    elif command == "delete":
+        delete(autoscale, args[1], options.force)
+    elif command == "delete-lc":
+        delete_lc(autoscale, args[1])
+    elif command == "update-image":
+        update_image(autoscale, args[1], args[2],
+                options.image_id, options.migrate_instances)
+    elif command == "migrate-instances":
+        migrate_instances(autoscale, args[1])
diff --git a/bin/cfadmin b/bin/cfadmin
index 97726c1..7073452 100755
--- a/bin/cfadmin
+++ b/bin/cfadmin
@@ -5,80 +5,84 @@
 # console utility to perform the most frequent tasks with CloudFront
 #
 def _print_distributions(dists):
-	"""Internal function to print out all the distributions provided"""
-	print "%-12s %-50s %s" % ("Status", "Domain Name", "Origin")
-	print "-"*80
-	for d in dists:
-		print "%-12s %-50s %-30s" % (d.status, d.domain_name, d.origin)
-		for cname in d.cnames:
-			print " "*12, "CNAME => %s" % cname
-	print ""
+    """Internal function to print out all the distributions provided"""
+    print "%-12s %-50s %s" % ("Status", "Domain Name", "Origin")
+    print "-"*80
+    for d in dists:
+        print "%-12s %-50s %-30s" % (d.status, d.domain_name, d.origin)
+        for cname in d.cnames:
+            print " "*12, "CNAME => %s" % cname
+    print ""
 
 def help(cf, fnc=None):
-	"""Print help message, optionally about a specific function"""
-	import inspect
-	self = sys.modules['__main__']
-	if fnc:
-		try:
-			cmd = getattr(self, fnc)
-		except:
-			cmd = None
-		if not inspect.isfunction(cmd):
-			print "No function named: %s found" % fnc
-			sys.exit(2)
-		(args, varargs, varkw, defaults) = inspect.getargspec(cmd)
-		print cmd.__doc__
-		print "Usage: %s %s" % (fnc, " ".join([ "[%s]" % a for a in args[1:]]))
-	else:
-		print "Usage: cfadmin [command]"
-		for cname in dir(self):
-			if not cname.startswith("_"):
-				cmd = getattr(self, cname)
-				if inspect.isfunction(cmd):
-					doc = cmd.__doc__
-					print "\t%s - %s" % (cname, doc)
-	sys.exit(1)
+    """Print help message, optionally about a specific function"""
+    import inspect
+    self = sys.modules['__main__']
+    if fnc:
+        try:
+            cmd = getattr(self, fnc)
+        except:
+            cmd = None
+        if not inspect.isfunction(cmd):
+            print "No function named: %s found" % fnc
+            sys.exit(2)
+        (args, varargs, varkw, defaults) = inspect.getargspec(cmd)
+        print cmd.__doc__
+        print "Usage: %s %s" % (fnc, " ".join([ "[%s]" % a for a in args[1:]]))
+    else:
+        print "Usage: cfadmin [command]"
+        for cname in dir(self):
+            if not cname.startswith("_"):
+                cmd = getattr(self, cname)
+                if inspect.isfunction(cmd):
+                    doc = cmd.__doc__
+                    print "\t%s - %s" % (cname, doc)
+    sys.exit(1)
 
 def ls(cf):
-	"""List all distributions and streaming distributions"""
-	print "Standard Distributions"
-	_print_distributions(cf.get_all_distributions())
-	print "Streaming Distributions"
-	_print_distributions(cf.get_all_streaming_distributions())
+    """List all distributions and streaming distributions"""
+    print "Standard Distributions"
+    _print_distributions(cf.get_all_distributions())
+    print "Streaming Distributions"
+    _print_distributions(cf.get_all_streaming_distributions())
 
 def invalidate(cf, origin_or_id, *paths):
-	"""Create a cloudfront invalidation request"""
-	if not paths:
-		print "Usage: cfadmin invalidate distribution_origin_or_id [path] [path2]..."
-		sys.exit(1)
-	dist = None
-	for d in cf.get_all_distributions():
-		if d.id == origin_or_id or d.origin.dns_name == origin_or_id:
-			dist = d
-			break
-	if not dist:
-		print "Distribution not found: %s" % origin_or_id
-		sys.exit(1)
-	cf.create_invalidation_request(dist.id, paths)
+    """Create a cloudfront invalidation request"""
+    # Allow paths to be passed using stdin
+    if not paths:
+        paths = []
+        for path in sys.stdin.readlines():
+            path = path.strip()
+            if path:
+                paths.append(path)
+    dist = None
+    for d in cf.get_all_distributions():
+        if d.id == origin_or_id or d.origin.dns_name == origin_or_id:
+            dist = d
+            break
+    if not dist:
+        print "Distribution not found: %s" % origin_or_id
+        sys.exit(1)
+    cf.create_invalidation_request(dist.id, paths)
 
 if __name__ == "__main__":
-	import boto
-	import sys
-	cf = boto.connect_cloudfront()
-	self = sys.modules['__main__']
-	if len(sys.argv) >= 2:
-		try:
-			cmd = getattr(self, sys.argv[1])
-		except:
-			cmd = None
-		args = sys.argv[2:]
-	else:
-		cmd = help
-		args = []
-	if not cmd:
-		cmd = help
-	try:
-		cmd(cf, *args)
-	except TypeError, e:
-		print e
-		help(cf, cmd.__name__)
+    import boto
+    import sys
+    cf = boto.connect_cloudfront()
+    self = sys.modules['__main__']
+    if len(sys.argv) >= 2:
+        try:
+            cmd = getattr(self, sys.argv[1])
+        except:
+            cmd = None
+        args = sys.argv[2:]
+    else:
+        cmd = help
+        args = []
+    if not cmd:
+        cmd = help
+    try:
+        cmd(cf, *args)
+    except TypeError, e:
+        print e
+        help(cf, cmd.__name__)
diff --git a/bin/cq b/bin/cq
index 258002d..dd9b914 100755
--- a/bin/cq
+++ b/bin/cq
@@ -21,23 +21,25 @@
 # IN THE SOFTWARE.
 #
 import getopt, sys
+import boto.sqs
 from boto.sqs.connection import SQSConnection
 from boto.exception import SQSError
 
 def usage():
-    print 'cq [-c] [-q queue_name] [-o output_file] [-t timeout]'
+    print 'cq [-c] [-q queue_name] [-o output_file] [-t timeout] [-r region]'
   
 def main():
     try:
-        opts, args = getopt.getopt(sys.argv[1:], 'hcq:o:t:',
+        opts, args = getopt.getopt(sys.argv[1:], 'hcq:o:t:r:',
                                    ['help', 'clear', 'queue',
-                                    'output', 'timeout'])
+                                    'output', 'timeout', 'region'])
     except:
         usage()
         sys.exit(2)
     queue_name = ''
     output_file = ''
     timeout = 30
+    region = ''
     clear = False
     for o, a in opts:
         if o in ('-h', '--help'):
@@ -51,7 +53,12 @@
             clear = True
         if o in ('-t', '--timeout'):
             timeout = int(a)
-    c = SQSConnection()
+        if o in ('-r', '--region'):
+            region = a
+    if region:
+        c = boto.sqs.connect_to_region(region)
+    else:
+        c = SQSConnection()
     if queue_name:
         try:
             rs = [c.create_queue(queue_name)]
diff --git a/bin/cwutil b/bin/cwutil
new file mode 100755
index 0000000..e22b64c
--- /dev/null
+++ b/bin/cwutil
@@ -0,0 +1,140 @@
+#!/usr/bin/env python
+# Author: Chris Moyer <cmoyer@newstex.com>
+# Description: CloudWatch Utility
+# For listing stats, creating alarms, and managing 
+# other CloudWatch aspects
+
+import boto
+cw = boto.connect_cloudwatch()
+
+from datetime import datetime, timedelta
+
+def _parse_time(time_string):
+    """Internal function to parse a time string"""
+    # Assumes an ISO-like "YYYY-MM-DD HH:MM:SS" timestamp; adjust the
+    # format string if your input differs.
+    return datetime.strptime(time_string, "%Y-%m-%d %H:%M:%S")
+
+def _parse_dict(d_string):
+    result = {}
+    if d_string:
+        for d in d_string.split(","):
+            d = d.split(":")
+            result[d[0]] = d[1]
+    return result
+
+def ls(namespace=None):
+    """
+    List metrics, optionally filtering by a specific namespace
+        namespace: Optional Namespace to filter on
+    """
+    print "%-10s %-50s %s" % ("Namespace", "Metric Name", "Dimensions")
+    print "-"*80
+    for m in cw.list_metrics():
+        if namespace is None or namespace.upper() in m.namespace:
+            print "%-10s %-50s %s" % (m.namespace, m.name, m.dimensions)
+
+def stats(namespace, metric_name, dimensions=None, statistics="Average", start_time=None, end_time=None, period=60, unit=None):
+    """
+    Lists the statistics for a specific metric
+        namespace: The namespace to use, usually "AWS/EC2", "AWS/SQS", etc.
+        metric_name: The name of the metric to track, pulled from `ls`
+        dimensions: The dimensions to use, formatted as Name:Value (such as QueueName:myQueue)
+        statistics: The statistics to measure, defaults to "Average"
+             'Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount'
+        start_time: Start time, default to now - 1 day
+        end_time: End time, default to now
+        period: Period/interval for counts, in seconds (default 60)
+        unit: Unit to track, default depends on what metric is being tracked
+    """
+
+    # Parse the dimensions
+    dimensions = _parse_dict(dimensions)
+
+    # Parse the times
+    if end_time:
+        end_time = _parse_time(end_time)
+    else:
+        end_time = datetime.utcnow()
+    if start_time:
+        start_time = _parse_time(start_time)
+    else:
+        start_time = datetime.utcnow() - timedelta(days=1)
+            
+    print "%-30s %s" % ('Timestamp', statistics)
+    print "-"*50
+    data = {}
+    for m in cw.get_metric_statistics(int(period), start_time, end_time, metric_name, namespace, statistics, dimensions, unit):
+        data[m['Timestamp']] = m[statistics]
+    keys = data.keys()
+    keys.sort()
+    for k in keys:
+        print "%-30s %s" % (k, data[k])
+
+def put(namespace, metric_name, dimensions=None, value=None, unit=None, statistics=None, timestamp=None):
+    """
+    Publish custom metrics
+        namespace: The namespace to use; values starting with "AWS/" are reserved
+        metric_name: The name of the metric to update
+        dimensions: The dimensions to use, formatted as Name:Value (such as QueueName:myQueue)
+        value: The value to store, mutually exclusive with `statistics`
+        statistics: The statistics to store, mutually exclusive with `value`
+            (must specify all of "Minimum", "Maximum", "Sum", "SampleCount")
+        timestamp: The timestamp of this measurement, default is current server time
+        unit: Unit to track, default depends on what metric is being tracked
+    """
+    
+    def simplify(lst):
+        return lst[0] if len(lst) == 1 else lst
+
+    print cw.put_metric_data(namespace, simplify(metric_name.split(';')),
+        dimensions = simplify(map(_parse_dict, dimensions.split(';'))) if dimensions else None,
+        value = simplify(value.split(';')) if value else None,
+        statistics = simplify(map(_parse_dict, statistics.split(';'))) if statistics else None,
+        timestamp = simplify(timestamp.split(';')) if timestamp else None,
+        unit = simplify(unit.split(';')) if unit else None)
+
+def help(fnc=None):
+    """
+    Print help message, optionally about a specific function
+    """
+    import inspect
+    self = sys.modules['__main__']
+    if fnc:
+        try:
+            cmd = getattr(self, fnc)
+        except:
+            cmd = None
+        if not inspect.isfunction(cmd):
+            print "No function named: %s found" % fnc
+            sys.exit(2)
+        (args, varargs, varkw, defaults) = inspect.getargspec(cmd)
+        print cmd.__doc__
+        print "Usage: %s %s" % (fnc, " ".join([ "[%s]" % a for a in args]))
+    else:
+        print "Usage: cwutil [command]"
+        for cname in dir(self):
+            if not cname.startswith("_") and not cname == "cmd":
+                cmd = getattr(self, cname)
+                if inspect.isfunction(cmd):
+                    doc = cmd.__doc__
+                    print "\t%s - %s" % (cname, doc)
+    sys.exit(1)
+
+
+if __name__ == "__main__":
+    import sys
+    self = sys.modules['__main__']
+    if len(sys.argv) >= 2:
+        try:
+            cmd = getattr(self, sys.argv[1])
+        except:
+            cmd = None
+        args = sys.argv[2:]
+    else:
+        cmd = help
+        args = []
+    if not cmd:
+        cmd = help
+    try:
+        cmd(*args)
+    except TypeError, e:
+        print e
+        help(cmd.__name__)
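The `Name:Value` dimension syntax documented in `stats` and `put` above is handled by `_parse_dict`; here is a standalone re-implementation for illustration (not the script itself — it additionally tolerates colons inside values by splitting only on the first one):

```python
def parse_dimensions(d_string):
    """Parse cwutil-style dimension strings, e.g.
    'QueueName:myQueue,Env:prod' -> {'QueueName': 'myQueue', 'Env': 'prod'}.
    Empty or None input yields an empty dict."""
    result = {}
    if d_string:
        for pair in d_string.split(','):
            name, value = pair.split(':', 1)
            result[name] = value
    return result
```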
diff --git a/bin/launch_instance b/bin/launch_instance
index cb857d9..53032ad 100755
--- a/bin/launch_instance
+++ b/bin/launch_instance
@@ -31,7 +31,7 @@
 f.close()
 """
 import boto.pyami.config
-from boto.utils import fetch_file
+import boto.utils
 import re, os
 import ConfigParser
 
@@ -55,7 +55,7 @@
                 file_url = os.path.join(os.getcwd(), file_url)
             file_url = "file://%s" % file_url
         (base_url, file_name) = file_url.rsplit("/", 1)
-        base_config = fetch_file(file_url)
+        base_config = boto.utils.fetch_file(file_url)
         base_config.seek(0)
         for line in base_config.readlines():
             match = re.match("^#import[\s\t]*([^\s^\t]*)[\s\t]*$", line)
@@ -71,7 +71,7 @@
             self.set('Credentials', 'aws_access_key_id', ec2.aws_access_key_id)
             self.set('Credentials', 'aws_secret_access_key', ec2.aws_secret_access_key)
 
-    
+
     def __str__(self):
         """Get config as string"""
         from StringIO import StringIO
@@ -79,6 +79,31 @@
         self.write(s)
         return s.getvalue()
 
+SCRIPTS = []
+
+def scripts_callback(option, opt, value, parser):
+    arg = value.split(',')
+    if len(arg) == 1:
+        SCRIPTS.append(arg[0])
+    else:
+        SCRIPTS.extend(arg)
+    setattr(parser.values, option.dest, SCRIPTS)
+
+def add_script(scr_url):
+    """Read a script and any scripts that are added using #import"""
+    base_url = '/'.join(scr_url.split('/')[:-1]) + '/'
+    script_raw = boto.utils.fetch_file(scr_url)
+    script_content = ''
+    for line in script_raw.readlines():
+        match = re.match("^#import[\s\t]*([^\s^\t]*)[\s\t]*$", line)
+        # If there is an import directive
+        if match:
+            # Read the imported script and splice it in at this spot
+            script_content += add_script("%s/%s" % (base_url, match.group(1)))
+        else:
+            # Otherwise, add the line and move on
+            script_content += line
+    return script_content
 
 if __name__ == "__main__":
     try:
@@ -107,6 +132,7 @@
     parser.add_option("-w", "--wait", help="Wait until instance is running", default=False, action="store_true", dest="wait")
     parser.add_option("-d", "--dns", help="Returns public and private DNS (implicates --wait)", default=False, action="store_true", dest="dns")
     parser.add_option("-T", "--tag", help="Set tag", default=None, action="append", dest="tags", metavar="key:value")
+    parser.add_option("-s", "--scripts", help="Pass in a script or a folder containing scripts to be run when the instance starts up, assumes cloud-init. Specify scripts in a list specified by commas. If multiple scripts are specified, they are run lexically (A good way to ensure they run in the order is to prefix filenames with numbers)", type='string', action="callback", callback=scripts_callback)
 
     (options, args) = parser.parse_args()
 
@@ -158,8 +184,32 @@
     # If it's a cloud init AMI,
     # then we need to wrap the config in our
     # little wrapper shell script
+
     if options.cloud_init:
         user_data = CLOUD_INIT_SCRIPT % user_data
+        scriptuples = []
+        if options.scripts:
+            scripts = options.scripts
+            scriptuples.append(('user_data', user_data))
+            for scr in scripts:
+                scr_url = scr
+                if not re.match("^([a-zA-Z0-9]*:\/\/)(.*)", scr_url):
+                    if not scr_url.startswith("/"):
+                        scr_url = os.path.join(os.getcwd(), scr_url)
+                    try:
+                        newfiles = os.listdir(scr_url)
+                        for f in newfiles:
+                            # Put the folder's scripts into the array so they run in the correct order
+                            scripts.insert(scripts.index(scr) + 1, scr.split("/")[-1] + "/" + f)
+                    except OSError:
+                        scr_url = "file://%s" % scr_url
+                try:
+                    scriptuples.append((scr, add_script(scr_url)))
+                except Exception, e:
+                    print "Warning: skipping script %s: %s" % (scr, e)
+
+            user_data = boto.utils.write_mime_multipart(scriptuples, compress=True)
+
     shutdown_proc = "terminate"
     if options.save_ebs:
         shutdown_proc = "save"
@@ -194,4 +244,3 @@
     if options.dns:
         print "Public DNS name: %s" % instance.public_dns_name
         print "Private DNS name: %s" % instance.private_dns_name
-
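The recursive `#import` expansion in `add_script` above can be sketched against the local filesystem. This is a Python 3 illustration (the real script fetches each file via `boto.utils.fetch_file` and runs under Python 2); `expand_script` is my name for it:

```python
import os
import re

# Same directive pattern launch_instance uses for config and script imports.
IMPORT_RE = re.compile(r"^#import[\s\t]*([^\s^\t]*)[\s\t]*$")

def expand_script(path):
    """Inline any '#import <name>' line with the named sibling file,
    recursively, mirroring launch_instance's add_script()."""
    base = os.path.dirname(path)
    out = []
    with open(path) as fh:
        for line in fh:
            m = IMPORT_RE.match(line)
            if m:
                out.append(expand_script(os.path.join(base, m.group(1))))
            else:
                out.append(line)
    return "".join(out)
```

A file containing `#import child.sh` thus expands to the full text of `child.sh` spliced in at that line, with nesting handled by the recursion.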
diff --git a/bin/lss3 b/bin/lss3
index 1fba89d..377a5a5 100755
--- a/bin/lss3
+++ b/bin/lss3
@@ -1,5 +1,6 @@
 #!/usr/bin/env python
 import boto
+from boto.s3.connection import OrdinaryCallingFormat
 
 def sizeof_fmt(num):
     for x in ['b ','KB','MB','GB','TB', 'XB']:
@@ -8,21 +9,43 @@
         num /= 1024.0
     return "%3.1f %s" % (num, x)
 
-def list_bucket(b):
+def list_bucket(b, prefix=None):
     """List everything in a bucket"""
+    from boto.s3.prefix import Prefix
+    from boto.s3.key import Key
     total = 0
-    for k in b:
+    query = b
+    if prefix:
+        if not prefix.endswith("/"):
+            prefix = prefix + "/"
+        query = b.list(prefix=prefix, delimiter="/")
+        print "%s" % prefix
+    num = 0
+    for k in query:
+        num += 1
         mode = "-rwx---"
-        for g in k.get_acl().acl.grants:
-            if g.id == None:
-                if g.permission == "READ":
-                    mode = "-rwxr--"
-                elif g.permission == "FULL_CONTROL":
-                    mode = "-rwxrwx"
-        print "%s\t%010s\t%s" % (mode, sizeof_fmt(k.size), k.name)
-        total += k.size
-    print "="*60
-    print "TOTAL: \t%010s" % sizeof_fmt(total)
+        if isinstance(k, Prefix):
+            mode = "drwxr--"
+            size = 0
+        else:
+            size = k.size
+            for g in k.get_acl().acl.grants:
+                if g.id is None:
+                    if g.permission == "READ":
+                        mode = "-rwxr--"
+                    elif g.permission == "FULL_CONTROL":
+                        mode = "-rwxrwx"
+        if isinstance(k, Key):
+            print "%s\t%s\t%010s\t%s" % (mode, k.last_modified,
+                                         sizeof_fmt(size), k.name)
+        else:
+            # A Prefix has no last_modified time, so print blank
+            # padding in that column instead
+            print "%s\t%s\t%010s\t%s" % (mode, ' '*24,
+                                         sizeof_fmt(size), k.name)
+        total += size
+    print "="*80
+    print "\t\tTOTAL:  \t%010s \t%i Files" % (sizeof_fmt(total), num)
 
 def list_buckets(s3):
     """List all the buckets"""
@@ -31,9 +54,24 @@
 
 if __name__ == "__main__":
     import sys
-    s3 = boto.connect_s3()
+
+    pairs = []
+    mixedCase = False
+    for name in sys.argv[1:]:
+        if "/" in name:
+            pairs.append(name.split("/", 1))
+        else:
+            pairs.append([name, None])
+        if pairs[-1][0].lower() != pairs[-1][0]:
+            mixedCase = True
+
+    if mixedCase:
+        s3 = boto.connect_s3(calling_format=OrdinaryCallingFormat())
+    else:
+        s3 = boto.connect_s3()
+
     if len(sys.argv) < 2:
         list_buckets(s3)
     else:
-        for name in sys.argv[1:]:
-            list_bucket(s3.get_bucket(name))
+        for name, prefix in pairs:
+            list_bucket(s3.get_bucket(name), prefix)
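The new argv handling in lss3 (split each `bucket/prefix` argument, and fall back to `OrdinaryCallingFormat` when any bucket name is mixed-case, since subdomain-style bucket URLs must be lowercase) can be sketched as a pure function. `parse_targets` is an illustrative name, not part of the script:

```python
def parse_targets(argv):
    """Split 'bucket/prefix' arguments into [bucket, prefix] pairs and
    flag mixed-case bucket names, which force OrdinaryCallingFormat
    (path-style URLs) instead of subdomain-style addressing."""
    pairs = []
    mixed_case = False
    for name in argv:
        if "/" in name:
            pairs.append(name.split("/", 1))
        else:
            pairs.append([name, None])
        if pairs[-1][0].lower() != pairs[-1][0]:
            mixed_case = True
    return pairs, mixed_case
```

Only the first `/` splits, so `bucket/a/b` yields the bucket plus the full `a/b` prefix.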
diff --git a/bin/pyami_sendmail b/bin/pyami_sendmail
index 78e3003..5502145 100755
--- a/bin/pyami_sendmail
+++ b/bin/pyami_sendmail
@@ -37,6 +37,8 @@
     parser.add_option("-t", "--to", help="Optional to address to send to (default from your boto.cfg)", action="store", default=None, dest="to")
     parser.add_option("-s", "--subject", help="Optional Subject to send this report as", action="store", default="Report", dest="subject")
     parser.add_option("-f", "--file", help="Optionally, read from a file instead of STDIN", action="store", default=None, dest="file")
+    parser.add_option("--html", help="HTML Format the email", action="store_true", default=False, dest="html")
+    parser.add_option("--no-instance-id", help="If set, don't append the instance id", action="store_false", default=True, dest="append_instance_id")
 
     (options, args) = parser.parse_args()
     if options.file:
@@ -44,4 +46,7 @@
     else:
         body = sys.stdin.read()
 
-    notify(options.subject, body=body, to_string=options.to)
+    if options.html:
+        notify(options.subject, html_body=body, to_string=options.to, append_instance_id=options.append_instance_id)
+    else:
+        notify(options.subject, body=body, to_string=options.to, append_instance_id=options.append_instance_id)
diff --git a/bin/route53 b/bin/route53
index 55f86a5..3f6327d 100755
--- a/bin/route53
+++ b/bin/route53
@@ -35,25 +35,61 @@
 def get(conn, hosted_zone_id, type=None, name=None, maxitems=None):
     """Get all the records for a single zone"""
     response = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=maxitems)
-    print '%-20s %-20s %-20s %s' % ("Name", "Type", "TTL", "Value(s)")
+    # If a maximum number of items was set, we limit to that number
+    # by turning the response into an actual list (copying it)
+    # instead of allowing it to page
+    if maxitems:
+        response = response[:]
+    print '%-40s %-5s %-20s %s' % ("Name", "Type", "TTL", "Value(s)")
     for record in response:
-        print '%-20s %-20s %-20s %s' % (record.name, record.type, record.ttl, ",".join(record.resource_records))
+        print '%-40s %-5s %-20s %s' % (record.name, record.type, record.ttl, record.to_print())
 
 
-def add_record(conn, hosted_zone_id, name, type, value, ttl=600, comment=""):
+def add_record(conn, hosted_zone_id, name, type, values, ttl=600, comment=""):
     """Add a new record to a zone"""
     from boto.route53.record import ResourceRecordSets
     changes = ResourceRecordSets(conn, hosted_zone_id, comment)
     change = changes.add_change("CREATE", name, type, ttl)
-    change.add_value(value)
+    for value in values.split(','):
+        change.add_value(value)
     print changes.commit()
 
-def del_record(conn, hosted_zone_id, name, type, value, ttl=600, comment=""):
+def del_record(conn, hosted_zone_id, name, type, values, ttl=600, comment=""):
     """Delete a record from a zone"""
     from boto.route53.record import ResourceRecordSets
     changes = ResourceRecordSets(conn, hosted_zone_id, comment)
     change = changes.add_change("DELETE", name, type, ttl)
-    change.add_value(value)
+    for value in values.split(','):
+        change.add_value(value)
+    print changes.commit()
+
+def add_alias(conn, hosted_zone_id, name, type, alias_hosted_zone_id, alias_dns_name, comment=""):
+    """Add a new alias to a zone"""
+    from boto.route53.record import ResourceRecordSets
+    changes = ResourceRecordSets(conn, hosted_zone_id, comment)
+    change = changes.add_change("CREATE", name, type)
+    change.set_alias(alias_hosted_zone_id, alias_dns_name)
+    print changes.commit()
+
+def del_alias(conn, hosted_zone_id, name, type, alias_hosted_zone_id, alias_dns_name, comment=""):
+    """Delete an alias from a zone"""
+    from boto.route53.record import ResourceRecordSets
+    changes = ResourceRecordSets(conn, hosted_zone_id, comment)
+    change = changes.add_change("DELETE", name, type)
+    change.set_alias(alias_hosted_zone_id, alias_dns_name)
+    print changes.commit()
+
+def change_record(conn, hosted_zone_id, name, type, values, ttl=600, comment=""):
+    """Delete and then add a record to a zone"""
+    from boto.route53.record import ResourceRecordSets
+    changes = ResourceRecordSets(conn, hosted_zone_id, comment)
+    response = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=1)[0]
+    change1 = changes.add_change("DELETE", name, type, response.ttl)
+    for old_value in response.resource_records:
+        change1.add_value(old_value)
+    change2 = changes.add_change("CREATE", name, type, ttl)
+    for new_value in values.split(','):
+        change2.add_value(new_value)
     print changes.commit()
 
 def help(conn, fnc=None):
@@ -78,7 +114,7 @@
                 cmd = getattr(self, cname)
                 if inspect.isfunction(cmd):
                     doc = cmd.__doc__
-                    print "\t%s - %s" % (cname, doc)
+                    print "\t%-20s  %s" % (cname, doc)
     sys.exit(1)
 
 
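`change_record` above works by batching a DELETE of the existing record set (with its current TTL and values) together with a CREATE carrying the new ones. The shape of that batch can be sketched with plain data structures; no Route 53 calls are made and `build_change_batch` is an illustrative name:

```python
def build_change_batch(name, rtype, old_ttl, old_values, new_ttl, new_values_csv):
    """Mirror route53's change_record: one DELETE for the current
    record set, one CREATE with the comma-separated new values."""
    delete = {"action": "DELETE", "name": name, "type": rtype,
              "ttl": old_ttl, "values": list(old_values)}
    create = {"action": "CREATE", "name": name, "type": rtype,
              "ttl": new_ttl, "values": new_values_csv.split(',')}
    return [delete, create]
```

The DELETE must echo the record's current TTL and values exactly, which is why `change_record` first fetches the existing rrset before building the batch.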
diff --git a/bin/s3multiput b/bin/s3multiput
new file mode 100755
index 0000000..df6e9fe
--- /dev/null
+++ b/bin/s3multiput
@@ -0,0 +1,317 @@
+#!/usr/bin/env python
+# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+# multipart portions copyright Fabian Topfstedt
+# https://gist.github.com/924094
+
+
+import math
+import mimetypes
+from multiprocessing import Pool
+import getopt, sys, os
+
+import boto
+from boto.exception import S3ResponseError
+
+from boto.s3.connection import S3Connection
+from filechunkio import FileChunkIO
+
+usage_string = """
+SYNOPSIS
+    s3multiput [-a/--access_key <access_key>] [-s/--secret_key <secret_key>]
+          -b/--bucket <bucket_name> [-c/--callback <num_cb>]
+          [-d/--debug <debug_level>] [-i/--ignore <ignore_dirs>]
+          [-n/--no_op] [-p/--prefix <prefix>] [-q/--quiet]
+          [-g/--grant grant] [-w/--no_overwrite] [-r/--reduced] path
+
+    Where
+        access_key - Your AWS Access Key ID.  If not supplied, boto will
+                     use the value of the environment variable
+                     AWS_ACCESS_KEY_ID
+        secret_key - Your AWS Secret Access Key.  If not supplied, boto
+                     will use the value of the environment variable
+                     AWS_SECRET_ACCESS_KEY
+        bucket_name - The name of the S3 bucket the file(s) should be
+                      copied to.
+        path - A path to a directory or file that represents the items
+               to be uploaded.  If the path points to an individual file,
+               that file will be uploaded to the specified bucket.  If the
+               path points to a directory, s3multiput will recursively traverse
+               the directory and upload all files to the specified bucket.
+        debug_level - 0 means no debug output (default), 1 means normal
+                      debug output from boto, and 2 means boto debug output
+                      plus request/response output from httplib
+        ignore_dirs - a comma-separated list of directory names that will
+                      be ignored and not uploaded to S3.
+        num_cb - The number of progress callbacks to display.  The default
+                 is zero which means no callbacks.  If you supplied a value
+                 of "-c 10" for example, the progress callback would be
+                 called 10 times for each file transferred.
+        prefix - A file path prefix that will be stripped from the full
+                 path of the file when determining the key name in S3.
+                 For example, if the full path of a file is:
+                     /home/foo/bar/fie.baz
+                 and the prefix is specified as "-p /home/foo/" the
+                 resulting key name in S3 will be:
+                     /bar/fie.baz
+                 The prefix must end in a trailing separator and if it
+                 does not then one will be added.
+        reduced - Use Reduced Redundancy storage
+        grant - A canned ACL policy that will be granted on each file
+                transferred to S3.  The value of provided must be one
+                of the "canned" ACL policies supported by S3:
+                private|public-read|public-read-write|authenticated-read
+        no_overwrite - No files will be overwritten on S3; if the file/key
+                       already exists on S3 it will be kept.  This is useful
+                       for resuming interrupted transfers.  Note this is not
+                       a sync: even if the file has been updated locally, if
+                       the key exists on S3 the file on S3 will not be
+                       updated.
+
+     If the -n option is provided, no files will be transferred to S3 but
+     informational messages will be printed about what would happen.
+"""
+def usage():
+    print usage_string
+    sys.exit()
+
+def submit_cb(bytes_so_far, total_bytes):
+    print '%d bytes transferred / %d bytes total' % (bytes_so_far, total_bytes)
+
+def get_key_name(fullpath, prefix):
+    key_name = fullpath[len(prefix):]
+    l = key_name.split(os.sep)
+    return '/'.join(l)
+
+def _upload_part(bucketname, aws_key, aws_secret, multipart_id, part_num,
+    source_path, offset, bytes, debug, cb, num_cb, amount_of_retries=10):
+    """
+    Uploads a part with retries.
+    """
+    if debug == 1:
+        print "_upload_part(%s, %s, %s)" % (source_path, offset, bytes)
+    def _upload(retries_left=amount_of_retries):
+        try:
+            if debug == 1:
+                print 'Start uploading part #%d ...' % part_num
+            conn = S3Connection(aws_key, aws_secret)
+            conn.debug = debug
+            bucket = conn.get_bucket(bucketname)
+            for mp in bucket.get_all_multipart_uploads():
+                if mp.id == multipart_id:
+                    with FileChunkIO(source_path, 'r', offset=offset,
+                        bytes=bytes) as fp:
+                        mp.upload_part_from_file(fp=fp, part_num=part_num, cb=cb, num_cb=num_cb)
+                    break
+        except Exception, exc:
+            if retries_left:
+                _upload(retries_left=retries_left - 1)
+            else:
+                print 'Failed uploading part #%d' % part_num
+                raise exc
+        else:
+            if debug == 1:
+                print '... Uploaded part #%d' % part_num
+
+    _upload()
+
+def upload(bucketname, aws_key, aws_secret, source_path, keyname,
+    reduced, debug, cb, num_cb,
+    acl='private', headers={}, guess_mimetype=True, parallel_processes=4):
+    """
+    Parallel multipart upload.
+    """
+    conn = S3Connection(aws_key, aws_secret)
+    conn.debug = debug
+    bucket = conn.get_bucket(bucketname)
+
+    if guess_mimetype:
+        mtype = mimetypes.guess_type(keyname)[0] or 'application/octet-stream'
+        headers.update({'Content-Type': mtype})
+
+    mp = bucket.initiate_multipart_upload(keyname, headers=headers, reduced_redundancy=reduced)
+
+    source_size = os.stat(source_path).st_size
+    bytes_per_chunk = max(int(math.sqrt(5242880) * math.sqrt(source_size)),
+        5242880)
+    chunk_amount = int(math.ceil(source_size / float(bytes_per_chunk)))
+
+    pool = Pool(processes=parallel_processes)
+    for i in range(chunk_amount):
+        offset = i * bytes_per_chunk
+        remaining_bytes = source_size - offset
+        bytes = min([bytes_per_chunk, remaining_bytes])
+        part_num = i + 1
+        pool.apply_async(_upload_part, [bucketname, aws_key, aws_secret, mp.id,
+            part_num, source_path, offset, bytes, debug, cb, num_cb])
+    pool.close()
+    pool.join()
+
+    if len(mp.get_all_parts()) == chunk_amount:
+        mp.complete_upload()
+        key = bucket.get_key(keyname)
+        key.set_acl(acl)
+    else:
+        mp.cancel_upload()
+
+
+def main():
+
+    # default values
+    aws_access_key_id     = None
+    aws_secret_access_key = None
+    bucket_name = ''
+    ignore_dirs = []
+    total  = 0
+    debug  = 0
+    cb     = None
+    num_cb = 0
+    quiet  = False
+    no_op  = False
+    prefix = '/'
+    grant  = None
+    no_overwrite = False
+    reduced = False
+
+    try:
+        opts, args = getopt.getopt(sys.argv[1:], 'a:b:c::d:g:hi:np:qs:wr',
+                                   ['access_key', 'bucket', 'callback', 'debug', 'help', 'grant',
+                                    'ignore', 'no_op', 'prefix', 'quiet', 'secret_key', 'no_overwrite',
+                                    'reduced'])
+    except getopt.GetoptError:
+        usage()
+
+    # parse opts
+    for o, a in opts:
+        if o in ('-h', '--help'):
+            usage()
+        if o in ('-a', '--access_key'):
+            aws_access_key_id = a
+        if o in ('-b', '--bucket'):
+            bucket_name = a
+        if o in ('-c', '--callback'):
+            num_cb = int(a)
+            cb = submit_cb
+        if o in ('-d', '--debug'):
+            debug = int(a)
+        if o in ('-g', '--grant'):
+            grant = a
+        if o in ('-i', '--ignore'):
+            ignore_dirs = a.split(',')
+        if o in ('-n', '--no_op'):
+            no_op = True
+        if o in ('-w', '--no_overwrite'):
+            no_overwrite = True
+        if o in ('-p', '--prefix'):
+            prefix = a
+            if prefix[-1] != os.sep:
+                prefix = prefix + os.sep
+        if o in ('-q', '--quiet'):
+            quiet = True
+        if o in ('-s', '--secret_key'):
+            aws_secret_access_key = a
+        if o in ('-r', '--reduced'):
+            reduced = True
+
+    if len(args) != 1:
+        usage()
+
+
+    path = os.path.expanduser(args[0])
+    path = os.path.expandvars(path)
+    path = os.path.abspath(path)
+
+    if not bucket_name:
+        print "bucket name is required!"
+        usage()
+
+    c = boto.connect_s3(aws_access_key_id=aws_access_key_id,
+                        aws_secret_access_key=aws_secret_access_key)
+    c.debug = debug
+    b = c.get_bucket(bucket_name)
+
+    # upload a directory of files recursively
+    if os.path.isdir(path):
+        if no_overwrite:
+            if not quiet:
+                print 'Getting list of existing keys to check against'
+            keys = []
+            for key in b.list():
+                keys.append(key.name)
+        for root, dirs, files in os.walk(path):
+            for ignore in ignore_dirs:
+                if ignore in dirs:
+                    dirs.remove(ignore)
+            for file in files:
+                fullpath = os.path.join(root, file)
+                key_name = get_key_name(fullpath, prefix)
+                copy_file = True
+                if no_overwrite:
+                    if key_name in keys:
+                        copy_file = False
+                        if not quiet:
+                            print 'Skipping %s as it exists in s3' % file
+
+                if copy_file:
+                    if not quiet:
+                        print 'Copying %s to %s/%s' % (file, bucket_name, key_name)
+
+                    if not no_op:
+                        if os.stat(fullpath).st_size == 0:
+                            # 0-byte files don't work and also don't need multipart upload
+                            k = b.new_key(key_name)
+                            k.set_contents_from_filename(fullpath, cb=cb, num_cb=num_cb,
+                                                         policy=grant, reduced_redundancy=reduced)
+                        else:
+                            upload(bucket_name, aws_access_key_id,
+                                   aws_secret_access_key, fullpath, key_name,
+                                   reduced, debug, cb, num_cb)
+                total += 1
+
+    # upload a single file
+    elif os.path.isfile(path):
+        key_name = get_key_name(os.path.abspath(path), prefix)
+        copy_file = True
+        if no_overwrite:
+            if b.get_key(key_name):
+                copy_file = False
+                if not quiet:
+                    print 'Skipping %s as it exists in s3' % path
+
+        if copy_file:
+            if not quiet:
+                print 'Copying %s to %s/%s' % (path, bucket_name, key_name)
+
+            if not no_op:
+                if os.stat(path).st_size == 0:
+                    # 0-byte files don't work and also don't need multipart upload
+                    k = b.new_key(key_name)
+                    k.set_contents_from_filename(path, cb=cb, num_cb=num_cb, policy=grant,
+                                                 reduced_redundancy=reduced)
+                else:
+                    upload(bucket_name, aws_access_key_id,
+                           aws_secret_access_key, path, key_name,
+                           reduced, debug, cb, num_cb)
+
+if __name__ == "__main__":
+    main()
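The chunk layout computed by s3multiput's `upload` (part size grows with the square root of the file size, floored at S3's 5 MB multipart minimum) can be sketched and checked on its own. `chunk_layout` is my name; the arithmetic matches the script's:

```python
import math

MIN_PART = 5242880  # S3's minimum multipart part size (5 MB)

def chunk_layout(source_size):
    """Return (offset, nbytes) pairs exactly as s3multiput's upload()
    computes them: sqrt-scaled part size, never below MIN_PART, with
    the final part taking whatever bytes remain."""
    bytes_per_chunk = max(int(math.sqrt(MIN_PART) * math.sqrt(source_size)),
                          MIN_PART)
    chunk_amount = int(math.ceil(source_size / float(bytes_per_chunk)))
    parts = []
    for i in range(chunk_amount):
        offset = i * bytes_per_chunk
        parts.append((offset, min(bytes_per_chunk, source_size - offset)))
    return parts
```

The sqrt scaling keeps the part count bounded for large files while still splitting medium-sized files into enough parts to parallelize; every part except possibly the last satisfies the 5 MB minimum.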
diff --git a/bin/s3put b/bin/s3put
index b5467d9..a748ec3 100755
--- a/bin/s3put
+++ b/bin/s3put
@@ -30,7 +30,7 @@
           -b/--bucket <bucket_name> [-c/--callback <num_cb>]
           [-d/--debug <debug_level>] [-i/--ignore <ignore_dirs>]
           [-n/--no_op] [-p/--prefix <prefix>] [-q/--quiet]
-          [-g/--grant grant] [-w/--no_overwrite] path
+          [-g/--grant grant] [-w/--no_overwrite] [-r/--reduced] path
 
     Where
         access_key - Your AWS Access Key ID.  If not supplied, boto will
@@ -74,6 +74,8 @@
                        sync, even if the file has been updated locally if 
                        the key exists on s3 the file on s3 will not be 
                        updated.
+        reduced - Use Reduced Redundancy storage
+
 
      If the -n option is provided, no files will be transferred to S3 but
      informational messages will be printed about what would happen.
@@ -92,9 +94,12 @@
 
 def main():
     try:
-        opts, args = getopt.getopt(sys.argv[1:], 'a:b:c::d:g:hi:np:qs:vw',
-                                   ['access_key', 'bucket', 'callback', 'debug', 'help', 'grant',
-                                    'ignore', 'no_op', 'prefix', 'quiet', 'secret_key', 'no_overwrite'])
+        opts, args = getopt.getopt(
+                sys.argv[1:], 'a:b:c::d:g:hi:np:qs:vwr',
+                ['access_key', 'bucket', 'callback', 'debug', 'help', 'grant',
+                 'ignore', 'no_op', 'prefix', 'quiet', 'secret_key',
+                 'no_overwrite', 'reduced']
+                )
     except:
         usage()
     ignore_dirs = []
@@ -110,6 +115,7 @@
     prefix = '/'
     grant = None
     no_overwrite = False
+    reduced = False
     for o, a in opts:
         if o in ('-h', '--help'):
             usage()
@@ -129,8 +135,10 @@
             ignore_dirs = a.split(',')
         if o in ('-n', '--no_op'):
             no_op = True
-        if o in ('w', '--no_overwrite'):
+        if o in ('-w', '--no_overwrite'):
             no_overwrite = True
+        if o in ('-r', '--reduced'):
+            reduced = True
         if o in ('-p', '--prefix'):
             prefix = a
             if prefix[-1] != os.sep:
@@ -174,8 +182,10 @@
                             print 'Copying %s to %s/%s' % (file, bucket_name, key_name)
                         if not no_op:
                             k = b.new_key(key_name)
-                            k.set_contents_from_filename(fullpath, cb=cb,
-                                                            num_cb=num_cb, policy=grant)
+                            k.set_contents_from_filename(
+                                    fullpath, cb=cb, num_cb=num_cb,
+                                    policy=grant, reduced_redundancy=reduced,
+                                    )
                     total += 1
         elif os.path.isfile(path):
             key_name = os.path.split(path)[1]
@@ -187,10 +197,12 @@
                         print 'Skipping %s as it exists in s3' % path
             if copy_file:
                 k = b.new_key(key_name)
-                k.set_contents_from_filename(path, cb=cb, num_cb=num_cb, policy=grant)
+                k.set_contents_from_filename(path, cb=cb, num_cb=num_cb,
+                                             policy=grant,
+                                             reduced_redundancy=reduced)
     else:
         print usage()
 
 if __name__ == "__main__":
     main()
-        
+
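The key-name derivation shared by s3put and s3multiput (strip the `--prefix` from the full path, then normalize `os.sep` to S3's `/`) can be sketched as a pure function, lifted from the `get_key_name` helper in the s3multiput diff:

```python
import os

def get_key_name(fullpath, prefix):
    """Strip the configured prefix and convert OS path separators to
    '/', yielding the S3 key name for a local file."""
    key_name = fullpath[len(prefix):]
    return '/'.join(key_name.split(os.sep))
```

This is why both scripts force the prefix to end in `os.sep`: a prefix without the trailing separator would leave a leading separator on every key name. (The examples below assume a POSIX `os.sep` of `/`.)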
diff --git a/boto/__init__.py b/boto/__init__.py
index d11b578..00e2fc8 100644
--- a/boto/__init__.py
+++ b/boto/__init__.py
@@ -1,5 +1,6 @@
 # Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
+# Copyright (c) 2011, Nexenta Systems Inc.
 # All rights reserved.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
@@ -21,7 +22,6 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-import boto
 from boto.pyami.config import Config, BotoConfigLocations
 from boto.storage_uri import BucketStorageUri, FileStorageUri
 import boto.plugin
@@ -30,7 +30,7 @@
 import logging.config
 from boto.exception import InvalidUriError
 
-__version__ = '2.0b4'
+__version__ = '2.1.1'
 Version = __version__ # for backware compatibility
 
 UserAgent = 'Boto/%s (%s)' % (__version__, sys.platform)
@@ -109,10 +109,10 @@
 def connect_gs(gs_access_key_id=None, gs_secret_access_key=None, **kwargs):
     """
     @type gs_access_key_id: string
-    @param gs_access_key_id: Your Google Storage Access Key ID
+    @param gs_access_key_id: Your Google Cloud Storage Access Key ID
 
     @type gs_secret_access_key: string
-    @param gs_secret_access_key: Your Google Storage Secret Access Key
+    @param gs_secret_access_key: Your Google Cloud Storage Secret Access Key
 
     @rtype: L{GSConnection<boto.gs.connection.GSConnection>}
     @return: A connection to Google's Storage service
@@ -264,10 +264,10 @@
     """
     :type aws_access_key_id: string
     :param aws_access_key_id: Your AWS Access Key ID
-   
+
     :type aws_secret_access_key: string
     :param aws_secret_access_key: Your AWS Secret Access Key
-   
+
     :rtype: :class:`boto.emr.EmrConnection`
     :return: A connection to Elastic mapreduce
     """
@@ -317,7 +317,7 @@
     from boto.route53 import Route53Connection
     return Route53Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
-def connect_euca(host, aws_access_key_id=None, aws_secret_access_key=None,
+def connect_euca(host=None, aws_access_key_id=None, aws_secret_access_key=None,
                  port=8773, path='/services/Eucalyptus', is_secure=False,
                  **kwargs):
     """
@@ -325,7 +325,7 @@
 
     :type host: string
     :param host: the host name or ip address of the Eucalyptus server
-    
+
     :type aws_access_key_id: string
     :param aws_access_key_id: Your AWS Access Key ID
 
@@ -338,12 +338,24 @@
     from boto.ec2 import EC2Connection
     from boto.ec2.regioninfo import RegionInfo
 
+    # Check for values in boto config, if not supplied as args
+    if not aws_access_key_id:
+        aws_access_key_id = config.get('Credentials',
+                                       'euca_access_key_id',
+                                       None)
+    if not aws_secret_access_key:
+        aws_secret_access_key = config.get('Credentials',
+                                           'euca_secret_access_key',
+                                           None)
+    if not host:
+        host = config.get('Boto', 'eucalyptus_host', None)
+
     reg = RegionInfo(name='eucalyptus', endpoint=host)
     return EC2Connection(aws_access_key_id, aws_secret_access_key,
                          region=reg, port=port, path=path,
                          is_secure=is_secure, **kwargs)
 
-def connect_walrus(host, aws_access_key_id=None, aws_secret_access_key=None,
+def connect_walrus(host=None, aws_access_key_id=None, aws_secret_access_key=None,
                    port=8773, path='/services/Walrus', is_secure=False,
                    **kwargs):
     """
@@ -351,7 +363,7 @@
 
     :type host: string
     :param host: the host name or ip address of the Walrus server
-    
+
     :type aws_access_key_id: string
     :param aws_access_key_id: Your AWS Access Key ID
 
@@ -364,6 +376,18 @@
     from boto.s3.connection import S3Connection
     from boto.s3.connection import OrdinaryCallingFormat
 
+    # Check for values in boto config, if not supplied as args
+    if not aws_access_key_id:
+        aws_access_key_id = config.get('Credentials',
+                                       'euca_access_key_id',
+                                       None)
+    if not aws_secret_access_key:
+        aws_secret_access_key = config.get('Credentials',
+                                           'euca_secret_access_key',
+                                           None)
+    if not host:
+        host = config.get('Boto', 'walrus_host', None)
+
     return S3Connection(aws_access_key_id, aws_secret_access_key,
                         host=host, port=port, path=path,
                         calling_format=OrdinaryCallingFormat(),
@@ -383,6 +407,20 @@
     from boto.ses import SESConnection
     return SESConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
+def connect_sts(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.sts.STSConnection`
+    :return: A connection to Amazon's STS
+    """
+    from boto.sts import STSConnection
+    return STSConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
+
 def connect_ia(ia_access_key_id=None, ia_secret_access_key=None,
                is_secure=False, **kwargs):
     """
@@ -394,9 +432,10 @@
                              section called "ia_access_key_id"
 
     :type ia_secret_access_key: string
-    :param ia_secret_access_key: Your IA Secret Access Key.  This will also look in your
-                                 boto config file for an entry in the Credentials
-                                 section called "ia_secret_access_key"
+    :param ia_secret_access_key: Your IA Secret Access Key.  This will also
+                                 look in your boto config file for an entry
+                                 in the Credentials section called
+                                 "ia_secret_access_key"
 
     :rtype: :class:`boto.s3.connection.S3Connection`
     :return: A connection to the Internet Archive
@@ -506,7 +545,10 @@
     if scheme == 'file':
         # For file URIs we have no bucket name, and use the complete path
         # (minus 'file://') as the object name.
-        return FileStorageUri(path, debug)
+        is_stream = False
+        if path == '-':
+            is_stream = True
+        return FileStorageUri(path, debug, is_stream)
     else:
         path_parts = path.split('/', 1)
         bucket_name = path_parts[0]
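The credential-fallback pattern added to `connect_euca` and `connect_walrus` above (explicit arguments win, otherwise values come from the boto config) can be sketched standalone. This is not part of the patch; the `fake_config` dict and `config_get` helper are stand-ins for boto's real `config` object.

```python
# Stand-in for boto's config object (assumption, for illustration only).
fake_config = {
    ('Credentials', 'euca_access_key_id'): 'AKID-from-config',
    ('Credentials', 'euca_secret_access_key'): 'secret-from-config',
}

def config_get(section, name, default=None):
    """Mimic config.get(section, name, default)."""
    return fake_config.get((section, name), default)

def resolve_credentials(aws_access_key_id=None, aws_secret_access_key=None):
    # Same shape as the patched connect_euca: only consult the config
    # when the caller did not supply a value.
    if not aws_access_key_id:
        aws_access_key_id = config_get('Credentials',
                                       'euca_access_key_id')
    if not aws_secret_access_key:
        aws_secret_access_key = config_get('Credentials',
                                           'euca_secret_access_key')
    return aws_access_key_id, aws_secret_access_key

# Falls back to config values when called with no arguments:
print(resolve_credentials())
# Explicit arguments take precedence over the config:
print(resolve_credentials('explicit-id', 'explicit-secret'))
```

The same lookup order applies to `host` (`eucalyptus_host` / `walrus_host` in the `Boto` section), so a boto config file alone is enough to target a private Eucalyptus or Walrus endpoint.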
diff --git a/boto/auth.py b/boto/auth.py
index 6c6c1f2..084dde9 100644
--- a/boto/auth.py
+++ b/boto/auth.py
@@ -34,8 +34,8 @@
 import boto.utils
 import hmac
 import sys
-import time
 import urllib
+from email.utils import formatdate
 
 from boto.auth_handler import AuthHandler
 from boto.exception import BotoClientError
@@ -77,7 +77,8 @@
         self._provider = provider
         self._hmac = hmac.new(self._provider.secret_key, digestmod=sha)
         if sha256:
-            self._hmac_256 = hmac.new(self._provider.secret_key, digestmod=sha256)
+            self._hmac_256 = hmac.new(self._provider.secret_key,
+                                      digestmod=sha256)
         else:
             self._hmac_256 = None
 
@@ -111,9 +112,11 @@
         method = http_request.method
         auth_path = http_request.auth_path
         if not headers.has_key('Date'):
-            headers['Date'] = time.strftime("%a, %d %b %Y %H:%M:%S GMT",
-                                            time.gmtime())
+            headers['Date'] = formatdate(usegmt=True)
 
+        if self._provider.security_token:
+            key = self._provider.security_token_header
+            headers[key] = self._provider.security_token
         c_string = boto.utils.canonical_string(method, auth_path, headers,
                                                None, self._provider)
         b64_hmac = self.sign_string(c_string)
@@ -136,8 +139,7 @@
     def add_auth(self, http_request, **kwargs):
         headers = http_request.headers
         if not headers.has_key('Date'):
-            headers['Date'] = time.strftime("%a, %d %b %Y %H:%M:%S GMT",
-                                            time.gmtime())
+            headers['Date'] = formatdate(usegmt=True)
 
         b64_hmac = self.sign_string(headers['Date'])
         auth_hdr = self._provider.auth_header
@@ -157,8 +159,7 @@
     def add_auth(self, http_request, **kwargs):
         headers = http_request.headers
         if not headers.has_key('Date'):
-            headers['Date'] = time.strftime("%a, %d %b %Y %H:%M:%S GMT",
-                                            time.gmtime())
+            headers['Date'] = formatdate(usegmt=True)
 
         b64_hmac = self.sign_string(headers['Date'])
         s = "AWS3-HTTPS AWSAccessKeyId=%s," % self._provider.access_key
@@ -179,20 +180,22 @@
         params['Timestamp'] = boto.utils.get_ts()
         qs, signature = self._calc_signature(
             http_request.params, http_request.method,
-            http_request.path, http_request.host)
+            http_request.auth_path, http_request.host)
         boto.log.debug('query_string: %s Signature: %s' % (qs, signature))
         if http_request.method == 'POST':
             headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'
             http_request.body = qs + '&Signature=' + urllib.quote(signature)
+            http_request.headers['Content-Length'] = str(len(http_request.body))
         else:
             http_request.body = ''
-            http_request.path = (http_request.path + '?' + qs + '&Signature=' + urllib.quote(signature))
-        # Now that query params are part of the path, clear the 'params' field
-        # in request.
-        http_request.params = {}
+            # if this is a retried request, the qs from the previous try will
+            # already be there, we need to get rid of that and rebuild it
+            http_request.path = http_request.path.split('?')[0]
+            http_request.path = (http_request.path + '?' + qs +
+                                 '&Signature=' + urllib.quote(signature))
 
 class QuerySignatureV0AuthHandler(QuerySignatureHelper, AuthHandler):
-    """Class SQS query signature based Auth handler."""
+    """Provides Signature V0 Signing"""
 
     SignatureVersion = 0
     capability = ['sign-v0']
@@ -206,7 +209,7 @@
         keys.sort(cmp = lambda x, y: cmp(x.lower(), y.lower()))
         pairs = []
         for key in keys:
-            val = bot.utils.get_utf8_value(params[key])
+            val = boto.utils.get_utf8_value(params[key])
             pairs.append(key + '=' + urllib.quote(val))
         qs = '&'.join(pairs)
         return (qs, base64.b64encode(hmac.digest()))
@@ -238,7 +241,7 @@
 
     SignatureVersion = 2
     capability = ['sign-v2', 'ec2', 'ec2', 'emr', 'fps', 'ecs',
-                  'sdb', 'iam', 'rds', 'sns', 'sqs']
+                  'sdb', 'iam', 'rds', 'sns', 'sqs', 'cloudformation']
 
     def _calc_signature(self, params, verb, path, server_name):
         boto.log.debug('using _calc_signature_2')
@@ -249,6 +252,8 @@
         else:
             hmac = self._hmac.copy()
             params['SignatureMethod'] = 'HmacSHA1'
+        if self._provider.security_token:
+            params['SecurityToken'] = self._provider.security_token
         keys = params.keys()
         keys.sort()
         pairs = []
@@ -303,7 +308,8 @@
         names = [handler.__name__ for handler in checked_handlers]
         raise boto.exception.NoAuthHandlerFound(
               'No handler was ready to authenticate. %d handlers were checked.'
-              ' %s ' % (len(names), str(names)))
+              ' %s '
+              'Check your credentials' % (len(names), str(names)))
 
     if len(ready_handlers) > 1:
         # NOTE: Even though it would be nice to accept more than one handler
@@ -313,7 +319,10 @@
         # on the wrong account.
         names = [handler.__class__.__name__ for handler in ready_handlers]
         raise boto.exception.TooManyAuthHandlerReadyToAuthenticate(
-               '%d AuthHandlers ready to authenticate, '
-               'only 1 expected: %s' % (len(names), str(names)))
+               '%d AuthHandlers %s ready to authenticate for requested_capability '
+               '%s, only 1 expected. This happens if you import multiple '
+               'plugin.Plugin implementations that declare support for the '
+               'requested_capability.' % (len(names), str(names),
+               requested_capability))
 
     return ready_handlers[0]
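The switch from `time.strftime(...)` to `email.utils.formatdate(usegmt=True)` in the auth handlers above matters because `%a`/`%b` in `strftime` follow the current locale, while HTTP `Date` headers must use English day and month abbreviations. A minimal check of the replacement call:

```python
from email.utils import formatdate

# formatdate(usegmt=True) emits an RFC 1123 style date in English
# regardless of locale, which is what the Date header signing needs.
# Passing an explicit timestamp (epoch 0) makes the output deterministic.
header = formatdate(0, usegmt=True)
print(header)  # Thu, 01 Jan 1970 00:00:00 GMT
```

Called with no timestamp, as in the patch, it formats the current time the same way.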
diff --git a/boto/tests/__init__.py b/boto/cacerts/__init__.py
similarity index 93%
rename from boto/tests/__init__.py
rename to boto/cacerts/__init__.py
index 449bd16..1b2dec7 100644
--- a/boto/tests/__init__.py
+++ b/boto/cacerts/__init__.py
@@ -1,4 +1,5 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright 2010 Google Inc.
+# All rights reserved.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -14,10 +15,8 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
-
-
diff --git a/boto/cacerts/cacerts.txt b/boto/cacerts/cacerts.txt
new file mode 100644
index 0000000..e65f21d
--- /dev/null
+++ b/boto/cacerts/cacerts.txt
@@ -0,0 +1,633 @@
+# Certificate Authority certificates for validating SSL connections.
+#
+# This file contains PEM format certificates generated from
+# http://mxr.mozilla.org/seamonkey/source/security/nss/lib/ckfw/builtins/certdata.txt
+#
+# ***** BEGIN LICENSE BLOCK *****
+# Version: MPL 1.1/GPL 2.0/LGPL 2.1
+#
+# The contents of this file are subject to the Mozilla Public License Version
+# 1.1 (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+# http://www.mozilla.org/MPL/
+#
+# Software distributed under the License is distributed on an "AS IS" basis,
+# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
+# for the specific language governing rights and limitations under the
+# License.
+#
+# The Original Code is the Netscape security libraries.
+#
+# The Initial Developer of the Original Code is
+# Netscape Communications Corporation.
+# Portions created by the Initial Developer are Copyright (C) 1994-2000
+# the Initial Developer. All Rights Reserved.
+#
+# Contributor(s):
+#
+# Alternatively, the contents of this file may be used under the terms of
+# either the GNU General Public License Version 2 or later (the "GPL"), or
+# the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
+# in which case the provisions of the GPL or the LGPL are applicable instead
+# of those above. If you wish to allow use of your version of this file only
+# under the terms of either the GPL or the LGPL, and not to allow others to
+# use your version of this file under the terms of the MPL, indicate your
+# decision by deleting the provisions above and replace them with the notice
+# and other provisions required by the GPL or the LGPL. If you do not delete
+# the provisions above, a recipient may use your version of this file under
+# the terms of any one of the MPL, the GPL or the LGPL.
+#
+# ***** END LICENSE BLOCK *****
+
+Verisign/RSA Secure Server CA
+=============================
+
+-----BEGIN CERTIFICATE-----
+MIICNDCCAaECEAKtZn5ORf5eV288mBle3cAwDQYJKoZIhvcNAQECBQAwXzELMAkG
+A1UEBhMCVVMxIDAeBgNVBAoTF1JTQSBEYXRhIFNlY3VyaXR5LCBJbmMuMS4wLAYD
+VQQLEyVTZWN1cmUgU2VydmVyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk0
+MTEwOTAwMDAwMFoXDTEwMDEwNzIzNTk1OVowXzELMAkGA1UEBhMCVVMxIDAeBgNV
+BAoTF1JTQSBEYXRhIFNlY3VyaXR5LCBJbmMuMS4wLAYDVQQLEyVTZWN1cmUgU2Vy
+dmVyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGbMA0GCSqGSIb3DQEBAQUAA4GJ
+ADCBhQJ+AJLOesGugz5aqomDV6wlAXYMra6OLDfO6zV4ZFQD5YRAUcm/jwjiioII
+0haGN1XpsSECrXZogZoFokvJSyVmIlZsiAeP94FZbYQHZXATcXY+m3dM41CJVphI
+uR2nKRoTLkoRWZweFdVJVCxzOmmCsZc5nG1wZ0jl3S3WyB57AgMBAAEwDQYJKoZI
+hvcNAQECBQADfgBl3X7hsuyw4jrg7HFGmhkRuNPHoLQDQCYCPgmc4RKz0Vr2N6W3
+YQO2WxZpO8ZECAyIUwxrl0nHPjXcbLm7qt9cuzovk2C2qUtN8iD3zV9/ZHuO3ABc
+1/p3yjkWWW8O6tO1g39NTUJWdrTJXwT4OPjr0l91X817/OWOgHz8UA==
+-----END CERTIFICATE-----
+
+Thawte Personal Basic CA
+========================
+
+-----BEGIN CERTIFICATE-----
+MIIDITCCAoqgAwIBAgIBADANBgkqhkiG9w0BAQQFADCByzELMAkGA1UEBhMCWkEx
+FTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3duMRowGAYD
+VQQKExFUaGF3dGUgQ29uc3VsdGluZzEoMCYGA1UECxMfQ2VydGlmaWNhdGlvbiBT
+ZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhhd3RlIFBlcnNvbmFsIEJhc2lj
+IENBMSgwJgYJKoZIhvcNAQkBFhlwZXJzb25hbC1iYXNpY0B0aGF3dGUuY29tMB4X
+DTk2MDEwMTAwMDAwMFoXDTIwMTIzMTIzNTk1OVowgcsxCzAJBgNVBAYTAlpBMRUw
+EwYDVQQIEwxXZXN0ZXJuIENhcGUxEjAQBgNVBAcTCUNhcGUgVG93bjEaMBgGA1UE
+ChMRVGhhd3RlIENvbnN1bHRpbmcxKDAmBgNVBAsTH0NlcnRpZmljYXRpb24gU2Vy
+dmljZXMgRGl2aXNpb24xITAfBgNVBAMTGFRoYXd0ZSBQZXJzb25hbCBCYXNpYyBD
+QTEoMCYGCSqGSIb3DQEJARYZcGVyc29uYWwtYmFzaWNAdGhhd3RlLmNvbTCBnzAN
+BgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAvLyTU23AUE+CFeZIlDWmWr5vQvoPR+53
+dXLdjUmbllegeNTKP1GzaQuRdhciB5dqxFGTS+CN7zeVoQxN2jSQHReJl+A1OFdK
+wPQIcOk8RHtQfmGakOMj04gRRif1CwcOu93RfyAKiLlWCy4cgNrx454p7xS9CkT7
+G1sY0b8jkyECAwEAAaMTMBEwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQQF
+AAOBgQAt4plrsD16iddZopQBHyvdEktTwq1/qqcAXJFAVyVKOKqEcLnZgA+le1z7
+c8a914phXAPjLSeoF+CEhULcXpvGt7Jtu3Sv5D/Lp7ew4F2+eIMllNLbgQ95B21P
+9DkVWlIBe94y1k049hJcBlDfBVu9FEuh3ym6O0GN92NWod8isQ==
+-----END CERTIFICATE-----
+
+Thawte Personal Premium CA
+==========================
+
+-----BEGIN CERTIFICATE-----
+MIIDKTCCApKgAwIBAgIBADANBgkqhkiG9w0BAQQFADCBzzELMAkGA1UEBhMCWkEx
+FTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3duMRowGAYD
+VQQKExFUaGF3dGUgQ29uc3VsdGluZzEoMCYGA1UECxMfQ2VydGlmaWNhdGlvbiBT
+ZXJ2aWNlcyBEaXZpc2lvbjEjMCEGA1UEAxMaVGhhd3RlIFBlcnNvbmFsIFByZW1p
+dW0gQ0ExKjAoBgkqhkiG9w0BCQEWG3BlcnNvbmFsLXByZW1pdW1AdGhhd3RlLmNv
+bTAeFw05NjAxMDEwMDAwMDBaFw0yMDEyMzEyMzU5NTlaMIHPMQswCQYDVQQGEwJa
+QTEVMBMGA1UECBMMV2VzdGVybiBDYXBlMRIwEAYDVQQHEwlDYXBlIFRvd24xGjAY
+BgNVBAoTEVRoYXd0ZSBDb25zdWx0aW5nMSgwJgYDVQQLEx9DZXJ0aWZpY2F0aW9u
+IFNlcnZpY2VzIERpdmlzaW9uMSMwIQYDVQQDExpUaGF3dGUgUGVyc29uYWwgUHJl
+bWl1bSBDQTEqMCgGCSqGSIb3DQEJARYbcGVyc29uYWwtcHJlbWl1bUB0aGF3dGUu
+Y29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDJZtn4B0TPuYwu8KHvE0Vs
+Bd/eJxZRNkERbGw77f4QfRKe5ZtCmv5gMcNmt3M6SK5O0DI3lIi1DbbZ8/JE2dWI
+Et12TfIa/G8jHnrx2JhFTgcQ7xZC0EN1bUre4qrJMf8fAHB8Zs8QJQi6+u4A6UYD
+ZicRFTuqW/KY3TZCstqIdQIDAQABoxMwETAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
+SIb3DQEBBAUAA4GBAGk2ifc0KjNyL2071CKyuG+axTZmDhs8obF1Wub9NdP4qPIH
+b4Vnjt4rueIXsDqg8A6iAJrf8xQVbrvIhVqYgPn/vnQdPfP+MCXRNzRn+qVxeTBh
+KXLA4CxM+1bkOqhv5TJZUtt1KFBZDPgLGeSs2a+WjS9Q2wfD6h+rM+D1KzGJ
+-----END CERTIFICATE-----
+
+Thawte Personal Freemail CA
+===========================
+
+-----BEGIN CERTIFICATE-----
+MIIDLTCCApagAwIBAgIBADANBgkqhkiG9w0BAQQFADCB0TELMAkGA1UEBhMCWkEx
+FTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3duMRowGAYD
+VQQKExFUaGF3dGUgQ29uc3VsdGluZzEoMCYGA1UECxMfQ2VydGlmaWNhdGlvbiBT
+ZXJ2aWNlcyBEaXZpc2lvbjEkMCIGA1UEAxMbVGhhd3RlIFBlcnNvbmFsIEZyZWVt
+YWlsIENBMSswKQYJKoZIhvcNAQkBFhxwZXJzb25hbC1mcmVlbWFpbEB0aGF3dGUu
+Y29tMB4XDTk2MDEwMTAwMDAwMFoXDTIwMTIzMTIzNTk1OVowgdExCzAJBgNVBAYT
+AlpBMRUwEwYDVQQIEwxXZXN0ZXJuIENhcGUxEjAQBgNVBAcTCUNhcGUgVG93bjEa
+MBgGA1UEChMRVGhhd3RlIENvbnN1bHRpbmcxKDAmBgNVBAsTH0NlcnRpZmljYXRp
+b24gU2VydmljZXMgRGl2aXNpb24xJDAiBgNVBAMTG1RoYXd0ZSBQZXJzb25hbCBG
+cmVlbWFpbCBDQTErMCkGCSqGSIb3DQEJARYccGVyc29uYWwtZnJlZW1haWxAdGhh
+d3RlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA1GnX1LCUZFtx6UfY
+DFG26nKRsIRefS0Nj3sS34UldSh0OkIsYyeflXtL734Zhx2G6qPduc6WZBrCFG5E
+rHzmj+hND3EfQDimAKOHePb5lIZererAXnbr2RSjXW56fAylS1V/Bhkpf56aJtVq
+uzgkCGqYx7Hao5iR/Xnb5VrEHLkCAwEAAaMTMBEwDwYDVR0TAQH/BAUwAwEB/zAN
+BgkqhkiG9w0BAQQFAAOBgQDH7JJ+Tvj1lqVnYiqk8E0RYNBvjWBYYawmu1I1XAjP
+MPuoSpaKH2JCI4wXD/S6ZJwXrEcp352YXtJsYHFcoqzceePnbgBHH7UNKOgCneSa
+/RP0ptl8sfjcXyMmCZGAc9AUG95DqYMl8uacLxXK/qarigd1iwzdUYRr5PjRznei
+gQ==
+-----END CERTIFICATE-----
+
+Thawte Server CA
+================
+
+-----BEGIN CERTIFICATE-----
+MIIDEzCCAnygAwIBAgIBATANBgkqhkiG9w0BAQQFADCBxDELMAkGA1UEBhMCWkEx
+FTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3duMR0wGwYD
+VQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UECxMfQ2VydGlmaWNhdGlv
+biBTZXJ2aWNlcyBEaXZpc2lvbjEZMBcGA1UEAxMQVGhhd3RlIFNlcnZlciBDQTEm
+MCQGCSqGSIb3DQEJARYXc2VydmVyLWNlcnRzQHRoYXd0ZS5jb20wHhcNOTYwODAx
+MDAwMDAwWhcNMjAxMjMxMjM1OTU5WjCBxDELMAkGA1UEBhMCWkExFTATBgNVBAgT
+DFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3duMR0wGwYDVQQKExRUaGF3
+dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UECxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNl
+cyBEaXZpc2lvbjEZMBcGA1UEAxMQVGhhd3RlIFNlcnZlciBDQTEmMCQGCSqGSIb3
+DQEJARYXc2VydmVyLWNlcnRzQHRoYXd0ZS5jb20wgZ8wDQYJKoZIhvcNAQEBBQAD
+gY0AMIGJAoGBANOkUG7I/1Zr5s9dtuoMaHVHoqrC2oQl/Kj0R1HahbUgdJSGHg91
+yekIYfUGbTBuFRkC6VLAYttNmZ7iagxEOM3+vuNkCXDF/rFrKbYvScg71CcEJRCX
+L+eQbcAoQpnXTEPew/UhbVSfXcNY4cDk2VuwuNy0e982OsK1ZiIS1ocNAgMBAAGj
+EzARMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEEBQADgYEAB/pMaVz7lcxG
+7oWDTSEwjsrZqG9JGubaUeNgcGyEYRGhGshIPllDfU+VPaGLtwtimHp1it2ITk6e
+QNuozDJ0uW8NxuOzRAvZim+aKZuZGCg70eNAKJpaPNW15yAbi8qkq43pUdniTCxZ
+qdq5snUb9kLy78fyGPmJvKP/iiMucEc=
+-----END CERTIFICATE-----
+
+Thawte Premium Server CA
+========================
+
+-----BEGIN CERTIFICATE-----
+MIIDJzCCApCgAwIBAgIBATANBgkqhkiG9w0BAQQFADCBzjELMAkGA1UEBhMCWkEx
+FTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3duMR0wGwYD
+VQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UECxMfQ2VydGlmaWNhdGlv
+biBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhhd3RlIFByZW1pdW0gU2Vy
+dmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNlcnZlckB0aGF3dGUuY29t
+MB4XDTk2MDgwMTAwMDAwMFoXDTIwMTIzMTIzNTk1OVowgc4xCzAJBgNVBAYTAlpB
+MRUwEwYDVQQIEwxXZXN0ZXJuIENhcGUxEjAQBgNVBAcTCUNhcGUgVG93bjEdMBsG
+A1UEChMUVGhhd3RlIENvbnN1bHRpbmcgY2MxKDAmBgNVBAsTH0NlcnRpZmljYXRp
+b24gU2VydmljZXMgRGl2aXNpb24xITAfBgNVBAMTGFRoYXd0ZSBQcmVtaXVtIFNl
+cnZlciBDQTEoMCYGCSqGSIb3DQEJARYZcHJlbWl1bS1zZXJ2ZXJAdGhhd3RlLmNv
+bTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA0jY2aovXwlue2oFBYo847kkE
+VdbQ7xwblRZH7xhINTpS9CtqBo87L+pW46+GjZ4X9560ZXUCTe/LCaIhUdib0GfQ
+ug2SBhRz1JPLlyoAnFxODLz6FVL88kRu2hFKbgifLy3j+ao6hnO2RlNYyIkFvYMR
+uHM/qgeN9EJN50CdHDcCAwEAAaMTMBEwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG
+9w0BAQQFAAOBgQAmSCwWwlj66BZ0DKqqX1Q/8tfJeGBeXm43YyJ3Nn6yF8Q0ufUI
+hfzJATj/Tb7yFkJD57taRvvBxhEf8UqwKEbJw8RCfbz6q1lu1bdRiBHjpIUZa4JM
+pAwSremkrj/xw0llmozFyD4lt5SZu5IycQfwhl7tUCemDaYj+bvLpgcUQg==
+-----END CERTIFICATE-----
+
+Equifax Secure CA
+=================
+
+-----BEGIN CERTIFICATE-----
+MIIDIDCCAomgAwIBAgIENd70zzANBgkqhkiG9w0BAQUFADBOMQswCQYDVQQGEwJV
+UzEQMA4GA1UEChMHRXF1aWZheDEtMCsGA1UECxMkRXF1aWZheCBTZWN1cmUgQ2Vy
+dGlmaWNhdGUgQXV0aG9yaXR5MB4XDTk4MDgyMjE2NDE1MVoXDTE4MDgyMjE2NDE1
+MVowTjELMAkGA1UEBhMCVVMxEDAOBgNVBAoTB0VxdWlmYXgxLTArBgNVBAsTJEVx
+dWlmYXggU2VjdXJlIENlcnRpZmljYXRlIEF1dGhvcml0eTCBnzANBgkqhkiG9w0B
+AQEFAAOBjQAwgYkCgYEAwV2xWGcIYu6gmi0fCG2RFGiYCh7+2gRvE4RiIcPRfM6f
+BeC4AfBONOziipUEZKzxa1NfBbPLZ4C/QgKO/t0BCezhABRP/PvwDN1Dulsr4R+A
+cJkVV5MW8Q+XarfCaCMczE1ZMKxRHjuvK9buY0V7xdlfUNLjUA86iOe/FP3gx7kC
+AwEAAaOCAQkwggEFMHAGA1UdHwRpMGcwZaBjoGGkXzBdMQswCQYDVQQGEwJVUzEQ
+MA4GA1UEChMHRXF1aWZheDEtMCsGA1UECxMkRXF1aWZheCBTZWN1cmUgQ2VydGlm
+aWNhdGUgQXV0aG9yaXR5MQ0wCwYDVQQDEwRDUkwxMBoGA1UdEAQTMBGBDzIwMTgw
+ODIyMTY0MTUxWjALBgNVHQ8EBAMCAQYwHwYDVR0jBBgwFoAUSOZo+SvSspXXR9gj
+IBBPM5iQn9QwHQYDVR0OBBYEFEjmaPkr0rKV10fYIyAQTzOYkJ/UMAwGA1UdEwQF
+MAMBAf8wGgYJKoZIhvZ9B0EABA0wCxsFVjMuMGMDAgbAMA0GCSqGSIb3DQEBBQUA
+A4GBAFjOKer89961zgK5F7WF0bnj4JXMJTENAKaSbn+2kmOeUJXRmm/kEd5jhW6Y
+7qj/WsjTVbJmcVfewCHrPSqnI0kBBIZCe/zuf6IWUrVnZ9NA2zsmWLIodz2uFHdh
+1voqZiegDfqnc1zqcPGUIWVEX/r87yloqaKHee9570+sB3c4
+-----END CERTIFICATE-----
+
+Verisign Class 1 Public Primary Certification Authority
+=======================================================
+
+-----BEGIN CERTIFICATE-----
+MIICPTCCAaYCEQDNun9W8N/kvFT+IqyzcqpVMA0GCSqGSIb3DQEBAgUAMF8xCzAJ
+BgNVBAYTAlVTMRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE3MDUGA1UECxMuQ2xh
+c3MgMSBQdWJsaWMgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw05
+NjAxMjkwMDAwMDBaFw0yODA4MDEyMzU5NTlaMF8xCzAJBgNVBAYTAlVTMRcwFQYD
+VQQKEw5WZXJpU2lnbiwgSW5jLjE3MDUGA1UECxMuQ2xhc3MgMSBQdWJsaWMgUHJp
+bWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCBnzANBgkqhkiG9w0BAQEFAAOB
+jQAwgYkCgYEA5Rm/baNWYS2ZSHH2Z965jeu3noaACpEO+jglr0aIguVzqKCbJF0N
+H8xlbgyw0FaEGIeaBpsQoXPftFg5a27B9hXVqKg/qhIGjTGsf7A01480Z4gJzRQR
+4k5FVmkfeAKA2txHkSm7NsljXMXg1y2He6G3MrB7MLoqLzGq7qNn2tsCAwEAATAN
+BgkqhkiG9w0BAQIFAAOBgQBMP7iLxmjf7kMzDl3ppssHhE16M/+SG/Q2rdiVIjZo
+EWx8QszznC7EBz8UsA9P/5CSdvnivErpj82ggAr3xSnxgiJduLHdgSOjeyUVRjB5
+FvjqBUuUfx3CHMjjt/QQQDwTw18fU+hI5Ia0e6E1sHslurjTjqs/OJ0ANACY89Fx
+lA==
+-----END CERTIFICATE-----
+
+Verisign Class 2 Public Primary Certification Authority
+=======================================================
+
+-----BEGIN CERTIFICATE-----
+MIICPDCCAaUCEC0b/EoXjaOR6+f/9YtFvgswDQYJKoZIhvcNAQECBQAwXzELMAkG
+A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz
+cyAyIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2
+MDEyOTAwMDAwMFoXDTI4MDgwMTIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV
+BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAyIFB1YmxpYyBQcmlt
+YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN
+ADCBiQKBgQC2WoujDWojg4BrzzmH9CETMwZMJaLtVRKXxaeAufqDwSCg+i8VDXyh
+YGt+eSz6Bg86rvYbb7HS/y8oUl+DfUvEerf4Zh+AVPy3wo5ZShRXRtGak75BkQO7
+FYCTXOvnzAhsPz6zSvz/S2wj1VCCJkQZjiPDceoZJEcEnnW/yKYAHwIDAQABMA0G
+CSqGSIb3DQEBAgUAA4GBAIobK/o5wXTXXtgZZKJYSi034DNHD6zt96rbHuSLBlxg
+J8pFUs4W7z8GZOeUaHxgMxURaa+dYo2jA1Rrpr7l7gUYYAS/QoD90KioHgE796Nc
+r6Pc5iaAIzy4RHT3Cq5Ji2F4zCS/iIqnDupzGUH9TQPwiNHleI2lKk/2lw0Xd8rY
+-----END CERTIFICATE-----
+
+Verisign Class 3 Public Primary Certification Authority
+=======================================================
+
+-----BEGIN CERTIFICATE-----
+MIICPDCCAaUCEHC65B0Q2Sk0tjjKewPMur8wDQYJKoZIhvcNAQECBQAwXzELMAkG
+A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz
+cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2
+MDEyOTAwMDAwMFoXDTI4MDgwMTIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV
+BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt
+YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN
+ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE
+BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is
+I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G
+CSqGSIb3DQEBAgUAA4GBALtMEivPLCYATxQT3ab7/AoRhIzzKBxnki98tsX63/Do
+lbwdj2wsqFHMc9ikwFPwTtYmwHYBV4GSXiHx0bH/59AhWM1pF+NEHJwZRDmJXNyc
+AA9WjQKZ7aKQRUzkuxCkPfAyAw7xzvjoyVGM5mKf5p/AfbdynMk2OmufTqj/ZA1k
+-----END CERTIFICATE-----
+
+Verisign Class 1 Public Primary Certification Authority - G2
+============================================================
+
+-----BEGIN CERTIFICATE-----
+MIIDAjCCAmsCEEzH6qqYPnHTkxD4PTqJkZIwDQYJKoZIhvcNAQEFBQAwgcExCzAJ
+BgNVBAYTAlVTMRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE8MDoGA1UECxMzQ2xh
+c3MgMSBQdWJsaWMgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEcy
+MTowOAYDVQQLEzEoYykgMTk5OCBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3Jp
+emVkIHVzZSBvbmx5MR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMB4X
+DTk4MDUxODAwMDAwMFoXDTI4MDgwMTIzNTk1OVowgcExCzAJBgNVBAYTAlVTMRcw
+FQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE8MDoGA1UECxMzQ2xhc3MgMSBQdWJsaWMg
+UHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEcyMTowOAYDVQQLEzEo
+YykgMTk5OCBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5
+MR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMIGfMA0GCSqGSIb3DQEB
+AQUAA4GNADCBiQKBgQCq0Lq+Fi24g9TK0g+8djHKlNgdk4xWArzZbxpvUjZudVYK
+VdPfQ4chEWWKfo+9Id5rMj8bhDSVBZ1BNeuS65bdqlk/AVNtmU/t5eIqWpDBucSm
+Fc/IReumXY6cPvBkJHalzasab7bYe1FhbqZ/h8jit+U03EGI6glAvnOSPWvndQID
+AQABMA0GCSqGSIb3DQEBBQUAA4GBAKlPww3HZ74sy9mozS11534Vnjty637rXC0J
+h9ZrbWB85a7FkCMMXErQr7Fd88e2CtvgFZMN3QO8x3aKtd1Pw5sTdbgBwObJW2ul
+uIncrKTdcu1OofdPvAbT6shkdHvClUGcZXNY8ZCaPGqxmMnEh7zPRW1F4m4iP/68
+DzFc6PLZ
+-----END CERTIFICATE-----
+
+Verisign Class 2 Public Primary Certification Authority - G2
+============================================================
+
+-----BEGIN CERTIFICATE-----
+MIIDAzCCAmwCEQC5L2DMiJ+hekYJuFtwbIqvMA0GCSqGSIb3DQEBBQUAMIHBMQsw
+CQYDVQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xPDA6BgNVBAsTM0Ns
+YXNzIDIgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgLSBH
+MjE6MDgGA1UECxMxKGMpIDE5OTggVmVyaVNpZ24sIEluYy4gLSBGb3IgYXV0aG9y
+aXplZCB1c2Ugb25seTEfMB0GA1UECxMWVmVyaVNpZ24gVHJ1c3QgTmV0d29yazAe
+Fw05ODA1MTgwMDAwMDBaFw0yODA4MDEyMzU5NTlaMIHBMQswCQYDVQQGEwJVUzEX
+MBUGA1UEChMOVmVyaVNpZ24sIEluYy4xPDA6BgNVBAsTM0NsYXNzIDIgUHVibGlj
+IFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgLSBHMjE6MDgGA1UECxMx
+KGMpIDE5OTggVmVyaVNpZ24sIEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25s
+eTEfMB0GA1UECxMWVmVyaVNpZ24gVHJ1c3QgTmV0d29yazCBnzANBgkqhkiG9w0B
+AQEFAAOBjQAwgYkCgYEAp4gBIXQs5xoD8JjhlzwPIQjxnNuX6Zr8wgQGE75fUsjM
+HiwSViy4AWkszJkfrbCWrnkE8hM5wXuYuggs6MKEEyyqaekJ9MepAqRCwiNPStjw
+DqL7MWzJ5m+ZJwf15vRMeJ5t60aG+rmGyVTyssSv1EYcWskVMP8NbPUtDm3Of3cC
+AwEAATANBgkqhkiG9w0BAQUFAAOBgQByLvl/0fFx+8Se9sVeUYpAmLho+Jscg9ji
+nb3/7aHmZuovCfTK1+qlK5X2JGCGTUQug6XELaDTrnhpb3LabK4I8GOSN+a7xDAX
+rXfMSTWqz9iP0b63GJZHc2pUIjRkLbYWm1lbtFFZOrMLFPQS32eg9K0yZF6xRnIn
+jBJ7xUS0rg==
+-----END CERTIFICATE-----
+
+Verisign Class 3 Public Primary Certification Authority - G2
+============================================================
+
+-----BEGIN CERTIFICATE-----
+MIIDAjCCAmsCEH3Z/gfPqB63EHln+6eJNMYwDQYJKoZIhvcNAQEFBQAwgcExCzAJ
+BgNVBAYTAlVTMRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE8MDoGA1UECxMzQ2xh
+c3MgMyBQdWJsaWMgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEcy
+MTowOAYDVQQLEzEoYykgMTk5OCBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3Jp
+emVkIHVzZSBvbmx5MR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMB4X
+DTk4MDUxODAwMDAwMFoXDTI4MDgwMTIzNTk1OVowgcExCzAJBgNVBAYTAlVTMRcw
+FQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE8MDoGA1UECxMzQ2xhc3MgMyBQdWJsaWMg
+UHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEcyMTowOAYDVQQLEzEo
+YykgMTk5OCBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5
+MR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMIGfMA0GCSqGSIb3DQEB
+AQUAA4GNADCBiQKBgQDMXtERXVxp0KvTuWpMmR9ZmDCOFoUgRm1HP9SFIIThbbP4
+pO0M8RcPO/mn+SXXwc+EY/J8Y8+iR/LGWzOOZEAEaMGAuWQcRXfH2G71lSk8UOg0
+13gfqLptQ5GVj0VXXn7F+8qkBOvqlzdUMG+7AUcyM83cV5tkaWH4mx0ciU9cZwID
+AQABMA0GCSqGSIb3DQEBBQUAA4GBAFFNzb5cy5gZnBWyATl4Lk0PZ3BwmcYQWpSk
+U01UbSuvDV1Ai2TT1+7eVmGSX6bEHRBhNtMsJzzoKQm5EWR0zLVznxxIqbxhAe7i
+F6YM40AIOw7n60RzKprxaZLvcRTDOaxxp5EJb+RxBrO6WVcmeQD2+A2iMzAo1KpY
+oJ2daZH9
+-----END CERTIFICATE-----
+
+Verisign Class 4 Public Primary Certification Authority - G2
+============================================================
+
+-----BEGIN CERTIFICATE-----
+MIIDAjCCAmsCEDKIjprS9esTR/h/xCA3JfgwDQYJKoZIhvcNAQEFBQAwgcExCzAJ
+BgNVBAYTAlVTMRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE8MDoGA1UECxMzQ2xh
+c3MgNCBQdWJsaWMgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEcy
+MTowOAYDVQQLEzEoYykgMTk5OCBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3Jp
+emVkIHVzZSBvbmx5MR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMB4X
+DTk4MDUxODAwMDAwMFoXDTI4MDgwMTIzNTk1OVowgcExCzAJBgNVBAYTAlVTMRcw
+FQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE8MDoGA1UECxMzQ2xhc3MgNCBQdWJsaWMg
+UHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEcyMTowOAYDVQQLEzEo
+YykgMTk5OCBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5
+MR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMIGfMA0GCSqGSIb3DQEB
+AQUAA4GNADCBiQKBgQC68OTP+cSuhVS5B1f5j8V/aBH4xBewRNzjMHPVKmIquNDM
+HO0oW369atyzkSTKQWI8/AIBvxwWMZQFl3Zuoq29YRdsTjCG8FE3KlDHqGKB3FtK
+qsGgtG7rL+VXxbErQHDbWk2hjh+9Ax/YA9SPTJlxvOKCzFjomDqG04Y48wApHwID
+AQABMA0GCSqGSIb3DQEBBQUAA4GBAIWMEsGnuVAVess+rLhDityq3RS6iYF+ATwj
+cSGIL4LcY/oCRaxFWdcqWERbt5+BO5JoPeI3JPV7bI92NZYJqFmduc4jq3TWg/0y
+cyfYaT5DdPauxYma51N86Xv2S/PBZYPejYqcPIiNOVn8qj8ijaHBZlCBckztImRP
+T8qAkbYp
+-----END CERTIFICATE-----
+
+Verisign Class 1 Public Primary Certification Authority - G3
+============================================================
+
+-----BEGIN CERTIFICATE-----
+MIIEGjCCAwICEQCLW3VWhFSFCwDPrzhIzrGkMA0GCSqGSIb3DQEBBQUAMIHKMQsw
+CQYDVQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZl
+cmlTaWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWdu
+LCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlT
+aWduIENsYXNzIDEgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3Jp
+dHkgLSBHMzAeFw05OTEwMDEwMDAwMDBaFw0zNjA3MTYyMzU5NTlaMIHKMQswCQYD
+VQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlT
+aWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWduLCBJ
+bmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlTaWdu
+IENsYXNzIDEgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkg
+LSBHMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN2E1Lm0+afY8wR4
+nN493GwTFtl63SRRZsDHJlkNrAYIwpTRMx/wgzUfbhvI3qpuFU5UJ+/EbRrsC+MO
+8ESlV8dAWB6jRx9x7GD2bZTIGDnt/kIYVt/kTEkQeE4BdjVjEjbdZrwBBDajVWjV
+ojYJrKshJlQGrT/KFOCsyq0GHZXi+J3x4GD/wn91K0zM2v6HmSHquv4+VNfSWXjb
+PG7PoBMAGrgnoeS+Z5bKoMWznN3JdZ7rMJpfo83ZrngZPyPpXNspva1VyBtUjGP2
+6KbqxzcSXKMpHgLZ2x87tNcPVkeBFQRKr4Mn0cVYiMHd9qqnoxjaaKptEVHhv2Vr
+n5Z20T0CAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAq2aN17O6x5q25lXQBfGfMY1a
+qtmqRiYPce2lrVNWYgFHKkTp/j90CxObufRNG7LRX7K20ohcs5/Ny9Sn2WCVhDr4
+wTcdYcrnsMXlkdpUpqwxga6X3s0IrLjAl4B/bnKk52kTlWUfxJM8/XmPBNQ+T+r3
+ns7NZ3xPZQL/kYVUc8f/NveGLezQXk//EZ9yBta4GvFMDSZl4kSAHsef493oCtrs
+pSCAaWihT37ha88HQfqDjrw43bAuEbFrskLMmrz5SCJ5ShkPshw+IHTZasO+8ih4
+E1Z5T21Q6huwtVexN2ZYI/PcD98Kh8TvhgXVOBRgmaNL3gaWcSzy27YfpO8/7g==
+-----END CERTIFICATE-----
+
+Verisign Class 2 Public Primary Certification Authority - G3
+============================================================
+
+-----BEGIN CERTIFICATE-----
+MIIEGTCCAwECEGFwy0mMX5hFKeewptlQW3owDQYJKoZIhvcNAQEFBQAwgcoxCzAJ
+BgNVBAYTAlVTMRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjEfMB0GA1UECxMWVmVy
+aVNpZ24gVHJ1c3QgTmV0d29yazE6MDgGA1UECxMxKGMpIDE5OTkgVmVyaVNpZ24s
+IEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25seTFFMEMGA1UEAxM8VmVyaVNp
+Z24gQ2xhc3MgMiBQdWJsaWMgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0
+eSAtIEczMB4XDTk5MTAwMTAwMDAwMFoXDTM2MDcxNjIzNTk1OVowgcoxCzAJBgNV
+BAYTAlVTMRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjEfMB0GA1UECxMWVmVyaVNp
+Z24gVHJ1c3QgTmV0d29yazE6MDgGA1UECxMxKGMpIDE5OTkgVmVyaVNpZ24sIElu
+Yy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25seTFFMEMGA1UEAxM8VmVyaVNpZ24g
+Q2xhc3MgMiBQdWJsaWMgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAt
+IEczMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArwoNwtUs22e5LeWU
+J92lvuCwTY+zYVY81nzD9M0+hsuiiOLh2KRpxbXiv8GmR1BeRjmL1Za6tW8UvxDO
+JxOeBUebMXoT2B/Z0wI3i60sR/COgQanDTAM6/c8DyAd3HJG7qUCyFvDyVZpTMUY
+wZF7C9UTAJu878NIPkZgIIUq1ZC2zYugzDLdt/1AVbJQHFauzI13TccgTacxdu9o
+koqQHgiBVrKtaaNS0MscxCM9H5n+TOgWY47GCI72MfbS+uV23bUckqNJzc0BzWjN
+qWm6o+sdDZykIKbBoMXRRkwXbdKsZj+WjOCE1Db/IlnF+RFgqF8EffIa9iVCYQ/E
+Srg+iQIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQA0JhU8wI1NQ0kdvekhktdmnLfe
+xbjQ5F1fdiLAJvmEOjr5jLX77GDx6M4EsMjdpwOPMPOY36TmpDHf0xwLRtxyID+u
+7gU8pDM/CzmscHhzS5kr3zDCVLCoO1Wh/hYozUK9dG6A2ydEp85EXdQbkJgNHkKU
+sQAsBNB0owIFImNjzYO1+8FtYmtpdf1dcEG59b98377BMnMiIYtYgXsVkXq642RI
+sH/7NiXaldDxJBQX3RiAa0YjOVT1jmIJBB2UkKab5iXiQkWquJCtvgiPqQtCGJTP
+cjnhsUPgKM+351psE2tJs//jGHyJizNdrDPXp/naOlXJWBD5qu9ats9LS98q
+-----END CERTIFICATE-----
+
+Verisign Class 3 Public Primary Certification Authority - G3
+============================================================
+
+-----BEGIN CERTIFICATE-----
+MIIEGjCCAwICEQCbfgZJoz5iudXukEhxKe9XMA0GCSqGSIb3DQEBBQUAMIHKMQsw
+CQYDVQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZl
+cmlTaWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWdu
+LCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlT
+aWduIENsYXNzIDMgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3Jp
+dHkgLSBHMzAeFw05OTEwMDEwMDAwMDBaFw0zNjA3MTYyMzU5NTlaMIHKMQswCQYD
+VQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlT
+aWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWduLCBJ
+bmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlTaWdu
+IENsYXNzIDMgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkg
+LSBHMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMu6nFL8eB8aHm8b
+N3O9+MlrlBIwT/A2R/XQkQr1F8ilYcEWQE37imGQ5XYgwREGfassbqb1EUGO+i2t
+KmFZpGcmTNDovFJbcCAEWNF6yaRpvIMXZK0Fi7zQWM6NjPXr8EJJC52XJ2cybuGu
+kxUccLwgTS8Y3pKI6GyFVxEa6X7jJhFUokWWVYPKMIno3Nij7SqAP395ZVc+FSBm
+CC+Vk7+qRy+oRpfwEuL+wgorUeZ25rdGt+INpsyow0xZVYnm6FNcHOqd8GIWC6fJ
+Xwzw3sJ2zq/3avL6QaaiMxTJ5Xpj055iN9WFZZ4O5lMkdBteHRJTW8cs54NJOxWu
+imi5V5cCAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAERSWwauSCPc/L8my/uRan2Te
+2yFPhpk0djZX3dAVL8WtfxUfN2JzPtTnX84XA9s1+ivbrmAJXx5fj267Cz3qWhMe
+DGBvtcC1IyIuBwvLqXTLR7sdwdela8wv0kL9Sd2nic9TutoAWii/gt/4uhMdUIaC
+/Y4wjylGsB49Ndo4YhYYSq3mtlFs3q9i6wHQHiT+eo8SGhJouPtmmRQURVyu565p
+F4ErWjfJXir0xuKhXFSbplQAz/DxwceYMBo7Nhbbo27q/a2ywtrvAkcTisDxszGt
+TxzhT5yvDwyd93gN2PQ1VoDat20Xj50egWTh/sVFuq1ruQp6Tk9LhO5L8X3dEQ==
+-----END CERTIFICATE-----
+
+Verisign Class 4 Public Primary Certification Authority - G3
+============================================================
+
+-----BEGIN CERTIFICATE-----
+MIIEGjCCAwICEQDsoKeLbnVqAc/EfMwvlF7XMA0GCSqGSIb3DQEBBQUAMIHKMQsw
+CQYDVQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZl
+cmlTaWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWdu
+LCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlT
+aWduIENsYXNzIDQgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3Jp
+dHkgLSBHMzAeFw05OTEwMDEwMDAwMDBaFw0zNjA3MTYyMzU5NTlaMIHKMQswCQYD
+VQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlT
+aWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWduLCBJ
+bmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlTaWdu
+IENsYXNzIDQgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkg
+LSBHMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK3LpRFpxlmr8Y+1
+GQ9Wzsy1HyDkniYlS+BzZYlZ3tCD5PUPtbut8XzoIfzk6AzufEUiGXaStBO3IFsJ
++mGuqPKljYXCKtbeZjbSmwL0qJJgfJxptI8kHtCGUvYynEFYHiK9zUVilQhu0Gbd
+U6LM8BDcVHOLBKFGMzNcF0C5nk3T875Vg+ixiY5afJqWIpA7iCXy0lOIAgwLePLm
+NxdLMEYH5IBtptiWLugs+BGzOA1mppvqySNb247i8xOOGlktqgLw7KSHZtzBP/XY
+ufTsgsbSPZUd5cBPhMnZo0QoBmrXRazwa2rvTl/4EYIeOGM0ZlDUPpNz+jDDZq3/
+ky2X7wMCAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAj/ola09b5KROJ1WrIhVZPMq1
+CtRK26vdoV9TxaBXOcLORyu+OshWv8LZJxA6sQU8wHcxuzrTBXttmhwwjIDLk5Mq
+g6sFUYICABFna/OIYUdfA5PVWw3g8dShMjWFsjrbsIKr0csKvE+MW8VLADsfKoKm
+fjaF3H48ZwC15DtS4KjrXRX5xm3wrR0OhbepmnMUWluPQSjA1egtTaRezarZ7c7c
+2NU8Qh0XwRJdRTjDOPP8hS6DRkiy1yBfkjaP53kPmF6Z6PDQpLv1U70qzlmwr25/
+bLvSHgCwIe34QWKCudiyxLtGUPMxxY8BqHTr9Xgn2uf3ZkPznoM+IKrDNWCRzg==
+-----END CERTIFICATE-----
+
+Equifax Secure Global eBusiness CA
+==================================
+
+-----BEGIN CERTIFICATE-----
+MIICkDCCAfmgAwIBAgIBATANBgkqhkiG9w0BAQQFADBaMQswCQYDVQQGEwJVUzEc
+MBoGA1UEChMTRXF1aWZheCBTZWN1cmUgSW5jLjEtMCsGA1UEAxMkRXF1aWZheCBT
+ZWN1cmUgR2xvYmFsIGVCdXNpbmVzcyBDQS0xMB4XDTk5MDYyMTA0MDAwMFoXDTIw
+MDYyMTA0MDAwMFowWjELMAkGA1UEBhMCVVMxHDAaBgNVBAoTE0VxdWlmYXggU2Vj
+dXJlIEluYy4xLTArBgNVBAMTJEVxdWlmYXggU2VjdXJlIEdsb2JhbCBlQnVzaW5l
+c3MgQ0EtMTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAuucXkAJlsTRVPEnC
+UdXfp9E3j9HngXNBUmCbnaEXJnitx7HoJpQytd4zjTov2/KaelpzmKNc6fuKcxtc
+58O/gGzNqfTWK8D3+ZmqY6KxRwIP1ORROhI8bIpaVIRw28HFkM9yRcuoWcDNM50/
+o5brhTMhHD4ePmBudpxnhcXIw2ECAwEAAaNmMGQwEQYJYIZIAYb4QgEBBAQDAgAH
+MA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUvqigdHJQa0S3ySPY+6j/s1dr
+aGwwHQYDVR0OBBYEFL6ooHRyUGtEt8kj2Puo/7NXa2hsMA0GCSqGSIb3DQEBBAUA
+A4GBADDiAVGqx+pf2rnQZQ8w1j7aDRRJbpGTJxQx78T3LUX47Me/okENI7SS+RkA
+Z70Br83gcfxaz2TE4JaY0KNA4gGK7ycH8WUBikQtBmV1UsCGECAhX2xrD2yuCRyv
+8qIYNMR1pHMc8Y3c7635s3a0kr/clRAevsvIO1qEYBlWlKlV
+-----END CERTIFICATE-----
+
+Equifax Secure eBusiness CA 1
+=============================
+
+-----BEGIN CERTIFICATE-----
+MIICgjCCAeugAwIBAgIBBDANBgkqhkiG9w0BAQQFADBTMQswCQYDVQQGEwJVUzEc
+MBoGA1UEChMTRXF1aWZheCBTZWN1cmUgSW5jLjEmMCQGA1UEAxMdRXF1aWZheCBT
+ZWN1cmUgZUJ1c2luZXNzIENBLTEwHhcNOTkwNjIxMDQwMDAwWhcNMjAwNjIxMDQw
+MDAwWjBTMQswCQYDVQQGEwJVUzEcMBoGA1UEChMTRXF1aWZheCBTZWN1cmUgSW5j
+LjEmMCQGA1UEAxMdRXF1aWZheCBTZWN1cmUgZUJ1c2luZXNzIENBLTEwgZ8wDQYJ
+KoZIhvcNAQEBBQADgY0AMIGJAoGBAM4vGbwXt3fek6lfWg0XTzQaDJj0ItlZ1MRo
+RvC0NcWFAyDGr0WlIVFFQesWWDYyb+JQYmT5/VGcqiTZ9J2DKocKIdMSODRsjQBu
+WqDZQu4aIZX5UkxVWsUPOE9G+m34LjXWHXzr4vCwdYDIqROsvojvOm6rXyo4YgKw
+Env+j6YDAgMBAAGjZjBkMBEGCWCGSAGG+EIBAQQEAwIABzAPBgNVHRMBAf8EBTAD
+AQH/MB8GA1UdIwQYMBaAFEp4MlIR21kWNl7fwRQ2QGpHfEyhMB0GA1UdDgQWBBRK
+eDJSEdtZFjZe38EUNkBqR3xMoTANBgkqhkiG9w0BAQQFAAOBgQB1W6ibAxHm6VZM
+zfmpTMANmvPMZWnmJXbMWbfWVMMdzZmsGd20hdXgPfxiIKeES1hl8eL5lSE/9dR+
+WB5Hh1Q+WKG1tfgq73HnvMP2sUlG4tega+VWeponmHxGYhTnyfxuAxJ5gDgdSIKN
+/Bf+KpYrtWKmpj29f5JZzVoqgrI3eQ==
+-----END CERTIFICATE-----
+
+Equifax Secure eBusiness CA 2
+=============================
+
+-----BEGIN CERTIFICATE-----
+MIIDIDCCAomgAwIBAgIEN3DPtTANBgkqhkiG9w0BAQUFADBOMQswCQYDVQQGEwJV
+UzEXMBUGA1UEChMORXF1aWZheCBTZWN1cmUxJjAkBgNVBAsTHUVxdWlmYXggU2Vj
+dXJlIGVCdXNpbmVzcyBDQS0yMB4XDTk5MDYyMzEyMTQ0NVoXDTE5MDYyMzEyMTQ0
+NVowTjELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDkVxdWlmYXggU2VjdXJlMSYwJAYD
+VQQLEx1FcXVpZmF4IFNlY3VyZSBlQnVzaW5lc3MgQ0EtMjCBnzANBgkqhkiG9w0B
+AQEFAAOBjQAwgYkCgYEA5Dk5kx5SBhsoNviyoynF7Y6yEb3+6+e0dMKP/wXn2Z0G
+vxLIPw7y1tEkshHe0XMJitSxLJgJDR5QRrKDpkWNYmi7hRsgcDKqQM2mll/EcTc/
+BPO3QSQ5BxoeLmFYoBIL5aXfxavqN3HMHMg3OrmXUqesxWoklE6ce8/AatbfIb0C
+AwEAAaOCAQkwggEFMHAGA1UdHwRpMGcwZaBjoGGkXzBdMQswCQYDVQQGEwJVUzEX
+MBUGA1UEChMORXF1aWZheCBTZWN1cmUxJjAkBgNVBAsTHUVxdWlmYXggU2VjdXJl
+IGVCdXNpbmVzcyBDQS0yMQ0wCwYDVQQDEwRDUkwxMBoGA1UdEAQTMBGBDzIwMTkw
+NjIzMTIxNDQ1WjALBgNVHQ8EBAMCAQYwHwYDVR0jBBgwFoAUUJ4L6q9euSBIplBq
+y/3YIHqngnYwHQYDVR0OBBYEFFCeC+qvXrkgSKZQasv92CB6p4J2MAwGA1UdEwQF
+MAMBAf8wGgYJKoZIhvZ9B0EABA0wCxsFVjMuMGMDAgbAMA0GCSqGSIb3DQEBBQUA
+A4GBAAyGgq3oThr1jokn4jVYPSm0B482UJW/bsGe68SQsoWou7dC4A8HOd/7npCy
+0cE+U58DRLB+S/Rv5Hwf5+Kx5Lia78O9zt4LMjTZ3ijtM2vE1Nc9ElirfQkty3D1
+E4qUoSek1nDFbZS1yX2doNLGCEnZZpum0/QL3MUmV+GRMOrN
+-----END CERTIFICATE-----
+
+Thawte Time Stamping CA
+=======================
+
+-----BEGIN CERTIFICATE-----
+MIICoTCCAgqgAwIBAgIBADANBgkqhkiG9w0BAQQFADCBizELMAkGA1UEBhMCWkEx
+FTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTEUMBIGA1UEBxMLRHVyYmFudmlsbGUxDzAN
+BgNVBAoTBlRoYXd0ZTEdMBsGA1UECxMUVGhhd3RlIENlcnRpZmljYXRpb24xHzAd
+BgNVBAMTFlRoYXd0ZSBUaW1lc3RhbXBpbmcgQ0EwHhcNOTcwMTAxMDAwMDAwWhcN
+MjAxMjMxMjM1OTU5WjCBizELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4g
+Q2FwZTEUMBIGA1UEBxMLRHVyYmFudmlsbGUxDzANBgNVBAoTBlRoYXd0ZTEdMBsG
+A1UECxMUVGhhd3RlIENlcnRpZmljYXRpb24xHzAdBgNVBAMTFlRoYXd0ZSBUaW1l
+c3RhbXBpbmcgQ0EwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANYrWHhhRYZT
+6jR7UZztsOYuGA7+4F+oJ9O0yeB8WU4WDnNUYMF/9p8u6TqFJBU820cEY8OexJQa
+Wt9MevPZQx08EHp5JduQ/vBR5zDWQQD9nyjfeb6Uu522FOMjhdepQeBMpHmwKxqL
+8vg7ij5FrHGSALSQQZj7X+36ty6K+Ig3AgMBAAGjEzARMA8GA1UdEwEB/wQFMAMB
+Af8wDQYJKoZIhvcNAQEEBQADgYEAZ9viwuaHPUCDhjc1fR/OmsMMZiCouqoEiYbC
+9RAIDb/LogWK0E02PvTX72nGXuSwlG9KuefeW4i2e9vjJ+V2w/A1wcu1J5szedyQ
+pgCed/r8zSeUQhac0xxo7L9c3eWpexAKMnRUEzGLhQOEkbdYATAUOK8oyvyxUBkZ
+CayJSdM=
+-----END CERTIFICATE-----
+
+thawte Primary Root CA
+======================
+
+-----BEGIN CERTIFICATE-----
+MIIEIDCCAwigAwIBAgIQNE7VVyDV7exJ9C/ON9srbTANBgkqhkiG9w0BAQUFADCB
+qTELMAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjEoMCYGA1UECxMf
+Q2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjE4MDYGA1UECxMvKGMpIDIw
+MDYgdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxHzAdBgNV
+BAMTFnRoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EwHhcNMDYxMTE3MDAwMDAwWhcNMzYw
+NzE2MjM1OTU5WjCBqTELMAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5j
+LjEoMCYGA1UECxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjE4MDYG
+A1UECxMvKGMpIDIwMDYgdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNl
+IG9ubHkxHzAdBgNVBAMTFnRoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EwggEiMA0GCSqG
+SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCsoPD7gFnUnMekz52hWXMJEEUMDSxuaPFs
+W0hoSVk3/AszGcJ3f8wQLZU0HObrTQmnHNK4yZc2AreJ1CRfBsDMRJSUjQJib+ta
+3RGNKJpchJAQeg29dGYvajig4tVUROsdB58Hum/u6f1OCyn1PoSgAfGcq/gcfomk
+6KHYcWUNo1F77rzSImANuVud37r8UVsLr5iy6S7pBOhih94ryNdOwUxkHt3Ph1i6
+Sk/KaAcdHJ1KxtUvkcx8cXIcxcBn6zL9yZJclNqFwJu/U30rCfSMnZEfl2pSy94J
+NqR32HuHUETVPm4pafs5SSYeCaWAe0At6+gnhcn+Yf1+5nyXHdWdAgMBAAGjQjBA
+MA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBR7W0XP
+r87Lev0xkhpqtvNG61dIUDANBgkqhkiG9w0BAQUFAAOCAQEAeRHAS7ORtvzw6WfU
+DW5FvlXok9LOAz/t2iWwHVfLHjp2oEzsUHboZHIMpKnxuIvW1oeEuzLlQRHAd9mz
+YJ3rG9XRbkREqaYB7FViHXe4XI5ISXycO1cRrK1zN44veFyQaEfZYGDm/Ac9IiAX
+xPcW6cTYcvnIc3zfFi8VqT79aie2oetaupgf1eNNZAqdE8hhuvU5HIe6uL17In/2
+/qxAeeWsEG89jxt5dovEN7MhGITlNgDrYyCZuen+MwS7QcjBAvlEYyCegc5C09Y/
+LHbTY5xZ3Y+m4Q6gLkH3LpVHz7z9M/P2C2F+fpErgUfCJzDupxBdN49cOSvkBPB7
+jVaMaA==
+-----END CERTIFICATE-----
+
+VeriSign Class 3 Public Primary Certification Authority - G5
+============================================================
+
+-----BEGIN CERTIFICATE-----
+MIIE0zCCA7ugAwIBAgIQGNrRniZ96LtKIVjNzGs7SjANBgkqhkiG9w0BAQUFADCB
+yjELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL
+ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNiBWZXJp
+U2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxW
+ZXJpU2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0
+aG9yaXR5IC0gRzUwHhcNMDYxMTA4MDAwMDAwWhcNMzYwNzE2MjM1OTU5WjCByjEL
+MAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZW
+ZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNiBWZXJpU2ln
+biwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJp
+U2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9y
+aXR5IC0gRzUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvJAgIKXo1
+nmAMqudLO07cfLw8RRy7K+D+KQL5VwijZIUVJ/XxrcgxiV0i6CqqpkKzj/i5Vbex
+t0uz/o9+B1fs70PbZmIVYc9gDaTY3vjgw2IIPVQT60nKWVSFJuUrjxuf6/WhkcIz
+SdhDY2pSS9KP6HBRTdGJaXvHcPaz3BJ023tdS1bTlr8Vd6Gw9KIl8q8ckmcY5fQG
+BO+QueQA5N06tRn/Arr0PO7gi+s3i+z016zy9vA9r911kTMZHRxAy3QkGSGT2RT+
+rCpSx4/VBEnkjWNHiDxpg8v+R70rfk/Fla4OndTRQ8Bnc+MUCH7lP59zuDMKz10/
+NIeWiu5T6CUVAgMBAAGjgbIwga8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8E
+BAMCAQYwbQYIKwYBBQUHAQwEYTBfoV2gWzBZMFcwVRYJaW1hZ2UvZ2lmMCEwHzAH
+BgUrDgMCGgQUj+XTGoasjY5rw8+AatRIGCx7GS4wJRYjaHR0cDovL2xvZ28udmVy
+aXNpZ24uY29tL3ZzbG9nby5naWYwHQYDVR0OBBYEFH/TZafC3ey78DAJ80M5+gKv
+MzEzMA0GCSqGSIb3DQEBBQUAA4IBAQCTJEowX2LP2BqYLz3q3JktvXf2pXkiOOzE
+p6B4Eq1iDkVwZMXnl2YtmAl+X6/WzChl8gGqCBpH3vn5fJJaCGkgDdk+bW48DW7Y
+5gaRQBi5+MHt39tBquCWIMnNZBU4gcmU7qKEKQsTb47bDN0lAtukixlE0kF6BWlK
+WE9gyn6CagsCqiUXObXbf+eEZSqVir2G3l6BFoMtEMze/aiCKm0oHw0LxOXnGiYZ
+4fQRbxC1lfznQgUy286dUV4otp6F01vvpX1FQHKOtw5rDgb7MzVIcbidJ4vEZV8N
+hnacRHr2lVz2XTIIM6RUthg/aFzyQkqFOFSDX9HoLPKsEdao7WNq
+-----END CERTIFICATE-----
+
+Entrust.net Secure Server Certification Authority
+=================================================
+
+-----BEGIN CERTIFICATE-----
+MIIE2DCCBEGgAwIBAgIEN0rSQzANBgkqhkiG9w0BAQUFADCBwzELMAkGA1UEBhMC
+VVMxFDASBgNVBAoTC0VudHJ1c3QubmV0MTswOQYDVQQLEzJ3d3cuZW50cnVzdC5u
+ZXQvQ1BTIGluY29ycC4gYnkgcmVmLiAobGltaXRzIGxpYWIuKTElMCMGA1UECxMc
+KGMpIDE5OTkgRW50cnVzdC5uZXQgTGltaXRlZDE6MDgGA1UEAxMxRW50cnVzdC5u
+ZXQgU2VjdXJlIFNlcnZlciBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw05OTA1
+MjUxNjA5NDBaFw0xOTA1MjUxNjM5NDBaMIHDMQswCQYDVQQGEwJVUzEUMBIGA1UE
+ChMLRW50cnVzdC5uZXQxOzA5BgNVBAsTMnd3dy5lbnRydXN0Lm5ldC9DUFMgaW5j
+b3JwLiBieSByZWYuIChsaW1pdHMgbGlhYi4pMSUwIwYDVQQLExwoYykgMTk5OSBF
+bnRydXN0Lm5ldCBMaW1pdGVkMTowOAYDVQQDEzFFbnRydXN0Lm5ldCBTZWN1cmUg
+U2VydmVyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGdMA0GCSqGSIb3DQEBAQUA
+A4GLADCBhwKBgQDNKIM0VBuJ8w+vN5Ex/68xYMmo6LIQaO2f55M28Qpku0f1BBc/
+I0dNxScZgSYMVHINiC3ZH5oSn7yzcdOAGT9HZnuMNSjSuQrfJNqc1lB5gXpa0zf3
+wkrYKZImZNHkmGw6AIr1NJtl+O3jEP/9uElY3KDegjlrgbEWGWG5VLbmQwIBA6OC
+AdcwggHTMBEGCWCGSAGG+EIBAQQEAwIABzCCARkGA1UdHwSCARAwggEMMIHeoIHb
+oIHYpIHVMIHSMQswCQYDVQQGEwJVUzEUMBIGA1UEChMLRW50cnVzdC5uZXQxOzA5
+BgNVBAsTMnd3dy5lbnRydXN0Lm5ldC9DUFMgaW5jb3JwLiBieSByZWYuIChsaW1p
+dHMgbGlhYi4pMSUwIwYDVQQLExwoYykgMTk5OSBFbnRydXN0Lm5ldCBMaW1pdGVk
+MTowOAYDVQQDEzFFbnRydXN0Lm5ldCBTZWN1cmUgU2VydmVyIENlcnRpZmljYXRp
+b24gQXV0aG9yaXR5MQ0wCwYDVQQDEwRDUkwxMCmgJ6AlhiNodHRwOi8vd3d3LmVu
+dHJ1c3QubmV0L0NSTC9uZXQxLmNybDArBgNVHRAEJDAigA8xOTk5MDUyNTE2MDk0
+MFqBDzIwMTkwNTI1MTYwOTQwWjALBgNVHQ8EBAMCAQYwHwYDVR0jBBgwFoAU8Bdi
+E1U9s/8KAGv7UISX8+1i0BowHQYDVR0OBBYEFPAXYhNVPbP/CgBr+1CEl/PtYtAa
+MAwGA1UdEwQFMAMBAf8wGQYJKoZIhvZ9B0EABAwwChsEVjQuMAMCBJAwDQYJKoZI
+hvcNAQEFBQADgYEAkNwwAvpkdMKnCqV8IY00F6j7Rw7/JXyNEwr75Ji174z4xRAN
+95K+8cPV1ZVqBLssziY2ZcgxxufuP+NXdYR6Ee9GTxj005i7qIcyunL2POI9n9cd
+2cNgQ4xYDiKWL2KjLB+6rQXvqzJ4h6BUcxm1XAX5Uj5tLUUL9wqT6u0G+bI=
+-----END CERTIFICATE-----
diff --git a/boto/tests/__init__.py b/boto/cloudformation/__init__.py
similarity index 80%
copy from boto/tests/__init__.py
copy to boto/cloudformation/__init__.py
index 449bd16..4f8e090 100644
--- a/boto/tests/__init__.py
+++ b/boto/cloudformation/__init__.py
@@ -1,4 +1,5 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2010-2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -18,6 +19,7 @@
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-#
 
-
+# This is here for backward compatibility;
+# originally, the CloudFormationConnection class was defined here.
+from connection import CloudFormationConnection
diff --git a/boto/cloudformation/connection.py b/boto/cloudformation/connection.py
new file mode 100644
index 0000000..59640bd
--- /dev/null
+++ b/boto/cloudformation/connection.py
@@ -0,0 +1,223 @@
+# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+try:
+    import simplejson as json
+except ImportError:
+    import json
+
+import boto
+from boto.cloudformation.stack import Stack, StackSummary, StackEvent
+from boto.cloudformation.stack import StackResource, StackResourceSummary
+from boto.cloudformation.template import Template
+from boto.connection import AWSQueryConnection
+from boto.regioninfo import RegionInfo
+
+class CloudFormationConnection(AWSQueryConnection):
+
+    """
+    A Connection to the CloudFormation Service.
+    """
+    DefaultRegionName = 'us-east-1'
+    DefaultRegionEndpoint = 'cloudformation.us-east-1.amazonaws.com'
+    APIVersion = '2010-05-15'
+
+    valid_states = ("CREATE_IN_PROGRESS", "CREATE_FAILED", "CREATE_COMPLETE",
+            "ROLLBACK_IN_PROGRESS", "ROLLBACK_FAILED", "ROLLBACK_COMPLETE",
+            "DELETE_IN_PROGRESS", "DELETE_FAILED", "DELETE_COMPLETE")
+
+    def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
+                 is_secure=True, port=None, proxy=None, proxy_port=None,
+                 proxy_user=None, proxy_pass=None, debug=0,
+                 https_connection_factory=None, region=None, path='/', converter=None):
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                self.DefaultRegionEndpoint, CloudFormationConnection)
+        self.region = region
+        AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key,
+                                    is_secure, port, proxy, proxy_port, proxy_user, proxy_pass,
+                                    self.region.endpoint, debug, https_connection_factory, path)
+
+    def _required_auth_capability(self):
+        return ['cloudformation']
+
+    def encode_bool(self, v):
+        v = bool(v)
+        return {True: "true", False: "false"}[v]
+
+    def create_stack(self, stack_name, template_body=None, template_url=None,
+            parameters=[], notification_arns=[], disable_rollback=False,
+            timeout_in_minutes=None):
+        """
+        Creates a CloudFormation Stack as specified by the template.
+
+        :type stack_name: string
+        :param stack_name: The name of the Stack; must be unique among running
+                            Stacks
+
+        :type template_body: string
+        :param template_body: The template body (JSON string)
+
+        :type template_url: string
+        :param template_url: An S3 URL of a stored template JSON document. If
+                            both the template_body and template_url are
+                            specified, the template_body takes precedence
+
+        :type parameters: list of tuples
+        :param parameters: A list of (key, value) pairs for template input
+                            parameters.
+
+        :type notification_arns: list of strings
+        :param notification_arns: A list of SNS topics to send Stack event
+                            notifications to
+
+        :type disable_rollback: bool
+        :param disable_rollback: Indicates whether to roll back on
+                            failure
+
+        :type timeout_in_minutes: int
+        :param timeout_in_minutes: Maximum amount of time to let the Stack
+                            spend creating itself. If this timeout is exceeded,
+                            the Stack will enter the CREATE_FAILED state
+
+        :rtype: string
+        :return: The unique Stack ID
+        """
+        params = {'ContentType': "JSON", 'StackName': stack_name,
+                'DisableRollback': self.encode_bool(disable_rollback)}
+        if template_body:
+            params['TemplateBody'] = template_body
+        if template_url:
+            params['TemplateURL'] = template_url
+        if template_body and template_url:
+            boto.log.warning("If both TemplateBody and TemplateURL are"
+                " specified, only TemplateBody will be honored by the API")
+        if len(parameters) > 0:
+            for i, (key, value) in enumerate(parameters):
+                params['Parameters.member.%d.ParameterKey' % (i+1)] = key
+                params['Parameters.member.%d.ParameterValue' % (i+1)] = value
+        if len(notification_arns) > 0:
+            self.build_list_params(params, notification_arns, "NotificationARNs.member")
+        if timeout_in_minutes:
+            params['TimeoutInMinutes'] = int(timeout_in_minutes)
+
+        response = self.make_request('CreateStack', params, '/', 'POST')
+        body = response.read()
+        if response.status == 200:
+            body = json.loads(body)
+            return body['CreateStackResponse']['CreateStackResult']['StackId']
+        else:
+            boto.log.error('%s %s' % (response.status, response.reason))
+            boto.log.error('%s' % body)
+            raise self.ResponseError(response.status, response.reason, body)
+
+    def delete_stack(self, stack_name_or_id):
+        params = {'ContentType': "JSON", 'StackName': stack_name_or_id}
+        # TODO: change this to get_status ?
+        response = self.make_request('DeleteStack', params, '/', 'GET')
+        body = response.read()
+        if response.status == 200:
+            return json.loads(body)
+        else:
+            boto.log.error('%s %s' % (response.status, response.reason))
+            boto.log.error('%s' % body)
+            raise self.ResponseError(response.status, response.reason, body)
+
+    def describe_stack_events(self, stack_name_or_id=None, next_token=None):
+        params = {}
+        if stack_name_or_id:
+            params['StackName'] = stack_name_or_id
+        if next_token:
+            params['NextToken'] = next_token
+        return self.get_list('DescribeStackEvents', params, [('member',
+            StackEvent)])
+
+    def describe_stack_resource(self, stack_name_or_id, logical_resource_id):
+        params = {'ContentType': "JSON", 'StackName': stack_name_or_id,
+                'LogicalResourceId': logical_resource_id}
+        response = self.make_request('DescribeStackResource', params, '/', 'GET')
+        body = response.read()
+        if response.status == 200:
+            return json.loads(body)
+        else:
+            boto.log.error('%s %s' % (response.status, response.reason))
+            boto.log.error('%s' % body)
+            raise self.ResponseError(response.status, response.reason, body)
+
+    def describe_stack_resources(self, stack_name_or_id=None,
+            logical_resource_id=None,
+            physical_resource_id=None):
+        params = {}
+        if stack_name_or_id:
+            params['StackName'] = stack_name_or_id
+        if logical_resource_id:
+            params['LogicalResourceId'] = logical_resource_id
+        if physical_resource_id:
+            params['PhysicalResourceId'] = physical_resource_id
+        return self.get_list('DescribeStackResources', params, [('member',
+            StackResource)])
+
+    def describe_stacks(self, stack_name_or_id=None):
+        params = {}
+        if stack_name_or_id:
+            params['StackName'] = stack_name_or_id
+        return self.get_list('DescribeStacks', params, [('member', Stack)])
+
+    def get_template(self, stack_name_or_id):
+        params = {'ContentType': "JSON", 'StackName': stack_name_or_id}
+        response = self.make_request('GetTemplate', params, '/', 'GET')
+        body = response.read()
+        if response.status == 200:
+            return json.loads(body)
+        else:
+            boto.log.error('%s %s' % (response.status, response.reason))
+            boto.log.error('%s' % body)
+            raise self.ResponseError(response.status, response.reason, body)
+
+    def list_stack_resources(self, stack_name_or_id, next_token=None):
+        params = {'StackName': stack_name_or_id}
+        if next_token:
+            params['NextToken'] = next_token
+        return self.get_list('ListStackResources', params, [('member',
+            StackResourceSummary)])
+
+    def list_stacks(self, stack_status_filters=[], next_token=None):
+        params = {}
+        if next_token:
+            params['NextToken'] = next_token
+        if len(stack_status_filters) > 0:
+            self.build_list_params(params, stack_status_filters,
+                "StackStatusFilter.member")
+
+        return self.get_list('ListStacks', params, [('member',
+            StackSummary)])
+
+    def validate_template(self, template_body=None, template_url=None):
+        params = {}
+        if template_body:
+            params['TemplateBody'] = template_body
+        if template_url:
+            params['TemplateUrl'] = template_url
+        if template_body and template_url:
+            boto.log.warning("If both TemplateBody and TemplateURL are"
+                " specified, only TemplateBody will be honored by the API")
+        return self.get_object('ValidateTemplate', params, Template,
+                verb="POST")
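Reviewer note: the parameter encoding in `create_stack` above flattens each (key, value) tuple into a pair of 1-indexed `Parameters.member.N.*` query parameters. A standalone sketch of that loop (`flatten_parameters` is a hypothetical helper name, not part of the patch):

```python
def flatten_parameters(parameters):
    # Mirrors the loop in CloudFormationConnection.create_stack:
    # each (key, value) tuple becomes a 1-indexed pair of
    # Parameters.member.N.ParameterKey / ParameterValue entries.
    params = {}
    for i, (key, value) in enumerate(parameters):
        params['Parameters.member.%d.ParameterKey' % (i + 1)] = key
        params['Parameters.member.%d.ParameterValue' % (i + 1)] = value
    return params
```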
diff --git a/boto/cloudformation/stack.py b/boto/cloudformation/stack.py
new file mode 100644
index 0000000..8b9e115
--- /dev/null
+++ b/boto/cloudformation/stack.py
@@ -0,0 +1,289 @@
+from datetime import datetime
+
+from boto.resultset import ResultSet
+
+class Stack:
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.creation_time = None
+        self.description = None
+        self.disable_rollback = None
+        self.notification_arns = []
+        self.outputs = []
+        self.parameters = []
+        self.stack_id = None
+        self.stack_status = None
+        self.stack_name = None
+        self.stack_status_reason = None
+        self.timeout_in_minutes = None
+
+    def startElement(self, name, attrs, connection):
+        if name == "Parameters":
+            self.parameters = ResultSet([('member', Parameter)])
+            return self.parameters
+        elif name == "Outputs":
+            self.outputs = ResultSet([('member', Output)])
+            return self.outputs
+        else:
+            return None
+
+    def endElement(self, name, value, connection):
+        if name == 'CreationTime':
+            self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+        elif name == "Description":
+            self.description = value
+        elif name == "DisableRollback":
+            self.disable_rollback = (value.lower() == 'true')
+        elif name == "NotificationARNs":
+            self.notification_arns = value
+        elif name == 'StackId':
+            self.stack_id = value
+        elif name == 'StackName':
+            self.stack_name = value
+        elif name == 'StackStatus':
+            self.stack_status = value
+        elif name == "StackStatusReason":
+            self.stack_status_reason = value
+        elif name == "TimeoutInMinutes":
+            self.timeout_in_minutes = int(value)
+        elif name == "member":
+            pass
+        else:
+            setattr(self, name, value)
+
+    def delete(self):
+        return self.connection.delete_stack(stack_name_or_id=self.stack_id)
+
+    def describe_events(self, next_token=None):
+        return self.connection.describe_stack_events(
+            stack_name_or_id=self.stack_id,
+            next_token=next_token
+        )
+
+    def describe_resource(self, logical_resource_id):
+        return self.connection.describe_stack_resource(
+            stack_name_or_id=self.stack_id,
+            logical_resource_id=logical_resource_id
+        )
+
+    def describe_resources(self, logical_resource_id=None,
+            physical_resource_id=None):
+        return self.connection.describe_stack_resources(
+            stack_name_or_id=self.stack_id,
+            logical_resource_id=logical_resource_id,
+            physical_resource_id=physical_resource_id
+        )
+
+    def list_resources(self, next_token=None):
+        return self.connection.list_stack_resources(
+            stack_name_or_id=self.stack_id,
+            next_token=next_token
+        )
+
+    def update(self):
+        rs = self.connection.describe_stacks(self.stack_id)
+        if len(rs) == 1 and rs[0].stack_id == self.stack_id:
+            self.__dict__.update(rs[0].__dict__)
+        else:
+            raise ValueError("%s is not a valid Stack ID or Name" %
+                self.stack_id)
+
+    def get_template(self):
+        return self.connection.get_template(stack_name_or_id=self.stack_id)
+
+class StackSummary:
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.stack_id = None
+        self.stack_status = None
+        self.stack_name = None
+        self.creation_time = None
+        self.deletion_time = None
+        self.template_description = None
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'StackId':
+            self.stack_id = value
+        elif name == 'StackStatus':
+            self.stack_status = value
+        elif name == 'StackName':
+            self.stack_name = value
+        elif name == 'CreationTime':
+            self.creation_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+        elif name == "DeletionTime":
+            self.deletion_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+        elif name == 'TemplateDescription':
+            self.template_description = value
+        elif name == "member":
+            pass
+        else:
+            setattr(self, name, value)
+
+class Parameter:
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.key = None
+        self.value = None
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == "ParameterKey":
+            self.key = value
+        elif name == "ParameterValue":
+            self.value = value
+        else:
+            setattr(self, name, value)
+
+    def __repr__(self):
+        return "Parameter:\"%s\"=\"%s\"" % (self.key, self.value)
+
+class Output:
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.description = None
+        self.key = None
+        self.value = None
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == "Description":
+            self.description = value
+        elif name == "OutputKey":
+            self.key = value
+        elif name == "OutputValue":
+            self.value = value
+        else:
+            setattr(self, name, value)
+
+    def __repr__(self):
+        return "Output:\"%s\"=\"%s\"" % (self.key, self.value)
+
+class StackResource:
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.description = None
+        self.logical_resource_id = None
+        self.physical_resource_id = None
+        self.resource_status = None
+        self.resource_status_reason = None
+        self.resource_type = None
+        self.stack_id = None
+        self.stack_name = None
+        self.timestamp = None
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == "Description":
+            self.description = value
+        elif name == "LogicalResourceId":
+            self.logical_resource_id = value
+        elif name == "PhysicalResourceId":
+            self.physical_resource_id = value
+        elif name == "ResourceStatus":
+            self.resource_status = value
+        elif name == "ResourceStatusReason":
+            self.resource_status_reason = value
+        elif name == "ResourceType":
+            self.resource_type = value
+        elif name == "StackId":
+            self.stack_id = value
+        elif name == "StackName":
+            self.stack_name = value
+        elif name == "Timestamp":
+            self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+        else:
+            setattr(self, name, value)
+
+    def __repr__(self):
+        return "StackResource:%s (%s)" % (self.logical_resource_id,
+                self.resource_type)
+
+class StackResourceSummary:
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.last_updated_timestamp = None
+        self.logical_resource_id = None
+        self.physical_resource_id = None
+        self.resource_status = None
+        self.resource_status_reason = None
+        self.resource_type = None
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == "LastUpdatedTimestamp":
+            self.last_updated_timestamp = datetime.strptime(value,
+                '%Y-%m-%dT%H:%M:%SZ')
+        elif name == "LogicalResourceId":
+            self.logical_resource_id = value
+        elif name == "PhysicalResourceId":
+            self.physical_resource_id = value
+        elif name == "ResourceStatus":
+            self.resource_status = value
+        elif name == "ResourceStatusReason":
+            self.resource_status_reason = value
+        elif name == "ResourceType":
+            self.resource_type = value
+        else:
+            setattr(self, name, value)
+
+    def __repr__(self):
+        return "StackResourceSummary:%s (%s)" % (self.logical_resource_id,
+                self.resource_type)
+
+class StackEvent:
+    valid_states = ("CREATE_IN_PROGRESS", "CREATE_FAILED", "CREATE_COMPLETE",
+            "DELETE_IN_PROGRESS", "DELETE_FAILED", "DELETE_COMPLETE")
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.event_id = None
+        self.logical_resource_id = None
+        self.physical_resource_id = None
+        self.resource_properties = None
+        self.resource_status = None
+        self.resource_status_reason = None
+        self.resource_type = None
+        self.stack_id = None
+        self.stack_name = None
+        self.timestamp = None
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == "EventId":
+            self.event_id = value
+        elif name == "LogicalResourceId":
+            self.logical_resource_id = value
+        elif name == "PhysicalResourceId":
+            self.physical_resource_id = value
+        elif name == "ResourceProperties":
+            self.resource_properties = value
+        elif name == "ResourceStatus":
+            self.resource_status = value
+        elif name == "ResourceStatusReason":
+            self.resource_status_reason = value
+        elif name == "ResourceType":
+            self.resource_type = value
+        elif name == "StackId":
+            self.stack_id = value
+        elif name == "StackName":
+            self.stack_name = value
+        elif name == "Timestamp":
+            self.timestamp = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+        else:
+            setattr(self, name, value)
+
+    def __repr__(self):
+        return "StackEvent %s %s %s" % (self.resource_type,
+                self.logical_resource_id, self.resource_status)
diff --git a/boto/cloudformation/template.py b/boto/cloudformation/template.py
new file mode 100644
index 0000000..f1f8501
--- /dev/null
+++ b/boto/cloudformation/template.py
@@ -0,0 +1,43 @@
+from boto.resultset import ResultSet
+
+class Template:
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.description = None
+        self.template_parameters = None
+
+    def startElement(self, name, attrs, connection):
+        if name == "Parameters":
+            self.template_parameters = ResultSet([('member', TemplateParameter)])
+            return self.template_parameters
+        else:
+            return None
+
+    def endElement(self, name, value, connection):
+        if name == "Description":
+            self.description = value
+        else:
+            setattr(self, name, value)
+
+class TemplateParameter:
+    def __init__(self, parent):
+        self.parent = parent
+        self.default_value = None
+        self.description = None
+        self.no_echo = None
+        self.parameter_key = None
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == "DefaultValue":
+            self.default_value = value
+        elif name == "Description":
+            self.description = value
+        elif name == "NoEcho":
+            self.no_echo = (value.lower() == 'true')
+        elif name == "ParameterKey":
+            self.parameter_key = value
+        else:
+            setattr(self, name, value)
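One subtlety in `TemplateParameter.endElement` above: the `NoEcho` element arrives as the literal text `'true'` or `'false'`, and `bool()` on any non-empty string returns `True`, so `bool('false')` is `True`. A minimal sketch of a string-to-bool parse that avoids the pitfall (the helper name is hypothetical, not part of boto):

```python
def parse_xml_bool(value):
    # XML boolean text must be compared explicitly: bool() on any
    # non-empty string -- including "false" -- always returns True.
    return value.strip().lower() == 'true'

pitfall = bool('false')            # True, not what the XML means
correct = parse_xml_bool('false')  # False
```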
diff --git a/boto/cloudfront/__init__.py b/boto/cloudfront/__init__.py
index bd02b00..7f98b70 100644
--- a/boto/cloudfront/__init__.py
+++ b/boto/cloudfront/__init__.py
@@ -153,10 +153,11 @@
         return self._set_config(distribution_id, etag, config)
     
     def create_distribution(self, origin, enabled, caller_reference='',
-                            cnames=None, comment=''):
+                            cnames=None, comment='', trusted_signers=None):
         config = DistributionConfig(origin=origin, enabled=enabled,
                                     caller_reference=caller_reference,
-                                    cnames=cnames, comment=comment)
+                                    cnames=cnames, comment=comment,
+                                    trusted_signers=trusted_signers)
         return self._create_object(config, 'distribution', Distribution)
         
     def delete_distribution(self, distribution_id, etag):
@@ -181,10 +182,12 @@
     
     def create_streaming_distribution(self, origin, enabled,
                                       caller_reference='',
-                                      cnames=None, comment=''):
+                                      cnames=None, comment='',
+                                      trusted_signers=None):
         config = StreamingDistributionConfig(origin=origin, enabled=enabled,
                                              caller_reference=caller_reference,
-                                             cnames=cnames, comment=comment)
+                                             cnames=cnames, comment=comment,
+                                             trusted_signers=trusted_signers)
         return self._create_object(config, 'streaming-distribution',
                                    StreamingDistribution)
         
@@ -246,3 +249,16 @@
         else:
             raise CloudFrontServerError(response.status, response.reason, body)
 
+    def invalidation_request_status(self, distribution_id, request_id, caller_reference=None):
+        uri = '/%s/distribution/%s/invalidation/%s' % (self.Version, distribution_id, request_id)
+        response = self.make_request('GET', uri, {'Content-Type' : 'text/xml'})
+        body = response.read()
+        if response.status == 200:
+            paths = InvalidationBatch([])
+            h = handler.XmlHandler(paths, self)
+            xml.sax.parseString(body, h)
+            return paths
+        else:
+            raise CloudFrontServerError(response.status, response.reason, body)
+
+
diff --git a/boto/cloudfront/distribution.py b/boto/cloudfront/distribution.py
index ed245cb..01ceed4 100644
--- a/boto/cloudfront/distribution.py
+++ b/boto/cloudfront/distribution.py
@@ -20,6 +20,8 @@
 # IN THE SOFTWARE.
 
 import uuid
+import base64
+import json
 from boto.cloudfront.identity import OriginAccessIdentity
 from boto.cloudfront.object import Object, StreamingObject
 from boto.cloudfront.signers import ActiveTrustedSigners, TrustedSigners
@@ -286,6 +288,7 @@
         self.id = id
         self.last_modified_time = last_modified_time
         self.status = status
+        self.in_progress_invalidation_batches = 0
         self.active_signers = None
         self.etag = None
         self._bucket = None
@@ -308,6 +311,8 @@
             self.last_modified_time = value
         elif name == 'Status':
             self.status = value
+        elif name == 'InProgressInvalidationBatches':
+            self.in_progress_invalidation_batches = int(value)
         elif name == 'DomainName':
             self.domain_name = value
         else:
@@ -316,12 +321,18 @@
     def update(self, enabled=None, cnames=None, comment=None):
         """
         Update the configuration of the Distribution.  The only values
-        of the DistributionConfig that can be updated are:
+        of the DistributionConfig that can be directly updated are:
 
          * CNAMES
          * Comment
          * Whether the Distribution is enabled or not
 
+        Any changes to the ``trusted_signers`` or ``origin`` properties of
+        this distribution's current config object will also be included in
+        the update. Therefore, to set the origin access identity for this
+        distribution, set ``Distribution.config.origin.origin_access_identity``
+        before calling this update method.
+
         :type enabled: bool
         :param enabled: Whether the Distribution is active or not.
 
@@ -371,19 +382,23 @@
         self.connection.delete_distribution(self.id, self.etag)
 
     def _get_bucket(self):
-        if not self._bucket:
-            bucket_name = self.config.origin.replace('.s3.amazonaws.com', '')
-            from boto.s3.connection import S3Connection
-            s3 = S3Connection(self.connection.aws_access_key_id,
-                              self.connection.aws_secret_access_key,
-                              proxy=self.connection.proxy,
-                              proxy_port=self.connection.proxy_port,
-                              proxy_user=self.connection.proxy_user,
-                              proxy_pass=self.connection.proxy_pass)
-            self._bucket = s3.get_bucket(bucket_name)
-            self._bucket.distribution = self
-            self._bucket.set_key_class(self._object_class)
-        return self._bucket
+        if isinstance(self.config.origin, S3Origin):
+            if not self._bucket:
+                bucket_dns_name = self.config.origin.dns_name
+                bucket_name = bucket_dns_name.replace('.s3.amazonaws.com', '')
+                from boto.s3.connection import S3Connection
+                s3 = S3Connection(self.connection.aws_access_key_id,
+                                  self.connection.aws_secret_access_key,
+                                  proxy=self.connection.proxy,
+                                  proxy_port=self.connection.proxy_port,
+                                  proxy_user=self.connection.proxy_user,
+                                  proxy_pass=self.connection.proxy_pass)
+                self._bucket = s3.get_bucket(bucket_name)
+                self._bucket.distribution = self
+                self._bucket.set_key_class(self._object_class)
+            return self._bucket
+        else:
+            raise NotImplementedError('Unable to get_objects on CustomOrigin')
     
     def get_objects(self):
         """
@@ -469,17 +484,198 @@
         :rtype: :class:`boto.cloudfront.object.Object`
         :return: The newly created object.
         """
-        if self.config.origin_access_identity:
+        if self.config.origin.origin_access_identity:
             policy = 'private'
         else:
             policy = 'public-read'
         bucket = self._get_bucket()
         object = bucket.new_key(name)
         object.set_contents_from_file(content, headers=headers, policy=policy)
-        if self.config.origin_access_identity:
+        if self.config.origin.origin_access_identity:
             self.set_permissions(object, replace)
         return object
-            
+
+    def create_signed_url(self, url, keypair_id,
+                          expire_time=None, valid_after_time=None,
+                          ip_address=None, policy_url=None,
+                          private_key_file=None, private_key_string=None):
+        """
+        Creates a signed CloudFront URL that is only valid within the specified
+        parameters.
+
+        :type url: str
+        :param url: The URL of the protected object.
+
+        :type keypair_id: str
+        :param keypair_id: The keypair ID of the Amazon KeyPair used to sign
+                           the URL.  This ID MUST correspond to the private key
+                           specified with private_key_file or
+                           private_key_string.
+
+        :type expire_time: int
+        :param expire_time: The expiry time of the URL. If provided, the URL
+                            will expire after the time has passed. If not
+                            provided the URL will never expire. Format is a
+                            unix epoch. Use time.time() + duration_in_sec.
+
+        :type valid_after_time: int
+        :param valid_after_time: If provided, the URL will not be valid until
+                                 after valid_after_time. Format is a unix
+                                 epoch. Use time.time() + secs_until_valid.
+
+        :type ip_address: str
+        :param ip_address: If provided, only allows access from the specified
+                           IP address.  Use '192.168.0.10' for a single IP or
+                           use '192.168.0.0/24' CIDR notation for a subnet.
+
+        :type policy_url: str
+        :param policy_url: If provided, allows the signature to contain
+                           wildcard globs in the URL.  For example, you could
+                           provide: 'http://example.com/media/*' and the policy
+                           and signature would allow access to all contents of
+                           the media subdirectory.  If not specified, access
+                           is allowed only to the exact url provided in 'url'.
+
+        :type private_key_file: str or file object.
+        :param private_key_file: If provided, contains the filename of the
+                                 private key file used for signing or an open
+                                 file object containing the private key
+                                 contents.  Only one of private_key_file or
+                                 private_key_string can be provided.
+
+        :type private_key_string: str
+        :param private_key_string: If provided, contains the private key string
+                                   used for signing. Only one of
+                                   private_key_file or private_key_string can
+                                   be provided.
+
+        :rtype: str
+        :return: The signed URL.
+        """
+        # Get the required parameters
+        params = self._create_signing_params(
+                     url=url, keypair_id=keypair_id, expire_time=expire_time,
+                     valid_after_time=valid_after_time, ip_address=ip_address,
+                     policy_url=policy_url, private_key_file=private_key_file,
+                     private_key_string=private_key_string)
+
+        # Combine these into a full URL.
+        if "?" in url:
+            sep = "&"
+        else:
+            sep = "?"
+        signed_url_params = []
+        for key in ["Expires", "Policy", "Signature", "Key-Pair-Id"]:
+            if key in params:
+                param = "%s=%s" % (key, params[key])
+                signed_url_params.append(param)
+        signed_url = url + sep + "&".join(signed_url_params)
+        return signed_url
+
+    def _create_signing_params(self, url, keypair_id,
+                          expire_time=None, valid_after_time=None,
+                          ip_address=None, policy_url=None,
+                          private_key_file=None, private_key_string=None):
+        """
+        Creates the required URL parameters for a signed URL.
+        """
+        params = {}
+        # Check if we can use a canned policy
+        if expire_time and not valid_after_time and not ip_address and not policy_url:
+            # we manually construct this policy string to ensure formatting
+            # matches signature
+            policy = self._canned_policy(url, expire_time)
+            params["Expires"] = str(expire_time)
+        else:
+            # If no policy_url is specified, default to the full url.
+            if policy_url is None:
+                policy_url = url
+            # Can't use canned policy
+            policy = self._custom_policy(policy_url, expires=expire_time,
+                                         valid_after=valid_after_time,
+                                         ip_address=ip_address)
+            encoded_policy = self._url_base64_encode(policy)
+            params["Policy"] = encoded_policy
+        # Sign the policy.
+        signature = self._sign_string(policy, private_key_file, private_key_string)
+        # Base64-encode the signature (URL-safe variant).
+        encoded_signature = self._url_base64_encode(signature)
+        params["Signature"] = encoded_signature
+        params["Key-Pair-Id"] = keypair_id
+        return params
+
+    @staticmethod
+    def _canned_policy(resource, expires):
+        """
+        Creates a canned policy string.
+        """
+        policy = ('{"Statement":[{"Resource":"%(resource)s",'
+                  '"Condition":{"DateLessThan":{"AWS:EpochTime":'
+                  '%(expires)s}}}]}' % locals())
+        return policy
+
+    @staticmethod
+    def _custom_policy(resource, expires=None, valid_after=None, ip_address=None):
+        """
+        Creates a custom policy string based on the supplied parameters.
+        """
+        condition = {}
+        if expires:
+            condition["DateLessThan"] = {"AWS:EpochTime": expires}
+        if valid_after:
+            condition["DateGreaterThan"] = {"AWS:EpochTime": valid_after}
+        if ip_address:
+            if '/' not in ip_address:
+                ip_address += "/32"
+            condition["IpAddress"] = {"AWS:SourceIp": ip_address}
+        policy = {"Statement": [{
+                     "Resource": resource,
+                     "Condition": condition}]}
+        return json.dumps(policy, separators=(",", ":"))
+
+    @staticmethod
+    def _sign_string(message, private_key_file=None, private_key_string=None):
+        """
+        Signs a string for use with Amazon CloudFront.  Requires the M2Crypto
+        library be installed.
+        """
+        try:
+            from M2Crypto import EVP
+        except ImportError:
+            raise NotImplementedError("Boto depends on the python M2Crypto "
+                                      "library to generate signed URLs for "
+                                      "CloudFront")
+        # Make sure only one of private_key_file and private_key_string is set
+        if private_key_file and private_key_string:
+            raise ValueError("Specify either private_key_file or private_key_string, not both")
+        if not private_key_file and not private_key_string:
+            raise ValueError("You must specify one of private_key_file or private_key_string")
+        # if private_key_file is a file object read the key string from there
+        if isinstance(private_key_file, file):
+            private_key_string = private_key_file.read()
+        # Now load key and calculate signature
+        if private_key_string:
+            key = EVP.load_key_string(private_key_string)
+        else:
+            key = EVP.load_key(private_key_file)
+        key.reset_context(md='sha1')
+        key.sign_init()
+        key.sign_update(str(message))
+        signature = key.sign_final()
+        return signature
+
+    @staticmethod
+    def _url_base64_encode(msg):
+        """
+        Base64 encodes a string using the URL-safe characters specified by
+        Amazon.
+        """
+        msg_base64 = base64.b64encode(msg)
+        msg_base64 = msg_base64.replace('+', '-')
+        msg_base64 = msg_base64.replace('=', '_')
+        msg_base64 = msg_base64.replace('/', '~')
+        return msg_base64
+
 class StreamingDistribution(Distribution):
 
     def __init__(self, connection=None, config=None, domain_name='',
@@ -498,12 +694,19 @@
     def update(self, enabled=None, cnames=None, comment=None):
         """
         Update the configuration of the StreamingDistribution.  The only values
-        of the StreamingDistributionConfig that can be updated are:
+        of the StreamingDistributionConfig that can be directly updated are:
 
          * CNAMES
          * Comment
          * Whether the Distribution is enabled or not
 
+        Any changes to the ``trusted_signers`` or ``origin`` properties of
+        this distribution's current config object will also be included in
+        the update. Therefore, to set the origin access identity for this
+        distribution, set
+        ``StreamingDistribution.config.origin.origin_access_identity``
+        before calling this update method.
+
         :type enabled: bool
         :param enabled: Whether the StreamingDistribution is active or not.
 
diff --git a/boto/cloudfront/invalidation.py b/boto/cloudfront/invalidation.py
index ea13a67..b213e65 100644
--- a/boto/cloudfront/invalidation.py
+++ b/boto/cloudfront/invalidation.py
@@ -27,11 +27,11 @@
         :see: http://docs.amazonwebservices.com/AmazonCloudFront/2010-08-01/APIReference/index.html?InvalidationBatchDatatype.html
     """
 
-    def __init__(self, paths=[], connection=None, distribution=None, caller_reference=''):
+    def __init__(self, paths=None, connection=None, distribution=None, caller_reference=''):
         """Create a new invalidation request:
             :paths: An array of paths to invalidate
         """
-        self.paths = paths
+        self.paths = paths or []
         self.distribution = distribution
         self.caller_reference = caller_reference
         if not self.caller_reference:
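The one-line change to `InvalidationBatch.__init__` above fixes a classic Python pitfall: a mutable default argument is evaluated once, at function definition time, so every call that omits `paths` would share (and mutate) the same list. A minimal sketch of the failure mode and the fix:

```python
class Buggy(object):
    def __init__(self, paths=[]):      # one shared list for every call
        self.paths = paths

class Fixed(object):
    def __init__(self, paths=None):    # fresh list per instance
        self.paths = paths or []

a, b = Buggy(), Buggy()
a.paths.append('/index.html')
leaked = b.paths        # b sees a's mutation: ['/index.html']

c, d = Fixed(), Fixed()
c.paths.append('/index.html')
clean = d.paths         # []
```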
diff --git a/boto/connection.py b/boto/connection.py
index 76e9ffe..3c9f237 100644
--- a/boto/connection.py
+++ b/boto/connection.py
@@ -3,6 +3,7 @@
 # Copyright (c) 2008 rPath, Inc.
 # Copyright (c) 2009 The Echo Nest Corporation
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
+# Copyright (c) 2011, Nexenta Systems Inc.
 # All rights reserved.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
@@ -41,11 +42,13 @@
 Handles basic connections to AWS
 """
 
+from __future__ import with_statement
 import base64
 import errno
 import httplib
 import os
 import Queue
+import random
 import re
 import socket
 import sys
@@ -57,27 +60,227 @@
 import auth_handler
 import boto
 import boto.utils
+import boto.handler
+import boto.cacerts
 
-from boto import config, UserAgent, handler
+from boto import config, UserAgent
 from boto.exception import AWSConnectionError, BotoClientError, BotoServerError
 from boto.provider import Provider
 from boto.resultset import ResultSet
 
+HAVE_HTTPS_CONNECTION = False
+try:
+    import ssl
+    from boto import https_connection
+    # Google App Engine runs on Python 2.5 so doesn't have ssl.SSLError.
+    if hasattr(ssl, 'SSLError'):
+        HAVE_HTTPS_CONNECTION = True
+except ImportError:
+    pass
+
+try:
+    import threading
+except ImportError:
+    import dummy_threading as threading
+
+ON_APP_ENGINE = all(key in os.environ for key in (
+    'USER_IS_ADMIN', 'CURRENT_VERSION_ID', 'APPLICATION_ID'))
 
 PORTS_BY_SECURITY = { True: 443, False: 80 }
 
-class ConnectionPool:
-    def __init__(self, hosts, connections_per_host):
-        self._hosts = boto.utils.LRUCache(hosts)
-        self.connections_per_host = connections_per_host
+DEFAULT_CA_CERTS_FILE = os.path.join(
+        os.path.dirname(os.path.abspath(boto.cacerts.__file__)), "cacerts.txt")
 
-    def __getitem__(self, key):
-        if key not in self._hosts:
-            self._hosts[key] = Queue.Queue(self.connections_per_host)
-        return self._hosts[key]
+class HostConnectionPool(object):
 
-    def __repr__(self):
-        return 'ConnectionPool:%s' % ','.join(self._hosts._dict.keys())
+    """
+    A pool of connections for one remote (host,is_secure).
+
+    When connections are added to the pool, they are put into a
+    pending queue.  The _mexe method returns connections to the pool
+    before the response body has been read, so they connections aren't
+    ready to send another request yet.  They stay in the pending queue
+    until they are ready for another request, at which point they are
+    returned to the pool of ready connections.
+
+    The pool of ready connections is an ordered list of
+    (connection,time) pairs, where the time is the time the connection
+    was returned from _mexe.  After a certain period of time,
+    connections are considered stale, and discarded rather than being
+    reused.  This saves having to wait for the connection to time out
+    if AWS has decided to close it on the other end because of
+    inactivity.
+
+    Thread Safety:
+
+        This class is used only from ConnectionPool while its
+        mutex is held.
+    """
+
+    def __init__(self):
+        self.queue = []
+
+    def size(self):
+        """
+        Returns the number of connections in the pool for this host.
+        Some of the connections may still be in use, and may not be
+        ready to be returned by get().
+        """
+        return len(self.queue)
+    
+    def put(self, conn):
+        """
+        Adds a connection to the pool, along with the time it was
+        added.
+        """
+        self.queue.append((conn, time.time()))
+
+    def get(self):
+        """
+        Returns the next connection in this pool that is ready to be
+        reused.  Returns None if there aren't any.
+        """
+        # Discard ready connections that are too old.
+        self.clean()
+
+        # Return the first connection that is ready, and remove it
+        # from the queue.  Connections that aren't ready are returned
+        # to the end of the queue with an updated time, on the
+        # assumption that somebody is actively reading the response.
+        for _ in range(len(self.queue)):
+            (conn, _) = self.queue.pop(0)
+            if self._conn_ready(conn):
+                return conn
+            else:
+                self.put(conn)
+        return None
+
+    def _conn_ready(self, conn):
+        """
+        There is a nice state diagram at the top of httplib.py.  It
+        indicates that once the response headers have been read (which
+        _mexe does before adding the connection to the pool), a
+        response is attached to the connection, and it stays there
+        until it's done reading.  This isn't entirely true: even after
+        the client is done reading, the response may be closed, but
+        not removed from the connection yet.
+
+        This is ugly, reading a private instance variable, but the
+        state we care about isn't available in any public methods.
+        """
+        if ON_APP_ENGINE:
+            # Google App Engine implementation of HTTPConnection doesn't contain
+            # _HTTPConnection__response attribute. Moreover, it's not possible
+            # to determine if a given connection is ready. Reusing connections
+            # simply doesn't make sense with the App Engine urlfetch service.
+            return False
+        else:
+            response = conn._HTTPConnection__response
+            return (response is None) or response.isclosed()
+
+    def clean(self):
+        """
+        Get rid of stale connections.
+        """
+        # Note that we do not close the connection here -- somebody
+        # may still be reading from it.
+        while len(self.queue) > 0 and self._pair_stale(self.queue[0]):
+            self.queue.pop(0)
+
+    def _pair_stale(self, pair):
+        """
+        Returns true if the (connection,time) pair is too old to be
+        used.
+        """
+        (_conn, return_time) = pair
+        now = time.time()
+        return return_time + ConnectionPool.STALE_DURATION < now
+
+class ConnectionPool(object):
+
+    """
+    A connection pool that expires connections after a fixed period of
+    time.  This saves time spent waiting for a connection that AWS has
+    timed out on the other end.
+
+    This class is thread-safe.
+    """
+
+    #
+    # The amount of time between calls to clean.
+    #
+    
+    CLEAN_INTERVAL = 5.0
+
+    #
+    # How long before a connection becomes "stale" and won't be reused
+    # again.  The intention is that this time is less than the timeout
+    # period that AWS uses, so we'll never try to reuse a connection
+    # and find that AWS is timing it out.
+    #
+    # Experimentation in July 2011 shows that AWS starts timing things
+    # out after three minutes.  The 60 seconds here is conservative so
+    # we should never hit that 3-minute timeout.
+    #
+
+    STALE_DURATION = 60.0
+
+    def __init__(self):
+        # Mapping from (host,is_secure) to HostConnectionPool.
+        # If a pool becomes empty, it is removed.
+        self.host_to_pool = {}
+        # The last time the pool was cleaned.
+        self.last_clean_time = 0.0
+        self.mutex = threading.Lock()
+
+    def size(self):
+        """
+        Returns the number of connections in the pool.
+        """
+        return sum(pool.size() for pool in self.host_to_pool.values())
+
+    def get_http_connection(self, host, is_secure):
+        """
+        Gets a connection from the pool for the named host.  Returns
+        None if there is no connection that can be reused.
+        """
+        self.clean()
+        with self.mutex:
+            key = (host, is_secure)
+            if key not in self.host_to_pool:
+                return None
+            return self.host_to_pool[key].get()
+
+    def put_http_connection(self, host, is_secure, conn):
+        """
+        Adds a connection to the pool of connections that can be
+        reused for the named host.
+        """
+        with self.mutex:
+            key = (host, is_secure)
+            if key not in self.host_to_pool:
+                self.host_to_pool[key] = HostConnectionPool()
+            self.host_to_pool[key].put(conn)
+
+    def clean(self):
+        """
+        Clean up the stale connections in all of the pools, and then
+        get rid of empty pools.  Pools clean themselves every time a
+        connection is fetched; this cleaning takes care of pools that
+        aren't being used any more, so nothing is being fetched from
+        them.
+        """
+        with self.mutex:
+            now = time.time()
+            if self.last_clean_time + self.CLEAN_INTERVAL < now:
+                to_remove = []
+                for (host, pool) in self.host_to_pool.items():
+                    pool.clean()
+                    if pool.size() == 0:
+                        to_remove.append(host)
+                for host in to_remove:
+                    del self.host_to_pool[host]
+                self.last_clean_time = now
 
 class HTTPRequest(object):
 
@@ -89,7 +292,7 @@
         :param method: The HTTP method name, 'GET', 'POST', 'PUT' etc.
 
         :type protocol: string
-        :param protocol: The http protocol used, 'http' or 'https'. 
+        :param protocol: The http protocol used, 'http' or 'https'.
 
         :type host: string
         :param host: Host to which the request is addressed. eg. abc.com
@@ -98,10 +301,10 @@
         :param port: port on which the request is being sent. Zero means unset,
                      in which case default port will be chosen.
 
-        :type path: string 
+        :type path: string
         :param path: URL path that is being accessed.
 
-        :type auth_path: string 
+        :type auth_path: string
         :param auth_path: The part of the URL path used when creating the
                      authentication string.
 
@@ -119,12 +322,21 @@
         """
         self.method = method
         self.protocol = protocol
-        self.host = host 
+        self.host = host
         self.port = port
         self.path = path
+        if auth_path is None:
+            auth_path = path
         self.auth_path = auth_path
         self.params = params
-        self.headers = headers
+        # chunked Transfer-Encoding should act only on PUT request.
+        if headers and 'Transfer-Encoding' in headers and \
+                headers['Transfer-Encoding'] == 'chunked' and \
+                self.method != 'PUT':
+            self.headers = headers.copy()
+            del self.headers['Transfer-Encoding']
+        else:
+            self.headers = headers
         self.body = body
 
     def __str__(self):
@@ -133,20 +345,37 @@
                  self.protocol, self.host, self.port, self.path, self.params,
                  self.headers, self.body))
 
+    def authorize(self, connection, **kwargs):
+        for key in self.headers:
+            val = self.headers[key]
+            if isinstance(val, unicode):
+                self.headers[key] = urllib.quote_plus(val.encode('utf-8'))
+
+        connection._auth_handler.add_auth(self, **kwargs)
+
+        self.headers['User-Agent'] = UserAgent
+        # I'm not sure if this is still needed, now that add_auth is
+        # setting the content-length for POST requests.
+        if not self.headers.has_key('Content-Length'):
+            if not self.headers.has_key('Transfer-Encoding') or \
+                    self.headers['Transfer-Encoding'] != 'chunked':
+                self.headers['Content-Length'] = str(len(self.body))
+
 class AWSAuthConnection(object):
     def __init__(self, host, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, debug=0,
-                 https_connection_factory=None, path='/', provider='aws'):
+                 https_connection_factory=None, path='/',
+                 provider='aws', security_token=None):
         """
         :type host: str
         :param host: The host to make the connection to
-       
+
         :keyword str aws_access_key_id: Your AWS Access Key ID (provided by
-            Amazon). If none is specified, the value in your 
+            Amazon). If none is specified, the value in your
             ``AWS_ACCESS_KEY_ID`` environmental variable is used.
-        :keyword str aws_secret_access_key: Your AWS Secret Access Key 
-            (provided by Amazon). If none is specified, the value in your 
+        :keyword str aws_secret_access_key: Your AWS Secret Access Key
+            (provided by Amazon). If none is specified, the value in your
             ``AWS_SECRET_ACCESS_KEY`` environmental variable is used.
 
         :type is_secure: boolean
@@ -172,15 +401,35 @@
         :type port: int
         :param port: The port to use to connect
         """
-        self.num_retries = 5
+        self.num_retries = 6
         # Override passed-in is_secure setting if value was defined in config.
         if config.has_option('Boto', 'is_secure'):
             is_secure = config.getboolean('Boto', 'is_secure')
         self.is_secure = is_secure
+        # Whether or not to validate server certificates.  At some point in the
+        # future, the default should be flipped to true.
+        self.https_validate_certificates = config.getbool(
+                'Boto', 'https_validate_certificates', False)
+        if self.https_validate_certificates and not HAVE_HTTPS_CONNECTION:
+            raise BotoClientError(
+                    "SSL server certificate validation is enabled in boto "
+                    "configuration, but Python dependencies required to "
+                    "support this feature are not available. Certificate "
+                    "validation is only supported when running under Python "
+                    "2.6 or later.")
+        self.ca_certificates_file = config.get_value(
+                'Boto', 'ca_certificates_file', DEFAULT_CA_CERTS_FILE)
         self.handle_proxy(proxy, proxy_port, proxy_user, proxy_pass)
         # define exceptions from httplib that we want to catch and retry
         self.http_exceptions = (httplib.HTTPException, socket.error,
                                 socket.gaierror)
+        # define subclasses of the above that are not retryable.
+        self.http_unretryable_exceptions = []
+        if HAVE_HTTPS_CONNECTION:
+            self.http_unretryable_exceptions.append(ssl.SSLError)
+            self.http_unretryable_exceptions.append(
+                    https_connection.InvalidCertificateException)
+
         # define values in socket exceptions we don't want to catch
         self.socket_exception_values = (errno.EINTR,)
         if https_connection_factory is not None:
@@ -203,20 +452,31 @@
         else:
             self.port = PORTS_BY_SECURITY[is_secure]
 
+        # Timeout used to tell httplib how long to wait for socket timeouts.
+        # Default is to leave timeout unchanged, which will in turn result in
+        # the socket's default global timeout being used. To specify a
+        # timeout, set http_socket_timeout in Boto config. Regardless,
+        # timeouts will only be applied if Python is 2.6 or greater.
+        self.http_connection_kwargs = {}
+        if (sys.version_info[0], sys.version_info[1]) >= (2, 6):
+            if config.has_option('Boto', 'http_socket_timeout'):
+                timeout = config.getint('Boto', 'http_socket_timeout')
+                self.http_connection_kwargs['timeout'] = timeout
+
         self.provider = Provider(provider,
                                  aws_access_key_id,
-                                 aws_secret_access_key)
+                                 aws_secret_access_key,
+                                 security_token)
 
         # allow config file to override default host
         if self.provider.host:
             self.host = self.provider.host
 
-        # cache up to 20 connections per host, up to 20 hosts
-        self._pool = ConnectionPool(20, 20)
+        self._pool = ConnectionPool()
         self._connection = (self.server_name(), self.is_secure)
         self._last_rs = None
         self._auth_handler = auth.get_auth_handler(
-              host, config, self.provider, self._required_auth_capability()) 
+              host, config, self.provider, self._required_auth_capability())
 
     def __repr__(self):
         return '%s:%s' % (self.__class__.__name__, self.host)
@@ -224,13 +484,6 @@
     def _required_auth_capability(self):
         return []
 
-    def _cached_name(self, host, is_secure):
-        if host is None:
-            host = self.server_name()
-        cached_name = is_secure and 'https://' or 'http://'
-        cached_name += host
-        return cached_name
-
     def connection(self):
         return self.get_http_connection(*self._connection)
     connection = property(connection)
@@ -281,7 +534,8 @@
             # did the same when calculating the V2 signature.  In 2.6
             # (and higher!)
             # it no longer does that.  Hence, this kludge.
-            if sys.version[:3] in ('2.6', '2.7') and port == 443:
+            if ((ON_APP_ENGINE and sys.version[:3] == '2.5') or
+                    sys.version[:3] in ('2.6', '2.7')) and port == 443:
                 signature_host = self.host
             else:
                 signature_host = '%s:%d' % (self.host, port)
@@ -322,10 +576,10 @@
         self.use_proxy = (self.proxy != None)
 
     def get_http_connection(self, host, is_secure):
-        queue = self._pool[self._cached_name(host, is_secure)]
-        try:
-            return queue.get_nowait()
-        except Queue.Empty:
+        conn = self._pool.get_http_connection(host, is_secure)
+        if conn is not None:
+            return conn
+        else:
             return self.new_http_connection(host, is_secure)
 
     def new_http_connection(self, host, is_secure):
@@ -334,16 +588,25 @@
         if host is None:
             host = self.server_name()
         if is_secure:
-            boto.log.debug('establishing HTTPS connection')
+            boto.log.debug(
+                    'establishing HTTPS connection: host=%s, kwargs=%s',
+                    host, self.http_connection_kwargs)
             if self.use_proxy:
                 connection = self.proxy_ssl()
             elif self.https_connection_factory:
                 connection = self.https_connection_factory(host)
+            elif self.https_validate_certificates and HAVE_HTTPS_CONNECTION:
+                connection = https_connection.CertValidatingHTTPSConnection(
+                        host, ca_certs=self.ca_certificates_file,
+                        **self.http_connection_kwargs)
             else:
-                connection = httplib.HTTPSConnection(host)
+                connection = httplib.HTTPSConnection(host,
+                        **self.http_connection_kwargs)
         else:
-            boto.log.debug('establishing HTTP connection')
-            connection = httplib.HTTPConnection(host)
+            boto.log.debug('establishing HTTP connection: kwargs=%s' %
+                    self.http_connection_kwargs)
+            connection = httplib.HTTPConnection(host,
+                    **self.http_connection_kwargs)
         if self.debug > 1:
             connection.set_debuglevel(self.debug)
         # self.connection must be maintained for backwards-compatibility
@@ -354,11 +617,7 @@
         return connection
 
     def put_http_connection(self, host, is_secure, connection):
-        try:
-            self._pool[self._cached_name(host, is_secure)].put_nowait(connection)
-        except Queue.Full:
-            # gracefully fail in case of pool overflow
-            connection.close()
+        self._pool.put_http_connection(host, is_secure, connection)
 
     def proxy_ssl(self):
         host = '%s:%d' % (self.host, self.port)
@@ -367,13 +626,14 @@
             sock.connect((self.proxy, int(self.proxy_port)))
         except:
             raise
+        boto.log.debug("Proxy connection: CONNECT %s HTTP/1.0\r\n", host)
         sock.sendall("CONNECT %s HTTP/1.0\r\n" % host)
         sock.sendall("User-Agent: %s\r\n" % UserAgent)
         if self.proxy_user and self.proxy_pass:
             for k, v in self.get_proxy_auth_header().items():
                 sock.sendall("%s: %s\r\n" % (k, v))
         sock.sendall("\r\n")
-        resp = httplib.HTTPResponse(sock, strict=True)
+        resp = httplib.HTTPResponse(sock, strict=True, debuglevel=self.debug)
         resp.begin()
 
         if resp.status != 200:
@@ -388,12 +648,29 @@
 
         h = httplib.HTTPConnection(host)
 
-        # Wrap the socket in an SSL socket
-        if hasattr(httplib, 'ssl'):
-            sslSock = httplib.ssl.SSLSocket(sock)
-        else: # Old Python, no ssl module
-            sslSock = socket.ssl(sock, None, None)
-            sslSock = httplib.FakeSocket(sock, sslSock)
+        if self.https_validate_certificates and HAVE_HTTPS_CONNECTION:
+            boto.log.debug("wrapping ssl socket for proxied connection; "
+                           "CA certificate file=%s",
+                           self.ca_certificates_file)
+            key_file = self.http_connection_kwargs.get('key_file', None)
+            cert_file = self.http_connection_kwargs.get('cert_file', None)
+            sslSock = ssl.wrap_socket(sock, keyfile=key_file,
+                                      certfile=cert_file,
+                                      cert_reqs=ssl.CERT_REQUIRED,
+                                      ca_certs=self.ca_certificates_file)
+            cert = sslSock.getpeercert()
+            # Note: maxsplit=0 would perform no split at all and leave the
+            # port attached to the hostname; split once and keep the host part.
+            hostname = self.host.split(':', 1)[0]
+            if not https_connection.ValidateCertificateHostname(cert, hostname):
+                raise https_connection.InvalidCertificateException(
+                        hostname, cert, 'hostname mismatch')
+        else:
+            # Fallback for old Python without ssl.wrap_socket
+            if hasattr(httplib, 'ssl'):
+                sslSock = httplib.ssl.SSLSocket(sock)
+            else:
+                sslSock = socket.ssl(sock, None, None)
+                sslSock = httplib.FakeSocket(sock, sslSock)
+
         # This is a bit unclean
         h.sock = sslSock
         return h
@@ -406,8 +683,7 @@
         auth = base64.encodestring(self.proxy_user + ':' + self.proxy_pass)
         return {'Proxy-Authorization': 'Basic %s' % auth}
 
-    def _mexe(self, method, path, data, headers, host=None, sender=None,
-              override_num_retries=None):
+    def _mexe(self, request, sender=None, override_num_retries=None):
         """
         mexe - Multi-execute inside a loop, retrying multiple times to handle
                transient Internet errors by simply trying again.
@@ -416,11 +692,11 @@
         This code was inspired by the S3Utils classes posted to the boto-users
         Google group by Larry Bates.  Thanks!
         """
-        boto.log.debug('Method: %s' % method)
-        boto.log.debug('Path: %s' % path)
-        boto.log.debug('Data: %s' % data)
-        boto.log.debug('Headers: %s' % headers)
-        boto.log.debug('Host: %s' % host)
+        boto.log.debug('Method: %s' % request.method)
+        boto.log.debug('Path: %s' % request.path)
+        boto.log.debug('Data: %s' % request.body)
+        boto.log.debug('Headers: %s' % request.headers)
+        boto.log.debug('Host: %s' % request.host)
         response = None
         body = None
         e = None
@@ -429,49 +705,53 @@
         else:
             num_retries = override_num_retries
         i = 0
-        connection = self.get_http_connection(host, self.is_secure)
+        connection = self.get_http_connection(request.host, self.is_secure)
         while i <= num_retries:
+            # Use binary exponential backoff to desynchronize client requests
+            next_sleep = random.random() * (2 ** i)
             try:
+                # we now re-sign each request before it is retried
+                request.authorize(connection=self)
                 if callable(sender):
-                    response = sender(connection, method, path, data, headers)
+                    response = sender(connection, request.method, request.path,
+                                      request.body, request.headers)
                 else:
-                    connection.request(method, path, data, headers)
+                    connection.request(request.method, request.path, request.body,
+                                       request.headers)
                     response = connection.getresponse()
                 location = response.getheader('location')
                 # -- gross hack --
                 # httplib gets confused with chunked responses to HEAD requests
                 # so I have to fake it out
-                if method == 'HEAD' and getattr(response, 'chunked', False):
+                if request.method == 'HEAD' and getattr(response, 'chunked', False):
                     response.chunked = 0
                 if response.status == 500 or response.status == 503:
-                    boto.log.debug('received %d response, retrying in %d seconds' % (response.status, 2 ** i))
+                    boto.log.debug('received %d response, retrying in %3.1f seconds' %
+                                   (response.status, next_sleep))
                     body = response.read()
-                elif response.status == 408:
-                    body = response.read()
-                    print '-------------------------'
-                    print '         4 0 8           '
-                    print 'path=%s' % path
-                    print body
-                    print '-------------------------'
                 elif response.status < 300 or response.status >= 400 or \
                         not location:
-                    self.put_http_connection(host, self.is_secure, connection)
+                    self.put_http_connection(request.host, self.is_secure, connection)
                     return response
                 else:
-                    scheme, host, path, params, query, fragment = \
+                    scheme, request.host, request.path, params, query, fragment = \
                             urlparse.urlparse(location)
                     if query:
-                        path += '?' + query
-                    boto.log.debug('Redirecting: %s' % scheme + '://' + host + path)
-                    connection = self.get_http_connection(host, scheme == 'https')
+                        request.path += '?' + query
+                    boto.log.debug('Redirecting: %s' % scheme + '://' + request.host + request.path)
+                    connection = self.get_http_connection(request.host, scheme == 'https')
                     continue
-            except KeyboardInterrupt:
-                sys.exit('Keyboard Interrupt')
             except self.http_exceptions, e:
+                for unretryable in self.http_unretryable_exceptions:
+                    if isinstance(e, unretryable):
+                        boto.log.debug(
+                            'encountered unretryable %s exception, re-raising' %
+                            e.__class__.__name__)
+                        raise e
                 boto.log.debug('encountered %s exception, reconnecting' % \
                                   e.__class__.__name__)
-                connection = self.new_http_connection(host, self.is_secure)
-            time.sleep(2 ** i)
+                connection = self.new_http_connection(request.host, self.is_secure)
+            time.sleep(next_sleep)
             i += 1
        # If we made it here, it's because we have exhausted our retries and still haven't
         # succeeded.  So, if we have a response object, use it to raise an exception.
@@ -498,6 +778,8 @@
             headers = headers.copy()
         host = host or self.host
         if self.use_proxy:
+            if not auth_path:
+                auth_path = path
             path = self.prefix_proxy_to_path(path, host)
             if self.proxy_user and self.proxy_pass and not self.is_secure:
                 # If is_secure, we don't have to set the proxy authentication
@@ -506,34 +788,12 @@
         return HTTPRequest(method, self.protocol, host, self.port,
                            path, auth_path, params, headers, data)
 
-    def fill_in_auth(self, http_request, **kwargs):
-        headers = http_request.headers
-        for key in headers:
-            val = headers[key]
-            if isinstance(val, unicode):
-                headers[key] = urllib.quote_plus(val.encode('utf-8'))
-
-        self._auth_handler.add_auth(http_request, **kwargs)
-
-        headers['User-Agent'] = UserAgent
-        if not headers.has_key('Content-Length'):
-            headers['Content-Length'] = str(len(http_request.body))
-        return http_request
-
-    def _send_http_request(self, http_request, sender=None,
-                           override_num_retries=None):
-        return self._mexe(http_request.method, http_request.path,
-                          http_request.body, http_request.headers,
-                          http_request.host, sender, override_num_retries)
-
     def make_request(self, method, path, headers=None, data='', host=None,
                      auth_path=None, sender=None, override_num_retries=None):
         """Makes a request to the server, with stock multiple-retry logic."""
         http_request = self.build_base_http_request(method, path, auth_path,
                                                     {}, headers, data, host)
-        http_request = self.fill_in_auth(http_request)
-        return self._send_http_request(http_request, sender,
-                                       override_num_retries)
+        return self._mexe(http_request, sender, override_num_retries)
 
     def close(self):
         """(Optional) Close any open HTTP connections.  This is non-destructive,
@@ -550,10 +810,13 @@
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, host=None, debug=0,
-                 https_connection_factory=None, path='/'):
-        AWSAuthConnection.__init__(self, host, aws_access_key_id, aws_secret_access_key,
-                                   is_secure, port, proxy, proxy_port, proxy_user, proxy_pass,
-                                   debug, https_connection_factory, path)
+                 https_connection_factory=None, path='/', security_token=None):
+        AWSAuthConnection.__init__(self, host, aws_access_key_id,
+                                   aws_secret_access_key,
+                                   is_secure, port, proxy,
+                                   proxy_port, proxy_user, proxy_pass,
+                                   debug, https_connection_factory, path,
+                                   security_token=security_token)
 
     def _required_auth_capability(self):
         return []
@@ -568,8 +831,7 @@
         if action:
             http_request.params['Action'] = action
         http_request.params['Version'] = self.APIVersion
-        http_request = self.fill_in_auth(http_request)
-        return self._send_http_request(http_request)
+        return self._mexe(http_request)
 
     def build_list_params(self, params, items, label):
         if isinstance(items, str):
@@ -579,7 +841,8 @@
 
     # generics
 
-    def get_list(self, action, params, markers, path='/', parent=None, verb='GET'):
+    def get_list(self, action, params, markers, path='/',
+                 parent=None, verb='GET'):
         if not parent:
             parent = self
         response = self.make_request(action, params, path, verb)
@@ -590,7 +853,7 @@
             raise self.ResponseError(response.status, response.reason, body)
         elif response.status == 200:
             rs = ResultSet(markers)
-            h = handler.XmlHandler(rs, parent)
+            h = boto.handler.XmlHandler(rs, parent)
             xml.sax.parseString(body, h)
             return rs
         else:
@@ -598,7 +861,8 @@
             boto.log.error('%s' % body)
             raise self.ResponseError(response.status, response.reason, body)
 
-    def get_object(self, action, params, cls, path='/', parent=None, verb='GET'):
+    def get_object(self, action, params, cls, path='/',
+                   parent=None, verb='GET'):
         if not parent:
             parent = self
         response = self.make_request(action, params, path, verb)
@@ -609,7 +873,7 @@
             raise self.ResponseError(response.status, response.reason, body)
         elif response.status == 200:
             obj = cls(parent)
-            h = handler.XmlHandler(obj, parent)
+            h = boto.handler.XmlHandler(obj, parent)
             xml.sax.parseString(body, h)
             return obj
         else:
@@ -628,7 +892,7 @@
             raise self.ResponseError(response.status, response.reason, body)
         elif response.status == 200:
             rs = ResultSet()
-            h = handler.XmlHandler(rs, parent)
+            h = boto.handler.XmlHandler(rs, parent)
             xml.sax.parseString(body, h)
             return rs.status
         else:
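The retry loop added to `_mexe` above replaces a fixed `2 ** i` sleep with binary exponential backoff, drawing a random wait in `[0, 2**i)` so that many clients retrying simultaneously spread out rather than hammering the service in lockstep. A minimal standalone sketch of that pattern (the names `retry_with_backoff` and `attempt` are illustrative, not boto API; the real code sleeps between attempts, which is skipped here so the sketch runs instantly):

```python
import random

def retry_with_backoff(attempt, num_retries=6):
    """Call attempt(i) until it returns (True, result) or retries run out.

    Sketch of the AWSAuthConnection._mexe retry loop in the patch above.
    """
    sleeps = []
    for i in range(num_retries + 1):
        ok, result = attempt(i)
        if ok:
            return result, sleeps
        # Binary exponential backoff: random wait in [0, 2**i) seconds.
        # In _mexe this value is passed to time.sleep(); here we only
        # record it so the sketch can be exercised without waiting.
        sleeps.append(random.random() * (2 ** i))
    raise RuntimeError('retries exhausted')
```

Note that `_mexe` also re-signs the request via `request.authorize()` on every iteration, so a retry after a long backoff does not go out with a stale signature.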
diff --git a/boto/ec2/__init__.py b/boto/ec2/__init__.py
index 8bb3f53..ff9422b 100644
--- a/boto/ec2/__init__.py
+++ b/boto/ec2/__init__.py
@@ -39,12 +39,36 @@
     return c.get_all_regions()
 
 def connect_to_region(region_name, **kw_params):
+    """
+    Given a valid region name, return a
+    :class:`boto.ec2.connection.EC2Connection`.
+    Any additional parameters after the region_name are passed on to
+    the connect method of the region object.
+
+    :type region_name: str
+    :param region_name: The name of the region to connect to.
+    
+    :rtype: :class:`boto.ec2.connection.EC2Connection` or ``None``
+    :return: A connection to the given region, or None if an invalid region
+             name is given
+    """
     for region in regions(**kw_params):
         if region.name == region_name:
             return region.connect(**kw_params)
     return None
     
 def get_region(region_name, **kw_params):
+    """
+    Find and return a :class:`boto.ec2.regioninfo.RegionInfo` object
+    given a region name.
+
+    :type region_name: str
+    :param region_name: The name of the region.
+
+    :rtype: :class:`boto.ec2.regioninfo.RegionInfo`
+    :return: The RegionInfo object for the given region or None if
+             an invalid region name is provided.
+    """
     for region in regions(**kw_params):
         if region.name == region_name:
             return region
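The `connect_to_region` docstring added above describes a simple linear scan: match the region by name, connect on a hit, and return `None` (rather than raising) for an unknown name. A self-contained sketch of that lookup, with a stand-in `Region` class instead of the real `boto.ec2.regioninfo.RegionInfo` (all names here are illustrative):

```python
class Region(object):
    """Minimal stand-in for boto.ec2.regioninfo.RegionInfo."""
    def __init__(self, name, endpoint):
        self.name = name
        self.endpoint = endpoint

    def connect(self, **kw_params):
        # The real RegionInfo.connect constructs an EC2Connection to
        # self.endpoint; a tuple stands in for that here.
        return ('connected', self.endpoint)

def connect_to_region(region_name, region_list):
    # Same shape as the lookup in the patch: scan by name, connect on
    # a match, return None for invalid region names.
    for region in region_list:
        if region.name == region_name:
            return region.connect()
    return None
```

Callers therefore need to check for `None` before using the result; a typo in the region name fails silently rather than loudly.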
diff --git a/boto/ec2/address.py b/boto/ec2/address.py
index 60ed406..770a904 100644
--- a/boto/ec2/address.py
+++ b/boto/ec2/address.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -26,12 +26,15 @@
 from boto.ec2.ec2object import EC2Object
 
 class Address(EC2Object):
-    
+
     def __init__(self, connection=None, public_ip=None, instance_id=None):
         EC2Object.__init__(self, connection)
         self.connection = connection
         self.public_ip = public_ip
         self.instance_id = instance_id
+        self.domain = None
+        self.allocation_id = None
+        self.association_id = None
 
     def __repr__(self):
         return 'Address:%s' % self.public_ip
@@ -41,6 +44,12 @@
             self.public_ip = value
         elif name == 'instanceId':
             self.instance_id = value
+        elif name == 'domain':
+            self.domain = value
+        elif name == 'allocationId':
+            self.allocation_id = value
+        elif name == 'associationId':
+            self.association_id = value
         else:
             setattr(self, name, value)
 
@@ -55,4 +64,4 @@
     def disassociate(self):
         return self.connection.disassociate_address(self.public_ip)
 
-    
+
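The `Address` change above extends boto's SAX `endElement` dispatch: known EC2 XML element names (`allocationId`, `associationId`, `domain`, ...) map to snake_case attributes, and anything unrecognized falls through to a plain `setattr`. A compact sketch of that dispatch pattern, using a lookup table instead of the `elif` chain (the table-driven form is an illustration, not boto's actual implementation):

```python
class Address(object):
    """Sketch of the EC2 Address SAX handling in the patch above."""
    _fields = {
        'publicIp': 'public_ip',
        'instanceId': 'instance_id',
        'allocationId': 'allocation_id',
        'associationId': 'association_id',
    }

    def __init__(self):
        self.public_ip = None
        self.instance_id = None
        self.domain = None
        self.allocation_id = None
        self.association_id = None

    def endElement(self, name, value):
        # Known element names map to snake_case attributes; unknown
        # names are stored verbatim, matching boto's setattr fallback.
        setattr(self, self._fields.get(name, name), value)
```

Initializing the new attributes to `None` in `__init__`, as the patch does, matters: EC2-classic addresses carry no `allocationId`, so code reading `address.allocation_id` would otherwise raise `AttributeError`.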
diff --git a/boto/ec2/autoscale/__init__.py b/boto/ec2/autoscale/__init__.py
index 5d68b32..680c28f 100644
--- a/boto/ec2/autoscale/__init__.py
+++ b/boto/ec2/autoscale/__init__.py
@@ -1,4 +1,5 @@
-# Copyright (c) 2009 Reza Lotun http://reza.lotun.name/
+# Copyright (c) 2009-2011 Reza Lotun http://reza.lotun.name/
+# Copyright (c) 2011 Jann Kleen
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -24,26 +25,68 @@
 Auto Scaling service.
 """
 
+import base64
+
 import boto
 from boto.connection import AWSQueryConnection
 from boto.ec2.regioninfo import RegionInfo
 from boto.ec2.autoscale.request import Request
-from boto.ec2.autoscale.trigger import Trigger
 from boto.ec2.autoscale.launchconfig import LaunchConfiguration
-from boto.ec2.autoscale.group import AutoScalingGroup
+from boto.ec2.autoscale.group import AutoScalingGroup, ProcessType
 from boto.ec2.autoscale.activity import Activity
+from boto.ec2.autoscale.policy import AdjustmentType, MetricCollectionTypes, ScalingPolicy
+from boto.ec2.autoscale.instance import Instance
+from boto.ec2.autoscale.scheduled import ScheduledUpdateGroupAction
+
+
+RegionData = {
+    'us-east-1' : 'autoscaling.us-east-1.amazonaws.com',
+    'us-west-1' : 'autoscaling.us-west-1.amazonaws.com',
+    'eu-west-1' : 'autoscaling.eu-west-1.amazonaws.com',
+    'ap-northeast-1' : 'autoscaling.ap-northeast-1.amazonaws.com',
+    'ap-southeast-1' : 'autoscaling.ap-southeast-1.amazonaws.com'}
+
+def regions():
+    """
+    Get all available regions for the Auto Scaling service.
+
+    :rtype: list
+    :return: A list of :class:`boto.RegionInfo` instances
+    """
+    regions = []
+    for region_name in RegionData:
+        region = RegionInfo(name=region_name,
+                            endpoint=RegionData[region_name],
+                            connection_cls=AutoScaleConnection)
+        regions.append(region)
+    return regions
+
+def connect_to_region(region_name, **kw_params):
+    """
+    Given a valid region name, return a
+    :class:`boto.ec2.autoscale.AutoScaleConnection`.
+
+    :param str region_name: The name of the region to connect to.
+
+    :rtype: :class:`boto.ec2.AutoScaleConnection` or ``None``
+    :return: A connection to the given region, or None if an invalid region
+        name is given
+    """
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
 
 
 class AutoScaleConnection(AWSQueryConnection):
-    APIVersion = boto.config.get('Boto', 'autoscale_version', '2009-05-15')
-    Endpoint = boto.config.get('Boto', 'autoscale_endpoint',
-                               'autoscaling.amazonaws.com')
-    DefaultRegionName = 'us-east-1'
-    DefaultRegionEndpoint = 'autoscaling.amazonaws.com'
+    APIVersion = boto.config.get('Boto', 'autoscale_version', '2011-01-01')
+    DefaultRegionEndpoint = boto.config.get('Boto', 'autoscale_endpoint',
+                                            'autoscaling.amazonaws.com')
+    DefaultRegionName = boto.config.get('Boto', 'autoscale_region_name', 'us-east-1')
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
-                 proxy_user=None, proxy_pass=None, debug=1,
+                 proxy_user=None, proxy_pass=None, debug=None,
                  https_connection_factory=None, region=None, path='/'):
         """
         Init method to create a new connection to the AutoScaling service.
@@ -67,38 +110,62 @@
         return ['ec2']
 
     def build_list_params(self, params, items, label):
-        """ items is a list of dictionaries or strings:
-                [{'Protocol' : 'HTTP',
-                 'LoadBalancerPort' : '80',
-                 'InstancePort' : '80'},..] etc.
-             or
-                ['us-east-1b',...]
+        """
+        Items is a list of dictionaries or strings::
+
+            [
+                {
+                    'Protocol' : 'HTTP',
+                    'LoadBalancerPort' : '80',
+                    'InstancePort' : '80'
+                },
+                ..
+            ] etc.
+
+        or::
+
+            ['us-east-1b',...]
         """
         # different from EC2 list params
         for i in xrange(1, len(items)+1):
             if isinstance(items[i-1], dict):
                 for k, v in items[i-1].iteritems():
-                    params['%s.member.%d.%s' % (label, i, k)] = v
+                    if isinstance(v, dict):
+                        for kk, vv in v.iteritems():
+                            params['%s.member.%d.%s.%s' % (label, i, k, kk)] = vv
+                    else:
+                        params['%s.member.%d.%s' % (label, i, k)] = v
             elif isinstance(items[i-1], basestring):
                 params['%s.member.%d' % (label, i)] = items[i-1]
 
     def _update_group(self, op, as_group):
         params = {
                   'AutoScalingGroupName'    : as_group.name,
-                  'Cooldown'                : as_group.cooldown,
                   'LaunchConfigurationName' : as_group.launch_config_name,
                   'MinSize'                 : as_group.min_size,
                   'MaxSize'                 : as_group.max_size,
                   }
+        # get availability zone information (required param)
+        zones = as_group.availability_zones
+        self.build_list_params(params, zones,
+                                'AvailabilityZones')
+        if as_group.desired_capacity:
+            params['DesiredCapacity'] = as_group.desired_capacity
+        if as_group.vpc_zone_identifier:
+            params['VPCZoneIdentifier'] = as_group.vpc_zone_identifier
+        if as_group.health_check_period:
+            params['HealthCheckGracePeriod'] = as_group.health_check_period
+        if as_group.health_check_type:
+            params['HealthCheckType'] = as_group.health_check_type
+        if as_group.default_cooldown:
+            params['DefaultCooldown'] = as_group.default_cooldown
+        if as_group.placement_group:
+            params['PlacementGroup'] = as_group.placement_group
         if op.startswith('Create'):
-            if as_group.availability_zones:
-                zones = as_group.availability_zones
-            else:
-                zones = [as_group.availability_zone]
-            self.build_list_params(params, as_group.load_balancers,
-                                   'LoadBalancerNames')
-            self.build_list_params(params, zones,
-                                    'AvailabilityZones')
+            # you can only associate load balancers with an autoscale group at creation time
+            if as_group.load_balancers:
+                self.build_list_params(params, as_group.load_balancers,
+                                       'LoadBalancerNames')
         return self.get_object(op, params, Request)
 
     def create_auto_scaling_group(self, as_group):
@@ -107,22 +174,34 @@
         """
         return self._update_group('CreateAutoScalingGroup', as_group)
 
+    def delete_auto_scaling_group(self, name, force_delete=False):
+        """
+        Deletes the specified auto scaling group if the group has no instances
+        and no scaling activities in progress.
+        """
+        params = {'AutoScalingGroupName' : name}
+        if force_delete:
+            params['ForceDelete'] = 'true'
+        return self.get_object('DeleteAutoScalingGroup', params, Request)
+
     def create_launch_configuration(self, launch_config):
         """
         Creates a new Launch Configuration.
 
-        :type launch_config: boto.ec2.autoscale.launchconfig.LaunchConfiguration
-        :param launch_config: LaunchConfiguraiton object.
+        :type launch_config: :class:`boto.ec2.autoscale.launchconfig.LaunchConfiguration`
+        :param launch_config: LaunchConfiguration object.
 
         """
         params = {
                   'ImageId'                 : launch_config.image_id,
-                  'KeyName'                 : launch_config.key_name,
                   'LaunchConfigurationName' : launch_config.name,
                   'InstanceType'            : launch_config.instance_type,
                  }
+        if launch_config.key_name:
+            params['KeyName'] = launch_config.key_name
         if launch_config.user_data:
-            params['UserData'] = launch_config.user_data
+            params['UserData'] = base64.b64encode(launch_config.user_data)
         if launch_config.kernel_id:
             params['KernelId'] = launch_config.kernel_id
         if launch_config.ramdisk_id:
@@ -130,89 +209,398 @@
         if launch_config.block_device_mappings:
             self.build_list_params(params, launch_config.block_device_mappings,
                                    'BlockDeviceMappings')
-        self.build_list_params(params, launch_config.security_groups,
-                               'SecurityGroups')
+        if launch_config.security_groups:
+            self.build_list_params(params, launch_config.security_groups,
+                                   'SecurityGroups')
+        if launch_config.instance_monitoring:
+            params['InstanceMonitoring.member.Enabled'] = 'true'
         return self.get_object('CreateLaunchConfiguration', params,
                                   Request, verb='POST')
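With this change, `create_launch_configuration` base64-encodes raw user data before sending it, as the API requires, so callers can pass their script as plain text. A quick sketch of the round trip (written for Python 3 syntax; boto 2 itself targets Python 2):

```python
import base64

# boto now base64-encodes raw user data before sending it to AWS;
# the instance-side tooling decodes it again on boot.
user_data = '#!/bin/bash\necho hello'
encoded = base64.b64encode(user_data.encode('utf-8'))
decoded = base64.b64decode(encoded).decode('utf-8')
assert decoded == user_data
```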
 
-    def create_trigger(self, trigger):
+    def create_scaling_policy(self, scaling_policy):
         """
+        Creates a new Scaling Policy.
 
+        :type scaling_policy: :class:`boto.ec2.autoscale.policy.ScalingPolicy`
+        :param scaling_policy: ScalingPolicy object.
         """
-        params = {'TriggerName'                 : trigger.name,
-                  'AutoScalingGroupName'        : trigger.autoscale_group.name,
-                  'MeasureName'                 : trigger.measure_name,
-                  'Statistic'                   : trigger.statistic,
-                  'Period'                      : trigger.period,
-                  'Unit'                        : trigger.unit,
-                  'LowerThreshold'              : trigger.lower_threshold,
-                  'LowerBreachScaleIncrement'   : trigger.lower_breach_scale_increment,
-                  'UpperThreshold'              : trigger.upper_threshold,
-                  'UpperBreachScaleIncrement'   : trigger.upper_breach_scale_increment,
-                  'BreachDuration'              : trigger.breach_duration}
-        # dimensions should be a list of tuples
-        dimensions = []
-        for dim in trigger.dimensions:
-            name, value = dim
-            dimensions.append(dict(Name=name, Value=value))
-        self.build_list_params(params, dimensions, 'Dimensions')
+        params = {'AdjustmentType'      : scaling_policy.adjustment_type,
+                  'AutoScalingGroupName': scaling_policy.as_name,
+                  'PolicyName'          : scaling_policy.name,
+                  'ScalingAdjustment'   : scaling_policy.scaling_adjustment}
 
-        req = self.get_object('CreateOrUpdateScalingTrigger', params,
-                               Request)
-        return req
+        if scaling_policy.cooldown is not None:
+            params['Cooldown'] = scaling_policy.cooldown
 
-    def get_all_groups(self, names=None):
+        return self.get_object('PutScalingPolicy', params, Request)
+
+    def delete_launch_configuration(self, launch_config_name):
         """
+        Deletes the specified LaunchConfiguration.
+
+        The specified launch configuration must not be attached to an Auto
+        Scaling group. Once this call completes, the launch configuration is no
+        longer available for use.
+        """
+        params = {'LaunchConfigurationName' : launch_config_name}
+        return self.get_object('DeleteLaunchConfiguration', params, Request)
+
+    def get_all_groups(self, names=None, max_records=None, next_token=None):
+        """
+        Returns a full description of each Auto Scaling group in the given
+        list. This includes all Amazon EC2 instances that are members of the
+        group. If a list of names is not provided, the service returns the full
+        details of all Auto Scaling groups.
+
+        This action supports pagination by returning a token if there are more
+        pages to retrieve. To get the next page, call this action again with
+        the returned token as the NextToken parameter.
+
+        :type names: list
+        :param names: List of group names which should be searched for.
+
+        :type max_records: int
+        :param max_records: Maximum number of groups to return.
+
+        :rtype: list
+        :returns: List of :class:`boto.ec2.autoscale.group.AutoScalingGroup` instances.
         """
         params = {}
+        if max_records:
+            params['MaxRecords'] = max_records
+        if next_token:
+            params['NextToken'] = next_token
         if names:
             self.build_list_params(params, names, 'AutoScalingGroupNames')
         return self.get_list('DescribeAutoScalingGroups', params,
                              [('member', AutoScalingGroup)])
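The NextToken pagination described in the docstring follows the usual drain loop: request a page, accumulate results, repeat while a token comes back. A generic sketch with a fake paginated fetcher (`fetch_page` is a stand-in, not a boto API):

```python
def fetch_all(fetch_page):
    """Drain a NextToken-paginated API. fetch_page(token) must return
    (items, next_token), with next_token None on the last page."""
    items, token = [], None
    while True:
        page, token = fetch_page(token)
        items.extend(page)
        if not token:
            return items

# A fake two-page service standing in for DescribeAutoScalingGroups.
pages = {None: (['group-a', 'group-b'], 'tok1'),
         'tok1': (['group-c'], None)}
groups = fetch_all(lambda token: pages[token])
# groups collects all three names across both pages.
```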
 
-    def get_all_launch_configurations(self, names=None):
+    def get_all_launch_configurations(self, **kwargs):
         """
+        Returns a full description of the launch configurations given the
+        specified names.
+
+        If no names are specified, then the full details of all launch
+        configurations are returned.
+
+        :type names: list
+        :param names: List of configuration names which should be searched for.
+
+        :type max_records: int
+        :param max_records: Maximum number of configurations to return.
+
+        :type next_token: str
+        :param next_token: If you have more results than can be returned at once, pass in this
+                           parameter to page through all results.
+
+        :rtype: list
+        :returns: List of :class:`boto.ec2.autoscale.launchconfig.LaunchConfiguration` instances.
         """
         params = {}
+        max_records = kwargs.get('max_records', None)
+        names = kwargs.get('names', None)
+        if max_records is not None:
+            params['MaxRecords'] = max_records
         if names:
             self.build_list_params(params, names, 'LaunchConfigurationNames')
+        next_token = kwargs.get('next_token')
+        if next_token:
+            params['NextToken'] = next_token
         return self.get_list('DescribeLaunchConfigurations', params,
                              [('member', LaunchConfiguration)])
 
-    def get_all_activities(self, autoscale_group,
-                           activity_ids=None,
-                           max_records=100):
+    def get_all_activities(self, autoscale_group, activity_ids=None, max_records=None, next_token=None):
         """
         Get all activities for the given autoscaling group.
 
-        :type autoscale_group: str or AutoScalingGroup object
+        This action supports pagination by returning a token if there are more
+        pages to retrieve. To get the next page, call this action again with
+        the returned token as the NextToken parameter.
+
+        :type autoscale_group: str or :class:`boto.ec2.autoscale.group.AutoScalingGroup` object
         :param autoscale_group: The auto scaling group to get activities on.
 
-        @max_records: int
+        :type max_records: int
+        :param max_records: Maximum number of activities to return.
+
+        :rtype: list
+        :returns: List of :class:`boto.ec2.autoscale.activity.Activity` instances.
         """
         name = autoscale_group
         if isinstance(autoscale_group, AutoScalingGroup):
             name = autoscale_group.name
         params = {'AutoScalingGroupName' : name}
+        if max_records:
+            params['MaxRecords'] = max_records
+        if next_token:
+            params['NextToken'] = next_token
         if activity_ids:
             self.build_list_params(params, activity_ids, 'ActivityIds')
-        return self.get_list('DescribeScalingActivities', params,
-                             [('member', Activity)])
+        return self.get_list('DescribeScalingActivities',
+                             params, [('member', Activity)])
 
-    def get_all_triggers(self, autoscale_group):
-        params = {'AutoScalingGroupName' : autoscale_group}
-        return self.get_list('DescribeTriggers', params,
-                             [('member', Trigger)])
+    def delete_scheduled_action(self, scheduled_action_name,
+                                autoscale_group=None):
+        """
+        Deletes a previously scheduled action.
+
+        :param str scheduled_action_name: The name of the action you want
+            to delete.
+        :param str autoscale_group: The name of the autoscale group.
+        """
+        params = {'ScheduledActionName' : scheduled_action_name}
+        if autoscale_group:
+            params['AutoScalingGroupName'] = autoscale_group
+        return self.get_status('DeleteScheduledAction', params)
 
     def terminate_instance(self, instance_id, decrement_capacity=True):
-        params = {
-                  'InstanceId' : instance_id,
-                  'ShouldDecrementDesiredCapacity' : decrement_capacity
-                  }
+        """
+        Terminates the specified instance. The desired group size can
+        also be adjusted, if desired.
+
+        :param str instance_id: The ID of the instance to be terminated.
+        :param bool decrement_capacity: Whether to decrement the size of the
+            autoscaling group or not.
+        """
+        params = {'InstanceId' : instance_id}
+        if decrement_capacity:
+            params['ShouldDecrementDesiredCapacity'] = 'true'
+        else:
+            params['ShouldDecrementDesiredCapacity'] = 'false'
         return self.get_object('TerminateInstanceInAutoScalingGroup', params,
                                Activity)
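The if/else above exists because the Query API wants lowercase string booleans rather than Python `True`/`False`. The same conversion as a small helper (illustrative only, not a boto function):

```python
def serialize_bool(value):
    # The Auto Scaling Query API expects the strings 'true'/'false',
    # not Python booleans.
    return 'true' if value else 'false'

params = {'InstanceId': 'i-12345678',
          'ShouldDecrementDesiredCapacity': serialize_bool(True)}
```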
 
+    def delete_policy(self, policy_name, autoscale_group=None):
+        """
+        Delete a policy.
+
+        :type policy_name: str
+        :param policy_name: The name or ARN of the policy to delete.
+
+        :type autoscale_group: str
+        :param autoscale_group: The name of the autoscale group.
+        """
+        params = {'PolicyName': policy_name}
+        if autoscale_group:
+            params['AutoScalingGroupName'] = autoscale_group
+        return self.get_status('DeletePolicy', params)
+
+    def get_all_adjustment_types(self):
+        """
+        Returns the policy adjustment types for use in the PutScalingPolicy
+        action.
+        """
+        return self.get_list('DescribeAdjustmentTypes', {},
+                             [('member', AdjustmentType)])
+
+    def get_all_autoscaling_instances(self, instance_ids=None,
+                                      max_records=None, next_token=None):
+        """
+        Returns a description of each Auto Scaling instance in the instance_ids
+        list. If a list is not provided, the service returns the full details
+        of all instances up to a maximum of fifty.
+
+        This action supports pagination by returning a token if there are more
+        pages to retrieve. To get the next page, call this action again with
+        the returned token as the NextToken parameter.
+
+        :type instance_ids: list
+        :param instance_ids: List of Autoscaling Instance IDs which should be
+                             searched for.
+
+        :type max_records: int
+        :param max_records: Maximum number of results to return.
+
+        :rtype: list
+        :returns: List of :class:`boto.ec2.autoscale.instance.Instance` instances.
+        """
+        params = {}
+        if instance_ids:
+            self.build_list_params(params, instance_ids, 'InstanceIds')
+        if max_records:
+            params['MaxRecords'] = max_records
+        if next_token:
+            params['NextToken'] = next_token
+        return self.get_list('DescribeAutoScalingInstances',
+                             params, [('member', Instance)])
+
+    def get_all_metric_collection_types(self):
+        """
+        Returns a list of metrics and a corresponding list of granularities
+        for each metric.
+        """
+        return self.get_object('DescribeMetricCollectionTypes',
+                               {}, MetricCollectionTypes)
+
+    def get_all_policies(self, as_group=None, policy_names=None,
+                         max_records=None, next_token=None):
+        """
+        Returns descriptions of what each policy does. This action supports
+        pagination. If the response includes a token, there are more records
+        available. To get the additional records, repeat the request with the
+        response token as the NextToken parameter.
+
+        If no group name or list of policy names are provided, all available policies
+        are returned.
+
+        :type as_group: str
+        :param as_group: The name of the :class:`boto.ec2.autoscale.group.AutoScalingGroup` to filter for.
+
+        :type policy_names: list
+        :param policy_names: List of policy names which should be searched for.
+
+        :type max_records: int
+        :param max_records: Maximum number of policies to return.
+        """
+        params = {}
+        if as_group:
+            params['AutoScalingGroupName'] = as_group
+        if policy_names:
+            self.build_list_params(params, policy_names, 'PolicyNames')
+        if max_records:
+            params['MaxRecords'] = max_records
+        if next_token:
+            params['NextToken'] = next_token
+        return self.get_list('DescribePolicies', params,
+                             [('member', ScalingPolicy)])
+
+    def get_all_scaling_process_types(self):
+        """ Returns scaling process types for use in the ResumeProcesses and
+        SuspendProcesses actions.
+        """
+        return self.get_list('DescribeScalingProcessTypes', {},
+                             [('member', ProcessType)])
+
+    def suspend_processes(self, as_group, scaling_processes=None):
+        """ Suspends Auto Scaling processes for an Auto Scaling group.
+
+        :type as_group: string
+        :param as_group: The auto scaling group to suspend processes on.
+
+        :type scaling_processes: list
+        :param scaling_processes: Processes you want to suspend. If omitted, all
+                                  processes will be suspended.
+        """
+        params = {'AutoScalingGroupName' : as_group}
+        if scaling_processes:
+            self.build_list_params(params, scaling_processes, 'ScalingProcesses')
+        return self.get_status('SuspendProcesses', params)
+
+    def resume_processes(self, as_group, scaling_processes=None):
+        """ Resumes Auto Scaling processes for an Auto Scaling group.
+
+        :type as_group: string
+        :param as_group: The auto scaling group to resume processes on.
+
+        :type scaling_processes: list
+        :param scaling_processes: Processes you want to resume. If omitted, all
+                                  processes will be resumed.
+        """
+        params = {'AutoScalingGroupName' : as_group}
+        if scaling_processes:
+            self.build_list_params(params, scaling_processes, 'ScalingProcesses')
+        return self.get_status('ResumeProcesses', params)
+
+    def create_scheduled_group_action(self, as_group, name, time, desired_capacity=None,
+                                      min_size=None, max_size=None):
+        """ Creates a scheduled scaling action for a Auto Scaling group. If you
+        leave a parameter unspecified, the corresponding value remains
+        unchanged in the affected Auto Scaling group.
+
+        :type as_group: string
+        :param as_group: The auto scaling group to schedule the action for.
+
+        :type name: string
+        :param name: Scheduled action name.
+
+        :type time: datetime.datetime
+        :param time: The time for this action to start.
+
+        :type desired_capacity: int
+        :param desired_capacity: The number of EC2 instances that should be running in
+                                this group.
+
+        :type min_size: int
+        :param min_size: The minimum size for the new auto scaling group.
+
+        :type max_size: int
+        :param max_size: The maximum size for the new auto scaling group.
+        """
+        params = {'AutoScalingGroupName' : as_group,
+                  'ScheduledActionName'  : name,
+                  'Time'                 : time.isoformat()}
+        if desired_capacity is not None:
+            params['DesiredCapacity'] = desired_capacity
+        if min_size is not None:
+            params['MinSize'] = min_size
+        if max_size is not None:
+            params['MaxSize'] = max_size
+        return self.get_status('PutScheduledUpdateGroupAction', params)
+
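The `Time` parameter above is serialized with `datetime.isoformat()`, which yields the ISO 8601 form the API expects:

```python
from datetime import datetime

# A whole-second datetime serializes without a fractional component.
start = datetime(2011, 7, 14, 12, 30, 0)
assert start.isoformat() == '2011-07-14T12:30:00'
```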
+    def get_all_scheduled_actions(self, as_group=None, start_time=None, end_time=None, scheduled_actions=None,
+                                  max_records=None, next_token=None):
+        """
+        Returns the scheduled actions for the given Auto Scaling group, or
+        for all groups if none is specified.
+        """
+        params = {}
+        if as_group:
+            params['AutoScalingGroupName'] = as_group
+        if start_time:
+            params['StartTime'] = start_time.isoformat()
+        if end_time:
+            params['EndTime'] = end_time.isoformat()
+        if scheduled_actions:
+            self.build_list_params(params, scheduled_actions, 'ScheduledActionNames')
+        if max_records:
+            params['MaxRecords'] = max_records
+        if next_token:
+            params['NextToken'] = next_token
+        return self.get_list('DescribeScheduledActions', params,
+                             [('member', ScheduledUpdateGroupAction)])
+
+    def disable_metrics_collection(self, as_group, metrics=None):
+        """
+        Disables monitoring of group metrics for the Auto Scaling group
+        specified in AutoScalingGroupName. You can specify the list of affected
+        metrics with the Metrics parameter.
+        """
+        params = {'AutoScalingGroupName' : as_group}
+        if metrics:
+            self.build_list_params(params, metrics, 'Metrics')
+        return self.get_status('DisableMetricsCollection', params)
+
+    def enable_metrics_collection(self, as_group, granularity, metrics=None):
+        """
+        Enables monitoring of group metrics for the Auto Scaling group
+        specified in AutoScalingGroupName. You can specify the list of enabled
+        metrics with the Metrics parameter.
+
+        Auto scaling metrics collection can be turned on only if the
+        InstanceMonitoring.Enabled flag, in the Auto Scaling group's launch
+        configuration, is set to true.
+
+        :type as_group: string
+        :param as_group: The auto scaling group to enable metrics collection on.
+
+        :type granularity: string
+        :param granularity: The granularity to associate with the metrics to
+                            collect. Currently, the only legal granularity is "1Minute".
+
+        :type metrics: string list
+        :param metrics: The list of metrics to collect. If no metrics are
+                        specified, all metrics are enabled.
+        """
+        params = {'AutoScalingGroupName' : as_group,
+                  'Granularity'          : granularity}
+        if metrics:
+            self.build_list_params(params, metrics, 'Metrics')
+        return self.get_status('EnableMetricsCollection', params)
+
+    def execute_policy(self, policy_name, as_group=None, honor_cooldown=None):
+        """
+        Runs the scaling policy identified by the given name or ARN.
+        """
+        params = {'PolicyName' : policy_name}
+        if as_group:
+            params['AutoScalingGroupName'] = as_group
+        if honor_cooldown:
+            params['HonorCooldown'] = honor_cooldown
+        return self.get_status('ExecutePolicy', params)
+
     def set_instance_health(self, instance_id, health_status,
                             should_respect_grace_period=True):
         """
diff --git a/boto/ec2/autoscale/activity.py b/boto/ec2/autoscale/activity.py
index f895d65..3f23d05 100644
--- a/boto/ec2/autoscale/activity.py
+++ b/boto/ec2/autoscale/activity.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2009 Reza Lotun http://reza.lotun.name/
+# Copyright (c) 2009-2011 Reza Lotun http://reza.lotun.name/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -19,35 +19,54 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+from datetime import datetime
+
 
 class Activity(object):
     def __init__(self, connection=None):
         self.connection = connection
         self.start_time = None
+        self.end_time = None
         self.activity_id = None
         self.progress = None
         self.status_code = None
         self.cause = None
         self.description = None
+        self.status_message = None
+        self.group_name = None
 
     def __repr__(self):
-        return 'Activity:%s status:%s progress:%s' % (self.description,
-                                                      self.status_code,
-                                                      self.progress)
+        return 'Activity<%s>: For group:%s, status:%s, cause:%s' % (self.activity_id,
+                                                                    self.group_name,
+                                                                    self.status_message,
+                                                                    self.cause)
+
     def startElement(self, name, attrs, connection):
         return None
 
     def endElement(self, name, value, connection):
         if name == 'ActivityId':
             self.activity_id = value
+        elif name == 'AutoScalingGroupName':
+            self.group_name = value
         elif name == 'StartTime':
-            self.start_time = value
+            try:
+                self.start_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
+            except ValueError:
+                self.start_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+        elif name == 'EndTime':
+            try:
+                self.end_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
+            except ValueError:
+                self.end_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
         elif name == 'Progress':
             self.progress = value
         elif name == 'Cause':
             self.cause = value
         elif name == 'Description':
             self.description = value
+        elif name == 'StatusMessage':
+            self.status_message = value
         elif name == 'StatusCode':
             self.status_code = value
         else:
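The try/except around `strptime` above exists because AWS returns timestamps both with and without fractional seconds. The same fallback, extracted as a standalone helper for illustration:

```python
from datetime import datetime

def parse_aws_timestamp(value):
    # AWS timestamps may or may not carry fractional seconds, so try the
    # '%f' form first and fall back to whole seconds on a parse failure.
    try:
        return datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
    except ValueError:
        return datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
```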
diff --git a/boto/ec2/autoscale/group.py b/boto/ec2/autoscale/group.py
index 3fa6d68..eb65853 100644
--- a/boto/ec2/autoscale/group.py
+++ b/boto/ec2/autoscale/group.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2009 Reza Lotun http://reza.lotun.name/
+# Copyright (c) 2009-2011 Reza Lotun http://reza.lotun.name/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -19,37 +19,75 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
-import weakref
 
 from boto.ec2.elb.listelement import ListElement
 from boto.resultset import ResultSet
-from boto.ec2.autoscale.trigger import Trigger
+from boto.ec2.autoscale.launchconfig import LaunchConfiguration
 from boto.ec2.autoscale.request import Request
+from boto.ec2.autoscale.instance import Instance
 
-class Instance(object):
+
+class ProcessType(object):
     def __init__(self, connection=None):
         self.connection = connection
-        self.instance_id = ''
+        self.process_name = None
 
     def __repr__(self):
-        return 'Instance:%s' % self.instance_id
+        return 'ProcessType(%s)' % self.process_name
 
     def startElement(self, name, attrs, connection):
-        return None
+        pass
 
     def endElement(self, name, value, connection):
-        if name == 'InstanceId':
-            self.instance_id = value
-        else:
-            setattr(self, name, value)
+        if name == 'ProcessName':
+            self.process_name = value
+
+
+class SuspendedProcess(object):
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.process_name = None
+        self.reason = None
+
+    def __repr__(self):
+        return 'SuspendedProcess(%s, %s)' % (self.process_name, self.reason)
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'ProcessName':
+            self.process_name = value
+        elif name == 'SuspensionReason':
+            self.reason = value
+
+
+class EnabledMetric(object):
+    def __init__(self, connection=None, metric=None, granularity=None):
+        self.connection = connection
+        self.metric = metric
+        self.granularity = granularity
+
+    def __repr__(self):
+        return 'EnabledMetric(%s, %s)' % (self.metric, self.granularity)
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'Granularity':
+            self.granularity = value
+        elif name == 'Metric':
+            self.metric = value
 
 
 class AutoScalingGroup(object):
-    def __init__(self, connection=None, group_name=None,
-                 availability_zone=None, launch_config=None,
-                 availability_zones=None,
-                 load_balancers=None, cooldown=0,
-                 min_size=None, max_size=None):
+    def __init__(self, connection=None, name=None,
+                 launch_config=None, availability_zones=None,
+                 load_balancers=None, default_cooldown=None,
+                 health_check_type=None, health_check_period=None,
+                 placement_group=None, vpc_zone_identifier=None, desired_capacity=None,
+                 min_size=None, max_size=None, **kwargs):
         """
         Creates a new AutoScalingGroup with the specified name.
 
@@ -59,57 +97,85 @@
         used in other calls.
 
         :type name: str
-        :param name: Name of autoscaling group.
+        :param name: Name of autoscaling group (required).
 
-        :type availability_zone: str
-        :param availability_zone: An availability zone. DEPRECATED - use the
-                                  availability_zones parameter, which expects
-                                  a list of availability zone
-                                  strings
+        :type availability_zones: list
+        :param availability_zones: List of availability zones (required).
 
-        :type availability_zone: list
-        :param availability_zone: List of availability zones.
+        :type default_cooldown: int
+        :param default_cooldown: Number of seconds after a Scaling Activity completes
+                                 before any further scaling activities can start.
 
-        :type launch_config: str
-        :param launch_config: Name of launch configuration name.
+        :type desired_capacity: int
+        :param desired_capacity: The desired capacity for the group.
+
+        :type health_check_period: str
+        :param health_check_period: Length of time in seconds after a new EC2 instance
+                                    comes into service that Auto Scaling starts checking its
+                                    health.
+
+        :type health_check_type: str
+        :param health_check_type: The service you want the health status from,
+                                   Amazon EC2 or Elastic Load Balancer.
+
+        :type launch_config: str or LaunchConfiguration
+        :param launch_config: Name of launch configuration (required).
+
 
         :type load_balancers: list
         :param load_balancers: List of load balancers.
 
-        :type minsize: int
-        :param minsize: Minimum size of group
+        :type max_size: int
+        :param max_size: Maximum size of group (required).
 
-        :type maxsize: int
-        :param maxsize: Maximum size of group
+        :type min_size: int
+        :param min_size: Minimum size of group (required).
 
-        :type cooldown: int
-        :param cooldown: Amount of time after a Scaling Activity completes
-                         before any further scaling activities can start.
+        :type placement_group: str
+        :param placement_group: Physical location of your cluster placement
+                                group created in Amazon EC2.
 
-        :rtype: tuple
-        :return: Updated healthcheck for the instances.
+        :type vpc_zone_identifier: str
+        :param vpc_zone_identifier: The subnet identifier of the Virtual Private Cloud.
+
+        :rtype: :class:`boto.ec2.autoscale.group.AutoScalingGroup`
+        :return: An autoscale group.
         """
-        self.name = group_name
+        self.name = name or kwargs.get('group_name')   # backwards compatibility
         self.connection = connection
-        self.min_size = min_size
-        self.max_size = max_size
+        self.min_size = int(min_size) if min_size is not None else None
+        self.max_size = int(max_size) if max_size is not None else None
         self.created_time = None
-        self.cooldown = cooldown
-        self.launch_config = launch_config
-        if self.launch_config:
-            self.launch_config_name = self.launch_config.name
-        else:
-            self.launch_config_name = None
-        self.desired_capacity = None
+        default_cooldown = default_cooldown or kwargs.get('cooldown')  # backwards compatibility
+        self.default_cooldown = int(default_cooldown) if default_cooldown is not None else None
+        self.launch_config_name = launch_config
+        if launch_config and isinstance(launch_config, LaunchConfiguration):
+            self.launch_config_name = launch_config.name
+        self.desired_capacity = desired_capacity
         lbs = load_balancers or []
         self.load_balancers = ListElement(lbs)
         zones = availability_zones or []
-        self.availability_zone = availability_zone
         self.availability_zones = ListElement(zones)
+        self.health_check_period = health_check_period
+        self.health_check_type = health_check_type
+        self.placement_group = placement_group
+        self.autoscaling_group_arn = None
+        self.vpc_zone_identifier = vpc_zone_identifier
         self.instances = None
 
+    # backwards compatible access to 'cooldown' param
+    def _get_cooldown(self):
+        return self.default_cooldown
+    def _set_cooldown(self, val):
+        self.default_cooldown = val
+    cooldown = property(_get_cooldown, _set_cooldown)
+
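The `cooldown` property above keeps old callers working after the rename to `default_cooldown`. A stripped-down illustration of the aliasing pattern (not the real `AutoScalingGroup` class):

```python
class Group(object):
    """Minimal illustration of the cooldown -> default_cooldown alias."""
    def __init__(self):
        self.default_cooldown = None

    def _get_cooldown(self):
        return self.default_cooldown

    def _set_cooldown(self, val):
        self.default_cooldown = val

    # Old callers using .cooldown transparently read/write the new field.
    cooldown = property(_get_cooldown, _set_cooldown)

g = Group()
g.cooldown = 300
# Both names now refer to the same stored value.
```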
     def __repr__(self):
-        return 'AutoScalingGroup:%s' % self.name
+        return 'AutoScalingGroup<%s>: created:%s, minsize:%s, maxsize:%s, capacity:%s' % (self.name,
+                                                                                          self.created_time,
+                                                                                          self.min_size,
+                                                                                          self.max_size,
+                                                                                          self.desired_capacity)
 
     def startElement(self, name, attrs, connection):
         if name == 'Instances':
@@ -119,24 +185,40 @@
             return self.load_balancers
         elif name == 'AvailabilityZones':
             return self.availability_zones
+        elif name == 'EnabledMetrics':
+            self.enabled_metrics = ResultSet([('member', EnabledMetric)])
+            return self.enabled_metrics
+        elif name == 'SuspendedProcesses':
+            self.suspended_processes = ResultSet([('member', SuspendedProcess)])
+            return self.suspended_processes
         else:
             return
 
     def endElement(self, name, value, connection):
         if name == 'MinSize':
-            self.min_size = value
+            self.min_size = int(value)
+        elif name == 'AutoScalingGroupARN':
+            self.autoscaling_group_arn = value
         elif name == 'CreatedTime':
             self.created_time = value
-        elif name == 'Cooldown':
-            self.cooldown = value
+        elif name == 'DefaultCooldown':
+            self.default_cooldown = int(value)
         elif name == 'LaunchConfigurationName':
             self.launch_config_name = value
         elif name == 'DesiredCapacity':
-            self.desired_capacity = value
+            self.desired_capacity = int(value)
         elif name == 'MaxSize':
-            self.max_size = value
+            self.max_size = int(value)
         elif name == 'AutoScalingGroupName':
             self.name = value
+        elif name == 'PlacementGroup':
+            self.placement_group = value
+        elif name == 'HealthCheckGracePeriod':
+            self.health_check_period = int(value)
+        elif name == 'HealthCheckType':
+            self.health_check_type = value
+        elif name == 'VPCZoneIdentifier':
+            self.vpc_zone_identifier = value
         else:
             setattr(self, name, value)
 
@@ -161,29 +243,48 @@
         """
         self.min_size = 0
         self.max_size = 0
+        self.desired_capacity = 0
         self.update()
 
-    def get_all_triggers(self):
-        """ Get all triggers for this auto scaling group. """
-        params = {'AutoScalingGroupName' : self.name}
-        triggers = self.connection.get_list('DescribeTriggers', params,
-                                            [('member', Trigger)])
+    def delete(self, force_delete=False):
+        """ Delete this auto-scaling group if no instances attached or no
+        scaling activities in progress.
+        """
+        return self.connection.delete_auto_scaling_group(self.name, force_delete)
 
-        # allow triggers to be able to access the autoscale group
-        for tr in triggers:
-            tr.autoscale_group = weakref.proxy(self)
-
-        return triggers
-
-    def delete(self):
-        """ Delete this auto-scaling group. """
-        params = {'AutoScalingGroupName' : self.name}
-        return self.connection.get_object('DeleteAutoScalingGroup', params,
-                                          Request)
-
-    def get_activities(self, activity_ids=None, max_records=100):
+    def get_activities(self, activity_ids=None, max_records=50):
         """
         Get all activities for this group.
         """
         return self.connection.get_all_activities(self, activity_ids, max_records)
 
+    def suspend_processes(self, scaling_processes=None):
+        """ Suspends Auto Scaling processes for an Auto Scaling group. """
+        return self.connection.suspend_processes(self.name, scaling_processes)
+
+    def resume_processes(self, scaling_processes=None):
+        """ Resumes Auto Scaling processes for an Auto Scaling group. """
+        return self.connection.resume_processes(self.name, scaling_processes)
+
+
+class AutoScalingGroupMetric(object):
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.metric = None
+        self.granularity = None
+
+    def __repr__(self):
+        return 'AutoScalingGroupMetric:%s' % self.metric
+
+    def startElement(self, name, attrs, connection):
+        return
+
+    def endElement(self, name, value, connection):
+        if name == 'Metric':
+            self.metric = value
+        elif name == 'Granularity':
+            self.granularity = value
+        else:
+            setattr(self, name, value)
+
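The `cooldown` property above keeps old callers working after the rename to `default_cooldown`. A standalone sketch of the same pattern (an illustrative stand-in, not the boto class itself):

```python
class GroupSketch(object):
    """Illustrative stand-in for AutoScalingGroup's cooldown alias."""

    def __init__(self, default_cooldown=None):
        self.default_cooldown = default_cooldown

    def _get_cooldown(self):
        return self.default_cooldown

    def _set_cooldown(self, val):
        self.default_cooldown = val

    # Old code that reads or writes .cooldown transparently uses the
    # renamed .default_cooldown attribute underneath.
    cooldown = property(_get_cooldown, _set_cooldown)

g = GroupSketch(default_cooldown=300)
print(g.cooldown)          # 300
g.cooldown = 120
print(g.default_cooldown)  # 120
```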
diff --git a/boto/ec2/autoscale/instance.py b/boto/ec2/autoscale/instance.py
index ffdd5b1..6eb89c2 100644
--- a/boto/ec2/autoscale/instance.py
+++ b/boto/ec2/autoscale/instance.py
@@ -23,12 +23,21 @@
 class Instance(object):
     def __init__(self, connection=None):
         self.connection = connection
-        self.instance_id = ''
+        self.instance_id = None
+        self.health_status = None
+        self.launch_config_name = None
         self.lifecycle_state = None
-        self.availability_zone = ''
+        self.availability_zone = None
+        self.group_name = None
 
     def __repr__(self):
-        return 'Instance:%s' % self.instance_id
+        r = 'Instance<id:%s, state:%s, health:%s' % (self.instance_id,
+                                                     self.lifecycle_state,
+                                                     self.health_status)
+        if self.group_name:
+            r += ' group:%s' % self.group_name
+        r += '>'
+        return r
 
     def startElement(self, name, attrs, connection):
         return None
@@ -36,11 +45,16 @@
     def endElement(self, name, value, connection):
         if name == 'InstanceId':
             self.instance_id = value
+        elif name == 'HealthStatus':
+            self.health_status = value
+        elif name == 'LaunchConfigurationName':
+            self.launch_config_name = value
         elif name == 'LifecycleState':
             self.lifecycle_state = value
         elif name == 'AvailabilityZone':
             self.availability_zone = value
+        elif name == 'AutoScalingGroupName':
+            self.group_name = value
         else:
             setattr(self, name, value)
 
-
diff --git a/boto/ec2/autoscale/launchconfig.py b/boto/ec2/autoscale/launchconfig.py
index 7587cb6..2f55b24 100644
--- a/boto/ec2/autoscale/launchconfig.py
+++ b/boto/ec2/autoscale/launchconfig.py
@@ -20,15 +20,76 @@
 # IN THE SOFTWARE.
 
 
-from boto.ec2.autoscale.request import Request
+from datetime import datetime
+import base64
+from boto.resultset import ResultSet
 from boto.ec2.elb.listelement import ListElement
 
+# this should use the corresponding object from boto.ec2
+class Ebs(object):
+    def __init__(self, connection=None, snapshot_id=None, volume_size=None):
+        self.connection = connection
+        self.snapshot_id = snapshot_id
+        self.volume_size = volume_size
+
+    def __repr__(self):
+        return 'Ebs(%s, %s)' % (self.snapshot_id, self.volume_size)
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'SnapshotId':
+            self.snapshot_id = value
+        elif name == 'VolumeSize':
+            self.volume_size = value
+
+
+class InstanceMonitoring(object):
+    def __init__(self, connection=None, enabled='false'):
+        self.connection = connection
+        self.enabled = enabled
+
+    def __repr__(self):
+        return 'InstanceMonitoring(%s)' % self.enabled
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'Enabled':
+            self.enabled = value
+
+
+# this should use the BlockDeviceMapping from boto.ec2.blockdevicemapping
+class BlockDeviceMapping(object):
+    def __init__(self, connection=None, device_name=None, virtual_name=None):
+        self.connection = connection
+        self.device_name = device_name
+        self.virtual_name = virtual_name
+        self.ebs = None
+
+    def __repr__(self):
+        return 'BlockDeviceMapping(%s, %s)' % (self.device_name, self.virtual_name)
+
+    def startElement(self, name, attrs, connection):
+        if name == 'Ebs':
+            self.ebs = Ebs(self)
+            return self.ebs
+
+    def endElement(self, name, value, connection):
+        if name == 'DeviceName':
+            self.device_name = value
+        elif name == 'VirtualName':
+            self.virtual_name = value
+
 
 class LaunchConfiguration(object):
     def __init__(self, connection=None, name=None, image_id=None,
                  key_name=None, security_groups=None, user_data=None,
                  instance_type='m1.small', kernel_id=None,
-                 ramdisk_id=None, block_device_mappings=None):
+                 ramdisk_id=None, block_device_mappings=None,
+                 instance_monitoring=False):
         """
         A launch configuration.
 
@@ -46,6 +107,25 @@
         :param security_groups: Names of the security groups with which to
                                 associate the EC2 instances.
 
+        :type user_data: str
+        :param user_data: The user data available to launched EC2 instances.
+
+        :type instance_type: str
+        :param instance_type: The instance type
+
+        :type kernel_id: str
+        :param kernel_id: Kernel id for instance
+
+        :type ramdisk_id: str
+        :param ramdisk_id: RAM disk id for instance
+
+        :type block_device_mappings: list
+        :param block_device_mappings: Specifies how block devices are exposed
+                                      for instances
+
+        :type instance_monitoring: bool
+        :param instance_monitoring: Whether instances in group are launched
+                                    with detailed monitoring.
         """
         self.connection = connection
         self.name = name
@@ -60,6 +140,8 @@
         self.kernel_id = kernel_id
         self.user_data = user_data
         self.created_time = None
+        self.instance_monitoring = instance_monitoring
+        self.launch_configuration_arn = None
 
     def __repr__(self):
         return 'LaunchConfiguration:%s' % self.name
@@ -67,8 +149,12 @@
     def startElement(self, name, attrs, connection):
         if name == 'SecurityGroups':
             return self.security_groups
-        else:
-            return
+        elif name == 'BlockDeviceMappings':
+            self.block_device_mappings = ResultSet([('member', BlockDeviceMapping)])
+            return self.block_device_mappings
+        elif name == 'InstanceMonitoring':
+            self.instance_monitoring = InstanceMonitoring(self)
+            return self.instance_monitoring
 
     def endElement(self, name, value, connection):
         if name == 'InstanceType':
@@ -80,19 +166,24 @@
         elif name == 'ImageId':
             self.image_id = value
         elif name == 'CreatedTime':
-            self.created_time = value
+            try:
+                self.created_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
+            except ValueError:
+                self.created_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
         elif name == 'KernelId':
             self.kernel_id = value
         elif name == 'RamdiskId':
             self.ramdisk_id = value
         elif name == 'UserData':
-            self.user_data = value
+            self.user_data = base64.b64decode(value)
+        elif name == 'LaunchConfigurationARN':
+            self.launch_configuration_arn = value
+        elif name == 'InstanceMonitoring':
+            self.instance_monitoring = value
         else:
             setattr(self, name, value)
 
     def delete(self):
         """ Delete this launch configuration. """
-        params = {'LaunchConfigurationName' : self.name}
-        return self.connection.get_object('DeleteLaunchConfiguration', params,
-                                          Request)
+        return self.connection.delete_launch_configuration(self.name)
 
diff --git a/boto/ec2/autoscale/policy.py b/boto/ec2/autoscale/policy.py
new file mode 100644
index 0000000..d9d1ac6
--- /dev/null
+++ b/boto/ec2/autoscale/policy.py
@@ -0,0 +1,155 @@
+# Copyright (c) 2009-2010 Reza Lotun http://reza.lotun.name/
+# Copyright (c) 2011 Jann Kleen
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+from boto.resultset import ResultSet
+from boto.ec2.elb.listelement import ListElement
+
+class Alarm(object):
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.name = None
+        self.alarm_arn = None
+
+    def __repr__(self):
+        return 'Alarm:%s' % self.name
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'AlarmName':
+            self.name = value
+        elif name == 'AlarmARN':
+            self.alarm_arn = value
+        else:
+            setattr(self, name, value)
+
+
+class AdjustmentType(object):
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.adjustment_types = ListElement([])
+
+    def __repr__(self):
+        return 'AdjustmentType:%s' % self.adjustment_types
+
+    def startElement(self, name, attrs, connection):
+        if name == 'AdjustmentType':
+            return self.adjustment_types
+
+    def endElement(self, name, value, connection):
+        return
+
+
+class MetricCollectionTypes(object):
+    class BaseType(object):
+        arg = ''
+        def __init__(self, connection):
+            self.connection = connection
+            self.val = None
+        def __repr__(self):
+            return '%s:%s' % (self.arg, self.val)
+        def startElement(self, name, attrs, connection):
+            return
+        def endElement(self, name, value, connection):
+            if name == self.arg:
+                self.val = value
+    class Metric(BaseType):
+        arg = 'Metric'
+    class Granularity(BaseType):
+        arg = 'Granularity'
+
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.metrics = []
+        self.granularities = []
+
+    def __repr__(self):
+        return 'MetricCollectionTypes:<%s, %s>' % (self.metrics, self.granularities)
+
+    def startElement(self, name, attrs, connection):
+        if name == 'Granularities':
+            self.granularities = ResultSet([('member', self.Granularity)])
+            return self.granularities
+        elif name == 'Metrics':
+            self.metrics = ResultSet([('member', self.Metric)])
+            return self.metrics
+
+    def endElement(self, name, value, connection):
+        return
+
+
+class ScalingPolicy(object):
+    def __init__(self, connection=None, **kwargs):
+        """
+        Scaling Policy
+
+        :type name: str
+        :param name: Name of scaling policy.
+
+        :type adjustment_type: str
+        :param adjustment_type: Specifies the type of adjustment. Valid values are `ChangeInCapacity`, `ExactCapacity` and `PercentChangeInCapacity`.
+
+        :type as_name: str or int
+        :param as_name: Name or ARN of the Auto Scaling Group.
+
+        :type scaling_adjustment: int
+        :param scaling_adjustment: Value of adjustment (type specified in `adjustment_type`).
+
+        :type cooldown: int
+        :param cooldown: Time (in seconds) before Alarm related Scaling Activities can start after the previous Scaling Activity ends.
+
+        """
+        self.name = kwargs.get('name', None)
+        self.adjustment_type = kwargs.get('adjustment_type', None)
+        self.as_name = kwargs.get('as_name', None)
+        self.scaling_adjustment = kwargs.get('scaling_adjustment', None)
+        self.cooldown = kwargs.get('cooldown', None)
+        self.connection = connection
+
+    def __repr__(self):
+        return 'ScalingPolicy(%s group:%s adjustment:%s)' % (self.name,
+                                                             self.as_name,
+                                                             self.adjustment_type)
+
+    def startElement(self, name, attrs, connection):
+        if name == 'Alarms':
+            self.alarms = ResultSet([('member', Alarm)])
+            return self.alarms
+
+    def endElement(self, name, value, connection):
+        if name == 'PolicyName':
+            self.name = value
+        elif name == 'AutoScalingGroupName':
+            self.as_name = value
+        elif name == 'PolicyARN':
+            self.policy_arn = value
+        elif name == 'ScalingAdjustment':
+            self.scaling_adjustment = int(value)
+        elif name == 'Cooldown':
+            self.cooldown = int(value)
+        elif name == 'AdjustmentType':
+            self.adjustment_type = value
+
+    def delete(self):
+        return self.connection.delete_policy(self.name, self.as_name)
+
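The `ScalingPolicy` docstring above enumerates the valid `adjustment_type` values. A small standalone validator (our own helper, not part of boto) makes the constraint concrete:

```python
# Valid adjustment types, as listed in the ScalingPolicy docstring above.
VALID_ADJUSTMENT_TYPES = frozenset([
    'ChangeInCapacity',
    'ExactCapacity',
    'PercentChangeInCapacity',
])

def check_adjustment_type(adjustment_type):
    """Raise ValueError for adjustment types the API would reject."""
    if adjustment_type not in VALID_ADJUSTMENT_TYPES:
        raise ValueError('invalid adjustment_type: %r' % adjustment_type)
    return adjustment_type

print(check_adjustment_type('ChangeInCapacity'))  # ChangeInCapacity
```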
diff --git a/boto/ec2/autoscale/scheduled.py b/boto/ec2/autoscale/scheduled.py
new file mode 100644
index 0000000..d8f051c
--- /dev/null
+++ b/boto/ec2/autoscale/scheduled.py
@@ -0,0 +1,60 @@
+# Copyright (c) 2009-2010 Reza Lotun http://reza.lotun.name/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+
+from datetime import datetime
+
+
+class ScheduledUpdateGroupAction(object):
+    def __init__(self, connection=None):
+        self.connection = connection
+        self.name = None
+        self.action_arn = None
+        self.time = None
+        self.desired_capacity = None
+        self.max_size = None
+        self.min_size = None
+
+    def __repr__(self):
+        return 'ScheduledUpdateGroupAction:%s' % self.name
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'DesiredCapacity':
+            self.desired_capacity = value
+        elif name == 'ScheduledActionName':
+            self.name = value
+        elif name == 'MaxSize':
+            self.max_size = int(value)
+        elif name == 'MinSize':
+            self.min_size = int(value)
+        elif name == 'ScheduledActionARN':
+            self.action_arn = value
+        elif name == 'Time':
+            try:
+                self.time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
+            except ValueError:
+                self.time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+        else:
+            setattr(self, name, value)
+
diff --git a/boto/ec2/autoscale/trigger.py b/boto/ec2/autoscale/trigger.py
deleted file mode 100644
index 2840e67..0000000
--- a/boto/ec2/autoscale/trigger.py
+++ /dev/null
@@ -1,134 +0,0 @@
-# Copyright (c) 2009 Reza Lotun http://reza.lotun.name/
-#
-# Permission is hereby granted, free of charge, to any person obtaining a
-# copy of this software and associated documentation files (the
-# "Software"), to deal in the Software without restriction, including
-# without limitation the rights to use, copy, modify, merge, publish, dis-
-# tribute, sublicense, and/or sell copies of the Software, and to permit
-# persons to whom the Software is furnished to do so, subject to the fol-
-# lowing conditions:
-#
-# The above copyright notice and this permission notice shall be included
-# in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
-# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
-# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
-# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
-# IN THE SOFTWARE.
-
-import weakref
-
-from boto.ec2.autoscale.request import Request
-
-
-class Trigger(object):
-    """
-    An auto scaling trigger.
-    """
-
-    def __init__(self, connection=None, name=None, autoscale_group=None,
-                 dimensions=None, measure_name=None,
-                 statistic=None, unit=None, period=60,
-                 lower_threshold=None,
-                 lower_breach_scale_increment=None,
-                 upper_threshold=None,
-                 upper_breach_scale_increment=None,
-                 breach_duration=None):
-        """
-        Initialize an auto-scaling trigger object.
-        
-        :type name: str
-        :param name: The name for this trigger
-        
-        :type autoscale_group: str
-        :param autoscale_group: The name of the AutoScalingGroup that will be
-                                associated with the trigger. The AutoScalingGroup
-                                that will be affected by the trigger when it is
-                                activated.
-        
-        :type dimensions: list
-        :param dimensions: List of tuples, i.e.
-                            ('ImageId', 'i-13lasde') etc.
-        
-        :type measure_name: str
-        :param measure_name: The measure name associated with the metric used by
-                             the trigger to determine when to activate, for
-                             example, CPU, network I/O, or disk I/O.
-        
-        :type statistic: str
-        :param statistic: The particular statistic used by the trigger when
-                          fetching metric statistics to examine.
-        
-        :type period: int
-        :param period: The period associated with the metric statistics in
-                       seconds. Valid Values: 60 or a multiple of 60.
-        
-        :type unit: str
-        :param unit: The unit of measurement.
-        """
-        self.name = name
-        self.connection = connection
-        self.dimensions = dimensions
-        self.breach_duration = breach_duration
-        self.upper_breach_scale_increment = upper_breach_scale_increment
-        self.created_time = None
-        self.upper_threshold = upper_threshold
-        self.status = None
-        self.lower_threshold = lower_threshold
-        self.period = period
-        self.lower_breach_scale_increment = lower_breach_scale_increment
-        self.statistic = statistic
-        self.unit = unit
-        self.namespace = None
-        if autoscale_group:
-            self.autoscale_group = weakref.proxy(autoscale_group)
-        else:
-            self.autoscale_group = None
-        self.measure_name = measure_name
-
-    def __repr__(self):
-        return 'Trigger:%s' % (self.name)
-
-    def startElement(self, name, attrs, connection):
-        return None
-
-    def endElement(self, name, value, connection):
-        if name == 'BreachDuration':
-            self.breach_duration = value
-        elif name == 'TriggerName':
-            self.name = value
-        elif name == 'Period':
-            self.period = value
-        elif name == 'CreatedTime':
-            self.created_time = value
-        elif name == 'Statistic':
-            self.statistic = value
-        elif name == 'Unit':
-            self.unit = value
-        elif name == 'Namespace':
-            self.namespace = value
-        elif name == 'AutoScalingGroupName':
-            self.autoscale_group_name = value
-        elif name == 'MeasureName':
-            self.measure_name = value
-        else:
-            setattr(self, name, value)
-
-    def update(self):
-        """ Write out differences to trigger. """
-        self.connection.create_trigger(self)
-
-    def delete(self):
-        """ Delete this trigger. """
-        params = {
-                  'TriggerName'          : self.name,
-                  'AutoScalingGroupName' : self.autoscale_group_name,
-                  }
-        req =self.connection.get_object('DeleteTrigger', params,
-                                        Request)
-        self.connection.last_request = req
-        return req
-
diff --git a/boto/ec2/blockdevicemapping.py b/boto/ec2/blockdevicemapping.py
index efbc38b..75be2a4 100644
--- a/boto/ec2/blockdevicemapping.py
+++ b/boto/ec2/blockdevicemapping.py
@@ -22,16 +22,25 @@
 
 class BlockDeviceType(object):
 
-    def __init__(self, connection=None):
+    def __init__(self,
+                 connection=None,
+                 ephemeral_name=None,
+                 no_device=False,
+                 volume_id=None,
+                 snapshot_id=None,
+                 status=None,
+                 attach_time=None,
+                 delete_on_termination=False,
+                 size=None):
         self.connection = connection
-        self.ephemeral_name = None
-        self.no_device = False
-        self.volume_id = None
-        self.snapshot_id = None
-        self.status = None
-        self.attach_time = None
-        self.delete_on_termination = False
-        self.size = None
+        self.ephemeral_name = ephemeral_name
+        self.no_device = no_device
+        self.volume_id = volume_id
+        self.snapshot_id = snapshot_id
+        self.status = status
+        self.attach_time = attach_time
+        self.delete_on_termination = delete_on_termination
+        self.size = size
 
     def startElement(self, name, attrs, connection):
         pass
@@ -71,7 +80,7 @@
         self.current_value = None
 
     def startElement(self, name, attrs, connection):
-        if name == 'ebs':
+        if name == 'ebs' or name == 'virtualName':
             self.current_value = BlockDeviceType(self)
             return self.current_value
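The rewritten `BlockDeviceType.__init__` above accepts every field as a keyword argument instead of forcing post-construction attribute assignment. A standalone sketch of the same constructor shape (illustrative, not the boto class):

```python
class BlockDeviceTypeSketch(object):
    """Keyword-argument constructor mirroring the change above."""

    def __init__(self, ephemeral_name=None, no_device=False,
                 volume_id=None, snapshot_id=None, status=None,
                 attach_time=None, delete_on_termination=False, size=None):
        self.ephemeral_name = ephemeral_name
        self.no_device = no_device
        self.volume_id = volume_id
        self.snapshot_id = snapshot_id
        self.status = status
        self.attach_time = attach_time
        self.delete_on_termination = delete_on_termination
        self.size = size

# A fully-populated device type in one expression:
bdt = BlockDeviceTypeSketch(snapshot_id='snap-12345678', size=8,
                            delete_on_termination=True)
print(bdt.size)  # 8
```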
 
diff --git a/boto/ec2/cloudwatch/__init__.py b/boto/ec2/cloudwatch/__init__.py
index a02baa3..1c61736 100644
--- a/boto/ec2/cloudwatch/__init__.py
+++ b/boto/ec2/cloudwatch/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -140,6 +140,7 @@
     import simplejson as json
 except ImportError:
     import json
+
 from boto.connection import AWSQueryConnection
 from boto.ec2.cloudwatch.metric import Metric
 from boto.ec2.cloudwatch.alarm import MetricAlarm, AlarmHistoryItem
@@ -151,6 +152,7 @@
     'us-east-1' : 'monitoring.us-east-1.amazonaws.com',
     'us-west-1' : 'monitoring.us-west-1.amazonaws.com',
     'eu-west-1' : 'monitoring.eu-west-1.amazonaws.com',
+    'ap-northeast-1' : 'monitoring.ap-northeast-1.amazonaws.com',
     'ap-southeast-1' : 'monitoring.ap-southeast-1.amazonaws.com'}
 
 def regions():
@@ -188,8 +190,10 @@
 class CloudWatchConnection(AWSQueryConnection):
 
     APIVersion = boto.config.get('Boto', 'cloudwatch_version', '2010-08-01')
-    DefaultRegionName = boto.config.get('Boto', 'cloudwatch_region_name', 'us-east-1')
-    DefaultRegionEndpoint = boto.config.get('Boto', 'cloudwatch_region_endpoint',
+    DefaultRegionName = boto.config.get('Boto', 'cloudwatch_region_name',
+                                        'us-east-1')
+    DefaultRegionEndpoint = boto.config.get('Boto',
+                                            'cloudwatch_region_endpoint',
                                             'monitoring.amazonaws.com')
 
 
@@ -218,21 +222,108 @@
     def _required_auth_capability(self):
         return ['ec2']
 
+    def build_dimension_param(self, dimension, params):
+        # Use one running index across all dimension names so successive
+        # names do not overwrite earlier Dimensions.member.N slots.
+        i = 0
+        for dim_name in dimension:
+            dim_value = dimension[dim_name]
+            if isinstance(dim_value, basestring):
+                dim_value = [dim_value]
+            for value in dim_value:
+                i += 1
+                params['Dimensions.member.%d.Name' % i] = dim_name
+                params['Dimensions.member.%d.Value' % i] = value
+
     def build_list_params(self, params, items, label):
-        if isinstance(items, str):
+        if isinstance(items, basestring):
             items = [items]
-        for i in range(1, len(items)+1):
-            params[label % i] = items[i-1]
+        for index, item in enumerate(items):
+            i = index + 1
+            if isinstance(item, dict):
+                for k,v in item.iteritems():
+                    params[label % (i, 'Name')] = k
+                    if v is not None:
+                        params[label % (i, 'Value')] = v
+            else:
+                params[label % i] = item
+
+    def build_put_params(self, params, name, value=None, timestamp=None,
+                         unit=None, dimensions=None, statistics=None):
+        args = (name, value, unit, dimensions, statistics)
+        length = max(map(lambda a: len(a) if isinstance(a, list) else 1, args))
+
+        def aslist(a):
+            if isinstance(a, list):
+                if len(a) != length:
+                    raise Exception('Must specify equal number of elements; expected %d.' % length)
+                return a
+            return [a] * length
+
+        for index, (n, v, u, d, s) in enumerate(zip(*map(aslist, args))):
+            metric_data = {'MetricName': n}
+
+            if timestamp:
+                metric_data['Timestamp'] = timestamp.isoformat()
+            
+            if unit:
+                metric_data['Unit'] = u
+            
+            if dimensions:
+                self.build_dimension_param(d, metric_data)
+            
+            if statistics:
+                metric_data['StatisticValues.Maximum'] = s['maximum']
+                metric_data['StatisticValues.Minimum'] = s['minimum']
+                metric_data['StatisticValues.SampleCount'] = s['samplecount']
+                metric_data['StatisticValues.Sum'] = s['sum']
+                if value is not None:
+                    msg = 'You supplied a value and statistics for a metric.'
+                    msg += ' Posting statistics and not value.'
+                    boto.log.warn(msg)
+            elif value is not None:
+                metric_data['Value'] = v
+            else:
+                raise Exception('Must specify a value or statistics to put.')
+
+            for key, value in metric_data.iteritems():
+                params['MetricData.member.%d.%s' % (index + 1, key)] = value
 
     def get_metric_statistics(self, period, start_time, end_time, metric_name,
-                              namespace, statistics, dimensions=None, unit=None):
+                              namespace, statistics, dimensions=None,
+                              unit=None):
         """
         Get time-series data for one or more statistics of a given metric.
 
-        :type metric_name: string
-        :param metric_name: CPUUtilization|NetworkIO-in|NetworkIO-out|DiskIO-ALL-read|
-                             DiskIO-ALL-write|DiskIO-ALL-read-bytes|DiskIO-ALL-write-bytes
+        :type period: integer
+        :param period: The granularity, in seconds, of the returned datapoints.
+                       Period must be at least 60 seconds and must be a multiple
+                       of 60. The default value is 60.
 
+        :type start_time: datetime
+        :param start_time: The time stamp to use for determining the first
+                           datapoint to return. The value specified is
+                           inclusive; results include datapoints with the
+                           time stamp specified.
+
+        :type end_time: datetime
+        :param end_time: The time stamp to use for determining the last
+                         datapoint to return. The value specified is
+                         exclusive; results will include datapoints up to
+                         the time stamp specified.
+
+        :type metric_name: string
+        :param metric_name: The metric name.
+
+        :type namespace: string
+        :param namespace: The metric's namespace.
+
+        :type statistics: list
+        :param statistics: A list of statistic names.  Valid values:
+                           Average | Sum | SampleCount | Maximum | Minimum
+
+        :type dimensions: dict
+        :param dimensions: A dictionary of dimension key/values where
+                           the key is the dimension name and the value
+                           is either a scalar value or an iterator
+                           of values to be associated with that
+                           dimension.
         :rtype: list
         """
         params = {'Period' : period,
@@ -242,31 +333,106 @@
                   'EndTime' : end_time.isoformat()}
         self.build_list_params(params, statistics, 'Statistics.member.%d')
         if dimensions:
-            i = 1
-            for name in dimensions:
-                params['Dimensions.member.%d.Name' % i] = name
-                params['Dimensions.member.%d.Value' % i] = dimensions[name]
-                i += 1
-        return self.get_list('GetMetricStatistics', params, [('member', Datapoint)])
+            self.build_dimension_param(dimensions, params)
+        return self.get_list('GetMetricStatistics', params,
+                             [('member', Datapoint)])
 
-    def list_metrics(self, next_token=None):
+    def list_metrics(self, next_token=None, dimensions=None,
+                     metric_name=None, namespace=None):
         """
-        Returns a list of the valid metrics for which there is recorded data available.
+        Returns a list of the valid metrics for which there is recorded
+        data available.
 
-        :type next_token: string
-        :param next_token: A maximum of 500 metrics will be returned at one time.
-                           If more results are available, the ResultSet returned
-                           will contain a non-Null next_token attribute.  Passing
-                           that token as a parameter to list_metrics will retrieve
-                           the next page of metrics.
+        :type next_token: str
+        :param next_token: A maximum of 500 metrics will be returned at one
+                           time.  If more results are available, the
+                           ResultSet returned will contain a non-Null
+                           next_token attribute.  Passing that token as a
+                           parameter to list_metrics will retrieve the
+                           next page of metrics.
+
+        :type dimensions: dict
+        :param dimensions: A dictionary containing name/value pairs
+                                  that will be used to filter the results.
+                                  The key in the dictionary is the name of
+                                  a Dimension.  The value in the dictionary
+                                  is either a scalar value of that Dimension
+                                  name that you want to filter on, a list
+                                  of values to filter on or None if
+                                  you want all metrics with that Dimension name.
+
+        :type metric_name: str
+        :param metric_name: The name of the Metric to filter against.  If None,
+                            all Metric names will be returned.
+
+        :type namespace: str
+        :param namespace: A Metric namespace to filter against (e.g. AWS/EC2).
+                          If None, Metrics from all namespaces will be returned.
         """
         params = {}
         if next_token:
             params['NextToken'] = next_token
+        if dimensions:
+            self.build_dimension_param(dimensions, params)
+        if metric_name:
+            params['MetricName'] = metric_name
+        if namespace:
+            params['Namespace'] = namespace
+        
         return self.get_list('ListMetrics', params, [('member', Metric)])
+    
+    def put_metric_data(self, namespace, name, value=None, timestamp=None, 
+                        unit=None, dimensions=None, statistics=None):
+        """
+        Publishes metric data points to Amazon CloudWatch. Amazon CloudWatch
+        associates the data points with the specified metric. If the specified 
+        metric does not exist, Amazon CloudWatch creates the metric. If a list 
+        is specified for some, but not all, of the arguments, the remaining 
+        arguments are repeated a corresponding number of times.
 
-    def describe_alarms(self, action_prefix=None, alarm_name_prefix=None, alarm_names=None,
-                        max_records=None, state_value=None, next_token=None):
+        :type namespace: str
+        :param namespace: The namespace of the metric.
+
+        :type name: str or list
+        :param name: The name of the metric.
+
+        :type value: float or list
+        :param value: The value for the metric.
+
+        :type timestamp: datetime or list
+        :param timestamp: The time stamp used for the metric. If not specified, 
+            the default value is set to the time the metric data was received.
+        
+        :type unit: string or list
+        :param unit: The unit of the metric.  Valid Values: Seconds | 
+            Microseconds | Milliseconds | Bytes | Kilobytes |
+            Megabytes | Gigabytes | Terabytes | Bits | Kilobits |
+            Megabits | Gigabits | Terabits | Percent | Count |
+            Bytes/Second | Kilobytes/Second | Megabytes/Second |
+            Gigabytes/Second | Terabytes/Second | Bits/Second |
+            Kilobits/Second | Megabits/Second | Gigabits/Second |
+            Terabits/Second | Count/Second | None
+        
+        :type dimensions: dict
+        :param dimensions: Add extra name value pairs to associate 
+            with the metric, i.e.:
+            {'name1': value1, 'name2': (value2, value3)}
+        
+        :type statistics: dict or list
+        :param statistics: Use a statistic set instead of a value, for example::
+
+            {'maximum': 30, 'minimum': 1, 'samplecount': 100, 'sum': 10000}
+        """
+        params = {'Namespace': namespace}
+        self.build_put_params(params, name, value=value, timestamp=timestamp,
+            unit=unit, dimensions=dimensions, statistics=statistics)
+
+        return self.get_status('PutMetricData', params)
+
+
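The list-broadcasting rule described in the `put_metric_data` docstring ("if a list is specified for some, but not all, of the arguments, the remaining arguments are repeated a corresponding number of times") is implemented by the `aslist` closure in `build_put_params`. A simplified standalone sketch of that behaviour (illustrative only, not boto's exact code):

```python
def broadcast(args):
    # Scalars are repeated to match the longest list argument;
    # lists of mismatched lengths are rejected, as in build_put_params.
    length = max(len(a) if isinstance(a, list) else 1 for a in args)

    def aslist(a):
        if isinstance(a, list):
            if len(a) != length:
                raise ValueError('Must specify equal number of elements; '
                                 'expected %d.' % length)
            return a
        return [a] * length

    return list(zip(*[aslist(a) for a in args]))

# Two metric names share one scalar value:
rows = broadcast([['metric1', 'metric2'], 5])
```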
+    def describe_alarms(self, action_prefix=None, alarm_name_prefix=None,
+                        alarm_names=None, max_records=None, state_value=None,
+                        next_token=None):
         """
         Retrieves alarms with the specified names. If no name is specified, all
         alarms for the user are returned. Alarms can be retrieved by using only
@@ -277,20 +443,22 @@
         :param action_prefix: The action name prefix.
 
         :type alarm_name_prefix: string
-        :param alarm_name_prefix: The alarm name prefix. AlarmNames cannot be specified
-                                  if this parameter is specified.
+        :param alarm_name_prefix: The alarm name prefix. AlarmNames cannot
+                                  be specified if this parameter is specified.
 
         :type alarm_names: list
         :param alarm_names: A list of alarm names to retrieve information for.
 
         :type max_records: int
-        :param max_records: The maximum number of alarm descriptions to retrieve.
+        :param max_records: The maximum number of alarm descriptions
+                            to retrieve.
 
         :type state_value: string
         :param state_value: The state value to be used in matching alarms.
 
         :type next_token: string
-        :param next_token: The token returned by a previous call to indicate that there is more data.
+        :param next_token: The token returned by a previous call to
+                           indicate that there is more data.
 
         :rtype: list
         """
@@ -307,10 +475,13 @@
             params['NextToken'] = next_token
         if state_value:
             params['StateValue'] = state_value
-        return self.get_list('DescribeAlarms', params, [('member', MetricAlarm)])
+        return self.get_list('DescribeAlarms', params,
+                             [('member', MetricAlarm)])
 
-    def describe_alarm_history(self, alarm_name=None, start_date=None, end_date=None,
-                               max_records=None, history_item_type=None, next_token=None):
+    def describe_alarm_history(self, alarm_name=None,
+                               start_date=None, end_date=None,
+                               max_records=None, history_item_type=None,
+                               next_token=None):
         """
         Retrieves history for the specified alarm. Filter alarms by date range
         or item type. If an alarm name is not specified, Amazon CloudWatch
@@ -330,13 +501,16 @@
         :param end_date: The ending date to retrieve alarm history.
 
         :type history_item_type: string
-        :param history_item_type: The type of alarm histories to retreive (ConfigurationUpdate | StateUpdate | Action)
+        :param history_item_type: The type of alarm histories to retrieve
+                                  (ConfigurationUpdate | StateUpdate | Action)
 
         :type max_records: int
-        :param max_records: The maximum number of alarm descriptions to retrieve.
+        :param max_records: The maximum number of alarm descriptions
+                            to retrieve.
 
         :type next_token: string
-        :param next_token: The token returned by a previous call to indicate that there is more data.
+        :param next_token: The token returned by a previous call to indicate
+                           that there is more data.
 
         :rtype: list
         """
@@ -353,9 +527,11 @@
             params['MaxRecords'] = max_records
         if next_token:
             params['NextToken'] = next_token
-        return self.get_list('DescribeAlarmHistory', params, [('member', AlarmHistoryItem)])
+        return self.get_list('DescribeAlarmHistory', params,
+                             [('member', AlarmHistoryItem)])
 
-    def describe_alarms_for_metric(self, metric_name, namespace, period=None, statistic=None, dimensions=None, unit=None):
+    def describe_alarms_for_metric(self, metric_name, namespace, period=None,
+                                   statistic=None, dimensions=None, unit=None):
         """
         Retrieves all alarms for a single metric. Specify a statistic, period,
         or unit to filter the set of alarms further.
@@ -367,30 +543,37 @@
         :param namespace: The namespace of the metric.
 
         :type period: int
-        :param period: The period in seconds over which the statistic is applied.
+        :param period: The period in seconds over which the statistic
+                       is applied.
 
         :type statistic: string
         :param statistic: The statistic for the metric.
 
-        :type dimensions: list
+        :type dimensions: dict
+        :param dimensions: A dictionary containing name/value pairs
+                                  that will be used to filter the results.
+                                  The key in the dictionary is the name of
+                                  a Dimension.  The value in the dictionary
+                                  is either a scalar value of that Dimension
+                                  name that you want to filter on, a list
+                                  of values to filter on or None if
+                                  you want all metrics with that Dimension name.
 
         :type unit: string
 
         :rtype: list
         """
-        params = {
-                    'MetricName'        :   metric_name,
-                    'Namespace'         :   namespace,
-                 }
+        params = {'MetricName' : metric_name,
+                  'Namespace' : namespace}
         if period:
             params['Period'] = period
         if statistic:
             params['Statistic'] = statistic
         if dimensions:
-            self.build_list_params(params, dimensions, 'Dimensions.member.%s')
+            self.build_dimension_param(dimensions, params)
         if unit:
             params['Unit'] = unit
-        return self.get_list('DescribeAlarmsForMetric', params, [('member', MetricAlarm)])
+        return self.get_list('DescribeAlarmsForMetric', params,
+                             [('member', MetricAlarm)])
 
     def put_metric_alarm(self, alarm):
         """
@@ -413,7 +596,7 @@
                     'MetricName'            :       alarm.metric,
                     'Namespace'             :       alarm.namespace,
                     'Statistic'             :       alarm.statistic,
-                    'ComparisonOperator'    :       MetricAlarm._cmp_map[alarm.comparison],
+                    'ComparisonOperator'    :       alarm.comparison,
                     'Threshold'             :       alarm.threshold,
                     'EvaluationPeriods'     :       alarm.evaluation_periods,
                     'Period'                :       alarm.period,
@@ -421,15 +604,18 @@
         if alarm.actions_enabled is not None:
             params['ActionsEnabled'] = alarm.actions_enabled
         if alarm.alarm_actions:
-            self.build_list_params(params, alarm.alarm_actions, 'AlarmActions.member.%s')
+            self.build_list_params(params, alarm.alarm_actions,
+                                   'AlarmActions.member.%s')
         if alarm.description:
             params['AlarmDescription'] = alarm.description
         if alarm.dimensions:
-            self.build_list_params(params, alarm.dimensions, 'Dimensions.member.%s')
+            self.build_dimension_param(alarm.dimensions, params)
         if alarm.insufficient_data_actions:
-            self.build_list_params(params, alarm.insufficient_data_actions, 'InsufficientDataActions.member.%s')
+            self.build_list_params(params, alarm.insufficient_data_actions,
+                                   'InsufficientDataActions.member.%s')
         if alarm.ok_actions:
-            self.build_list_params(params, alarm.ok_actions, 'OKActions.member.%s')
+            self.build_list_params(params, alarm.ok_actions,
+                                   'OKActions.member.%s')
         if alarm.unit:
             params['Unit'] = alarm.unit
         alarm.connection = self
@@ -439,7 +625,8 @@
 
     def delete_alarms(self, alarms):
         """
-        Deletes all specified alarms. In the event of an error, no alarms are deleted.
+        Deletes all specified alarms. In the event of an error, no
+        alarms are deleted.
 
         :type alarms: list
         :param alarms: List of alarm names.
@@ -448,7 +635,8 @@
         self.build_list_params(params, alarms, 'AlarmNames.member.%s')
         return self.get_status('DeleteAlarms', params)
 
-    def set_alarm_state(self, alarm_name, state_reason, state_value, state_reason_data=None):
+    def set_alarm_state(self, alarm_name, state_reason, state_value,
+                        state_reason_data=None):
         """
         Temporarily sets the state of an alarm. When the updated StateValue
         differs from the previous value, the action configured for the
@@ -468,11 +656,9 @@
         :type state_reason_data: string
         :param state_reason_data: Reason string (will be jsonified).
         """
-        params = {
-                    'AlarmName'             :   alarm_name,
-                    'StateReason'           :   state_reason,
-                    'StateValue'            :   state_value,
-                 }
+        params = {'AlarmName' : alarm_name,
+                  'StateReason' : state_reason,
+                  'StateValue' : state_value}
         if state_reason_data:
             params['StateReasonData'] = json.dumps(state_reason_data)
 
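The several call sites above that delegate to `build_dimension_param` all flatten a dimensions dict into CloudWatch's `Dimensions.member.N.Name` / `Dimensions.member.N.Value` wire format, as the hunk at the top of this file shows. A standalone sketch of that flattening (an illustrative reimplementation under the assumption that dimension values may be scalars or lists, not boto's exact code):

```python
def build_dimension_param(dimensions, params):
    # Flatten {name: value-or-list-of-values} into numbered
    # Dimensions.member.N.Name / .Value query parameters.
    i = 0
    for dim_name, dim_value in dimensions.items():
        if not isinstance(dim_value, (list, tuple)):
            dim_value = [dim_value]
        for value in dim_value:
            i += 1
            params['Dimensions.member.%d.Name' % i] = dim_name
            params['Dimensions.member.%d.Value' % i] = value

params = {}
build_dimension_param({'InstanceId': ['i-0123456', 'i-0123457']}, params)
```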
diff --git a/boto/ec2/cloudwatch/alarm.py b/boto/ec2/cloudwatch/alarm.py
index 81c0fc3..f81157d 100644
--- a/boto/ec2/cloudwatch/alarm.py
+++ b/boto/ec2/cloudwatch/alarm.py
@@ -21,8 +21,12 @@
 #
 
 from datetime import datetime
-import json
-
+from boto.resultset import ResultSet
+from boto.ec2.cloudwatch.listelement import ListElement
+try:
+    import simplejson as json
+except ImportError:
+    import json
 
 class MetricAlarm(object):
 
@@ -39,8 +43,11 @@
     _rev_cmp_map = dict((v, k) for (k, v) in _cmp_map.iteritems())
 
     def __init__(self, connection=None, name=None, metric=None,
-                 namespace=None, statistic=None, comparison=None, threshold=None,
-                 period=None, evaluation_periods=None):
+                 namespace=None, statistic=None, comparison=None,
+                 threshold=None, period=None, evaluation_periods=None,
+                 unit=None, description='', dimensions=None,
+                 alarm_actions=None, insufficient_data_actions=None,
+                 ok_actions=None):
         """
         Creates a new Alarm.
 
@@ -54,49 +61,103 @@
         :param namespace: The namespace for the alarm's metric.
 
         :type statistic: str
-        :param statistic: The statistic to apply to the alarm's associated metric. Can
-                          be one of 'SampleCount', 'Average', 'Sum', 'Minimum', 'Maximum'
+        :param statistic: The statistic to apply to the alarm's associated
+                          metric.
+                          Valid values: SampleCount|Average|Sum|Minimum|Maximum
 
         :type comparison: str
-        :param comparison: Comparison used to compare statistic with threshold. Can be
-                           one of '>=', '>', '<', '<='
+        :param comparison: Comparison used to compare statistic with threshold.
+                           Valid values: >= | > | < | <=
 
         :type threshold: float
-        :param threshold: The value against which the specified statistic is compared.
+        :param threshold: The value against which the specified statistic
+                          is compared.
 
         :type period: int
-        :param period: The period in seconds over which teh specified statistic is applied.
+        :param period: The period in seconds over which the specified
+                       statistic is applied.
 
         :type evaluation_periods: int
-        :param evaluation_period: The number of periods over which data is compared to
-                                  the specified threshold
+        :param evaluation_periods: The number of periods over which data is
+                                  compared to the specified threshold.
+
+        :type unit: str
+        :param unit: Allowed Values are:
+                     Seconds|Microseconds|Milliseconds|
+                     Bytes|Kilobytes|Megabytes|Gigabytes|Terabytes|
+                     Bits|Kilobits|Megabits|Gigabits|Terabits|
+                     Percent|Count|
+                     Bytes/Second|Kilobytes/Second|Megabytes/Second|
+                     Gigabytes/Second|Terabytes/Second|
+                     Bits/Second|Kilobits/Second|Megabits/Second|
+                     Gigabits/Second|Terabits/Second|Count/Second|None
+
+        :type description: str
+        :param description: Description of MetricAlarm
+
+        :type dimensions: list of dicts
+        :param dimensions: Dimensions of the alarm, such as:
+                            [{'InstanceId': ['i-0123456', 'i-0123457']}]
+        
+        :type alarm_actions: list of strs
+        :param alarm_actions: A list of the ARNs of the actions to take in
+                              ALARM state
+        
+        :type insufficient_data_actions: list of strs
+        :param insufficient_data_actions: A list of the ARNs of the actions to
+                                          take in INSUFFICIENT_DATA state
+        
+        :type ok_actions: list of strs
+        :param ok_actions: A list of the ARNs of the actions to take in OK state
         """
         self.name = name
         self.connection = connection
         self.metric = metric
         self.namespace = namespace
         self.statistic = statistic
-        self.threshold = float(threshold) if threshold is not None else None
+        if threshold is not None:
+            self.threshold = float(threshold)
+        else:
+            self.threshold = None
         self.comparison = self._cmp_map.get(comparison)
-        self.period = int(period) if period is not None else None
-        self.evaluation_periods = int(evaluation_periods) if evaluation_periods is not None else None
+        if period is not None:
+            self.period = int(period)
+        else:
+            self.period = None
+        if evaluation_periods is not None:
+            self.evaluation_periods = int(evaluation_periods)
+        else:
+            self.evaluation_periods = None
         self.actions_enabled = None
-        self.alarm_actions = []
         self.alarm_arn = None
         self.last_updated = None
-        self.description = ''
-        self.dimensions = []
-        self.insufficient_data_actions = []
-        self.ok_actions = []
+        self.description = description
+        self.dimensions = dimensions
         self.state_reason = None
         self.state_value = None
-        self.unit = None
+        self.unit = unit
+        self.alarm_actions = alarm_actions
+        self.insufficient_data_actions = insufficient_data_actions
+        self.ok_actions = ok_actions
 
     def __repr__(self):
-        return 'MetricAlarm:%s[%s(%s) %s %s]' % (self.name, self.metric, self.statistic, self.comparison, self.threshold)
+        return 'MetricAlarm:%s[%s(%s) %s %s]' % (self.name, self.metric,
+                                                 self.statistic,
+                                                 self.comparison,
+                                                 self.threshold)
 
     def startElement(self, name, attrs, connection):
-        return
+        if name == 'AlarmActions':
+            self.alarm_actions = ListElement()
+            return self.alarm_actions
+        elif name == 'InsufficientDataActions':
+            self.insufficient_data_actions = ListElement()
+            return self.insufficient_data_actions
+        elif name == 'OKActions':
+            self.ok_actions = ListElement()
+            return self.ok_actions
+        else:
+            pass
 
     def endElement(self, name, value, connection):
         if name == 'ActionsEnabled':
@@ -122,7 +183,7 @@
         elif name == 'StateReason':
             self.state_reason = value
         elif name == 'StateValue':
-            self.state_value = None
+            self.state_value = value
         elif name == 'Statistic':
             self.statistic = value
         elif name == 'Threshold':
@@ -155,9 +216,57 @@
     def disable_actions(self):
         return self.connection.disable_alarm_actions([self.name])
 
-    def describe_history(self, start_date=None, end_date=None, max_records=None, history_item_type=None, next_token=None):
-        return self.connection.describe_alarm_history(self.name, start_date, end_date,
-                                                      max_records, history_item_type, next_token)
+    def describe_history(self, start_date=None, end_date=None, max_records=None,
+                         history_item_type=None, next_token=None):
+        return self.connection.describe_alarm_history(self.name, start_date,
+                                                      end_date, max_records,
+                                                      history_item_type,
+                                                      next_token)
+
+    def add_alarm_action(self, action_arn=None):
+        """
+        Adds an alarm action, represented as an SNS topic, to this alarm.
+        What to do when the alarm is triggered.
+
+        :type action_arn: str
+        :param action_arn: SNS topics to which notification should be 
+                           sent if the alarm goes to state ALARM.
+        """
+        if not action_arn:
+            return # Raise exception instead?
+        self.actions_enabled = 'true'
+        self.alarm_actions.append(action_arn)
+
+    def add_insufficient_data_action(self, action_arn=None):
+        """
+        Adds an insufficient_data action, represented as an SNS topic, to
+        this alarm. What to do when the insufficient_data state is reached.
+
+        :type action_arn: str
+        :param action_arn: SNS topics to which notification should be 
+                           sent if the alarm goes to state INSUFFICIENT_DATA.
+        """
+        if not action_arn:
+            return
+        self.actions_enabled = 'true'
+        self.insufficient_data_actions.append(action_arn)
+    
+    def add_ok_action(self, action_arn=None):
+        """
+        Adds an ok action, represented as an SNS topic, to this alarm. What
+        to do when the ok state is reached.
+
+        :type action_arn: str
+        :param action_arn: SNS topics to which notification should be 
+                           sent if the alarm goes to state OK.
+        """
+        if not action_arn:
+            return
+        self.actions_enabled = 'true'
+        self.ok_actions.append(action_arn)
+
+    def delete(self):
+        self.connection.delete_alarms([self])
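The `comparison` argument accepted by the constructor above is translated through `MetricAlarm._cmp_map` into the names the CloudWatch API's `ComparisonOperator` field expects (the same map `put_metric_alarm` now relies on after the `ComparisonOperator` change in connection.py). A sketch of that mapping and its reverse, mirroring the `_rev_cmp_map` construction shown earlier in this file:

```python
# Operator shorthand accepted by MetricAlarm -> CloudWatch
# ComparisonOperator name sent on the wire.
_cmp_map = {
    '>=': 'GreaterThanOrEqualToThreshold',
    '>': 'GreaterThanThreshold',
    '<': 'LessThanThreshold',
    '<=': 'LessThanOrEqualToThreshold',
}
# Reverse map, built the same way as in MetricAlarm.
_rev_cmp_map = dict((v, k) for (k, v) in _cmp_map.items())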
 
 class AlarmHistoryItem(object):
     def __init__(self, connection=None):
diff --git a/boto/tests/__init__.py b/boto/ec2/cloudwatch/listelement.py
similarity index 78%
copy from boto/tests/__init__.py
copy to boto/ec2/cloudwatch/listelement.py
index 449bd16..5be4599 100644
--- a/boto/tests/__init__.py
+++ b/boto/ec2/cloudwatch/listelement.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -18,6 +18,14 @@
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-#
 
+class ListElement(list):
 
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'member':
+            self.append(value)
+    
+    
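The `ListElement` helper above is driven by boto's SAX-style XML walk: the parser hands each element to `startElement`/`endElement`, and every `<member>` text node is appended. A minimal simulation, hand-feeding the parse events for a hypothetical `<AlarmActions>` block:

```python
class ListElement(list):
    # Appends the text of each <member> element seen during parsing.
    def startElement(self, name, attrs, connection):
        pass

    def endElement(self, name, value, connection):
        if name == 'member':
            self.append(value)

# Hand-drive the events a parse of
# <AlarmActions><member>arn:...</member></AlarmActions> would emit.
actions = ListElement()
actions.startElement('AlarmActions', {}, None)
actions.endElement('member', 'arn:aws:sns:us-east-1:123456789012:my-topic', None)
actions.endElement('AlarmActions', '', None)
```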
diff --git a/boto/ec2/cloudwatch/metric.py b/boto/ec2/cloudwatch/metric.py
index cd8c4bc..cda02d8 100644
--- a/boto/ec2/cloudwatch/metric.py
+++ b/boto/ec2/cloudwatch/metric.py
@@ -20,7 +20,9 @@
 # IN THE SOFTWARE.
 #
 
-class Dimensions(dict):
+from boto.ec2.cloudwatch.alarm import MetricAlarm
+
+class Dimension(dict):
 
     def startElement(self, name, attrs, connection):
         pass
@@ -29,15 +31,23 @@
         if name == 'Name':
             self._name = value
         elif name == 'Value':
-            self[self._name] = value
-        elif name != 'Dimensions' and name != 'member':
-            self[name] = value
+            if self._name in self:
+                self[self._name].append(value)
+            else:
+                self[self._name] = [value]
+        else:
+            setattr(self, name, value)
 
 class Metric(object):
 
     Statistics = ['Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount']
-    Units = ['Seconds', 'Percent', 'Bytes', 'Bits', 'Count',
-             'Bytes/Second', 'Bits/Second', 'Count/Second']
+    Units = ['Seconds', 'Microseconds', 'Milliseconds', 'Bytes', 'Kilobytes',
+             'Megabytes', 'Gigabytes', 'Terabytes', 'Bits', 'Kilobits',
+             'Megabits', 'Gigabits', 'Terabits', 'Percent', 'Count',
+             'Bytes/Second', 'Kilobytes/Second', 'Megabytes/Second',
+             'Gigabytes/Second', 'Terabytes/Second', 'Bits/Second',
+             'Kilobits/Second', 'Megabits/Second', 'Gigabits/Second',
+             'Terabits/Second', 'Count/Second', None]
 
     def __init__(self, connection=None):
         self.connection = connection
@@ -46,15 +56,11 @@
         self.dimensions = None
 
     def __repr__(self):
-        s = 'Metric:%s' % self.name
-        if self.dimensions:
-            for name,value in self.dimensions.items():
-                s += '(%s,%s)' % (name, value)
-        return s
+        return 'Metric:%s' % self.name
 
     def startElement(self, name, attrs, connection):
         if name == 'Dimensions':
-            self.dimensions = Dimensions()
+            self.dimensions = Dimension()
             return self.dimensions
 
     def endElement(self, name, value, connection):
@@ -65,12 +71,36 @@
         else:
             setattr(self, name, value)
 
-    def query(self, start_time, end_time, statistic, unit=None, period=60):
-        return self.connection.get_metric_statistics(period, start_time, end_time,
-                                                     self.name, self.namespace, [statistic],
-                                                     self.dimensions, unit)
+    def query(self, start_time, end_time, statistics, unit=None, period=60):
+        if not isinstance(statistics, list):
+            statistics = [statistics]
+        return self.connection.get_metric_statistics(period,
+                                                     start_time,
+                                                     end_time,
+                                                     self.name,
+                                                     self.namespace,
+                                                     statistics,
+                                                     self.dimensions,
+                                                     unit)
 
-    def describe_alarms(self, period=None, statistic=None, dimensions=None, unit=None):
+    def create_alarm(self, name, comparison, threshold,
+                     period, evaluation_periods,
+                     statistic, enabled=True, description=None,
+                     dimensions=None, alarm_actions=None, ok_actions=None,
+                     insufficient_data_actions=None, unit=None):
+        if not dimensions:
+            dimensions = self.dimensions
+        alarm = MetricAlarm(self.connection, name, self.name,
+                            self.namespace, statistic, comparison,
+                            threshold, period, evaluation_periods,
+                            unit, description, dimensions,
+                            alarm_actions, insufficient_data_actions,
+                            ok_actions)
+        if self.connection.put_metric_alarm(alarm):
+            return alarm
+
+    def describe_alarms(self, period=None, statistic=None,
+                        dimensions=None, unit=None):
         return self.connection.describe_alarms_for_metric(self.name,
                                                           self.namespace,
                                                           period,
@@ -78,3 +108,5 @@
                                                           dimensions,
                                                           unit)
 
+
+    
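The hunk above changes `Metric.query()` to accept either a single statistic name or a list of them. A standalone sketch of just that normalization step (no boto import needed; this is not boto's code, only a mirror of the coercion the patched method performs):

```python
def normalize_statistics(statistics):
    """Mirror of the list-coercion the patched Metric.query() performs:
    a bare statistic name such as 'Average' is wrapped in a one-element
    list before being handed to get_metric_statistics()."""
    if not isinstance(statistics, list):
        statistics = [statistics]
    return statistics
```

With this change callers can write `metric.query(start, end, 'Average')` as well as `metric.query(start, end, ['Average', 'Maximum'])`.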
diff --git a/boto/ec2/connection.py b/boto/ec2/connection.py
index 89d1d4e..1e49259 100644
--- a/boto/ec2/connection.py
+++ b/boto/ec2/connection.py
@@ -32,7 +32,8 @@
 from boto.connection import AWSQueryConnection
 from boto.resultset import ResultSet
 from boto.ec2.image import Image, ImageAttribute
-from boto.ec2.instance import Reservation, Instance, ConsoleOutput, InstanceAttribute
+from boto.ec2.instance import Reservation, Instance
+from boto.ec2.instance import ConsoleOutput, InstanceAttribute
 from boto.ec2.keypair import KeyPair
 from boto.ec2.address import Address
 from boto.ec2.volume import Volume
@@ -42,7 +43,8 @@
 from boto.ec2.securitygroup import SecurityGroup
 from boto.ec2.regioninfo import RegionInfo
 from boto.ec2.instanceinfo import InstanceInfo
-from boto.ec2.reservedinstance import ReservedInstancesOffering, ReservedInstance
+from boto.ec2.reservedinstance import ReservedInstancesOffering
+from boto.ec2.reservedinstance import ReservedInstance
 from boto.ec2.spotinstancerequest import SpotInstanceRequest
 from boto.ec2.spotpricehistory import SpotPriceHistory
 from boto.ec2.spotdatafeedsubscription import SpotDatafeedSubscription
@@ -55,16 +57,18 @@
 
 class EC2Connection(AWSQueryConnection):
 
-    APIVersion = boto.config.get('Boto', 'ec2_version', '2010-08-31')
+    APIVersion = boto.config.get('Boto', 'ec2_version', '2011-01-01')
     DefaultRegionName = boto.config.get('Boto', 'ec2_region_name', 'us-east-1')
     DefaultRegionEndpoint = boto.config.get('Boto', 'ec2_region_endpoint',
                                             'ec2.amazonaws.com')
     ResponseError = EC2ResponseError
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
-                 is_secure=True, host=None, port=None, proxy=None, proxy_port=None,
+                 is_secure=True, host=None, port=None,
+                 proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, debug=0,
-                 https_connection_factory=None, region=None, path='/'):
+                 https_connection_factory=None, region=None, path='/',
+                 api_version=None, security_token=None):
         """
         Init method to create a new connection to EC2.
 
@@ -80,7 +84,10 @@
                                     is_secure, port, proxy, proxy_port,
                                     proxy_user, proxy_pass,
                                     self.region.endpoint, debug,
-                                    https_connection_factory, path)
+                                    https_connection_factory, path,
+                                    security_token)
+        if api_version:
+            self.APIVersion = api_version
 
     def _required_auth_capability(self):
         return ['ec2']
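The new `api_version` argument lets a caller pin a specific EC2 API revision per connection. A toy stand-in (the class name is hypothetical, not boto's) showing how the instance attribute shadows the class-level default:

```python
class QueryConnectionSketch:
    # Class-level default, analogous to the APIVersion that
    # EC2Connection reads from boto.config.
    APIVersion = '2011-01-01'

    def __init__(self, api_version=None):
        # The patched __init__ only overrides when a version is given,
        # so the class default keeps serving every other instance.
        if api_version:
            self.APIVersion = api_version
```

Because the override is a per-instance attribute, one connection can speak an older API revision while others keep the default.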
@@ -90,8 +97,9 @@
         Returns a dictionary containing the value of all of the keyword
         arguments passed when constructing this connection.
         """
-        param_names = ['aws_access_key_id', 'aws_secret_access_key', 'is_secure',
-                       'port', 'proxy', 'proxy_port', 'proxy_user', 'proxy_pass',
+        param_names = ['aws_access_key_id', 'aws_secret_access_key',
+                       'is_secure', 'port', 'proxy', 'proxy_port',
+                       'proxy_user', 'proxy_pass',
                        'debug', 'https_connection_factory']
         params = {}
         for name in param_names:
@@ -101,7 +109,9 @@
     def build_filter_params(self, params, filters):
         i = 1
         for name in filters:
-            aws_name = name.replace('_', '-')
+            aws_name = name
+            if not aws_name.startswith('tag:'):
+                aws_name = name.replace('_', '-')
             params['Filter.%d.Name' % i] = aws_name
             value = filters[name]
             if not isinstance(value, list):
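The `build_filter_params` change above exempts `tag:`-prefixed filter names from the underscore-to-hyphen mapping, so tag keys containing underscores survive intact. A standalone sketch of the whole encoding (the `Filter.N.Value.M` loop below is an assumption about the part of the method the hunk does not show):

```python
def build_filter_params_sketch(params, filters):
    """Sketch of the patched filter encoding: names with a 'tag:' prefix
    pass through untouched, everything else has underscores mapped to
    hyphens, and each value is expanded into Filter.N.Value.M keys."""
    i = 1
    for name in filters:
        aws_name = name
        if not aws_name.startswith('tag:'):
            aws_name = name.replace('_', '-')
        params['Filter.%d.Name' % i] = aws_name
        value = filters[name]
        if not isinstance(value, list):
            value = [value]
        for j, v in enumerate(value, 1):
            params['Filter.%d.Value.%d' % (i, j)] = v
        i += 1
    return params
```

Without the new guard, a filter like `{'tag:my_key': 'x'}` would have been mangled into `tag:my-key` and matched nothing.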
@@ -151,7 +161,8 @@
             self.build_list_params(params, executable_by, 'ExecutableBy')
         if filters:
             self.build_filter_params(params, filters)
-        return self.get_list('DescribeImages', params, [('item', Image)], verb='POST')
+        return self.get_list('DescribeImages', params,
+                             [('item', Image)], verb='POST')
 
     def get_all_kernels(self, kernel_ids=None, owners=None):
         """
@@ -174,7 +185,8 @@
             self.build_list_params(params, owners, 'Owner')
         filter = {'image-type' : 'kernel'}
         self.build_filter_params(params, filter)
-        return self.get_list('DescribeImages', params, [('item', Image)], verb='POST')
+        return self.get_list('DescribeImages', params,
+                             [('item', Image)], verb='POST')
 
     def get_all_ramdisks(self, ramdisk_ids=None, owners=None):
         """
@@ -197,7 +209,8 @@
             self.build_list_params(params, owners, 'Owner')
         filter = {'image-type' : 'ramdisk'}
         self.build_filter_params(params, filter)
-        return self.get_list('DescribeImages', params, [('item', Image)], verb='POST')
+        return self.get_list('DescribeImages', params,
+                             [('item', Image)], verb='POST')
 
     def get_image(self, image_id):
         """
@@ -227,7 +240,8 @@
         :param description: The description of the AMI.
 
         :type image_location: string
-        :param image_location: Full path to your AMI manifest in Amazon S3 storage.
+        :param image_location: Full path to your AMI manifest in
+                               Amazon S3 storage.
                                Only used for S3-based AMIs.

 
         :type architecture: string
@@ -235,7 +249,8 @@
                              i386 | x86_64
 
         :type kernel_id: string
-        :param kernel_id: The ID of the kernel with which to launch the instances
+        :param kernel_id: The ID of the kernel with which to launch
+                          the instances
 
         :type root_device_name: string
         :param root_device_name: The root device name (e.g. /dev/sdh)
@@ -269,23 +284,41 @@
         image_id = getattr(rs, 'imageId', None)
         return image_id
 
-    def deregister_image(self, image_id):
+    def deregister_image(self, image_id, delete_snapshot=False):
         """
         Unregister an AMI.
 
         :type image_id: string
         :param image_id: the ID of the Image to unregister
 
+        :type delete_snapshot: bool
+        :param delete_snapshot: Set to True if we should delete the
+                                snapshot associated with an EBS volume
+                                mounted at /dev/sda1
+
         :rtype: bool
         :return: True if successful
         """
-        return self.get_status('DeregisterImage', {'ImageId':image_id}, verb='POST')
+        snapshot_id = None
+        if delete_snapshot:
+            image = self.get_image(image_id)
+            for key in image.block_device_mapping:
+                if key == "/dev/sda1":
+                    snapshot_id = image.block_device_mapping[key].snapshot_id
+                    break
 
-    def create_image(self, instance_id, name, description=None, no_reboot=False):
+        result = self.get_status('DeregisterImage',
+                                 {'ImageId':image_id}, verb='POST')
+        if result and snapshot_id:
+            return result and self.delete_snapshot(snapshot_id)
+        return result
+
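`deregister_image()` above gains a `delete_snapshot` flag; the snapshot it removes is the one behind the EBS volume mapped at `/dev/sda1`. A standalone sketch of that lookup (`BlockDevice` is a hypothetical stand-in for the objects in boto's `image.block_device_mapping`; only `snapshot_id` matters here):

```python
from collections import namedtuple

# Hypothetical stand-in for boto's block device type.
BlockDevice = namedtuple('BlockDevice', ['snapshot_id'])

def find_root_snapshot(block_device_mapping, root_device='/dev/sda1'):
    """Return the snapshot id behind the root device, or None when the
    AMI has no EBS volume mounted at /dev/sda1 (e.g. S3-backed AMIs)."""
    for key, device in block_device_mapping.items():
        if key == root_device:
            return device.snapshot_id
    return None
```

The patched method looks this id up *before* deregistering, then deletes the snapshot only if the DeregisterImage call succeeded.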
+    def create_image(self, instance_id, name,
+                     description=None, no_reboot=False):
         """
         Will create an AMI from the instance in the running or stopped
         state.
-        
+
         :type instance_id: string
         :param instance_id: the ID of the instance to image.
 
@@ -302,7 +335,7 @@
                           bundling.  If this flag is True, the responsibility
                           of maintaining file system integrity is left to the
                           owner of the instance.
-        
+
         :rtype: string
         :return: The new image id
         """
@@ -314,7 +347,7 @@
             params['NoReboot'] = 'true'
         img = self.get_object('CreateImage', params, Image, verb='POST')
         return img.id
-        
+
     # ImageAttribute methods
 
     def get_image_attribute(self, image_id, attribute='launchPermission'):
@@ -337,7 +370,8 @@
         """
         params = {'ImageId' : image_id,
                   'Attribute' : attribute}
-        return self.get_object('DescribeImageAttribute', params, ImageAttribute, verb='POST')
+        return self.get_object('DescribeImageAttribute', params,
+                               ImageAttribute, verb='POST')
 
     def modify_image_attribute(self, image_id, attribute='launchPermission',
                                operation='add', user_ids=None, groups=None,
@@ -420,6 +454,11 @@
         if instance_ids:
             self.build_list_params(params, instance_ids, 'InstanceId')
         if filters:
+            if 'group-id' in filters:
+                warnings.warn("The group-id filter now requires a security "
+                              "group identifier (sg-*) instead of a group "
+                              "name. To filter by group name use the "
+                              "'group-name' filter instead.", UserWarning)
             self.build_filter_params(params, filters)
         return self.get_list('DescribeInstances', params,
                              [('item', Reservation)], verb='POST')
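The hunk above adds a deprecation warning for the `group-id` filter, which under the 2011 EC2 API takes an sg-* identifier rather than a group name. A standalone sketch of that guard:

```python
import warnings

def warn_on_legacy_group_filter(filters):
    """Sketch of the check get_all_instances() now runs before encoding
    filters: group-id must be a security group identifier (sg-*)."""
    if 'group-id' in filters:
        warnings.warn("The group-id filter now requires a security group "
                      "identifier (sg-*) instead of a group name. To filter "
                      "by group name use the 'group-name' filter instead.",
                      UserWarning)
```

Callers filtering by name should switch to `filters={'group-name': 'my-group'}`, which is unaffected.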
@@ -434,7 +473,8 @@
                       disable_api_termination=False,
                       instance_initiated_shutdown_behavior=None,
                       private_ip_address=None,
-                      placement_group=None, client_token=None):
+                      placement_group=None, client_token=None,
+                      security_group_ids=None):
         """
         Runs an image on EC2.
 
@@ -459,7 +499,7 @@
 
         :type instance_type: string
         :param instance_type: The type of instance to run:
-                              
+
                               * m1.small
                               * m1.large
                               * m1.xlarge
@@ -507,13 +547,12 @@
 
         :type instance_initiated_shutdown_behavior: string
         :param instance_initiated_shutdown_behavior: Specifies whether the
-                                                     instance's EBS volumes are
-                                                     stopped (i.e. detached) or
-                                                     terminated (i.e. deleted)
-                                                     when the instance is
-                                                     shutdown by the
-                                                     owner.  Valid values are:
-                                                     
+                                                     instance stops or
+                                                     terminates on
+                                                     instance-initiated
+                                                     shutdown.
+                                                     Valid values are:
+
                                                      * stop
                                                      * terminate
 
@@ -529,12 +568,24 @@
         :rtype: Reservation
         :return: The :class:`boto.ec2.instance.Reservation` associated with
                  the request for machines
+
+        :type security_group_ids: list of strings
+        :param security_group_ids: The ID of the VPC security groups with
+                                   which to associate instances
         """
         params = {'ImageId':image_id,
                   'MinCount':min_count,
                   'MaxCount': max_count}
         if key_name:
             params['KeyName'] = key_name
+        if security_group_ids:
+            l = []
+            for group in security_group_ids:
+                if isinstance(group, SecurityGroup):
+                    l.append(group.id)
+                else:
+                    l.append(group)
+            self.build_list_params(params, l, 'SecurityGroupId')
         if security_groups:
             l = []
             for group in security_groups:
@@ -587,18 +638,19 @@
         params = {}
         if instance_ids:
             self.build_list_params(params, instance_ids, 'InstanceId')
-        return self.get_list('TerminateInstances', params, [('item', Instance)], verb='POST')
+        return self.get_list('TerminateInstances', params,
+                             [('item', Instance)], verb='POST')
 
     def stop_instances(self, instance_ids=None, force=False):
         """
         Stop the instances specified
-        
+
         :type instance_ids: list
         :param instance_ids: A list of strings of the Instance IDs to stop
 
         :type force: bool
         :param force: Forces the instance to stop
-        
+
         :rtype: list
         :return: A list of the instances stopped
         """
@@ -607,22 +659,24 @@
             params['Force'] = 'true'
         if instance_ids:
             self.build_list_params(params, instance_ids, 'InstanceId')
-        return self.get_list('StopInstances', params, [('item', Instance)], verb='POST')
+        return self.get_list('StopInstances', params,
+                             [('item', Instance)], verb='POST')
 
     def start_instances(self, instance_ids=None):
         """
         Start the instances specified
-        
+
         :type instance_ids: list
         :param instance_ids: A list of strings of the Instance IDs to start
-        
+
         :rtype: list
         :return: A list of the instances started
         """
         params = {}
         if instance_ids:
             self.build_list_params(params, instance_ids, 'InstanceId')
-        return self.get_list('StartInstances', params, [('item', Instance)], verb='POST')
+        return self.get_list('StartInstances', params,
+                             [('item', Instance)], verb='POST')
 
     def get_console_output(self, instance_id):
         """
@@ -636,7 +690,8 @@
         """
         params = {}
         self.build_list_params(params, [instance_id], 'InstanceId')
-        return self.get_object('GetConsoleOutput', params, ConsoleOutput, verb='POST')
+        return self.get_object('GetConsoleOutput', params,
+                               ConsoleOutput, verb='POST')
 
     def reboot_instances(self, instance_ids=None):
         """
@@ -653,7 +708,8 @@
     def confirm_product_instance(self, product_code, instance_id):
         params = {'ProductCode' : product_code,
                   'InstanceId' : instance_id}
-        rs = self.get_object('ConfirmProductInstance', params, ResultSet, verb='POST')
+        rs = self.get_object('ConfirmProductInstance', params,
+                             ResultSet, verb='POST')
         return (rs.status, rs.ownerId)
 
     # InstanceAttribute methods
@@ -668,7 +724,7 @@
         :type attribute: string
         :param attribute: The attribute you need information about
                           Valid choices are:
-                          
+
                           * instanceType|kernel|ramdisk|userData|
                           * disableApiTermination|
                           * instanceInitiatedShutdownBehavior|
@@ -693,7 +749,7 @@
 
         :type attribute: string
         :param attribute: The attribute you wish to change.
-        
+
                           * AttributeName - Expected value (default)
                           * instanceType - A valid instance type (m1.small)
                           * kernel - Kernel ID (None)
@@ -745,10 +801,10 @@
                                        filters=None):
         """
         Retrieve all the spot instances requests associated with your account.
-        
+
         :type request_ids: list
         :param request_ids: A list of strings of spot instance request IDs
-        
+
         :type filters: dict
         :param filters: Optional filters that can be used to limit
                         the results returned.  Filters are provided
@@ -767,30 +823,41 @@
         if request_ids:
             self.build_list_params(params, request_ids, 'SpotInstanceRequestId')
         if filters:
+            if 'launch.group-id' in filters:
+                warnings.warn("The 'launch.group-id' filter now requires a "
+                              "security group id (sg-*) and no longer supports "
+                              "filtering by group name. Please update your "
+                              "filters accordingly.", UserWarning)
             self.build_filter_params(params, filters)
         return self.get_list('DescribeSpotInstanceRequests', params,
                              [('item', SpotInstanceRequest)], verb='POST')
 
     def get_spot_price_history(self, start_time=None, end_time=None,
-                               instance_type=None, product_description=None):
+                               instance_type=None, product_description=None,
+                               availability_zone=None):
         """
         Retrieve the recent history of spot instances pricing.
-        
+
         :type start_time: str
         :param start_time: An indication of how far back to provide price
                            changes for. An ISO8601 DateTime string.
-        
+
         :type end_time: str
         :param end_time: An indication of how far forward to provide price
                          changes for.  An ISO8601 DateTime string.
-        
+
         :type instance_type: str
         :param instance_type: Filter responses to a particular instance type.
-        
+
         :type product_description: str
-        :param product_descripton: Filter responses to a particular platform.
-                                   Valid values are currently: Linux
-        
+        :param product_description: Filter responses to a particular platform.
+                                    Valid values are currently: "Linux/UNIX",
+                                    "SUSE Linux", and "Windows"
+
+        :type availability_zone: str
+        :param availability_zone: The availability zone for which prices
+                                  should be returned
+
         :rtype: list
         :return: A list of :class:`boto.ec2.spotpricehistory.SpotPriceHistory`
                  objects, each carrying a price and timestamp.
         """
@@ -803,6 +870,8 @@
             params['InstanceType'] = instance_type
         if product_description:
             params['ProductDescription'] = product_description
+        if availability_zone:
+            params['AvailabilityZone'] = availability_zone
         return self.get_list('DescribeSpotPriceHistory', params,
                              [('item', SpotPriceHistory)], verb='POST')
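`get_spot_price_history()` gains an `availability_zone` argument above. A standalone sketch of the request parameters the method builds, including the new `AvailabilityZone` key (the helper name is illustrative, not boto's):

```python
def spot_price_history_params(start_time=None, end_time=None,
                              instance_type=None, product_description=None,
                              availability_zone=None):
    """Sketch of the DescribeSpotPriceHistory request parameters,
    including the AvailabilityZone filter this hunk introduces.
    Only the arguments the caller supplies end up in the request."""
    params = {}
    if start_time:
        params['StartTime'] = start_time
    if end_time:
        params['EndTime'] = end_time
    if instance_type:
        params['InstanceType'] = instance_type
    if product_description:
        params['ProductDescription'] = product_description
    if availability_zone:
        params['AvailabilityZone'] = availability_zone
    return params
```

This lets callers ask for pricing in a single zone, e.g. `conn.get_spot_price_history(instance_type='m1.small', availability_zone='us-east-1a')`.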
 
@@ -820,13 +889,13 @@
 
         :type price: str
         :param price: The maximum price of your bid
-        
+
         :type image_id: string
         :param image_id: The ID of the image to run
 
         :type count: int
         :param count: The number of instances to request
-        
+
         :type type: str
         :param type: Type of request. Can be 'one-time' or 'persistent'.
                      Default is one-time.
@@ -840,12 +909,12 @@
         :type launch_group: str
         :param launch_group: If supplied, all requests will be fulfilled
                              as a group.
-                             
+
         :type availability_zone_group: str
         :param availability_zone_group: If supplied, all requests will be
                                         fulfilled within a single
                                         availability zone.
-                             
+
         :type key_name: string
         :param key_name: The name of the key pair with which to launch instances
 
@@ -858,7 +927,7 @@
 
         :type instance_type: string
         :param instance_type: The type of instance to run:
-                              
+
                               * m1.small
                               * m1.large
                               * m1.xlarge
@@ -943,14 +1012,14 @@
                              [('item', SpotInstanceRequest)],
                              verb='POST')
 
-        
+
     def cancel_spot_instance_requests(self, request_ids):
         """
         Cancel the specified Spot Instance Requests.
-        
+
         :type request_ids: list
         :param request_ids: A list of strings of the Request IDs to terminate
-        
+
         :rtype: list
         :return: A list of the instances terminated
         """
@@ -964,7 +1033,7 @@
         """
         Return the current spot instance data feed subscription
         associated with this account, if any.
-        
+
         :rtype: :class:`boto.ec2.spotdatafeedsubscription.SpotDatafeedSubscription`
         :return: The datafeed subscription object or None
         """
@@ -984,7 +1053,7 @@
         :type prefix: str or unicode
         :param prefix: An optional prefix that will be pre-pended to all
                        data files written to the bucket.
-                       
+
         :rtype: :class:`boto.ec2.spotdatafeedsubscription.SpotDatafeedSubscription`
         :return: The datafeed subscription object or None
         """
@@ -998,11 +1067,12 @@
         """
         Delete the current spot instance data feed subscription
         associated with this account
-        
+
         :rtype: bool
         :return: True if successful
         """
-        return self.get_status('DeleteSpotDatafeedSubscription', None, verb='POST')
+        return self.get_status('DeleteSpotDatafeedSubscription',
+                               None, verb='POST')
 
     # Zone methods
 
@@ -1033,11 +1103,12 @@
             self.build_list_params(params, zones, 'ZoneName')
         if filters:
             self.build_filter_params(params, filters)
-        return self.get_list('DescribeAvailabilityZones', params, [('item', Zone)], verb='POST')
+        return self.get_list('DescribeAvailabilityZones', params,
+                             [('item', Zone)], verb='POST')
 
     # Address methods
 
-    def get_all_addresses(self, addresses=None, filters=None):
+    def get_all_addresses(self, addresses=None, filters=None, allocation_ids=None):
         """
         Get all EIP's associated with the current credentials.
 
@@ -1056,65 +1127,106 @@
                         being performed.  Check the EC2 API guide
                         for details.
 
+        :type allocation_ids: list
+        :param allocation_ids: Optional list of allocation IDs.  If this list
+                               is present, only the Addresses associated with
+                               the given allocation IDs will be returned.
+
         :rtype: list of :class:`boto.ec2.address.Address`
         :return: The requested Address objects
         """
         params = {}
         if addresses:
             self.build_list_params(params, addresses, 'PublicIp')
+        if allocation_ids:
+            self.build_list_params(params, allocation_ids, 'AllocationId')
         if filters:
             self.build_filter_params(params, filters)
         return self.get_list('DescribeAddresses', params, [('item', Address)], verb='POST')
 
-    def allocate_address(self):
+    def allocate_address(self, domain=None):
         """
         Allocate a new Elastic IP address and associate it with your account.
 
+        :type domain: string
+        :param domain: Set to 'vpc' to allocate the address for use
+                       within Amazon VPC.
+
         :rtype: :class:`boto.ec2.address.Address`
         :return: The newly allocated Address
         """
-        return self.get_object('AllocateAddress', {}, Address, verb='POST')
+        params = {}
 
-    def associate_address(self, instance_id, public_ip):
+        if domain is not None:
+            params['Domain'] = domain
+
+        return self.get_object('AllocateAddress', params, Address, verb='POST')
+
+    def associate_address(self, instance_id, public_ip=None, allocation_id=None):
         """
         Associate an Elastic IP address with a currently running instance.
+        This requires one of ``public_ip`` or ``allocation_id``, depending
+        on whether you're associating a VPC address or a plain EC2 address.
 
         :type instance_id: string
         :param instance_id: The ID of the instance
 
         :type public_ip: string
-        :param public_ip: The public IP address
+        :param public_ip: The public IP address for EC2-based allocations.
+
+        :type allocation_id: string
+        :param allocation_id: The allocation ID for a VPC-based elastic IP.
 
         :rtype: bool
         :return: True if successful
         """
-        params = {'InstanceId' : instance_id, 'PublicIp' : public_ip}
+        params = { 'InstanceId' : instance_id }
+
+        if public_ip is not None:
+            params['PublicIp'] = public_ip
+        elif allocation_id is not None:
+            params['AllocationId'] = allocation_id
+
         return self.get_status('AssociateAddress', params, verb='POST')
 
-    def disassociate_address(self, public_ip):
+    def disassociate_address(self, public_ip=None, association_id=None):
         """
         Disassociate an Elastic IP address from a currently running instance.
 
         :type public_ip: string
-        :param public_ip: The public IP address
+        :param public_ip: The public IP address for EC2 elastic IPs.
+
+        :type association_id: string
+        :param association_id: The association ID for a VPC-based Elastic IP.
 
         :rtype: bool
         :return: True if successful
         """
-        params = {'PublicIp' : public_ip}
+        params = {}
+
+        if public_ip is not None:
+            params['PublicIp'] = public_ip
+        elif association_id is not None:
+            params['AssociationId'] = association_id
+
         return self.get_status('DisassociateAddress', params, verb='POST')
 
-    def release_address(self, public_ip):
+    def release_address(self, public_ip=None, allocation_id=None):
         """
-        Free up an Elastic IP address
+        Free up an Elastic IP address.
 
         :type public_ip: string
-        :param public_ip: The public IP address
+        :param public_ip: The public IP address for EC2 elastic IPs.
+
+        :type allocation_id: string
+        :param allocation_id: The ID for VPC elastic IPs.
 
         :rtype: bool
         :return: True if successful
         """
-        params = {'PublicIp' : public_ip}
+        params = {}
+
+        if public_ip is not None:
+            params['PublicIp'] = public_ip
+        elif allocation_id is not None:
+            params['AllocationId'] = allocation_id
+
         return self.get_status('ReleaseAddress', params, verb='POST')
 
     # Volume methods
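The address hunks above give `associate_address`, `disassociate_address` and `release_address` a common shape: classic EC2 addresses are identified by `PublicIp`, VPC addresses by `AllocationId` (or `AssociationId` for disassociation). A standalone sketch of that parameter selection:

```python
def address_params(public_ip=None, allocation_id=None):
    """Sketch of the either/or parameter selection the patched address
    methods share: exactly one identifier is sent, PublicIp for classic
    EC2 Elastic IPs, AllocationId for VPC Elastic IPs."""
    params = {}
    if public_ip is not None:
        params['PublicIp'] = public_ip
    elif allocation_id is not None:
        params['AllocationId'] = allocation_id
    return params
```

Note that `public_ip` wins when both are passed, mirroring the `if`/`elif` ordering in the patch.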
@@ -1124,9 +1236,9 @@
         Get all Volumes associated with the current credentials.
 
         :type volume_ids: list
-        :param volume_ids: Optional list of volume ids.  If this list is present,
-                           only the volumes associated with these volume ids
-                           will be returned.
+        :param volume_ids: Optional list of volume ids.  If this list
+                           is present, only the volumes associated with
+                           these volume ids will be returned.
 
         :type filters: dict
         :param filters: Optional filters that can be used to limit
@@ -1146,7 +1258,8 @@
             self.build_list_params(params, volume_ids, 'VolumeId')
         if filters:
             self.build_filter_params(params, filters)
-        return self.get_list('DescribeVolumes', params, [('item', Volume)], verb='POST')
+        return self.get_list('DescribeVolumes', params,
+                             [('item', Volume)], verb='POST')
 
     def create_volume(self, size, zone, snapshot=None):
         """
@@ -1261,7 +1374,7 @@
         :type owner: str
         :param owner: If present, only the snapshots owned by the specified user
                       will be returned.  Valid values are:
-                      
+
                       * self
                       * amazon
                       * AWS Account ID
@@ -1292,7 +1405,8 @@
             params['RestorableBy'] = restorable_by
         if filters:
             self.build_filter_params(params, filters)
-        return self.get_list('DescribeSnapshots', params, [('item', Snapshot)], verb='POST')
+        return self.get_list('DescribeSnapshots', params,
+                             [('item', Snapshot)], verb='POST')
 
     def create_snapshot(self, volume_id, description=None):
         """
@@ -1311,7 +1425,8 @@
         params = {'VolumeId' : volume_id}
         if description:
             params['Description'] = description[0:255]
-        snapshot = self.get_object('CreateSnapshot', params, Snapshot, verb='POST')
+        snapshot = self.get_object('CreateSnapshot', params,
+                                   Snapshot, verb='POST')
         volume = self.get_all_volumes([volume_id])[0]
         volume_name = volume.tags.get('Name')
         if volume_name:
@@ -1322,21 +1437,26 @@
         params = {'SnapshotId': snapshot_id}
         return self.get_status('DeleteSnapshot', params, verb='POST')
 
-    def trim_snapshots(self, hourly_backups = 8, daily_backups = 7, weekly_backups = 4):
+    def trim_snapshots(self, hourly_backups = 8, daily_backups = 7,
+                       weekly_backups = 4):
         """
-        Trim excess snapshots, based on when they were taken. More current snapshots are 
-        retained, with the number retained decreasing as you move back in time.
+        Trim excess snapshots, based on when they were taken. More current
+        snapshots are retained, with the number retained decreasing as you
+        move back in time.
 
-        If ebs volumes have a 'Name' tag with a value, their snapshots will be assigned the same 
-        tag when they are created. The values of the 'Name' tags for snapshots are used by this
-        function to group snapshots taken from the same volume (or from a series of like-named
-        volumes over time) for trimming.
+        If ebs volumes have a 'Name' tag with a value, their snapshots
+        will be assigned the same tag when they are created. The values
+        of the 'Name' tags for snapshots are used by this function to
+        group snapshots taken from the same volume (or from a series
+        of like-named volumes over time) for trimming.
 
-        For every group of like-named snapshots, this function retains the newest and oldest 
-        snapshots, as well as, by default,  the first snapshots taken in each of the last eight 
-        hours, the first snapshots taken in each of the last seven days, the first snapshots 
-        taken in the last 4 weeks (counting Midnight Sunday morning as the start of the week), 
-        and the first snapshot from the first Sunday of each month forever.
+        For every group of like-named snapshots, this function retains
+        the newest and oldest snapshots, as well as, by default, the
+        first snapshots taken in each of the last eight hours, the first
+        snapshots taken in each of the last seven days, the first snapshots
+        taken in the last 4 weeks (counting Midnight Sunday morning as
+        the start of the week), and the first snapshot from the first
+        Sunday of each month forever.
 
         :type hourly_backups: int
         :param hourly_backups: How many recent hourly backups should be saved.
@@ -1348,15 +1468,18 @@
         :param weekly_backups: How many recent weekly backups should be saved.
         """
 
-        # This function first builds up an ordered list of target times that snapshots should be saved for 
-        # (last 8 hours, last 7 days, etc.). Then a map of snapshots is constructed, with the keys being
-        # the snapshot / volume names and the values being arrays of chornologically sorted snapshots.
-        # Finally, for each array in the map, we go through the snapshot array and the target time array
-        # in an interleaved fashion, deleting snapshots whose start_times don't immediately follow a
-        # target time (we delete a snapshot if there's another snapshot that was made closer to the
-        # preceding target time).
+        # This function first builds up an ordered list of target times
+        # that snapshots should be saved for (last 8 hours, last 7 days, etc.).
+        # Then a map of snapshots is constructed, with the keys being
+        # the snapshot / volume names and the values being arrays of
+        # chronologically sorted snapshots.
+        # Finally, for each array in the map, we go through the snapshot
+        # array and the target time array in an interleaved fashion,
+        # deleting snapshots whose start_times don't immediately follow a
+        # target time (we delete a snapshot if there's another snapshot
+        # that was made closer to the preceding target time).
 
-        now = datetime.utcnow() # work with UTC time, which is what the snapshot start time is reported in
+        # work with UTC time, which is what the
+        # snapshot start time is reported in
+        now = datetime.utcnow()
         last_hour = datetime(now.year, now.month, now.day, now.hour)
         last_midnight = datetime(now.year, now.month, now.day)
         last_sunday = datetime(now.year, now.month, now.day) - timedelta(days = (now.weekday() + 1) % 7)
@@ -1364,7 +1487,8 @@
 
         target_backup_times = []
 
-        oldest_snapshot_date = datetime(2007, 1, 1) # there are no snapshots older than 1/1/2007
+        # there are no snapshots older than 1/1/2007
+        oldest_snapshot_date = datetime(2007, 1, 1)
 
         for hour in range(0, hourly_backups):
             target_backup_times.append(last_hour - timedelta(hours = hour))
@@ -1377,13 +1501,16 @@
 
         one_day = timedelta(days = 1)
         while start_of_month > oldest_snapshot_date:
-            # append the start of the month to the list of snapshot dates to save:
+            # append the start of the month to the list of
+            # snapshot dates to save:
             target_backup_times.append(start_of_month)
             # there's no timedelta setting for one month, so instead:
-            # decrement the day by one, so we go to the final day of the previous month...
+            # decrement the day by one, so we go to the final day of
+            # the previous month...
             start_of_month -= one_day
             # ... and then go to the first day of that previous month:
-            start_of_month = datetime(start_of_month.year, start_of_month.month, 1)
+            start_of_month = datetime(start_of_month.year,
+                                      start_of_month.month, 1)
 
         temp = []
 
@@ -1392,14 +1519,18 @@
                 temp.append(t)
 
         target_backup_times = temp
-        target_backup_times.reverse() # make the oldest date first
+        # make the oldest dates first, and make sure the month starts
+        # and the last four weeks' starts are in the proper order
+        target_backup_times.sort()
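The construction of the target-time list can be sketched as a standalone function. This is a simplified rendering of the logic above (hourly, daily, and weekly boundaries plus every month start back to the oldest possible snapshot, deduplicated and sorted oldest-first); `build_target_backup_times` is a hypothetical name, not boto API:

```python
from datetime import datetime, timedelta

def build_target_backup_times(now, hourly_backups=8, daily_backups=7,
                              weekly_backups=4,
                              oldest_snapshot_date=datetime(2007, 1, 1)):
    last_hour = datetime(now.year, now.month, now.day, now.hour)
    last_midnight = datetime(now.year, now.month, now.day)
    # Sunday-start weeks, as in the docstring above
    last_sunday = last_midnight - timedelta(days=(now.weekday() + 1) % 7)

    targets = []
    for hour in range(hourly_backups):
        targets.append(last_hour - timedelta(hours=hour))
    for day in range(daily_backups):
        targets.append(last_midnight - timedelta(days=day))
    for week in range(weekly_backups):
        targets.append(last_sunday - timedelta(weeks=week))

    start_of_month = datetime(now.year, now.month, 1)
    while start_of_month > oldest_snapshot_date:
        targets.append(start_of_month)
        # no one-month timedelta: step back to the last day of the
        # previous month, then snap to its first day
        start_of_month -= timedelta(days=1)
        start_of_month = datetime(start_of_month.year,
                                  start_of_month.month, 1)

    # deduplicate (boundaries can coincide) and order oldest-first
    return sorted(set(targets))
```

Sorting rather than reversing matters because the month starts are appended newest-first while the trimming scan below expects a strictly ascending list.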
 
-        # get all the snapshots, sort them by date and time, and organize them into one array for each volume:
+        # get all the snapshots, sort them by date and time, and
+        # organize them into one array for each volume:
         all_snapshots = self.get_all_snapshots(owner = 'self')
-        all_snapshots.sort(cmp = lambda x, y: cmp(x.start_time, y.start_time)) # oldest first
+        all_snapshots.sort(cmp = lambda x, y: cmp(x.start_time, y.start_time))
         snaps_for_each_volume = {}
         for snap in all_snapshots:
-            # the snapshot name and the volume name are the same. The snapshot name is set from the volume
+            # the snapshot name and the volume name are the same.
+            # The snapshot name is set from the volume
             # name at the time the snapshot is taken
             volume_name = snap.tags.get('Name')
             if volume_name:
@@ -1410,34 +1541,46 @@
                     snaps_for_each_volume[volume_name] = snaps_for_volume
                 snaps_for_volume.append(snap)
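The grouping loop above can be sketched in isolation. `SnapStub` is a hypothetical stand-in for a boto Snapshot, assumed only to carry a `tags` dict:

```python
def group_snapshots_by_name(snapshots):
    # Bucket chronologically sorted snapshots by their 'Name' tag;
    # snapshots without a 'Name' tag are skipped, as in the loop above.
    groups = {}
    for snap in snapshots:
        volume_name = snap.tags.get('Name')
        if volume_name:
            groups.setdefault(volume_name, []).append(snap)
    return groups


class SnapStub(object):
    # hypothetical stand-in for boto.ec2.snapshot.Snapshot
    def __init__(self, tags):
        self.tags = tags
```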
 
-        # Do a running comparison of snapshot dates to desired time periods, keeping the oldest snapshot in each
+        # Do a running comparison of snapshot dates to desired time
+        # periods, keeping the oldest snapshot in each
         # time period and deleting the rest:
         for volume_name in snaps_for_each_volume:
             snaps = snaps_for_each_volume[volume_name]
-            snaps = snaps[:-1] # never delete the newest snapshot, so remove it from consideration
+            snaps = snaps[:-1] # never delete the newest snapshot
             time_period_number = 0
             snap_found_for_this_time_period = False
             for snap in snaps:
                 check_this_snap = True
                 while check_this_snap and time_period_number < target_backup_times.__len__():
-                    snap_date = datetime.strptime(snap.start_time, '%Y-%m-%dT%H:%M:%S.000Z')
+                    snap_date = datetime.strptime(snap.start_time,
+                                                  '%Y-%m-%dT%H:%M:%S.000Z')
                     if snap_date < target_backup_times[time_period_number]:
-                        # the snap date is before the cutoff date. Figure out if it's the first snap in this
-                        # date range and act accordingly (since both date the date ranges and the snapshots
-                        # are sorted chronologically, we know this snapshot isn't in an earlier date range):
+                        # the snap date is before the cutoff date.
+                        # Figure out if it's the first snap in this
+                        # date range and act accordingly (since both
+                        # the date ranges and the snapshots are sorted
+                        # chronologically, we know this snapshot isn't
+                        # in an earlier date range):
                         if snap_found_for_this_time_period == True:
                             if not snap.tags.get('preserve_snapshot'):
-                                # as long as the snapshot wasn't marked with the 'preserve_snapshot' tag, delete it:
-                                self.delete_snapshot(snap.id)
-                                boto.log.info('Trimmed snapshot %s (%s)' % (snap.tags['Name'], snap.start_time))
-                            # go on and look at the next snapshot, leaving the time period alone
+                                # as long as the snapshot wasn't marked
+                                # with the 'preserve_snapshot' tag, delete it:
+                                try:
+                                    self.delete_snapshot(snap.id)
+                                    boto.log.info('Trimmed snapshot %s (%s)' % (snap.tags['Name'], snap.start_time))
+                                except EC2ResponseError:
+                                    boto.log.error('Attempt to trim snapshot %s (%s) failed. Possible result of a race condition with trimming on another server?' % (snap.tags['Name'], snap.start_time))
+                            # go on and look at the next snapshot,
+                            # leaving the time period alone
                         else:
-                            # this was the first snapshot found for this time period. Leave it alone and look at the 
+                            # this was the first snapshot found for this
+                            # time period. Leave it alone and look at the
                             # next snapshot:
                             snap_found_for_this_time_period = True
                         check_this_snap = False
                     else:
-                        # the snap is after the cutoff date. Check it against the next cutoff date
+                        # the snap is after the cutoff date. Check it
+                        # against the next cutoff date
                         time_period_number += 1
                         snap_found_for_this_time_period = False
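The keep/delete decision above can be sketched with plain comparables. This is a simplified standalone rendering of the interleaved scan (ignoring tags and the newest-snapshot exclusion); `select_snapshots_to_delete` is a hypothetical name:

```python
def select_snapshots_to_delete(snap_dates, target_times):
    # Walk chronologically sorted snapshot dates against sorted target
    # times: the first snapshot falling before each target time is
    # kept, later snapshots in the same period are marked for deletion.
    to_delete = []
    period = 0
    found_for_period = False
    for d in snap_dates:
        check = True
        while check and period < len(target_times):
            if d < target_times[period]:
                if found_for_period:
                    to_delete.append(d)
                else:
                    found_for_period = True
                check = False
            else:
                # past this cutoff; try the next period
                period += 1
                found_for_period = False
    return to_delete
```

Snapshots newer than every target time fall out of the inner loop untouched, so they are always retained, matching the behavior of the loop above.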
 
@@ -1453,7 +1596,7 @@
 
         :type attribute: str
         :param attribute: The requested attribute.  Valid values are:
-        
+
                           * createVolumePermission
 
         :rtype: list of :class:`boto.ec2.snapshotattribute.SnapshotAttribute`
@@ -1545,7 +1688,8 @@
             self.build_list_params(params, keynames, 'KeyName')
         if filters:
             self.build_filter_params(params, filters)
-        return self.get_list('DescribeKeyPairs', params, [('item', KeyPair)], verb='POST')
+        return self.get_list('DescribeKeyPairs', params,
+                             [('item', KeyPair)], verb='POST')
 
     def get_key_pair(self, keyname):
         """
@@ -1621,13 +1765,14 @@
                  The material attribute of the new KeyPair object
                  will contain the the unencrypted PEM encoded RSA private key.
         """
+        public_key_material = base64.b64encode(public_key_material)
         params = {'KeyName' : key_name,
                   'PublicKeyMaterial' : public_key_material}
         return self.get_object('ImportKeyPair', params, KeyPair, verb='POST')
 
     # SecurityGroup methods
 
-    def get_all_security_groups(self, groupnames=None, filters=None):
+    def get_all_security_groups(self, groupnames=None, group_ids=None,
+                                filters=None):
         """
         Get all security groups associated with your account in a region.
 
@@ -1636,6 +1781,10 @@
                            If not provided, all security groups will be
                            returned.
 
+        :type group_ids: list
+        :param group_ids: A list of IDs of security groups to retrieve for
+                          security groups within a VPC.
+
         :type filters: dict
         :param filters: Optional filters that can be used to limit
                         the results returned.  Filters are provided
@@ -1650,14 +1799,17 @@
         :return: A list of :class:`boto.ec2.securitygroup.SecurityGroup`
         """
         params = {}
-        if groupnames:
+        if groupnames is not None:
             self.build_list_params(params, groupnames, 'GroupName')
-        if filters:
+        if group_ids is not None:
+            self.build_list_params(params, group_ids, 'GroupId')
+        if filters is not None:
             self.build_filter_params(params, filters)
+
         return self.get_list('DescribeSecurityGroups', params,
                              [('item', SecurityGroup)], verb='POST')
 
-    def create_security_group(self, name, description):
+    def create_security_group(self, name, description, vpc_id=None):
         """
         Create a new security group for your account.
         This will create the security group within the region you
@@ -1669,35 +1821,60 @@
         :type description: string
         :param description: The description of the new security group
 
+        :type vpc_id: string
+        :param vpc_id: The ID of the VPC to create the security group in,
+                       if any.
+
         :rtype: :class:`boto.ec2.securitygroup.SecurityGroup`
         :return: The newly created :class:`boto.ec2.keypair.KeyPair`.
         """
-        params = {'GroupName':name, 'GroupDescription':description}
-        group = self.get_object('CreateSecurityGroup', params, SecurityGroup, verb='POST')
+        params = {
+            'GroupName': name,
+            'GroupDescription': description
+        }
+
+        if vpc_id is not None:
+            params['VpcId'] = vpc_id
+
+        group = self.get_object('CreateSecurityGroup', params,
+                                SecurityGroup, verb='POST')
         group.name = name
         group.description = description
         return group
 
-    def delete_security_group(self, name):
+    def delete_security_group(self, name=None, group_id=None):
         """
         Delete a security group from your account.
 
-        :type key_name: string
-        :param key_name: The name of the keypair to delete
+        :type name: string
+        :param name: The name of the security group to delete.
+
+        :type group_id: string
+        :param group_id: The ID of the security group to delete within
+          a VPC.
+
+        :rtype: bool
+        :return: True if successful.
         """
-        params = {'GroupName':name}
+        params = {}
+
+        if name is not None:
+            params['GroupName'] = name
+        elif group_id is not None:
+            params['GroupId'] = group_id
+
         return self.get_status('DeleteSecurityGroup', params, verb='POST')
 
-    def _authorize_deprecated(self, group_name, src_security_group_name=None,
-                              src_security_group_owner_id=None):
+    def authorize_security_group_deprecated(self, group_name,
+                                            src_security_group_name=None,
+                                            src_security_group_owner_id=None,
+                                            ip_protocol=None,
+                                            from_port=None, to_port=None,
+                                            cidr_ip=None):
         """
-        This method is called only when someone tries to authorize a group
-        without specifying a from_port or to_port.  Until recently, that was
-        the only way to do group authorization but the EC2 API has been
-        changed to now require a from_port and to_port when specifying a
-        group.  This is a much better approach but I don't want to break
-        existing boto applications that depend on the old behavior, hence
-        this kludge.
+        NOTE: This method uses the old-style request parameters
+              that did not allow a port to be specified when
+              authorizing a group.
 
         :type group_name: string
         :param group_name: The name of the security group you are adding
@@ -1711,22 +1888,43 @@
         :param src_security_group_owner_id: The ID of the owner of the security
                                             group you are granting access to.
 
+        :type ip_protocol: string
+        :param ip_protocol: Either tcp | udp | icmp
+
+        :type from_port: int
+        :param from_port: The beginning port number you are enabling
+
+        :type to_port: int
+        :param to_port: The ending port number you are enabling
+
+        :type cidr_ip: string
+        :param cidr_ip: The CIDR block you are providing access to.
+                        See http://goo.gl/Yj5QC
+
         :rtype: bool
         :return: True if successful.
         """
-        warnings.warn('FromPort and ToPort now required for group authorization',
-                      DeprecationWarning)
         params = {'GroupName':group_name}
         if src_security_group_name:
             params['SourceSecurityGroupName'] = src_security_group_name
         if src_security_group_owner_id:
             params['SourceSecurityGroupOwnerId'] = src_security_group_owner_id
-        return self.get_status('AuthorizeSecurityGroupIngress', params, verb='POST')
+        if ip_protocol:
+            params['IpProtocol'] = ip_protocol
+        if from_port:
+            params['FromPort'] = from_port
+        if to_port:
+            params['ToPort'] = to_port
+        if cidr_ip:
+            params['CidrIp'] = cidr_ip
+        return self.get_status('AuthorizeSecurityGroupIngress',
+                               params, verb='POST')
 
-    def authorize_security_group(self, group_name, src_security_group_name=None,
+    def authorize_security_group(self, group_name=None,
+                                 src_security_group_name=None,
                                  src_security_group_owner_id=None,
                                  ip_protocol=None, from_port=None, to_port=None,
-                                 cidr_ip=None):
+                                 cidr_ip=None, group_id=None,
+                                 src_security_group_group_id=None):
         """
         Add a new rule to an existing security group.
         You need to pass in either src_security_group_name and
@@ -1757,70 +1955,163 @@
 
         :type cidr_ip: string
         :param cidr_ip: The CIDR block you are providing access to.
-                        See http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
+                        See http://goo.gl/Yj5QC
+
+        :type group_id: string
+        :param group_id: ID of the EC2 or VPC security group to modify.
+                         This is required for VPC security groups and
+                         can be used instead of group_name for EC2
+                         security groups.
+
+        :type src_security_group_group_id: string
+        :param src_security_group_group_id: ID of the EC2 or VPC source
+                         security group.  This is required for VPC
+                         security groups and can be used instead of
+                         src_security_group_name for EC2 security groups.
 
         :rtype: bool
         :return: True if successful.
         """
         if src_security_group_name:
             if from_port is None and to_port is None and ip_protocol is None:
-                return self._authorize_deprecated(group_name,
-                                                  src_security_group_name,
-                                                  src_security_group_owner_id)
-        params = {'GroupName':group_name}
+                return self.authorize_security_group_deprecated(
+                    group_name, src_security_group_name,
+                    src_security_group_owner_id)
+
+        params = {}
+
+        if group_name:
+            params['GroupName'] = group_name
+        if group_id:
+            params['GroupId'] = group_id
         if src_security_group_name:
-            params['IpPermissions.1.Groups.1.GroupName'] = src_security_group_name
+            param_name = 'IpPermissions.1.Groups.1.GroupName'
+            params[param_name] = src_security_group_name
         if src_security_group_owner_id:
-            params['IpPermissions.1.Groups.1.UserId'] = src_security_group_owner_id
+            param_name = 'IpPermissions.1.Groups.1.UserId'
+            params[param_name] = src_security_group_owner_id
+        if src_security_group_group_id:
+            param_name = 'IpPermissions.1.Groups.1.GroupId'
+            params[param_name] = src_security_group_group_id
         if ip_protocol:
             params['IpPermissions.1.IpProtocol'] = ip_protocol
-        if from_port:
+        if from_port is not None:
             params['IpPermissions.1.FromPort'] = from_port
-        if to_port:
+        if to_port is not None:
             params['IpPermissions.1.ToPort'] = to_port
         if cidr_ip:
             params['IpPermissions.1.IpRanges.1.CidrIp'] = cidr_ip
-        return self.get_status('AuthorizeSecurityGroupIngress', params, verb='POST')
 
-    def _revoke_deprecated(self, group_name, src_security_group_name=None,
-                           src_security_group_owner_id=None):
+        return self.get_status('AuthorizeSecurityGroupIngress',
+                               params, verb='POST')
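The `is not None` checks on the port parameters above matter because port 0 is a legal value that an `if from_port:` truthiness test would silently drop (as would ICMP's conventional -1). A standalone sketch of the query-parameter flattening, with `build_ingress_params` a hypothetical helper rather than boto API:

```python
def build_ingress_params(group_name=None, group_id=None,
                         src_security_group_name=None,
                         ip_protocol=None, from_port=None, to_port=None,
                         cidr_ip=None):
    # Flatten one ingress rule into EC2's IpPermissions.1.* convention.
    params = {}
    if group_name:
        params['GroupName'] = group_name
    if group_id:
        params['GroupId'] = group_id
    if src_security_group_name:
        params['IpPermissions.1.Groups.1.GroupName'] = src_security_group_name
    if ip_protocol:
        params['IpPermissions.1.IpProtocol'] = ip_protocol
    # port 0 is valid, so compare against None rather than truthiness
    if from_port is not None:
        params['IpPermissions.1.FromPort'] = from_port
    if to_port is not None:
        params['IpPermissions.1.ToPort'] = to_port
    if cidr_ip:
        params['IpPermissions.1.IpRanges.1.CidrIp'] = cidr_ip
    return params
```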
+
+    def authorize_security_group_egress(self,
+                                        group_id,
+                                        ip_protocol,
+                                        from_port=None,
+                                        to_port=None,
+                                        src_group_id=None,
+                                        cidr_ip=None):
         """
-        This method is called only when someone tries to revoke a group
-        without specifying a from_port or to_port.  Until recently, that was
-        the only way to do group revocation but the EC2 API has been
-        changed to now require a from_port and to_port when specifying a
-        group.  This is a much better approach but I don't want to break
-        existing boto applications that depend on the old behavior, hence
-        this kludge.
+        The action adds one or more egress rules to a VPC security
+        group. Specifically, this action permits instances in a
+        security group to send traffic to one or more destination
+        CIDR IP address ranges, or to one or more destination
+        security groups in the same VPC.
+        """
+        params = {
+            'GroupId': group_id,
+            'IpPermissions.1.IpProtocol': ip_protocol
+        }
+
+        if from_port is not None:
+            params['IpPermissions.1.FromPort'] = from_port
+        if to_port is not None:
+            params['IpPermissions.1.ToPort'] = to_port
+        if src_group_id is not None:
+            params['IpPermissions.1.Groups.1.GroupId'] = src_group_id
+        if cidr_ip is not None:
+            params['IpPermissions.1.IpRanges.1.CidrIp'] = cidr_ip
+
+        return self.get_status('AuthorizeSecurityGroupEgress',
+                               params, verb='POST')
+
+    def revoke_security_group_deprecated(self, group_name,
+                                         src_security_group_name=None,
+                                         src_security_group_owner_id=None,
+                                         ip_protocol=None,
+                                         from_port=None, to_port=None,
+                                         cidr_ip=None):
+        """
+        NOTE: This method uses the old-style request parameters
+              that did not allow a port to be specified when
+              revoking a group.
+
+        Remove an existing rule from an existing security group.
+        You need to pass in either src_security_group_name and
+        src_security_group_owner_id OR ip_protocol, from_port, to_port,
+        and cidr_ip.  In other words, either you are revoking another
+        group or you are revoking some ip-based rule.
 
         :type group_name: string
-        :param group_name: The name of the security group you are adding
-                           the rule to.
+        :param group_name: The name of the security group you are removing
+                           the rule from.
 
         :type src_security_group_name: string
         :param src_security_group_name: The name of the security group you are
-                                        granting access to.
+                                        revoking access to.
 
         :type src_security_group_owner_id: string
         :param src_security_group_owner_id: The ID of the owner of the security
-                                            group you are granting access to.
+                                            group you are revoking access to.
+
+        :type ip_protocol: string
+        :param ip_protocol: Either tcp | udp | icmp
+
+        :type from_port: int
+        :param from_port: The beginning port number you are disabling
+
+        :type to_port: int
+        :param to_port: The ending port number you are disabling
+
+        :type cidr_ip: string
+        :param cidr_ip: The CIDR block you are revoking access to.
+                        See http://goo.gl/Yj5QC
+
         :rtype: bool
         :return: True if successful.
         """
-        warnings.warn('FromPort and ToPort now required for group authorization',
-                      DeprecationWarning)
         params = {'GroupName':group_name}
         if src_security_group_name:
             params['SourceSecurityGroupName'] = src_security_group_name
         if src_security_group_owner_id:
             params['SourceSecurityGroupOwnerId'] = src_security_group_owner_id
-        return self.get_status('RevokeSecurityGroupIngress', params, verb='POST')
+        if ip_protocol:
+            params['IpProtocol'] = ip_protocol
+        if from_port:
+            params['FromPort'] = from_port
+        if to_port:
+            params['ToPort'] = to_port
+        if cidr_ip:
+            params['CidrIp'] = cidr_ip
+        return self.get_status('RevokeSecurityGroupIngress',
+                               params, verb='POST')
 
     def revoke_security_group(self, group_name, src_security_group_name=None,
                               src_security_group_owner_id=None,
                               ip_protocol=None, from_port=None, to_port=None,
-                              cidr_ip=None):
+                              cidr_ip=None, group_id=None,
+                              src_security_group_group_id=None):
         """
         Remove an existing rule from an existing security group.
         You need to pass in either src_security_group_name and
@@ -1851,30 +2142,35 @@
 
         :type cidr_ip: string
         :param cidr_ip: The CIDR block you are revoking access to.
-                        See http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
+                        See http://goo.gl/Yj5QC
 
         :rtype: bool
         :return: True if successful.
         """
         if src_security_group_name:
             if from_port is None and to_port is None and ip_protocol is None:
-                return self._revoke_deprecated(group_name,
-                                               src_security_group_name,
-                                               src_security_group_owner_id)
-        params = {'GroupName':group_name}
+                return self.revoke_security_group_deprecated(
+                    group_name, src_security_group_name,
+                    src_security_group_owner_id)
+        params = {}
+        if group_name:
+            params['GroupName'] = group_name
         if src_security_group_name:
-            params['IpPermissions.1.Groups.1.GroupName'] = src_security_group_name
+            param_name = 'IpPermissions.1.Groups.1.GroupName'
+            params[param_name] = src_security_group_name
         if src_security_group_owner_id:
-            params['IpPermissions.1.Groups.1.UserId'] = src_security_group_owner_id
+            param_name = 'IpPermissions.1.Groups.1.UserId'
+            params[param_name] = src_security_group_owner_id
         if ip_protocol:
             params['IpPermissions.1.IpProtocol'] = ip_protocol
-        if from_port:
+        if from_port is not None:
             params['IpPermissions.1.FromPort'] = from_port
-        if to_port:
+        if to_port is not None:
             params['IpPermissions.1.ToPort'] = to_port
         if cidr_ip:
             params['IpPermissions.1.IpRanges.1.CidrIp'] = cidr_ip
-        return self.get_status('RevokeSecurityGroupIngress', params, verb='POST')
+        return self.get_status('RevokeSecurityGroupIngress',
+                               params, verb='POST')
 
     #
     # Regions
@@ -1905,7 +2201,8 @@
             self.build_list_params(params, region_names, 'RegionName')
         if filters:
             self.build_filter_params(params, filters)
-        regions =  self.get_list('DescribeRegions', params, [('item', RegionInfo)], verb='POST')
+        regions = self.get_list('DescribeRegions', params,
+                                [('item', RegionInfo)], verb='POST')
         for region in regions:
             region.connection_cls = EC2Connection
         return regions
@@ -1964,7 +2261,8 @@
             self.build_filter_params(params, filters)
 
         return self.get_list('DescribeReservedInstancesOfferings',
-                             params, [('item', ReservedInstancesOffering)], verb='POST')
+                             params, [('item', ReservedInstancesOffering)],
+                             verb='POST')
 
     def get_all_reserved_instances(self, reserved_instances_id=None,
                                    filters=None):
@@ -1998,7 +2296,8 @@
         return self.get_list('DescribeReservedInstances',
                              params, [('item', ReservedInstance)], verb='POST')
 
-    def purchase_reserved_instance_offering(self, reserved_instances_offering_id,
+    def purchase_reserved_instance_offering(self,
+                                            reserved_instances_offering_id,
                                             instance_count=1):
         """
         Purchase a Reserved Instance for use with your account.
@@ -2017,8 +2316,9 @@
         :rtype: :class:`boto.ec2.reservedinstance.ReservedInstance`
         :return: The newly created Reserved Instance
         """
-        params = {'ReservedInstancesOfferingId' : reserved_instances_offering_id,
-                  'InstanceCount' : instance_count}
+        params = {
+            'ReservedInstancesOfferingId' : reserved_instances_offering_id,
+            'InstanceCount' : instance_count}
         return self.get_object('PurchaseReservedInstancesOffering', params,
                                ReservedInstance, verb='POST')
 
@@ -2026,8 +2326,24 @@
     # Monitoring
     #
 
+    def monitor_instances(self, instance_ids):
+        """
+        Enable CloudWatch monitoring for the supplied instances.
+
+        :type instance_ids: list of strings
+        :param instance_ids: The instance ids
+
+        :rtype: list
+        :return: A list of :class:`boto.ec2.instanceinfo.InstanceInfo`
+        """
+        params = {}
+        self.build_list_params(params, instance_ids, 'InstanceId')
+        return self.get_list('MonitorInstances', params,
+                             [('item', InstanceInfo)], verb='POST')
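The `build_list_params` call above flattens a Python list into EC2's numbered query-parameter convention. A minimal sketch of that behavior (not the actual boto implementation):

```python
def build_list_params(params, items, label):
    # Flatten ['i-1', 'i-2'] under label 'InstanceId' into
    # {'InstanceId.1': 'i-1', 'InstanceId.2': 'i-2'}
    for i, item in enumerate(items, 1):
        params['%s.%d' % (label, i)] = item
    return params
```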
+
     def monitor_instance(self, instance_id):
         """
+        Deprecated version, maintained for backward compatibility.
         Enable CloudWatch monitoring for the supplied instance.
 
         :type instance_id: string
@@ -2036,12 +2352,26 @@
         :rtype: list
         :return: A list of :class:`boto.ec2.instanceinfo.InstanceInfo`
         """
-        params = {'InstanceId' : instance_id}
-        return self.get_list('MonitorInstances', params,
+        return self.monitor_instances([instance_id])
+
+    def unmonitor_instances(self, instance_ids):
+        """
+        Disable CloudWatch monitoring for the supplied instances.
+
+        :type instance_ids: list of strings
+        :param instance_ids: The instance ids
+
+        :rtype: list
+        :return: A list of :class:`boto.ec2.instanceinfo.InstanceInfo`
+        """
+        params = {}
+        self.build_list_params(params, instance_ids, 'InstanceId')
+        return self.get_list('UnmonitorInstances', params,
                              [('item', InstanceInfo)], verb='POST')
 
     def unmonitor_instance(self, instance_id):
         """
+        Deprecated; use unmonitor_instances() instead.  Maintained for
+        backward compatibility.
         Disable CloudWatch monitoring for the supplied instance.
 
         :type instance_id: string
@@ -2050,16 +2380,14 @@
         :rtype: list
         :return: A list of :class:`boto.ec2.instanceinfo.InstanceInfo`
         """
-        params = {'InstanceId' : instance_id}
-        return self.get_list('UnmonitorInstances', params,
-                             [('item', InstanceInfo)], verb='POST')
+        return self.unmonitor_instances([instance_id])
 
-    # 
+    #
     # Bundle Windows Instances
     #
 
     def bundle_instance(self, instance_id,
-                        s3_bucket, 
+                        s3_bucket,
                         s3_prefix,
                         s3_upload_policy):
         """
@@ -2084,11 +2412,13 @@
                   'Storage.S3.Bucket' : s3_bucket,
                   'Storage.S3.Prefix' : s3_prefix,
                   'Storage.S3.UploadPolicy' : s3_upload_policy}
-        s3auth = boto.auth.get_auth_handler(None, boto.config, self.provider, ['s3'])
+        s3auth = boto.auth.get_auth_handler(None, boto.config,
+                                            self.provider, ['s3'])
         params['Storage.S3.AWSAccessKeyId'] = self.aws_access_key_id
         signature = s3auth.sign_string(s3_upload_policy)
         params['Storage.S3.UploadPolicySignature'] = signature
-        return self.get_object('BundleInstance', params, BundleInstanceTask, verb='POST') 
+        return self.get_object('BundleInstance', params,
+                               BundleInstanceTask, verb='POST')
 
     def get_all_bundle_tasks(self, bundle_ids=None, filters=None):
         """
@@ -2096,9 +2426,9 @@
         tasks are retrieved.
 
         :type bundle_ids: list
-        :param bundle_ids: A list of strings containing identifiers for 
+        :param bundle_ids: A list of strings containing identifiers for
                            previously created bundling tasks.
-                           
+
         :type filters: dict
         :param filters: Optional filters that can be used to limit
                         the results returned.  Filters are provided
@@ -2110,7 +2440,7 @@
                         for details.
 
         """
- 
+
         params = {}
         if bundle_ids:
             self.build_list_params(params, bundle_ids, 'BundleId')
@@ -2122,13 +2452,14 @@
     def cancel_bundle_task(self, bundle_id):
         """
         Cancel a previously submitted bundle task
- 
+
         :type bundle_id: string
         :param bundle_id: The identifier of the bundle task to cancel.
-        """                        
+        """
 
         params = {'BundleId' : bundle_id}
-        return self.get_object('CancelBundleTask', params, BundleInstanceTask, verb='POST')
+        return self.get_object('CancelBundleTask', params,
+                               BundleInstanceTask, verb='POST')
 
     def get_password_data(self, instance_id):
         """
@@ -2143,7 +2474,7 @@
         rs = self.get_object('GetPasswordData', params, ResultSet, verb='POST')
         return rs.passwordData
 
-    # 
+    #
     # Cluster Placement Groups
     #
 
@@ -2190,8 +2521,8 @@
         :param strategy: The placement strategy of the new placement group.
                          Currently, the only acceptable value is "cluster".
 
-        :rtype: :class:`boto.ec2.placementgroup.PlacementGroup`
-        :return: The newly created :class:`boto.ec2.keypair.KeyPair`.
+        :rtype: bool
+        :return: True if successful
         """
         params = {'GroupName':name, 'Strategy':strategy}
         group = self.get_status('CreatePlacementGroup', params, verb='POST')
@@ -2219,14 +2550,11 @@
             if value is not None:
                 params['Tag.%d.Value'%i] = value
             i += 1
-        
-    def get_all_tags(self, tags=None, filters=None):
+
+    def get_all_tags(self, filters=None):
         """
         Retrieve all the metadata tags associated with your account.
 
-        :type tags: list
-        :param tags: A list of mumble
-
         :type filters: dict
         :param filters: Optional filters that can be used to limit
                         the results returned.  Filters are provided
@@ -2241,11 +2569,10 @@
         :return: A dictionary containing metadata tags
         """
         params = {}
-        if tags:
-            self.build_list_params(params, instance_ids, 'InstanceId')
         if filters:
             self.build_filter_params(params, filters)
-        return self.get_list('DescribeTags', params, [('item', Tag)], verb='POST')
+        return self.get_list('DescribeTags', params,
+                             [('item', Tag)], verb='POST')
 
     def create_tags(self, resource_ids, tags):
         """
@@ -2255,7 +2582,10 @@
         :param resource_ids: List of strings
 
         :type tags: dict
-        :param tags: A dictionary containing the name/value pairs
+        :param tags: A dictionary containing the name/value pairs.
+                     If you want to create only a tag name, the
+                     value for that tag should be the empty string
+                     (e.g. '').
 
         """
         params = {}
@@ -2275,7 +2605,9 @@
                      or a list containing just tag names.
                      If you pass in a dictionary, the values must
                      match the actual tag values or the tag will
-                     not be deleted.
+                     not be deleted.  If you pass in a value of None
+                     for the tag value, all tags with that name will
+                     be deleted.
 
         """
         if isinstance(tags, list):
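The create_tags/delete_tags changes above turn on how a tags dict is flattened into EC2 query parameters (`Tag.N.Key` / `Tag.N.Value`) and on the distinction between a value of `''` (tag name with an empty value) and `None` (omit the value entirely, which DeleteTags treats as "delete regardless of value"). A minimal standalone sketch of that flattening; `build_tag_params` is a hypothetical helper written for illustration, not part of boto:

```python
def build_tag_params(tags):
    """Flatten a {name: value} dict into EC2 Tag.N.Key/Tag.N.Value params.

    A value of '' sends an empty tag value (name-only tag); a value of
    None omits Tag.N.Value entirely.
    """
    params = {}
    for i, key in enumerate(sorted(tags), start=1):
        params['Tag.%d.Key' % i] = key
        if tags[key] is not None:
            params['Tag.%d.Value' % i] = tags[key]
    return params
```

Sorting the keys here is only to make the numbering deterministic for the example; the actual connection code numbers tags in dict iteration order.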
diff --git a/boto/ec2/ec2object.py b/boto/ec2/ec2object.py
index 6e37596..7756bee 100644
--- a/boto/ec2/ec2object.py
+++ b/boto/ec2/ec2object.py
@@ -62,7 +62,7 @@
         else:
             return None
 
-    def add_tag(self, key, value=None):
+    def add_tag(self, key, value=''):
         """
         Add a tag to this object.  Tag's are stored by AWS and can be used
         to organize and filter resources.  Adding a tag involves a round-trip
@@ -73,6 +73,8 @@
 
         :type value: str
         :param value: An optional value that can be stored with the tag.
+                      If you want only the tag name and no value, the
+                      value should be the empty string.
         """
         status = self.connection.create_tags([self.id], {key : value})
         if self.tags is None:
@@ -91,7 +93,10 @@
         :param value: An optional value that can be stored with the tag.
                       If a value is provided, it must match the value
                       currently stored in EC2.  If not, the tag will not
-                      be removed.
+                      be removed.  If a value of None is provided, all
+                      tags with the specified name will be deleted.
+                      NOTE: There is an important distinction between
+                      a value of '' and a value of None.
         """
         if value:
             tags = {key : value}
diff --git a/boto/ec2/elb/__init__.py b/boto/ec2/elb/__init__.py
index f4061d3..7d8c51b 100644
--- a/boto/ec2/elb/__init__.py
+++ b/boto/ec2/elb/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -35,6 +35,7 @@
     'us-east-1' : 'elasticloadbalancing.us-east-1.amazonaws.com',
     'us-west-1' : 'elasticloadbalancing.us-west-1.amazonaws.com',
     'eu-west-1' : 'elasticloadbalancing.eu-west-1.amazonaws.com',
+    'ap-northeast-1' : 'elasticloadbalancing.ap-northeast-1.amazonaws.com',
     'ap-southeast-1' : 'elasticloadbalancing.ap-southeast-1.amazonaws.com'}
 
 def regions():
@@ -70,7 +71,7 @@
 
 class ELBConnection(AWSQueryConnection):
 
-    APIVersion = boto.config.get('Boto', 'elb_version', '2010-07-01')
+    APIVersion = boto.config.get('Boto', 'elb_version', '2011-04-05')
     DefaultRegionName = boto.config.get('Boto', 'elb_region_name', 'us-east-1')
     DefaultRegionEndpoint = boto.config.get('Boto', 'elb_region_endpoint',
                                             'elasticloadbalancing.amazonaws.com')
@@ -102,8 +103,8 @@
     def build_list_params(self, params, items, label):
         if isinstance(items, str):
             items = [items]
-        for i in range(1, len(items)+1):
-            params[label % i] = items[i-1]
+        for index, item in enumerate(items):
+            params[label % (index + 1)] = item
 
     def get_all_load_balancers(self, load_balancer_names=None):
         """
@@ -117,9 +118,10 @@
         """
         params = {}
         if load_balancer_names:
-            self.build_list_params(params, load_balancer_names, 'LoadBalancerNames.member.%d')
-        return self.get_list('DescribeLoadBalancers', params, [('member', LoadBalancer)])
-
+            self.build_list_params(params, load_balancer_names,
+                                   'LoadBalancerNames.member.%d')
+        return self.get_list('DescribeLoadBalancers', params,
+                             [('member', LoadBalancer)])
 
     def create_load_balancer(self, name, zones, listeners):
         """
@@ -133,10 +135,10 @@
 
         :type listeners: List of tuples
         :param listeners: Each tuple contains three or four values,
-                          (LoadBalancerPortNumber, InstancePortNumber, Protocol,
-                          [SSLCertificateId])
-                          where LoadBalancerPortNumber and InstancePortNumber are
-                          integer values between 1 and 65535, Protocol is a
+                          (LoadBalancerPortNumber, InstancePortNumber,
+                          Protocol, [SSLCertificateId])
+                          where LoadBalancerPortNumber and InstancePortNumber
+                          are integer values between 1 and 65535, Protocol is a
                           string containing either 'TCP', 'HTTP' or 'HTTPS';
                           SSLCertificateID is the ARN of an AWS IAM certificate,
                           and must be specified when doing HTTPS.
@@ -145,14 +147,16 @@
         :return: The newly created :class:`boto.ec2.elb.loadbalancer.LoadBalancer`
         """
         params = {'LoadBalancerName' : name}
-        for i in range(0, len(listeners)):
-            params['Listeners.member.%d.LoadBalancerPort' % (i+1)] = listeners[i][0]
-            params['Listeners.member.%d.InstancePort' % (i+1)] = listeners[i][1]
-            params['Listeners.member.%d.Protocol' % (i+1)] = listeners[i][2]
-            if listeners[i][2]=='HTTPS':
-                params['Listeners.member.%d.SSLCertificateId' % (i+1)] = listeners[i][3]
+        for index, listener in enumerate(listeners):
+            i = index + 1
+            params['Listeners.member.%d.LoadBalancerPort' % i] = listener[0]
+            params['Listeners.member.%d.InstancePort' % i] = listener[1]
+            params['Listeners.member.%d.Protocol' % i] = listener[2]
+            if listener[2]=='HTTPS':
+                params['Listeners.member.%d.SSLCertificateId' % i] = listener[3]
         self.build_list_params(params, zones, 'AvailabilityZones.member.%d')
-        load_balancer = self.get_object('CreateLoadBalancer', params, LoadBalancer)
+        load_balancer = self.get_object('CreateLoadBalancer',
+                                        params, LoadBalancer)
         load_balancer.name = name
         load_balancer.listeners = listeners
         load_balancer.availability_zones = zones
@@ -178,12 +182,13 @@
         :return: The status of the request
         """
         params = {'LoadBalancerName' : name}
-        for i in range(0, len(listeners)):
-            params['Listeners.member.%d.LoadBalancerPort' % (i+1)] = listeners[i][0]
-            params['Listeners.member.%d.InstancePort' % (i+1)] = listeners[i][1]
-            params['Listeners.member.%d.Protocol' % (i+1)] = listeners[i][2]
-            if listeners[i][2]=='HTTPS':
-                params['Listeners.member.%d.SSLCertificateId' % (i+1)] = listeners[i][3]
+        for index, listener in enumerate(listeners):
+            i = index + 1
+            params['Listeners.member.%d.LoadBalancerPort' % i] = listener[0]
+            params['Listeners.member.%d.InstancePort' % i] = listener[1]
+            params['Listeners.member.%d.Protocol' % i] = listener[2]
+            if listener[2]=='HTTPS':
+                params['Listeners.member.%d.SSLCertificateId' % i] = listener[3]
         return self.get_status('CreateLoadBalancerListeners', params)
 
 
@@ -210,12 +215,10 @@
         :return: The status of the request
         """
         params = {'LoadBalancerName' : name}
-        for i in range(0, len(ports)):
-            params['LoadBalancerPorts.member.%d' % (i+1)] = ports[i]
+        for index, port in enumerate(ports):
+            params['LoadBalancerPorts.member.%d' % (index + 1)] = port
         return self.get_status('DeleteLoadBalancerListeners', params)
 
-
-
     def enable_availability_zones(self, load_balancer_name, zones_to_add):
         """
         Add availability zones to an existing Load Balancer
@@ -234,8 +237,10 @@
 
         """
         params = {'LoadBalancerName' : load_balancer_name}
-        self.build_list_params(params, zones_to_add, 'AvailabilityZones.member.%d')
-        return self.get_list('EnableAvailabilityZonesForLoadBalancer', params, None)
+        self.build_list_params(params, zones_to_add,
+                               'AvailabilityZones.member.%d')
+        return self.get_list('EnableAvailabilityZonesForLoadBalancer',
+                             params, None)
 
     def disable_availability_zones(self, load_balancer_name, zones_to_remove):
         """
@@ -256,8 +261,10 @@
 
         """
         params = {'LoadBalancerName' : load_balancer_name}
-        self.build_list_params(params, zones_to_remove, 'AvailabilityZones.member.%d')
-        return self.get_list('DisableAvailabilityZonesForLoadBalancer', params, None)
+        self.build_list_params(params, zones_to_remove,
+                               'AvailabilityZones.member.%d')
+        return self.get_list('DisableAvailabilityZonesForLoadBalancer',
+                             params, None)
 
     def register_instances(self, load_balancer_name, instances):
         """
@@ -274,8 +281,10 @@
 
         """
         params = {'LoadBalancerName' : load_balancer_name}
-        self.build_list_params(params, instances, 'Instances.member.%d.InstanceId')
-        return self.get_list('RegisterInstancesWithLoadBalancer', params, [('member', InstanceInfo)])
+        self.build_list_params(params, instances,
+                               'Instances.member.%d.InstanceId')
+        return self.get_list('RegisterInstancesWithLoadBalancer',
+                             params, [('member', InstanceInfo)])
 
     def deregister_instances(self, load_balancer_name, instances):
         """
@@ -292,8 +301,10 @@
 
         """
         params = {'LoadBalancerName' : load_balancer_name}
-        self.build_list_params(params, instances, 'Instances.member.%d.InstanceId')
-        return self.get_list('DeregisterInstancesFromLoadBalancer', params, [('member', InstanceInfo)])
+        self.build_list_params(params, instances,
+                               'Instances.member.%d.InstanceId')
+        return self.get_list('DeregisterInstancesFromLoadBalancer',
+                             params, [('member', InstanceInfo)])
 
     def describe_instance_health(self, load_balancer_name, instances=None):
         """
@@ -313,15 +324,17 @@
         """
         params = {'LoadBalancerName' : load_balancer_name}
         if instances:
-            self.build_list_params(params, instances, 'Instances.member.%d.InstanceId')
-        return self.get_list('DescribeInstanceHealth', params, [('member', InstanceState)])
+            self.build_list_params(params, instances,
+                                   'Instances.member.%d.InstanceId')
+        return self.get_list('DescribeInstanceHealth', params,
+                             [('member', InstanceState)])
 
     def configure_health_check(self, name, health_check):
         """
         Define a health check for the EndPoints.
 
         :type name: string
-        :param name: The mnemonic name associated with the new access point
+        :param name: The mnemonic name associated with the load balancer
 
         :type health_check: :class:`boto.ec2.elb.healthcheck.HealthCheck`
         :param health_check: A HealthCheck object populated with the desired
@@ -338,7 +351,8 @@
                   'HealthCheck.HealthyThreshold' : health_check.healthy_threshold}
         return self.get_object('ConfigureHealthCheck', params, HealthCheck)
 
-    def set_lb_listener_SSL_certificate(self, lb_name, lb_port, ssl_certificate_id):
+    def set_lb_listener_SSL_certificate(self, lb_name, lb_port,
+                                        ssl_certificate_id):
         """
         Sets the certificate that terminates the specified listener's SSL
         connections. The specified certificate replaces any prior certificate
@@ -374,7 +388,8 @@
                  }
         return self.get_status('CreateAppCookieStickinessPolicy', params)
 
-    def create_lb_cookie_stickiness_policy(self, cookie_expiration_period, lb_name, policy_name):
+    def create_lb_cookie_stickiness_policy(self, cookie_expiration_period,
+                                           lb_name, policy_name):
         """
         Generates a stickiness policy with sticky session lifetimes controlled
         by the lifetime of the browser (user-agent) or a specified expiration
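The loop rewrites in elb/__init__.py above replace index arithmetic with enumerate; behavior is unchanged. Extracted here as a standalone sketch of the rewritten build_list_params, showing the 1-based member numbering and string promotion:

```python
def build_list_params(params, items, label):
    # As in ELBConnection.build_list_params above: a bare string is
    # promoted to a one-element list, and members are numbered from 1.
    if isinstance(items, str):
        items = [items]
    for index, item in enumerate(items):
        params[label % (index + 1)] = item
    return params
```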
diff --git a/boto/ec2/elb/healthcheck.py b/boto/ec2/elb/healthcheck.py
index 5a3edbc..5b47d62 100644
--- a/boto/ec2/elb/healthcheck.py
+++ b/boto/ec2/elb/healthcheck.py
@@ -57,8 +57,7 @@
         if not self.access_point:
             return
 
-        new_hc = self.connection.configure_health_check(self.access_point,
-                                                        self)
+        new_hc = self.connection.configure_health_check(self.access_point, self)
         self.interval = new_hc.interval
         self.target = new_hc.target
         self.healthy_threshold = new_hc.healthy_threshold
diff --git a/boto/ec2/elb/loadbalancer.py b/boto/ec2/elb/loadbalancer.py
index 9759952..3c459c1 100644
--- a/boto/ec2/elb/loadbalancer.py
+++ b/boto/ec2/elb/loadbalancer.py
@@ -23,6 +23,7 @@
 from boto.ec2.elb.listener import Listener
 from boto.ec2.elb.listelement import ListElement
 from boto.ec2.elb.policies import Policies
+from boto.ec2.elb.securitygroup import SecurityGroup
 from boto.ec2.instanceinfo import InstanceInfo
 from boto.resultset import ResultSet
 
@@ -41,6 +42,9 @@
         self.created_time = None
         self.instances = None
         self.availability_zones = ListElement()
+        self.canonical_hosted_zone_name = None
+        self.canonical_hosted_zone_name_id = None
+        self.source_security_group = None
 
     def __repr__(self):
         return 'LoadBalancer:%s' % self.name
@@ -60,6 +64,9 @@
         elif name == 'Policies':
             self.policies = Policies(self)
             return self.policies
+        elif name == 'SourceSecurityGroup':
+            self.source_security_group = SecurityGroup()
+            return self.source_security_group
         else:
             return None
 
@@ -72,6 +79,10 @@
             self.created_time = value
         elif name == 'InstanceId':
             self.instances.append(value)
+        elif name == 'CanonicalHostedZoneName':
+            self.canonical_hosted_zone_name = value
+        elif name == 'CanonicalHostedZoneNameID':
+            self.canonical_hosted_zone_name_id = value
         else:
             setattr(self, name, value)
 
diff --git a/boto/ec2/elb/policies.py b/boto/ec2/elb/policies.py
index 428ce72..7bf5455 100644
--- a/boto/ec2/elb/policies.py
+++ b/boto/ec2/elb/policies.py
@@ -28,7 +28,8 @@
         self.policy_name = None
 
     def __repr__(self):
-        return 'AppCookieStickiness(%s, %s)' % (self.policy_name, self.cookie_name)
+        return 'AppCookieStickiness(%s, %s)' % (self.policy_name,
+                                                self.cookie_name)
 
     def startElement(self, name, attrs, connection):
         pass
@@ -46,7 +47,8 @@
         self.cookie_expiration_period = None
 
     def __repr__(self):
-        return 'LBCookieStickiness(%s, %s)' % (self.policy_name, self.cookie_expiration_period)
+        return 'LBCookieStickiness(%s, %s)' % (self.policy_name,
+                                               self.cookie_expiration_period)
 
     def startElement(self, name, attrs, connection):
         pass
@@ -68,14 +70,17 @@
         self.lb_cookie_stickiness_policies = None
 
     def __repr__(self):
-        return 'Policies(AppCookieStickiness%s, LBCookieStickiness%s)' % (self.app_cookie_stickiness_policies,
-                                                                           self.lb_cookie_stickiness_policies)
+        app = 'AppCookieStickiness%s' % self.app_cookie_stickiness_policies
+        lb = 'LBCookieStickiness%s' % self.lb_cookie_stickiness_policies
+        return 'Policies(%s,%s)' % (app, lb)
 
     def startElement(self, name, attrs, connection):
         if name == 'AppCookieStickinessPolicies':
-            self.app_cookie_stickiness_policies = ResultSet([('member', AppCookieStickinessPolicy)])
+            rs = ResultSet([('member', AppCookieStickinessPolicy)])
+            self.app_cookie_stickiness_policies = rs
         elif name == 'LBCookieStickinessPolicies':
-            self.lb_cookie_stickiness_policies = ResultSet([('member', LBCookieStickinessPolicy)])
+            rs = ResultSet([('member', LBCookieStickinessPolicy)])
+            self.lb_cookie_stickiness_policies = rs
 
     def endElement(self, name, value, connection):
         return
diff --git a/boto/tests/__init__.py b/boto/ec2/elb/securitygroup.py
similarity index 65%
copy from boto/tests/__init__.py
copy to boto/ec2/elb/securitygroup.py
index 449bd16..4f37790 100644
--- a/boto/tests/__init__.py
+++ b/boto/ec2/elb/securitygroup.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2010 Reza Lotun http://reza.lotun.name
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -14,10 +14,25 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-#
 
+class SecurityGroup(object):
+    def __init__(self, connection=None):
+        self.name = None
+        self.owner_alias = None
+
+    def __repr__(self):
+        return 'SecurityGroup(%s, %s)' % (self.name, self.owner_alias)
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'GroupName':
+            self.name = value
+        elif name == 'OwnerAlias':
+            self.owner_alias = value
 
diff --git a/boto/ec2/image.py b/boto/ec2/image.py
index a85fba0..de1b5d2 100644
--- a/boto/ec2/image.py
+++ b/boto/ec2/image.py
@@ -151,7 +151,7 @@
             raise ValueError('%s is not a valid Image ID' % self.id)
         return self.state
 
-    def run(self, min_count=1, max_count=1, key_name=None,
+    def run(self, min_count=1, max_count=1, key_name=None, 
             security_groups=None, user_data=None,
             addressing_type=None, instance_type='m1.small', placement=None,
             kernel_id=None, ramdisk_id=None,
@@ -160,7 +160,7 @@
             disable_api_termination=False,
             instance_initiated_shutdown_behavior=None,
             private_ip_address=None,
-            placement_group=None):
+            placement_group=None, security_group_ids=None):
         """
         Runs this instance.
         
@@ -220,11 +220,9 @@
                                         via the API.
 
         :type instance_initiated_shutdown_behavior: string
-        :param instance_initiated_shutdown_behavior: Specifies whether the instance's
-                                                     EBS volumes are stopped (i.e. detached)
-                                                     or terminated (i.e. deleted) when
-                                                     the instance is shutdown by the
-                                                     owner.  Valid values are:
+        :param instance_initiated_shutdown_behavior: Specifies whether the instance
+                                                     stops or terminates on instance-initiated
+                                                     shutdown. Valid values are:
                                                      stop | terminate
 
         :type placement_group: string
@@ -233,7 +231,11 @@
 
         :rtype: Reservation
         :return: The :class:`boto.ec2.instance.Reservation` associated with the request for machines
+
+        :type security_group_ids: list of strings
+        :param security_group_ids: The ids of the security groups with
+                                   which to associate instances.
         """
+
         return self.connection.run_instances(self.id, min_count, max_count,
                                              key_name, security_groups,
                                              user_data, addressing_type,
@@ -242,11 +244,11 @@
                                              monitoring_enabled, subnet_id,
                                              block_device_map, disable_api_termination,
                                              instance_initiated_shutdown_behavior,
-                                             private_ip_address,
-                                             placement_group)
+                                             private_ip_address, placement_group, 
+                                             security_group_ids=security_group_ids)
 
-    def deregister(self):
-        return self.connection.deregister_image(self.id)
+    def deregister(self, delete_snapshot=False):
+        return self.connection.deregister_image(self.id, delete_snapshot)
 
     def get_launch_permissions(self):
         img_attrs = self.connection.get_image_attribute(self.id,
diff --git a/boto/ec2/instance.py b/boto/ec2/instance.py
index 9e8aacf..afb593f 100644
--- a/boto/ec2/instance.py
+++ b/boto/ec2/instance.py
@@ -32,6 +32,16 @@
 import base64
 
 class Reservation(EC2Object):
+    """
+    Represents a Reservation response object.
+
+    :ivar id: The unique ID of the Reservation.
+    :ivar owner_id: The unique ID of the owner of the Reservation.
+    :ivar groups: A list of Group objects representing the security
+                  groups associated with launched instances.
+    :ivar instances: A list of Instance objects launched in this
+                     Reservation.
+    """
     
     def __init__(self, connection=None):
         EC2Object.__init__(self, connection)
@@ -103,6 +113,7 @@
         self.state_reason = None
         self.group_name = None
         self.client_token = None
+        self.groups = []
 
     def __repr__(self):
         return 'Instance:%s' % self.id
@@ -121,6 +132,9 @@
         elif name == 'stateReason':
             self.state_reason = StateReason()
             return self.state_reason
+        elif name == 'groupSet':
+            self.groups = ResultSet([('item', Group)])
+            return self.groups
         return None
 
     def endElement(self, name, value, connection):
@@ -229,7 +243,8 @@
         Terminate the instance
         """
         rs = self.connection.terminate_instances([self.id])
-        self._update(rs[0])
+        if len(rs) > 0:
+            self._update(rs[0])
 
     def stop(self, force=False):
         """
@@ -242,14 +257,16 @@
         :return: A list of the instances stopped
         """
         rs = self.connection.stop_instances([self.id])
-        self._update(rs[0])
+        if len(rs) > 0:
+            self._update(rs[0])
 
     def start(self):
         """
         Start the instance.
         """
         rs = self.connection.start_instances([self.id])
-        self._update(rs[0])
+        if len(rs) > 0:
+            self._update(rs[0])
 
     def reboot(self):
         return self.connection.reboot_instances([self.id])
@@ -336,6 +353,7 @@
 
     def __init__(self, parent=None):
         self.id = None
+        self.name = None
 
     def startElement(self, name, attrs, connection):
         return None
@@ -343,6 +361,8 @@
     def endElement(self, name, value, connection):
         if name == 'groupId':
             self.id = value
+        elif name == 'groupName':
+            self.name = value
         else:
             setattr(self, name, value)
     
@@ -352,7 +372,7 @@
         self.parent = parent
         self.instance_id = None
         self.timestamp = None
-        self.comment = None
+        self.output = None
 
     def startElement(self, name, attrs, connection):
         return None
@@ -360,6 +380,8 @@
     def endElement(self, name, value, connection):
         if name == 'instanceId':
             self.instance_id = value
+        elif name == 'timestamp':
+            self.timestamp = value
         elif name == 'output':
             self.output = base64.b64decode(value)
         else:
@@ -367,17 +389,35 @@
 
 class InstanceAttribute(dict):
 
+    ValidValues = ['instanceType', 'kernel', 'ramdisk', 'userData',
+                   'disableApiTermination', 'instanceInitiatedShutdownBehavior',
+                   'rootDeviceName', 'blockDeviceMapping', 'sourceDestCheck',
+                   'groupSet']
+
     def __init__(self, parent=None):
         dict.__init__(self)
+        self.instance_id = None
+        self.request_id = None
         self._current_value = None
 
     def startElement(self, name, attrs, connection):
-        return None
+        if name == 'blockDeviceMapping':
+            self[name] = BlockDeviceMapping()
+            return self[name]
+        elif name == 'groupSet':
+            self[name] = ResultSet([('item', Group)])
+            return self[name]
+        else:
+            return None
 
     def endElement(self, name, value, connection):
-        if name == 'value':
+        if name == 'instanceId':
+            self.instance_id = value
+        elif name == 'requestId':
+            self.request_id = value
+        elif name == 'value':
             self._current_value = value
-        else:
+        elif name in self.ValidValues:
             self[name] = self._current_value
 
 class StateReason(dict):
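The rewritten InstanceAttribute above relies on SAX event ordering: each attribute's inner `<value>` element closes before the enclosing element named after the attribute, so endElement buffers the value in `_current_value` first, then stores it when the ValidValues element closes. A trimmed standalone copy with the events invoked by hand (instance id and value are illustrative):

```python
class InstanceAttribute(dict):
    # Trimmed copy of the class above (blockDeviceMapping/groupSet
    # handling omitted for brevity).
    ValidValues = ['instanceType', 'kernel', 'ramdisk', 'userData',
                   'disableApiTermination',
                   'instanceInitiatedShutdownBehavior',
                   'rootDeviceName', 'sourceDestCheck']

    def __init__(self, parent=None):
        dict.__init__(self)
        self.instance_id = None
        self._current_value = None

    def endElement(self, name, value, connection):
        if name == 'instanceId':
            self.instance_id = value
        elif name == 'value':
            self._current_value = value
        elif name in self.ValidValues:
            self[name] = self._current_value

# Events as the parser would emit them for
# <instanceType><value>t1.micro</value></instanceType>:
attr = InstanceAttribute()
attr.endElement('instanceId', 'i-12345678', None)
attr.endElement('value', 't1.micro', None)
attr.endElement('instanceType', None, None)
```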
diff --git a/boto/ec2/keypair.py b/boto/ec2/keypair.py
index d08e5ce..65c9590 100644
--- a/boto/ec2/keypair.py
+++ b/boto/ec2/keypair.py
@@ -76,12 +76,14 @@
         :return: True if successful.
         """
         if self.material:
+            directory_path = os.path.expanduser(directory_path)
             file_path = os.path.join(directory_path, '%s.pem' % self.name)
             if os.path.exists(file_path):
                 raise BotoClientError('%s already exists, it will not be overwritten' % file_path)
             fp = open(file_path, 'wb')
             fp.write(self.material)
             fp.close()
+            os.chmod(file_path, 0600)
             return True
         else:
             raise BotoClientError('KeyPair contains no material')
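The keypair.py hunk above makes `save()` expand `~` in the directory path, refuse to overwrite an existing file, and restrict the written `.pem` to owner-only permissions. A minimal Python 3 sketch of the same logic (`save_material` is a hypothetical stand-in for illustration, not boto's API; `0o600` replaces the Python 2 literal `0600`):

```python
import os

def save_material(material, directory_path, name):
    """Write key material to <directory>/<name>.pem with owner-only permissions."""
    directory_path = os.path.expanduser(directory_path)  # allow paths like '~/keys'
    file_path = os.path.join(directory_path, '%s.pem' % name)
    if os.path.exists(file_path):
        raise IOError('%s already exists, it will not be overwritten' % file_path)
    with open(file_path, 'wb') as fp:
        fp.write(material)
    os.chmod(file_path, 0o600)  # ssh refuses private keys readable by others
    return file_path
```

The `chmod` matters in practice: ssh rejects a private key whose mode is more permissive than 0600.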
diff --git a/boto/ec2/securitygroup.py b/boto/ec2/securitygroup.py
index 24e08c3..af7811b 100644
--- a/boto/ec2/securitygroup.py
+++ b/boto/ec2/securitygroup.py
@@ -1,4 +1,5 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011, Eucalyptus Systems, Inc.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -22,34 +23,45 @@
 """
 Represents an EC2 Security Group
 """
-from boto.ec2.ec2object import EC2Object
+from boto.ec2.ec2object import TaggedEC2Object
 from boto.exception import BotoClientError
 
-class SecurityGroup(EC2Object):
+class SecurityGroup(TaggedEC2Object):
     
     def __init__(self, connection=None, owner_id=None,
-                 name=None, description=None):
-        EC2Object.__init__(self, connection)
+                 name=None, description=None, id=None):
+        TaggedEC2Object.__init__(self, connection)
+        self.id = id
         self.owner_id = owner_id
         self.name = name
         self.description = description
-        self.rules = []
+        self.vpc_id = None
+        self.rules = IPPermissionsList()
+        self.rules_egress = IPPermissionsList()
 
     def __repr__(self):
         return 'SecurityGroup:%s' % self.name
 
     def startElement(self, name, attrs, connection):
-        if name == 'item':
-            self.rules.append(IPPermissions(self))
-            return self.rules[-1]
+        retval = TaggedEC2Object.startElement(self, name, attrs, connection)
+        if retval is not None:
+            return retval
+        if name == 'ipPermissions':
+            return self.rules
+        elif name == 'ipPermissionsEgress':
+            return self.rules_egress
         else:
             return None
 
     def endElement(self, name, value, connection):
         if name == 'ownerId':
             self.owner_id = value
+        elif name == 'groupId':
+            self.id = value
         elif name == 'groupName':
             self.name = value
+        elif name == 'vpcId':
+            self.vpc_id = value
         elif name == 'groupDescription':
             self.description = value
         elif name == 'ipRanges':
@@ -128,12 +140,13 @@
         :type to_port: int
         :param to_port: The ending port number you are enabling
 
-        :type to_port: string
-        :param to_port: The CIDR block you are providing access to.
+        :type cidr_ip: string
+        :param cidr_ip: The CIDR block you are providing access to.
                         See http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
 
         :type src_group: :class:`boto.ec2.securitygroup.SecurityGroup` or
                          :class:`boto.ec2.securitygroup.GroupOrCIDR`
+        :param src_group: The Security Group you are granting access to.
                          
         :rtype: bool
         :return: True if successful.
@@ -203,25 +216,46 @@
         source_groups = []
         for rule in self.rules:
             grant = rule.grants[0]
-            if grant.name:
-                if grant.name not in source_groups:
-                    source_groups.append(grant.name)
-                    sg.authorize(None, None, None, None, grant)
-            else:
-                sg.authorize(rule.ip_protocol, rule.from_port, rule.to_port,
-                             grant.cidr_ip)
+            for grant in rule.grants:
+                if grant.name:
+                    if grant.name not in source_groups:
+                        source_groups.append(grant.name)
+                        sg.authorize(None, None, None, None, grant)
+                else:
+                    sg.authorize(rule.ip_protocol, rule.from_port, rule.to_port,
+                                 grant.cidr_ip)
         return sg
 
     def instances(self):
+        """
+        Find all of the current instances that are running within this
+        security group.
+
+        :rtype: list of :class:`boto.ec2.instance.Instance`
+        :return: A list of Instance objects
+        """
+        # It would be more efficient to do this with filters now
+        # but not all services that implement EC2 API support filters.
         instances = []
         rs = self.connection.get_all_instances()
         for reservation in rs:
-            uses_group = [g.id for g in reservation.groups if g.id == self.name]
+            uses_group = [g.name for g in reservation.groups if g.name == self.name]
             if uses_group:
                 instances.extend(reservation.instances)
         return instances
 
-class IPPermissions:
+class IPPermissionsList(list):
+    
+    def startElement(self, name, attrs, connection):
+        if name == 'item':
+            self.append(IPPermissions(self))
+            return self[-1]
+        return None
+
+    def endElement(self, name, value, connection):
+        pass
+            
+class IPPermissions(object):
 
     def __init__(self, parent=None):
         self.parent = parent
@@ -258,7 +292,7 @@
         self.grants.append(grant)
         return grant
 
-class GroupOrCIDR:
+class GroupOrCIDR(object):
 
     def __init__(self, parent=None):
         self.owner_id = None
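The `copy_to_region` hunk above fixes a bug where only `rule.grants[0]` was copied: the new nested loop visits every grant of every rule and de-duplicates source group names. A standalone sketch of that iteration, with namedtuples standing in for boto's rule and grant objects (the authorize calls are omitted so the grouping logic can run on its own):

```python
from collections import namedtuple

Rule = namedtuple('Rule', ['ip_protocol', 'from_port', 'to_port', 'grants'])
Grant = namedtuple('Grant', ['name', 'cidr_ip'])

def collect_grants(rules):
    """Mirror the patched loop: every grant of every rule, group names deduped."""
    source_groups, cidr_rules = [], []
    for rule in rules:
        for grant in rule.grants:
            if grant.name:
                if grant.name not in source_groups:
                    source_groups.append(grant.name)
            else:
                cidr_rules.append((rule.ip_protocol, rule.from_port,
                                   rule.to_port, grant.cidr_ip))
    return source_groups, cidr_rules
```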
diff --git a/boto/ec2/snapshot.py b/boto/ec2/snapshot.py
index bbe8ad4..d52abe4 100644
--- a/boto/ec2/snapshot.py
+++ b/boto/ec2/snapshot.py
@@ -21,12 +21,14 @@
 # IN THE SOFTWARE.
 
 """
-Represents an EC2 Elastic IP Snapshot
+Represents an EC2 Elastic Block Store Snapshot
 """
 from boto.ec2.ec2object import TaggedEC2Object
 
 class Snapshot(TaggedEC2Object):
     
+    AttrName = 'createVolumePermission'
+    
     def __init__(self, connection=None):
         TaggedEC2Object.__init__(self, connection)
         self.id = None
@@ -88,26 +90,26 @@
         return self.connection.delete_snapshot(self.id)
 
     def get_permissions(self):
-        attrs = self.connection.get_snapshot_attribute(self.id,
-                                                       attribute='createVolumePermission')
+        attrs = self.connection.get_snapshot_attribute(self.id, self.AttrName)
         return attrs.attrs
 
     def share(self, user_ids=None, groups=None):
         return self.connection.modify_snapshot_attribute(self.id,
-                                                         'createVolumePermission',
+                                                         self.AttrName,
                                                          'add',
                                                          user_ids,
                                                          groups)
 
     def unshare(self, user_ids=None, groups=None):
         return self.connection.modify_snapshot_attribute(self.id,
-                                                         'createVolumePermission',
+                                                         self.AttrName,
                                                          'remove',
                                                          user_ids,
                                                          groups)
 
     def reset_permissions(self):
-        return self.connection.reset_snapshot_attribute(self.id, 'createVolumePermission')
+        return self.connection.reset_snapshot_attribute(self.id,
+                                                        self.AttrName)
 
 class SnapshotAttribute:
 
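The snapshot.py hunk hoists the repeated `'createVolumePermission'` string into the `AttrName` class attribute so `share`, `unshare`, and `reset_permissions` all reference one definition. A self-contained sketch of the pattern, with a fake connection recording calls in place of a real EC2 connection:

```python
class FakeConnection(object):
    """Records modify_snapshot_attribute calls, standing in for EC2Connection."""
    def __init__(self):
        self.calls = []

    def modify_snapshot_attribute(self, snap_id, attr, op, user_ids, groups):
        self.calls.append((snap_id, attr, op))
        return True

class Snapshot(object):
    AttrName = 'createVolumePermission'  # single source of truth for the attribute name

    def __init__(self, connection, id):
        self.connection = connection
        self.id = id

    def share(self, user_ids=None, groups=None):
        return self.connection.modify_snapshot_attribute(
            self.id, self.AttrName, 'add', user_ids, groups)

    def unshare(self, user_ids=None, groups=None):
        return self.connection.modify_snapshot_attribute(
            self.id, self.AttrName, 'remove', user_ids, groups)
```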
diff --git a/boto/ec2/spotpricehistory.py b/boto/ec2/spotpricehistory.py
index d4e1711..268d6b3 100644
--- a/boto/ec2/spotpricehistory.py
+++ b/boto/ec2/spotpricehistory.py
@@ -33,6 +33,7 @@
         self.instance_type = None
         self.product_description = None
         self.timestamp = None
+        self.availability_zone = None
 
     def __repr__(self):
         return 'SpotPriceHistory(%s):%2f' % (self.instance_type, self.price)
@@ -46,6 +47,8 @@
             self.product_description = value
         elif name == 'timestamp':
             self.timestamp = value
+        elif name == 'availabilityZone':
+            self.availability_zone = value
         else:
             setattr(self, name, value)
 
diff --git a/boto/ec2/volume.py b/boto/ec2/volume.py
index 45345fa..57f2cb1 100644
--- a/boto/ec2/volume.py
+++ b/boto/ec2/volume.py
@@ -110,7 +110,7 @@
 
         :type device: str
         :param device: The device on the instance through which the
-                       volume will be exposted (e.g. /dev/sdh)
+                       volume will be exposed (e.g. /dev/sdh)
 
         :rtype: bool
         :return: True if successful
diff --git a/boto/ecs/__init__.py b/boto/ecs/__init__.py
index db86dd5..cbaf478 100644
--- a/boto/ecs/__init__.py
+++ b/boto/ecs/__init__.py
@@ -28,7 +28,13 @@
 from boto import handler
 
 class ECSConnection(AWSQueryConnection):
-    """ECommerse Connection"""
+    """
+    ECommerce Connection
+
+    For more information on how to use this module see:
+
+    http://blog.coredumped.org/2010/09/search-for-books-on-amazon-using-boto.html
+    """
 
     APIVersion = '2010-11-01'
 
diff --git a/boto/emr/bootstrap_action.py b/boto/emr/bootstrap_action.py
index c1c9038..7db0b3d 100644
--- a/boto/emr/bootstrap_action.py
+++ b/boto/emr/bootstrap_action.py
@@ -1,4 +1,5 @@
 # Copyright (c) 2010 Spotify AB
+# Copyright (c) 2010 Yelp
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
diff --git a/boto/emr/connection.py b/boto/emr/connection.py
index 2bfd368..b1effcf 100644
--- a/boto/emr/connection.py
+++ b/boto/emr/connection.py
@@ -1,4 +1,5 @@
 # Copyright (c) 2010 Spotify AB
+# Copyright (c) 2010-2011 Yelp
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -25,8 +26,10 @@
 import types
 
 import boto
+import boto.utils
 from boto.ec2.regioninfo import RegionInfo
 from boto.emr.emrobject import JobFlow, RunJobFlowResponse
+from boto.emr.emrobject import AddInstanceGroupsResponse, ModifyInstanceGroupsResponse
 from boto.emr.step import JarStep
 from boto.connection import AWSQueryConnection
 from boto.exception import EmrResponseError
@@ -94,9 +97,11 @@
         if jobflow_ids:
             self.build_list_params(params, jobflow_ids, 'JobFlowIds.member')
         if created_after:
-            params['CreatedAfter'] = created_after.strftime('%Y-%m-%dT%H:%M:%S')
+            params['CreatedAfter'] = created_after.strftime(
+                boto.utils.ISO8601)
         if created_before:
-            params['CreatedBefore'] = created_before.strftime('%Y-%m-%dT%H:%M:%S')
+            params['CreatedBefore'] = created_before.strftime(
+                boto.utils.ISO8601)
 
         return self.get_list('DescribeJobFlows', params, [('member', JobFlow)])
 
@@ -105,9 +110,9 @@
         Terminate an Elastic MapReduce job flow
 
         :type jobflow_id: str
-        :param jobflow_id: A jobflow id 
+        :param jobflow_id: A jobflow id
         """
-        self.terminate_jobflows([jobflow_id]) 
+        self.terminate_jobflows([jobflow_id])
 
     def terminate_jobflows(self, jobflow_ids):
         """
@@ -118,7 +123,7 @@
         """
         params = {}
         self.build_list_params(params, jobflow_ids, 'JobFlowIds.member')
-        return self.get_status('TerminateJobFlows', params)
+        return self.get_status('TerminateJobFlows', params, verb='POST')
 
     def add_jobflow_steps(self, jobflow_id, steps):
         """
@@ -138,16 +143,61 @@
         step_args = [self._build_step_args(step) for step in steps]
         params.update(self._build_step_list(step_args))
 
-        return self.get_object('AddJobFlowSteps', params, RunJobFlowResponse)
+        return self.get_object(
+            'AddJobFlowSteps', params, RunJobFlowResponse, verb='POST')
+
+    def add_instance_groups(self, jobflow_id, instance_groups):
+        """
+        Adds instance groups to a running cluster.
+
+        :type jobflow_id: str
+        :param jobflow_id: The id of the jobflow to which the instance groups will be added
+        :type instance_groups: list(boto.emr.InstanceGroup)
+        :param instance_groups: A list of instance groups to add to the job
+        """
+        if type(instance_groups) != types.ListType:
+            instance_groups = [instance_groups]
+        params = {}
+        params['JobFlowId'] = jobflow_id
+        params.update(self._build_instance_group_list_args(instance_groups))
+
+        return self.get_object('AddInstanceGroups', params, AddInstanceGroupsResponse, verb='POST')
+
+    def modify_instance_groups(self, instance_group_ids, new_sizes):
+        """
+        Modify the number of nodes and configuration settings in an instance group.
+
+        :type instance_group_ids: list(str)
+        :param instance_group_ids: A list of the IDs of the instance groups to be modified
+        :type new_sizes: list(int)
+        :param new_sizes: A list of the new sizes for each instance group
+        """
+        if type(instance_group_ids) != types.ListType:
+            instance_group_ids = [instance_group_ids]
+        if type(new_sizes) != types.ListType:
+            new_sizes = [new_sizes]
+
+        instance_groups = zip(instance_group_ids, new_sizes)
+
+        params = {}
+        for k, ig in enumerate(instance_groups):
+            # NB: Amazon's example uses InstanceRequestCount, while the API
+            # documentation says InstanceCount; we follow the documentation here.
+            params['InstanceGroups.member.%d.InstanceGroupId' % (k+1) ] = ig[0]
+            params['InstanceGroups.member.%d.InstanceCount' % (k+1) ] = ig[1]
+
+        return self.get_object('ModifyInstanceGroups', params, ModifyInstanceGroupsResponse, verb='POST')
 
     def run_jobflow(self, name, log_uri, ec2_keyname=None, availability_zone=None,
                     master_instance_type='m1.small',
                     slave_instance_type='m1.small', num_instances=1,
                     action_on_failure='TERMINATE_JOB_FLOW', keep_alive=False,
                     enable_debugging=False,
-                    hadoop_version='0.18',
+                    hadoop_version='0.20',
                     steps=[],
-                    bootstrap_actions=[]):
+                    bootstrap_actions=[],
+                    instance_groups=None,
+                    additional_info=None):
         """
         Runs a job flow
 
@@ -173,7 +223,14 @@
         :param enable_debugging: Denotes whether AWS console debugging should be enabled.
         :type steps: list(boto.emr.Step)
         :param steps: List of steps to add with the job
-
+        :type bootstrap_actions: list(boto.emr.BootstrapAction)
+        :param bootstrap_actions: List of bootstrap actions that run before Hadoop starts.
+        :type instance_groups: list(boto.emr.InstanceGroup)
+        :param instance_groups: Optional list of instance groups to use when creating
+                      this job. NB: When provided, this argument supersedes
+                      num_instances and master/slave_instance_type.
+        :type additional_info: JSON str
+        :param additional_info: A JSON string for selecting additional features
         :rtype: str
         :return: The jobflow id
         """
@@ -183,11 +240,31 @@
         params['Name'] = name
         params['LogUri'] = log_uri
 
-        # Instance args
-        instance_params = self._build_instance_args(ec2_keyname, availability_zone,
-                                                    master_instance_type, slave_instance_type,
-                                                    num_instances, keep_alive, hadoop_version)
-        params.update(instance_params)
+        # Common instance args
+        common_params = self._build_instance_common_args(ec2_keyname,
+                                                         availability_zone,
+                                                         keep_alive, hadoop_version)
+        params.update(common_params)
+
+        # NB: according to the AWS API's error message, we must
+        # "configure instances either using instance count, master and
+        # slave instance type or instance groups but not both."
+        #
+        # Thus we switch here on the truthiness of instance_groups.
+        if not instance_groups:
+            # Instance args (the common case)
+            instance_params = self._build_instance_count_and_type_args(
+                                                        master_instance_type,
+                                                        slave_instance_type,
+                                                        num_instances)
+            params.update(instance_params)
+        else:
+            # Instance group args (for spot instances or a heterogeneous cluster)
+            list_args = self._build_instance_group_list_args(instance_groups)
+            instance_params = dict(
+                ('Instances.%s' % k, v) for k, v in list_args.iteritems()
+                )
+            params.update(instance_params)
 
         # Debugging step from EMR API docs
         if enable_debugging:
@@ -207,9 +284,31 @@
             bootstrap_action_args = [self._build_bootstrap_action_args(bootstrap_action) for bootstrap_action in bootstrap_actions]
             params.update(self._build_bootstrap_action_list(bootstrap_action_args))
 
-        response = self.get_object('RunJobFlow', params, RunJobFlowResponse)
+        if additional_info is not None:
+            params['AdditionalInfo'] = additional_info
+
+        response = self.get_object(
+            'RunJobFlow', params, RunJobFlowResponse, verb='POST')
         return response.jobflowid
 
+    def set_termination_protection(self, jobflow_id, termination_protection_status):
+        """
+        Set termination protection on specified Elastic MapReduce job flows
+
+        :type jobflow_id: str
+        :param jobflow_id: The ID of the job flow whose protection status to set
+        :type termination_protection_status: bool
+        :param termination_protection_status: Termination protection status
+        """
+        assert termination_protection_status in (True, False)
+
+        params = {}
+        params['TerminationProtected'] = (termination_protection_status and "true") or "false"
+        self.build_list_params(params, [jobflow_id], 'JobFlowIds.member')
+
+        return self.get_status('SetTerminationProtection', params, verb='POST')
+
+
     def _build_bootstrap_action_args(self, bootstrap_action):
         bootstrap_action_params = {}
         bootstrap_action_params['ScriptBootstrapAction.Path'] = bootstrap_action.path
@@ -248,7 +347,7 @@
         params = {}
         for i, bootstrap_action in enumerate(bootstrap_actions):
             for key, value in bootstrap_action.iteritems():
-                params['BootstrapActions.memeber.%s.%s' % (i + 1, key)] = value
+                params['BootstrapActions.member.%s.%s' % (i + 1, key)] = value
         return params
 
     def _build_step_list(self, steps):
@@ -258,15 +357,17 @@
         params = {}
         for i, step in enumerate(steps):
             for key, value in step.iteritems():
-                params['Steps.memeber.%s.%s' % (i+1, key)] = value
+                params['Steps.member.%s.%s' % (i+1, key)] = value
         return params
 
-    def _build_instance_args(self, ec2_keyname, availability_zone, master_instance_type,
-                             slave_instance_type, num_instances, keep_alive, hadoop_version):
+    def _build_instance_common_args(self, ec2_keyname, availability_zone,
+                                    keep_alive, hadoop_version):
+        """
+        Takes a number of parameters used when starting a jobflow (as
+        specified in run_jobflow() above). Returns a comparable dict for
+        use in making a RunJobFlow request.
+        """
         params = {
-            'Instances.MasterInstanceType' : master_instance_type,
-            'Instances.SlaveInstanceType' : slave_instance_type,
-            'Instances.InstanceCount' : num_instances,
             'Instances.KeepJobFlowAliveWhenNoSteps' : str(keep_alive).lower(),
             'Instances.HadoopVersion' : hadoop_version
         }
@@ -274,7 +375,53 @@
         if ec2_keyname:
             params['Instances.Ec2KeyName'] = ec2_keyname
         if availability_zone:
-            params['Placement'] = availability_zone
+            params['Instances.Placement.AvailabilityZone'] = availability_zone
 
         return params
 
+    def _build_instance_count_and_type_args(self, master_instance_type,
+                                            slave_instance_type, num_instances):
+        """
+        Takes a master instance type (string), a slave instance type
+        (string), and a number of instances. Returns a comparable dict
+        for use in making a RunJobFlow request.
+        """
+        params = {
+            'Instances.MasterInstanceType' : master_instance_type,
+            'Instances.SlaveInstanceType' : slave_instance_type,
+            'Instances.InstanceCount' : num_instances,
+            }
+        return params
+
+    def _build_instance_group_args(self, instance_group):
+        """
+        Takes an InstanceGroup; returns a dict that, when its keys are
+        properly prefixed, can be used for describing InstanceGroups in
+        RunJobFlow or AddInstanceGroups requests.
+        """
+        params = {
+            'InstanceCount' : instance_group.num_instances,
+            'InstanceRole' : instance_group.role,
+            'InstanceType' : instance_group.type,
+            'Name' : instance_group.name,
+            'Market' : instance_group.market
+        }
+        if instance_group.market == 'SPOT':
+            params['BidPrice'] = instance_group.bidprice
+        return params
+
+    def _build_instance_group_list_args(self, instance_groups):
+        """
+        Takes a list of InstanceGroups, or a single InstanceGroup. Returns
+        a comparable dict for use in making a RunJobFlow or AddInstanceGroups
+        request.
+        """
+        if type(instance_groups) != types.ListType:
+            instance_groups = [instance_groups]
+
+        params = {}
+        for i, instance_group in enumerate(instance_groups):
+            ig_dict = self._build_instance_group_args(instance_group)
+            for key, value in ig_dict.iteritems():
+                params['InstanceGroups.member.%d.%s' % (i+1, key)] = value
+        return params
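The new `_build_instance_group_list_args` above flattens each `InstanceGroup` into the 1-indexed `InstanceGroups.member.N.*` wire format that RunJobFlow and AddInstanceGroups expect. A self-contained sketch of that flattening (a bare `IG` class stands in for `boto.emr.instance_group.InstanceGroup`, and plain `isinstance` replaces the Python 2 `types.ListType` check):

```python
class IG(object):
    """Minimal stand-in for boto.emr.instance_group.InstanceGroup."""
    def __init__(self, num_instances, role, type, market, name, bidprice=None):
        self.num_instances = num_instances
        self.role = role
        self.type = type
        self.market = market
        self.name = name
        self.bidprice = bidprice

def build_instance_group_list_args(instance_groups):
    """Flatten InstanceGroups into 1-indexed InstanceGroups.member.N.* params."""
    if not isinstance(instance_groups, list):
        instance_groups = [instance_groups]
    params = {}
    for i, ig in enumerate(instance_groups):
        member = {
            'InstanceCount': ig.num_instances,
            'InstanceRole': ig.role,
            'InstanceType': ig.type,
            'Name': ig.name,
            'Market': ig.market,
        }
        if ig.market == 'SPOT':
            member['BidPrice'] = ig.bidprice  # only sent for spot groups
        for key, value in member.items():
            params['InstanceGroups.member.%d.%s' % (i + 1, key)] = value
    return params
```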
diff --git a/boto/emr/emrobject.py b/boto/emr/emrobject.py
index 0ffe292..3430b98 100644
--- a/boto/emr/emrobject.py
+++ b/boto/emr/emrobject.py
@@ -1,5 +1,6 @@
 # Copyright (c) 2010 Spotify AB
 # Copyright (c) 2010 Jeremy Thurgood <firxen+boto@gmail.com>
+# Copyright (c) 2010-2011 Yelp
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -44,6 +45,12 @@
 class RunJobFlowResponse(EmrObject):
     Fields = set(['JobFlowId'])
 
+class AddInstanceGroupsResponse(EmrObject):
+    Fields = set(['InstanceGroupIds', 'JobFlowId'])
+    
+class ModifyInstanceGroupsResponse(EmrObject):
+    Fields = set(['RequestId'])
+    
 
 class Arg(EmrObject):
     def __init__(self, connection=None):
@@ -54,19 +61,37 @@
 
 
 class BootstrapAction(EmrObject):
-    Fields = set(['Name',
-                  'Args',
-                  'Path'])
+    Fields = set([
+        'Args',
+        'Name',
+        'Path',
+    ])
+
+    def startElement(self, name, attrs, connection):
+        if name == 'Args':
+            self.args = ResultSet([('member', Arg)])
+            return self.args
+
+
+class KeyValue(EmrObject):
+    Fields = set([
+        'Key',
+        'Value',
+    ])
 
 
 class Step(EmrObject):
-    Fields = set(['Name',
-                  'ActionOnFailure',
-                  'CreationDateTime',
-                  'StartDateTime',
-                  'EndDateTime',
-                  'LastStateChangeReason',
-                  'State'])
+    Fields = set([
+        'ActionOnFailure',
+        'CreationDateTime',
+        'EndDateTime',
+        'Jar',
+        'LastStateChangeReason',
+        'MainClass',
+        'Name',
+        'StartDateTime',
+        'State',
+    ])
 
     def __init__(self, connection=None):
         self.connection = connection
@@ -76,49 +101,58 @@
         if name == 'Args':
             self.args = ResultSet([('member', Arg)])
             return self.args
+        if name == 'Properties':
+            self.properties = ResultSet([('member', KeyValue)])
+            return self.properties
 
 
 class InstanceGroup(EmrObject):
-    Fields = set(['Name',
-                  'CreationDateTime',
-                  'InstanceRunningCount',
-                  'StartDateTime',
-                  'ReadyDateTime',
-                  'State',
-                  'EndDateTime',
-                  'InstanceRequestCount',
-                  'InstanceType',
-                  'Market',
-                  'LastStateChangeReason',
-                  'InstanceRole',
-                  'InstanceGroupId',
-                  'LaunchGroup',
-                  'SpotPrice'])
+    Fields = set([
+        'BidPrice',
+        'CreationDateTime',
+        'EndDateTime',
+        'InstanceGroupId',
+        'InstanceRequestCount',
+        'InstanceRole',
+        'InstanceRunningCount',
+        'InstanceType',
+        'LastStateChangeReason',
+        'LaunchGroup',
+        'Market',
+        'Name',
+        'ReadyDateTime',
+        'StartDateTime',
+        'State',
+    ])
 
 
 class JobFlow(EmrObject):
-    Fields = set(['CreationDateTime',
-                  'StartDateTime',
-                  'State',
-                  'EndDateTime',
-                  'Id',
-                  'InstanceCount',
-                  'JobFlowId',
-                  'LogUri',
-                  'MasterPublicDnsName',
-                  'MasterInstanceId',
-                  'Name',
-                  'Placement',
-                  'RequestId',
-                  'Type',
-                  'Value',
-                  'AvailabilityZone',
-                  'SlaveInstanceType',
-                  'MasterInstanceType',
-                  'Ec2KeyName',
-                  'InstanceCount',
-                  'KeepJobFlowAliveWhenNoSteps',
-                  'LastStateChangeReason'])
+    Fields = set([
+        'AvailabilityZone',
+        'CreationDateTime',
+        'Ec2KeyName',
+        'EndDateTime',
+        'HadoopVersion',
+        'Id',
+        'InstanceCount',
+        'JobFlowId',
+        'KeepJobFlowAliveWhenNoSteps',
+        'LastStateChangeReason',
+        'LogUri',
+        'MasterInstanceId',
+        'MasterInstanceType',
+        'MasterPublicDnsName',
+        'Name',
+        'NormalizedInstanceHours',
+        'ReadyDateTime',
+        'RequestId',
+        'SlaveInstanceType',
+        'StartDateTime',
+        'State',
+        'TerminationProtected',
+        'Type',
+        'Value',
+    ])
 
     def __init__(self, connection=None):
         self.connection = connection
@@ -138,4 +172,3 @@
             return self.bootstrapactions
         else:
             return None
-
diff --git a/boto/emr/instance_group.py b/boto/emr/instance_group.py
new file mode 100644
index 0000000..be22951
--- /dev/null
+++ b/boto/emr/instance_group.py
@@ -0,0 +1,43 @@
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+
+class InstanceGroup(object):
+    def __init__(self, num_instances, role, type, market, name, bidprice=None):
+        self.num_instances = num_instances
+        self.role = role
+        self.type = type
+        self.market = market
+        self.name = name
+        if market == 'SPOT':
+            if not isinstance(bidprice, basestring):
+                raise ValueError('bidprice must be specified if market == SPOT')
+            self.bidprice = bidprice
+
+    def __repr__(self):
+        if self.market == 'SPOT':
+            return '%s.%s(name=%r, num_instances=%r, role=%r, type=%r, market=%r, bidprice=%r)' % (
+                self.__class__.__module__, self.__class__.__name__,
+                self.name, self.num_instances, self.role, self.type, self.market,
+                self.bidprice)
+        else:
+            return '%s.%s(name=%r, num_instances=%r, role=%r, type=%r, market=%r)' % (
+                self.__class__.__module__, self.__class__.__name__,
+                self.name, self.num_instances, self.role, self.type, self.market)
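The new `InstanceGroup` above enforces at construction time that spot groups carry a string bid price. A Python 3 sketch of that validation (`str` replaces Python 2's `basestring`; the class name is changed to mark it as an illustration):

```python
class InstanceGroupSketch(object):
    """Validation logic mirrored from boto.emr.instance_group.InstanceGroup."""
    def __init__(self, num_instances, role, type, market, name, bidprice=None):
        self.num_instances = num_instances
        self.role = role
        self.type = type
        self.market = market
        self.name = name
        if market == 'SPOT':
            # the API expects the bid as a decimal string, e.g. '0.25'
            if not isinstance(bidprice, str):
                raise ValueError('bidprice must be specified if market == SPOT')
            self.bidprice = bidprice
```

Failing fast here is a deliberate design choice: a missing bid price surfaces as a local `ValueError` rather than a rejected AddInstanceGroups request.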
diff --git a/boto/emr/step.py b/boto/emr/step.py
index a444261..15dfe88 100644
--- a/boto/emr/step.py
+++ b/boto/emr/step.py
@@ -1,4 +1,5 @@
 # Copyright (c) 2010 Spotify AB
+# Copyright (c) 2010-2011 Yelp
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -94,10 +95,11 @@
     """
     Hadoop streaming step
     """
-    def __init__(self, name, mapper, reducer=None,
+    def __init__(self, name, mapper, reducer=None, combiner=None,
                  action_on_failure='TERMINATE_JOB_FLOW',
                  cache_files=None, cache_archives=None,
-                 step_args=None, input=None, output=None):
+                 step_args=None, input=None, output=None,
+                 jar='/home/hadoop/contrib/streaming/hadoop-streaming.jar'):
         """
         A hadoop streaming elastic mapreduce step
 
@@ -107,6 +109,8 @@
         :param mapper: The mapper URI
         :type reducer: str
         :param reducer: The reducer URI
+        :type combiner: str
+        :param combiner: The combiner URI. Only works for Hadoop 0.20 and later!
         :type action_on_failure: str
         :param action_on_failure: An action, defined in the EMR docs to take on failure.
         :type cache_files: list(str)
@@ -119,15 +123,19 @@
         :param input: The input uri
         :type output: str
         :param output: The output uri
+        :type jar: str
+        :param jar: The hadoop streaming jar. This can be either a local path on the master node, or an s3:// URI.
         """
         self.name = name
         self.mapper = mapper
         self.reducer = reducer
+        self.combiner = combiner
         self.action_on_failure = action_on_failure
         self.cache_files = cache_files
         self.cache_archives = cache_archives
         self.input = input
         self.output = output
+        self._jar = jar
 
         if isinstance(step_args, basestring):
             step_args = [step_args]
@@ -135,16 +143,28 @@
         self.step_args = step_args
 
     def jar(self):
-        return '/home/hadoop/contrib/streaming/hadoop-0.18-streaming.jar'
+        return self._jar
 
     def main_class(self):
         return None
 
     def args(self):
-        args = ['-mapper', self.mapper]
+        args = []
+
+        # put extra args BEFORE -mapper and -reducer so that e.g. -libjar
+        # will work
+        if self.step_args:
+            args.extend(self.step_args)
+
+        args.extend(['-mapper', self.mapper])
+
+        if self.combiner:
+            args.extend(['-combiner', self.combiner])
 
         if self.reducer:
             args.extend(['-reducer', self.reducer])
+        else:
+            args.extend(['-jobconf', 'mapred.reduce.tasks=0'])
 
         if self.input:
             if isinstance(self.input, list):
@@ -163,17 +183,11 @@
            for cache_archive in self.cache_archives:
                 args.extend(('-cacheArchive', cache_archive))
 
-        if self.step_args:
-            args.extend(self.step_args)
-
-        if not self.reducer:
-            args.extend(['-jobconf', 'mapred.reduce.tasks=0'])
-
         return args
 
     def __repr__(self):
-        return '%s.%s(name=%r, mapper=%r, reducer=%r, action_on_failure=%r, cache_files=%r, cache_archives=%r, step_args=%r, input=%r, output=%r)' % (
+        return '%s.%s(name=%r, mapper=%r, reducer=%r, action_on_failure=%r, cache_files=%r, cache_archives=%r, step_args=%r, input=%r, output=%r, jar=%r)' % (
             self.__class__.__module__, self.__class__.__name__,
             self.name, self.mapper, self.reducer, self.action_on_failure,
             self.cache_files, self.cache_archives, self.step_args,
-            self.input, self.output)
+            self.input, self.output, self._jar)
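The reordering in `args()` above matters: extra step args must precede `-mapper` for options such as `-libjars` to take effect, and the zero-reducer jobconf is appended only when no reducer is given. A minimal Python sketch of that argument-building logic (`streaming_args` is an illustrative helper, not part of boto):

```python
def streaming_args(mapper, reducer=None, combiner=None, step_args=None):
    """Build hadoop-streaming args in the order the patch establishes:
    step_args first, then -mapper/-combiner/-reducer, with a
    zero-reducer jobconf fallback when no reducer is supplied."""
    args = []
    if step_args:
        # BEFORE -mapper/-reducer so e.g. -libjars is honored
        args.extend(step_args)
    args.extend(['-mapper', mapper])
    if combiner:
        args.extend(['-combiner', combiner])
    if reducer:
        args.extend(['-reducer', reducer])
    else:
        args.extend(['-jobconf', 'mapred.reduce.tasks=0'])
    return args
```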
diff --git a/boto/exception.py b/boto/exception.py
index 718be46..bfdb052 100644
--- a/boto/exception.py
+++ b/boto/exception.py
@@ -35,8 +35,8 @@
     General Boto Client error (error accessing AWS)
     """
 
-    def __init__(self, reason):
-        StandardError.__init__(self)
+    def __init__(self, reason, *args):
+        StandardError.__init__(self, reason, *args)
         self.reason = reason
 
     def __repr__(self):
@@ -69,8 +69,8 @@
 
 class BotoServerError(StandardError):
 
-    def __init__(self, status, reason, body=None):
-        StandardError.__init__(self)
+    def __init__(self, status, reason, body=None, *args):
+        StandardError.__init__(self, status, reason, body, *args)
         self.status = status
         self.reason = reason
         self.body = body or ''
@@ -86,11 +86,12 @@
                 h = handler.XmlHandler(self, self)
                 xml.sax.parseString(self.body, h)
             except xml.sax.SAXParseException, pe:
-                # Go ahead and clean up anything that may have
-                # managed to get into the error data so we
-                # don't get partial garbage.
-                print "Warning: failed to parse error message from AWS: %s" % pe
-                self._cleanupParsedProperties()
+                # Remove unparsable message body so we don't include garbage
+                # in exception. But first, save self.body in self.error_message
+                # because occasionally we get error messages from Eucalyptus
+                # that are just text strings that we want to preserve.
+                self.error_message = self.body
+                self.body = None
 
     def __getattr__(self, name):
         if name == 'message':
@@ -221,7 +222,7 @@
     Error when decoding an SQS message.
     """
     def __init__(self, reason, message):
-        BotoClientError.__init__(self, reason)
+        BotoClientError.__init__(self, reason, message)
         self.message = message
 
     def __repr__(self):
@@ -358,14 +359,14 @@
     """Exception raised when URI is invalid."""
 
     def __init__(self, message):
-        Exception.__init__(self)
+        Exception.__init__(self, message)
         self.message = message
 
 class InvalidAclError(Exception):
     """Exception raised when ACL XML is invalid."""
 
     def __init__(self, message):
-        Exception.__init__(self)
+        Exception.__init__(self, message)
         self.message = message
 
 class NoAuthHandlerFound(Exception):
@@ -390,11 +391,21 @@
     START_OVER = 'START_OVER'
 
     # WAIT_BEFORE_RETRY means the resumable transfer failed but that it can
-    # be retried after a time delay.
+    # be retried after a time delay within the current process.
     WAIT_BEFORE_RETRY = 'WAIT_BEFORE_RETRY'
 
-    # ABORT means the resumable transfer failed and that delaying/retrying
-    # within the current process will not help.
+    # ABORT_CUR_PROCESS means the resumable transfer failed and that
+    # delaying/retrying within the current process will not help. If
+    # resumable transfer included a state tracker file the upload can be
+    # retried again later, in another process (e.g., a later run of gsutil).
+    ABORT_CUR_PROCESS = 'ABORT_CUR_PROCESS'
+
+    # ABORT means the resumable transfer failed in a way that it does not
+    # make sense to continue in the current process, and further that the 
+    # current tracker ID should not be preserved (in a tracker file if one
+    # was specified at resumable upload start time). If the user tries again
+    # later (e.g., a separate run of gsutil) it will get a new resumable
+    # upload ID.
     ABORT = 'ABORT'
 
 class ResumableUploadException(Exception):
@@ -405,7 +416,7 @@
     """
 
     def __init__(self, message, disposition):
-        Exception.__init__(self)
+        Exception.__init__(self, message, disposition)
         self.message = message
         self.disposition = disposition
 
@@ -421,7 +432,7 @@
     """
 
     def __init__(self, message, disposition):
-        Exception.__init__(self)
+        Exception.__init__(self, message, disposition)
         self.message = message
         self.disposition = disposition
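Forwarding the constructor arguments to the base class, as these changes do for StandardError, is more than cosmetic: it populates `.args`, which is what makes the exceptions picklable and gives `str()` something to print. A Python 3 sketch of the pattern (`ClientError` is an illustrative stand-in, not boto's class; Python 2's StandardError plays the role of Exception here):

```python
import pickle

class ClientError(Exception):
    """Illustrative stand-in for the fixed BotoClientError constructor."""
    def __init__(self, reason, *args):
        # Forwarding to the base class populates self.args, so repr(),
        # str() and pickle round-trips all work. Calling the base with
        # no arguments (the old behavior) breaks unpickling, because
        # pickle reconstructs the instance via Class(*self.args).
        super().__init__(reason, *args)
        self.reason = reason
```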
 
diff --git a/boto/file/bucket.py b/boto/file/bucket.py
index 7a1636b..8aec677 100644
--- a/boto/file/bucket.py
+++ b/boto/file/bucket.py
@@ -1,4 +1,5 @@
 # Copyright 2010 Google Inc.
+# Copyright (c) 2011, Nexenta Systems Inc.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -49,7 +50,7 @@
 
         :type version_id: string
         :param version_id: Unused in this subclass.
-        
+
         :type mfa_token: tuple or list of strings
         :param mfa_token: Unused in this subclass.
         """
@@ -67,7 +68,8 @@
         key = Key(self.name, self.contained_key)
         return SimpleResultSet([key])
 
-    def get_key(self, key_name, headers=None, version_id=None):
+    def get_key(self, key_name, headers=None, version_id=None,
+                                            key_type=Key.KEY_REGULAR_FILE):
         """
         Check to see if a particular key exists within the bucket.
         Returns: An instance of a Key object or None
@@ -78,13 +80,19 @@
         :type version_id: string
         :param version_id: Unused in this subclass.
 
+        :type key_type: integer
+        :param key_type: Type of the Key - Regular File or input/output Stream
+
         :rtype: :class:`boto.file.key.Key`
         :returns: A Key object from this bucket.
         """
-        fp = open(key_name, 'rb')
-        return Key(self.name, key_name, fp)
+        if key_name == '-':
+            return Key(self.name, '-', key_type=Key.KEY_STREAM_READABLE)
+        else:
+            fp = open(key_name, 'rb')
+            return Key(self.name, key_name, fp)
 
-    def new_key(self, key_name=None):
+    def new_key(self, key_name=None, key_type=Key.KEY_REGULAR_FILE):
         """
         Creates a new key
 
@@ -94,8 +102,11 @@
         :rtype: :class:`boto.file.key.Key`
         :returns: An instance of the newly created key object
         """
-        dir_name = os.path.dirname(key_name)
-        if dir_name and not os.path.exists(dir_name):
-            os.makedirs(dir_name)
-        fp = open(key_name, 'wb')
-        return Key(self.name, key_name, fp)
+        if key_name == '-':
+            return Key(self.name, '-', key_type=Key.KEY_STREAM_WRITABLE)
+        else:
+            dir_name = os.path.dirname(key_name)
+            if dir_name and not os.path.exists(dir_name):
+                os.makedirs(dir_name)
+            fp = open(key_name, 'wb')
+            return Key(self.name, key_name, fp)
diff --git a/boto/file/key.py b/boto/file/key.py
index af801a5..6f66eda 100755
--- a/boto/file/key.py
+++ b/boto/file/key.py
@@ -1,4 +1,5 @@
 # Copyright 2010 Google Inc.
+# Copyright (c) 2011, Nexenta Systems Inc.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -22,14 +23,31 @@
 # File representation of key, for use with "file://" URIs.
 
 import os, shutil, StringIO
+import sys
+
+from boto.exception import BotoClientError
 
 class Key(object):
 
-    def __init__(self, bucket, name, fp=None):
+    KEY_STREAM_READABLE = 0x01
+    KEY_STREAM_WRITABLE = 0x02
+    KEY_STREAM          = (KEY_STREAM_READABLE | KEY_STREAM_WRITABLE)
+    KEY_REGULAR_FILE    = 0x00
+
+    def __init__(self, bucket, name, fp=None, key_type=KEY_REGULAR_FILE):
         self.bucket = bucket
         self.full_path = name
-        self.name = name
-        self.fp = fp
+        if name == '-':
+            self.name = None
+        else:
+            self.name = name
+        self.key_type = key_type
+        if key_type == self.KEY_STREAM_READABLE:
+            self.fp = sys.stdin
+            self.full_path = '<STDIN>'
+        elif key_type == self.KEY_STREAM_WRITABLE:
+            self.fp = sys.stdout
+            self.full_path = '<STDOUT>'
+        else:
+            self.fp = fp
 
     def __str__(self):
         return 'file://' + self.full_path
@@ -50,7 +68,12 @@
         :type cb: int
         :param num_cb: ignored in this subclass.
         """
-        key_file = open(self.full_path, 'rb')
+        if self.key_type & self.KEY_STREAM_WRITABLE:
+            raise BotoClientError('Stream is not readable')
+        elif self.key_type & self.KEY_STREAM_READABLE:
+            key_file = self.fp
+        else:
+            key_file = open(self.full_path, 'rb')
         shutil.copyfileobj(key_file, fp)
 
     def set_contents_from_file(self, fp, headers=None, replace=True, cb=None,
@@ -88,9 +111,14 @@
                    This is the same format returned by the compute_md5 method.
         :param md5: ignored in this subclass.
         """
-        if not replace and os.path.exists(self.full_path):
-            return
-        key_file = open(self.full_path, 'wb')
+        if self.key_type & self.KEY_STREAM_READABLE:
+            raise BotoClientError('Stream is not writable')
+        elif self.key_type & self.KEY_STREAM_WRITABLE:
+            key_file = self.fp
+        else:
+            if not replace and os.path.exists(self.full_path):
+                return
+            key_file = open(self.full_path, 'wb')
         shutil.copyfileobj(fp, key_file)
         key_file.close()
 
@@ -121,3 +149,6 @@
         fp = StringIO.StringIO()
         self.get_contents_to_file(fp)
         return fp.getvalue()
+
+    def is_stream(self):
+        return (self.key_type & self.KEY_STREAM)
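The `'-'` key name and the bit flags combine so that `file://` URIs can read from stdin and write to stdout. A standalone sketch of the flag logic (constant names mirror the patch; `resolve()` is a hypothetical helper):

```python
import sys

KEY_STREAM_READABLE = 0x01
KEY_STREAM_WRITABLE = 0x02
KEY_STREAM = KEY_STREAM_READABLE | KEY_STREAM_WRITABLE
KEY_REGULAR_FILE = 0x00

def resolve(name, key_type=KEY_REGULAR_FILE):
    # A key named '-' is bound to a process stream, as in the patch;
    # regular keys keep their file path and open lazily.
    if key_type == KEY_STREAM_READABLE:
        return sys.stdin, '<STDIN>'
    if key_type == KEY_STREAM_WRITABLE:
        return sys.stdout, '<STDOUT>'
    return None, name

def is_stream(key_type):
    # Either stream bit marks the key as non-seekable stream I/O.
    return bool(key_type & KEY_STREAM)
```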
diff --git a/boto/fps/connection.py b/boto/fps/connection.py
index 3d7812e..24b04d9 100644
--- a/boto/fps/connection.py
+++ b/boto/fps/connection.py
@@ -42,7 +42,8 @@
                  proxy_user=None, proxy_pass=None,
                  host='fps.sandbox.amazonaws.com', debug=0,
                  https_connection_factory=None, path="/"):
-        AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key,
+        AWSQueryConnection.__init__(self, aws_access_key_id,
+                                    aws_secret_access_key,
                                     is_secure, port, proxy, proxy_port,
                                     proxy_user, proxy_pass, host, debug,
                                     https_connection_factory, path)
@@ -50,7 +51,9 @@
     def _required_auth_capability(self):
         return ['fps']
 
-    def install_payment_instruction(self, instruction, token_type="Unrestricted", transaction_id=None):
+    def install_payment_instruction(self, instruction,
+                                    token_type="Unrestricted",
+                                    transaction_id=None):
         """
         InstallPaymentInstruction
         instruction: The PaymentInstruction to send, for example: 
@@ -70,13 +73,16 @@
         response = self.make_request("InstallPaymentInstruction", params)
         return response
     
-    def install_caller_instruction(self, token_type="Unrestricted", transaction_id=None):
+    def install_caller_instruction(self, token_type="Unrestricted",
+                                   transaction_id=None):
         """
         Set us up as a caller
         This will install a new caller_token into the FPS section.
         This should really only be called to regenerate the caller token.
         """
-        response = self.install_payment_instruction("MyRole=='Caller';", token_type=token_type, transaction_id=transaction_id)
+        response = self.install_payment_instruction("MyRole=='Caller';",
+                                                    token_type=token_type,
+                                                    transaction_id=transaction_id)
         body = response.read()
         if(response.status == 200):
             rs = ResultSet()
@@ -84,20 +90,25 @@
             xml.sax.parseString(body, h)
             caller_token = rs.TokenId
             try:
-                boto.config.save_system_option("FPS", "caller_token", caller_token)
+                boto.config.save_system_option("FPS", "caller_token",
+                                               caller_token)
             except(IOError):
-                boto.config.save_user_option("FPS", "caller_token", caller_token)
+                boto.config.save_user_option("FPS", "caller_token",
+                                             caller_token)
             return caller_token
         else:
             raise FPSResponseError(response.status, response.reason, body)
 
-    def install_recipient_instruction(self, token_type="Unrestricted", transaction_id=None):
+    def install_recipient_instruction(self, token_type="Unrestricted",
+                                      transaction_id=None):
         """
         Set us up as a Recipient
         This will install a new caller_token into the FPS section.
         This should really only be called to regenerate the recipient token.
         """
-        response = self.install_payment_instruction("MyRole=='Recipient';", token_type=token_type, transaction_id=transaction_id)
+        response = self.install_payment_instruction("MyRole=='Recipient';",
+                                                    token_type=token_type,
+                                                    transaction_id=transaction_id)
         body = response.read()
         if(response.status == 200):
             rs = ResultSet()
@@ -105,15 +116,65 @@
             xml.sax.parseString(body, h)
             recipient_token = rs.TokenId
             try:
-                boto.config.save_system_option("FPS", "recipient_token", recipient_token)
+                boto.config.save_system_option("FPS", "recipient_token",
+                                               recipient_token)
             except(IOError):
-                boto.config.save_user_option("FPS", "recipient_token", recipient_token)
+                boto.config.save_user_option("FPS", "recipient_token",
+                                             recipient_token)
 
             return recipient_token
         else:
             raise FPSResponseError(response.status, response.reason, body)
 
-    def make_url(self, returnURL, paymentReason, pipelineName, transactionAmount, **params):
+    def make_marketplace_registration_url(self, returnURL, pipelineName,
+                                          maxFixedFee=0.0, maxVariableFee=0.0,
+                                          recipientPaysFee=True, **params):  
+        """
+        Generate the URL with the signature required for signing up a recipient
+        """
+        # use the sandbox authorization endpoint if we're using the
+        #  sandbox for API calls.
+        endpoint_host = 'authorize.payments.amazon.com'
+        if 'sandbox' in self.host:
+            endpoint_host = 'authorize.payments-sandbox.amazon.com'
+        base = "/cobranded-ui/actions/start"
+
+        params['callerKey'] = str(self.aws_access_key_id)
+        params['returnURL'] = str(returnURL)
+        params['pipelineName'] = str(pipelineName)
+        params['maxFixedFee'] = str(maxFixedFee)
+        params['maxVariableFee'] = str(maxVariableFee)
+        params['recipientPaysFee'] = str(recipientPaysFee)
+        params["signatureMethod"] = 'HmacSHA256'
+        params["signatureVersion"] = '2'
+
+        if(not params.has_key('callerReference')):
+            params['callerReference'] = str(uuid.uuid4())
+
+        parts = ''
+        for k in sorted(params.keys()):
+            parts += "&%s=%s" % (k, urllib.quote(params[k], '~'))
+
+        canonical = '\n'.join(['GET',
+                               str(endpoint_host).lower(),
+                               base,
+                               parts[1:]])
+
+        signature = self._auth_handler.sign_string(canonical)
+        params["signature"] = signature
+
+        urlsuffix = ''
+        for k in sorted(params.keys()):
+            urlsuffix += "&%s=%s" % (k, urllib.quote(params[k], '~'))
+        urlsuffix = urlsuffix[1:] # strip the first &
+        
+        fmt = "https://%(endpoint_host)s%(base)s?%(urlsuffix)s"
+        final = fmt % vars()
+        return final
+
+
+    def make_url(self, returnURL, paymentReason, pipelineName,
+                 transactionAmount, **params):
         """
         Generate the URL with the signature required for a transaction
         """
@@ -124,15 +185,14 @@
             endpoint_host = 'authorize.payments-sandbox.amazon.com'
         base = "/cobranded-ui/actions/start"
 
-
         params['callerKey'] = str(self.aws_access_key_id)
         params['returnURL'] = str(returnURL)
         params['paymentReason'] = str(paymentReason)
         params['pipelineName'] = pipelineName
+        params['transactionAmount'] = transactionAmount
         params["signatureMethod"] = 'HmacSHA256'
         params["signatureVersion"] = '2'
-        params["transactionAmount"] = transactionAmount
-
+        
         if(not params.has_key('callerReference')):
             params['callerReference'] = str(uuid.uuid4())
 
@@ -161,8 +221,9 @@
             recipientTokenId=None, callerTokenId=None,
             chargeFeeTo="Recipient",
             callerReference=None, senderReference=None, recipientReference=None,
-            senderDescription=None, recipientDescription=None, callerDescription=None,
-            metadata=None, transactionDate=None, reserve=False):
+            senderDescription=None, recipientDescription=None,
+            callerDescription=None, metadata=None,
+            transactionDate=None, reserve=False):
         """
         Make a payment transaction. You must specify the amount.
         This can also perform a Reserve request if 'reserve' is set to True.
@@ -269,9 +330,11 @@
         else:
             raise FPSResponseError(response.status, response.reason, body)
     
-    def refund(self, callerReference, transactionId, refundAmount=None, callerDescription=None):
+    def refund(self, callerReference, transactionId, refundAmount=None,
+               callerDescription=None):
         """
-        Refund a transaction. This refunds the full amount by default unless 'refundAmount' is specified.
+        Refund a transaction. This refunds the full amount by default
+        unless 'refundAmount' is specified.
         """
         params = {}
         params['CallerReference'] = callerReference
@@ -310,10 +373,10 @@
     
     def get_token_by_caller_reference(self, callerReference):
         """
-        Returns details about the token specified by 'callerReference'.
+        Returns details about the token specified by 'CallerReference'.
         """
         params ={}
-        params['callerReference'] = callerReference
+        params['CallerReference'] = callerReference
         
         response = self.make_request("GetTokenByCaller", params)
         body = response.read()
@@ -324,9 +387,10 @@
             return rs
         else:
             raise FPSResponseError(response.status, response.reason, body)
+        
     def get_token_by_caller_token(self, tokenId):
         """
-        Returns details about the token specified by 'callerReference'.
+        Returns details about the token specified by 'TokenId'.
         """
         params ={}
         params['TokenId'] = tokenId
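`make_marketplace_registration_url` signs the canonical string `GET\n<lowercased host>\n<path>\n<sorted query params>` with HMAC-SHA256. A self-contained Python 3 sketch of that signing scheme (`sign_request` is illustrative; the patch's code is Python 2 and delegates the HMAC step to boto's auth handler):

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(secret_key, host, path, params):
    # Query parameters are sorted by key and quoted with '~' kept safe,
    # the same shape the patch builds in its 'parts' accumulator.
    parts = '&'.join('%s=%s' % (k, urllib.parse.quote(params[k], '~'))
                     for k in sorted(params))
    canonical = '\n'.join(['GET', host.lower(), path, parts])
    digest = hmac.new(secret_key.encode(), canonical.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()
```

Because the host is lowercased and the parameters sorted, the signature is independent of how the caller happened to order or case them.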
diff --git a/boto/gs/acl.py b/boto/gs/acl.py
index 33aaadf..0846f01 100755
--- a/boto/gs/acl.py
+++ b/boto/gs/acl.py
@@ -42,7 +42,7 @@
 USER_BY_ID = 'UserById'
 
 
-CannedACLStrings = ['private', 'public-read',
+CannedACLStrings = ['private', 'public-read', 'project-private',
                     'public-read-write', 'authenticated-read',
                     'bucket-owner-read', 'bucket-owner-full-control']
 
@@ -54,6 +54,10 @@
         self.parent = parent
         self.entries = []
 
+    @property
+    def acl(self):
+        return self
+
     def __repr__(self):
         # Owner is optional in GS ACLs.
         if hasattr(self, 'owner'):
diff --git a/boto/gs/bucket.py b/boto/gs/bucket.py
index b4b80e8..f49533c 100644
--- a/boto/gs/bucket.py
+++ b/boto/gs/bucket.py
@@ -22,7 +22,7 @@
 import boto
 from boto import handler
 from boto.exception import InvalidAclError
-from boto.gs.acl import ACL
+from boto.gs.acl import ACL, CannedACLStrings
 from boto.gs.acl import SupportedPermissions as GSPermissions
 from boto.gs.key import Key as GSKey
 from boto.s3.acl import Policy
@@ -55,6 +55,25 @@
             raise self.connection.provider.storage_response_error(
                 response.status, response.reason, body)
 
+    def set_canned_acl(self, acl_str, key_name='', headers=None,
+                       version_id=None):
+        assert acl_str in CannedACLStrings
+
+        if headers:
+            headers[self.connection.provider.acl_header] = acl_str
+        else:
+            headers={self.connection.provider.acl_header: acl_str}
+
+        query_args='acl'
+        if version_id:
+            query_args += '&versionId=%s' % version_id
+        response = self.connection.make_request('PUT', self.name, key_name,
+                headers=headers, query_args=query_args)
+        body = response.read()
+        if response.status != 200:
+            raise self.connection.provider.storage_response_error(
+                response.status, response.reason, body)
+
     # Method with same signature as boto.s3.bucket.Bucket.add_email_grant(),
     # to allow polymorphic treatment at application layer.
     def add_email_grant(self, permission, email_address,
@@ -171,3 +190,23 @@
     def list_grants(self, headers=None):
         acl = self.get_acl(headers=headers)
         return acl.entries
+
+    def disable_logging(self, headers=None):
+        xml_str = '<?xml version="1.0" encoding="UTF-8"?><Logging/>'
+        self.set_subresource('logging', xml_str, headers=headers)
+
+    def enable_logging(self, target_bucket, target_prefix=None, headers=None,
+                       canned_acl=None):
+        if isinstance(target_bucket, Bucket):
+            target_bucket = target_bucket.name
+        xml_str = '<?xml version="1.0" encoding="UTF-8"?><Logging>'
+        xml_str = (xml_str + '<LogBucket>%s</LogBucket>' % target_bucket)
+        if target_prefix:
+            xml_str = (xml_str +
+                       '<LogObjectPrefix>%s</LogObjectPrefix>' % target_prefix)
+        if canned_acl:
+            xml_str = (xml_str +
+                       '<PredefinedAcl>%s</PredefinedAcl>' % canned_acl)
+        xml_str = xml_str + '</Logging>'
+
+        self.set_subresource('logging', xml_str, headers=headers)
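`enable_logging`/`disable_logging` just PUT a small XML document to the bucket's `logging` subresource: an empty `<Logging/>` disables it, otherwise a `LogBucket` element plus optional prefix and ACL. A sketch of the payload builder (`logging_xml` is a hypothetical helper distilled from the two methods above):

```python
def logging_xml(target_bucket, target_prefix=None, canned_acl=None):
    # None disables logging, mirroring disable_logging's empty element.
    if target_bucket is None:
        return '<?xml version="1.0" encoding="UTF-8"?><Logging/>'
    xml = '<?xml version="1.0" encoding="UTF-8"?><Logging>'
    xml += '<LogBucket>%s</LogBucket>' % target_bucket
    if target_prefix:
        xml += '<LogObjectPrefix>%s</LogObjectPrefix>' % target_prefix
    if canned_acl:
        xml += '<PredefinedAcl>%s</PredefinedAcl>' % canned_acl
    return xml + '</Logging>'
```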
diff --git a/boto/gs/connection.py b/boto/gs/connection.py
index ec81f32..cf79ed7 100755
--- a/boto/gs/connection.py
+++ b/boto/gs/connection.py
@@ -19,9 +19,14 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+from boto.gs.bucket import Bucket            
 from boto.s3.connection import S3Connection
 from boto.s3.connection import SubdomainCallingFormat
-from boto.gs.bucket import Bucket            
+from boto.s3.connection import check_lowercase_bucketname
+
+class Location:
+    DEFAULT = '' # US
+    EU = 'EU'
 
 class GSConnection(S3Connection):
 
@@ -37,3 +42,51 @@
                  is_secure, port, proxy, proxy_port, proxy_user, proxy_pass,
                  host, debug, https_connection_factory, calling_format, path,
                  "google", Bucket)
+
+    def create_bucket(self, bucket_name, headers=None,
+                      location=Location.DEFAULT, policy=None):
+        """
+        Creates a new bucket. By default it's located in the USA. You can
+        pass Location.EU to create a European bucket. You can also pass
+        a LocationConstraint, which (in addition to locating the bucket
+        in the specified location) informs Google that Google services
+        must not copy data out of that location.
+
+        :type bucket_name: string
+        :param bucket_name: The name of the new bucket
+        
+        :type headers: dict
+        :param headers: Additional headers to pass along with the request to GS.
+
+        :type location: :class:`boto.gs.connection.Location`
+        :param location: The location of the new bucket
+
+        :type policy: :class:`boto.gs.acl.CannedACLStrings`
+        :param policy: A canned ACL policy that will be applied to the new bucket.
+             
+        """
+        check_lowercase_bucketname(bucket_name)
+
+        if policy:
+            if headers:
+                headers[self.provider.acl_header] = policy
+            else:
+                headers = {self.provider.acl_header : policy}
+        if not location:
+            data = ''
+        else:
+            data = ('<CreateBucketConfiguration>'
+                        '<LocationConstraint>%s</LocationConstraint>'
+                    '</CreateBucketConfiguration>' % location)
+        response = self.make_request('PUT', bucket_name, headers=headers,
+                data=data)
+        body = response.read()
+        if response.status == 409:
+            raise self.provider.storage_create_error(
+                response.status, response.reason, body)
+        if response.status == 200:
+            return self.bucket_class(self, bucket_name)
+        else:
+            raise self.provider.storage_response_error(
+                response.status, response.reason, body)
+
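`create_bucket`'s request body is empty for the default US location and a one-element `CreateBucketConfiguration` document otherwise. Sketch of that choice (`bucket_payload` is an illustrative helper):

```python
def bucket_payload(location=''):
    # Empty location (US default) sends an empty body, exactly as the
    # patch does; otherwise the LocationConstraint document is built.
    if not location:
        return ''
    return ('<CreateBucketConfiguration>'
            '<LocationConstraint>%s</LocationConstraint>'
            '</CreateBucketConfiguration>' % location)
```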
diff --git a/boto/gs/key.py b/boto/gs/key.py
index 608a9a5..de6e6f4 100644
--- a/boto/gs/key.py
+++ b/boto/gs/key.py
@@ -14,11 +14,12 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+import StringIO
 from boto.s3.key import Key as S3Key
 
 class Key(S3Key):
@@ -107,7 +108,7 @@
         acl.add_group_grant(permission, group_id)
         self.set_acl(acl)
 
-    def set_contents_from_file(self, fp, headers={}, replace=True,
+    def set_contents_from_file(self, fp, headers=None, replace=True,
                                cb=None, num_cb=10, policy=None, md5=None,
                                res_upload_handler=None):
         """
@@ -163,8 +164,7 @@
         just overriding/sharing code the way it currently works).
         """
         provider = self.bucket.connection.provider
-        if headers is None:
-            headers = {}
+        headers = headers or {}
         if policy:
             headers[provider.acl_header] = policy
         if hasattr(fp, 'name'):
@@ -245,3 +245,56 @@
         self.set_contents_from_file(fp, headers, replace, cb, num_cb,
                                     policy, md5, res_upload_handler)
         fp.close()
+
+    def set_contents_from_string(self, s, headers=None, replace=True,
+                                 cb=None, num_cb=10, policy=None, md5=None):
+        """
+        Store an object in S3 using the name of the Key object as the
+        key in S3 and the string 's' as the contents.
+        See set_contents_from_file method for details about the
+        parameters.
+
+        :type headers: dict
+        :param headers: Additional headers to pass along with the
+                        request to AWS.
+
+        :type replace: bool
+        :param replace: If True, replaces the contents of the file if
+                        it already exists.
+
+        :type cb: function
+        :param cb: a callback function that will be called to report
+                   progress on the upload.  The callback should accept
+                   two integer parameters, the first representing the
+                   number of bytes that have been successfully
+                   transmitted to S3 and the second representing the
+                   size of the to be transmitted object.
+
+        :type cb: int
+        :param num_cb: (optional) If a callback is specified with
+                       the cb parameter this parameter determines the
+                       granularity of the callback by defining
+                       the maximum number of times the callback will
+                       be called during the file transfer.
+
+        :type policy: :class:`boto.s3.acl.CannedACLStrings`
+        :param policy: A canned ACL policy that will be applied to the
+                       new key in S3.
+
+        :type md5: A tuple containing the hexdigest version of the MD5
+                   checksum of the file as the first element and the
+                   Base64-encoded version of the plain checksum as the
+                   second element.  This is the same format returned by
+                   the compute_md5 method.
+        :param md5: If you need to compute the MD5 for any reason prior
+                    to upload, it's silly to have to do it twice so this
+                    param, if present, will be used as the MD5 values
+                    of the file.  Otherwise, the checksum will be computed.
+        """
+        if isinstance(s, unicode):
+            s = s.encode("utf-8")
+        fp = StringIO.StringIO(s)
+        r = self.set_contents_from_file(fp, headers, replace, cb, num_cb,
+                                        policy, md5)
+        fp.close()
+        return r
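`set_contents_from_string` encodes unicode to UTF-8 before wrapping it in a file-like object for `set_contents_from_file`; the Python 3 equivalent of that dance uses `io.BytesIO` (`to_bytes_stream` is an illustrative helper, not boto API):

```python
import io

def to_bytes_stream(s):
    # Text is encoded to UTF-8 first, as set_contents_from_string does
    # for unicode input; bytes pass through untouched.
    if isinstance(s, str):
        s = s.encode('utf-8')
    return io.BytesIO(s)
```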
diff --git a/boto/gs/resumable_upload_handler.py b/boto/gs/resumable_upload_handler.py
index e8d5b03..a60d91d 100644
--- a/boto/gs/resumable_upload_handler.py
+++ b/boto/gs/resumable_upload_handler.py
@@ -23,6 +23,7 @@
 import errno
 import httplib
 import os
+import random
 import re
 import socket
 import time
@@ -35,13 +36,14 @@
 from boto.exception import ResumableUploadException
 
 """
-Handler for Google Storage resumable uploads. See
+Handler for Google Cloud Storage resumable uploads. See
 http://code.google.com/apis/storage/docs/developer-guide.html#resumable
 for details.
 
 Resumable uploads will retry failed uploads, resuming at the byte
 count completed by the last upload attempt. If too many retries happen with
-no progress (per configurable num_retries param), the upload will be aborted.
+no progress (per configurable num_retries param), the upload will be
+aborted in the current process.
 
 The caller can optionally specify a tracker_file_name param in the
 ResumableUploadHandler constructor. If you do this, that file will
@@ -169,14 +171,13 @@
 
     def _query_server_state(self, conn, file_length):
         """
-        Queries server to find out what bytes it currently has.
+        Queries the server to find out the state of the given upload.
 
         Note that this method really just makes special case use of the
         fact that the upload server always returns the current start/end
         state whenever a PUT doesn't complete.
 
-        Returns (server_start, server_end), where the values are inclusive.
-        For example, (0, 2) would mean that the server has bytes 0, 1, *and* 2.
+        Returns HTTP response from sending request.
 
         Raises ResumableUploadException if problem querying server.
         """
@@ -186,11 +187,22 @@
         put_headers['Content-Range'] = (
             self._build_content_range_header('*', file_length))
         put_headers['Content-Length'] = '0'
-        resp = AWSAuthConnection.make_request(conn, 'PUT',
+        return AWSAuthConnection.make_request(conn, 'PUT',
                                               path=self.tracker_uri_path,
                                               auth_path=self.tracker_uri_path,
                                               headers=put_headers,
                                               host=self.tracker_uri_host)
+
+    def _query_server_pos(self, conn, file_length):
+        """
+        Queries the server to find out what bytes it currently has.
+
+        Returns (server_start, server_end), where the values are inclusive.
+        For example, (0, 2) would mean that the server has bytes 0, 1, *and* 2.
+
+        Raises ResumableUploadException if problem querying server.
+        """
+        resp = self._query_server_state(conn, file_length)
         if resp.status == 200:
             return (0, file_length)  # Completed upload.
         if resp.status != 308:
@@ -260,11 +272,22 @@
             'POST', key.bucket.name, key.name, post_headers)
         # Get tracker URI from response 'Location' header.
         body = resp.read()
-        # Check for '201 Created' response code.
-        if resp.status != 201:
+
+        # Check for various status conditions.
+        if resp.status in [500, 503]:
+            # Retry status 500 and 503 errors after a delay.
             raise ResumableUploadException(
-                'Got status %d from attempt to start resumable upload' %
-                resp.status, ResumableTransferDisposition.WAIT_BEFORE_RETRY)
+                'Got status %d from attempt to start resumable upload. '
+                'Will wait/retry' % resp.status,
+                ResumableTransferDisposition.WAIT_BEFORE_RETRY)
+        elif resp.status != 200 and resp.status != 201:
+            raise ResumableUploadException(
+                'Got status %d from attempt to start resumable upload. '
+                'Aborting' % resp.status,
+                ResumableTransferDisposition.ABORT)
+
+        # Else we got 200 or 201 response code, indicating the resumable
+        # upload was created.
         tracker_uri = resp.getheader('Location')
         if not tracker_uri:
             raise ResumableUploadException(
@@ -330,32 +353,29 @@
         if cb:
             cb(total_bytes_uploaded, file_length)
         if total_bytes_uploaded != file_length:
-            raise ResumableUploadException('File changed during upload: EOF at '
-                                           '%d bytes of %d byte file.' %
-                                           (total_bytes_uploaded, file_length),
-                                           ResumableTransferDisposition.ABORT)
+            # Abort (and delete the tracker file) so if the user retries
+            # they'll start a new resumable upload rather than potentially
+            # attempting to pick back up later where we left off.
+            raise ResumableUploadException(
+                'File changed during upload: EOF at %d bytes of %d byte file.' %
+                (total_bytes_uploaded, file_length),
+                ResumableTransferDisposition.ABORT)
         resp = http_conn.getresponse()
         body = resp.read()
         # Restore http connection debug level.
         http_conn.set_debuglevel(conn.debug)
 
-        additional_note = ''
         if resp.status == 200:
             return resp.getheader('etag')  # Success
-        # Retry status 503 errors after a delay.
-        elif resp.status == 503:
+        # Retry timeout (408) and status 500 and 503 errors after a delay.
+        elif resp.status in [408, 500, 503]:
             disposition = ResumableTransferDisposition.WAIT_BEFORE_RETRY
-        elif resp.status == 500:
-            disposition = ResumableTransferDisposition.ABORT
-            additional_note = ('This can happen if you attempt to upload a '
-                               'different size file on a already partially '
-                               'uploaded resumable upload')
         else:
+            # Catch all for any other error codes.
             disposition = ResumableTransferDisposition.ABORT
         raise ResumableUploadException('Got response code %d while attempting '
-                                       'upload (%s)%s' %
-                                       (resp.status, resp.reason,
-                                        additional_note), disposition)
+                                       'upload (%s)' %
+                                       (resp.status, resp.reason), disposition)
 
     def _attempt_resumable_upload(self, key, fp, file_length, headers, cb,
                                   num_cb):
@@ -372,8 +392,9 @@
             # Try to resume existing resumable upload.
             try:
                 (server_start, server_end) = (
-                    self._query_server_state(conn, file_length))
+                    self._query_server_pos(conn, file_length))
                 self.server_has_bytes = server_start
                 if conn.debug >= 1:
                     print 'Resuming transfer.'
             except ResumableUploadException, e:
@@ -390,8 +411,16 @@
             self.upload_start_point = server_end
 
         if server_end == file_length:
-            return #  Done.
-        total_bytes_uploaded = server_end + 1
+            # Boundary condition: complete file was already uploaded (e.g.,
+            # user interrupted a previous upload attempt after the upload
+            # completed but before the gsutil tracker file was deleted). Set
+            # total_bytes_uploaded to server_end so we'll attempt to upload
+            # no more bytes but will still make final HTTP request and get
+            # back the response (which contains the etag we need to compare
+            # at the end).
+            total_bytes_uploaded = server_end
+        else:
+            total_bytes_uploaded = server_end + 1
         fp.seek(total_bytes_uploaded)
         conn = key.bucket.connection
 
@@ -409,6 +438,15 @@
         try:
             return self._upload_file_bytes(conn, http_conn, fp, file_length,
                                            total_bytes_uploaded, cb, num_cb)
+        except (ResumableUploadException, socket.error):
+            resp = self._query_server_state(conn, file_length)
+            if resp.status == 400:
+                raise ResumableUploadException('Got 400 response from server '
+                    'state query after failed resumable upload attempt. This '
+                    'can happen if the file size changed between upload '
+                    'attempts', ResumableTransferDisposition.ABORT)
+            else:
+                raise
         finally:
             http_conn.close()
 
@@ -494,11 +532,28 @@
             except self.RETRYABLE_EXCEPTIONS, e:
                 if debug >= 1:
                     print('Caught exception (%s)' % e.__repr__())
+                if isinstance(e, IOError) and e.errno == errno.EPIPE:
+                    # Broken pipe error causes httplib to immediately
+                    # close the socket (http://bugs.python.org/issue5542),
+                    # so we need to close the connection before we resume
+                    # the upload (which will cause a new connection to be
+                    # opened the next time an HTTP request is sent).
+                    key.bucket.connection.connection.close()
             except ResumableUploadException, e:
-                if e.disposition == ResumableTransferDisposition.ABORT:
+                if (e.disposition ==
+                    ResumableTransferDisposition.ABORT_CUR_PROCESS):
                     if debug >= 1:
                         print('Caught non-retryable ResumableUploadException '
-                              '(%s)' % e.message)
+                              '(%s); aborting but retaining tracker file' %
+                              e.message)
+                    raise
+                elif (e.disposition ==
+                    ResumableTransferDisposition.ABORT):
+                    if debug >= 1:
+                        print('Caught non-retryable ResumableUploadException '
+                              '(%s); aborting and removing tracker file' %
+                              e.message)
+                    self._remove_tracker_file()
                     raise
                 else:
                     if debug >= 1:
@@ -516,11 +571,12 @@
                 raise ResumableUploadException(
                     'Too many resumable upload attempts failed without '
                     'progress. You might try this upload again later',
-                    ResumableTransferDisposition.ABORT)
+                    ResumableTransferDisposition.ABORT_CUR_PROCESS)
 
-            sleep_time_secs = 2**progress_less_iterations
+            # Use binary exponential backoff to desynchronize client requests
+            sleep_time_secs = random.random() * (2**progress_less_iterations)
             if debug >= 1:
                 print ('Got retryable failure (%d progress-less in a row).\n'
-                       'Sleeping %d seconds before re-trying' %
+                       'Sleeping %3.1f seconds before re-trying' %
                        (progress_less_iterations, sleep_time_secs))
             time.sleep(sleep_time_secs)
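The hunk above replaces a fixed `2**n` delay with a randomized ("full jitter") delay, so many clients retrying at the same moment don't hammer the server in lockstep. The retry delay boils down to this stand-alone sketch (the function name is illustrative, not part of boto):

```python
import random

def backoff_delay(progress_less_iterations):
    """Full-jitter binary exponential backoff: a uniform random delay
    in [0, 2**n) seconds after n progress-less retries, so concurrent
    clients desynchronize their retry attempts."""
    return random.random() * (2 ** progress_less_iterations)

# The retry window doubles with each progress-less failure.
for n in range(1, 6):
    d = backoff_delay(n)
    assert 0.0 <= d < 2 ** n
```

Because the delay is drawn uniformly from the whole window rather than fixed at its upper bound, the expected wait also grows more gently (by a factor of two in the window but half that in expectation).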
diff --git a/boto/https_connection.py b/boto/https_connection.py
new file mode 100644
index 0000000..d7a3f3a
--- /dev/null
+++ b/boto/https_connection.py
@@ -0,0 +1,124 @@
+# Copyright 2007,2011 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# This file is derived from
+# http://googleappengine.googlecode.com/svn-history/r136/trunk/python/google/appengine/tools/https_wrapper.py
+
+
+"""Extensions to allow HTTPS requests with SSL certificate validation."""
+
+import httplib
+import re
+import socket
+import ssl
+
+import boto
+
+class InvalidCertificateException(httplib.HTTPException):
+  """Raised when a certificate is provided with an invalid hostname."""
+
+  def __init__(self, host, cert, reason):
+    """Constructor.
+
+    Args:
+      host: The hostname the connection was made to.
+      cert: The SSL certificate (as a dictionary) the host returned.
+    """
+    httplib.HTTPException.__init__(self)
+    self.host = host
+    self.cert = cert
+    self.reason = reason
+
+  def __str__(self):
+    return ('Host %s returned an invalid certificate (%s): %s' %
+            (self.host, self.reason, self.cert))
+
+def GetValidHostsForCert(cert):
+  """Returns a list of valid host globs for an SSL certificate.
+
+  Args:
+    cert: A dictionary representing an SSL certificate.
+  Returns:
+    list: A list of valid host globs.
+  """
+  if 'subjectAltName' in cert:
+    return [x[1] for x in cert['subjectAltName'] if x[0].lower() == 'dns']
+  else:
+    return [x[0][1] for x in cert['subject']
+            if x[0][0].lower() == 'commonname']
+
+def ValidateCertificateHostname(cert, hostname):
+  """Validates that a given hostname is valid for an SSL certificate.
+
+  Args:
+    cert: A dictionary representing an SSL certificate.
+    hostname: The hostname to test.
+  Returns:
+    bool: Whether or not the hostname is valid for this certificate.
+  """
+  hosts = GetValidHostsForCert(cert)
+  boto.log.debug(
+      "validating server certificate: hostname=%s, certificate hosts=%s",
+      hostname, hosts)
+  for host in hosts:
+    host_re = host.replace('.', '\.').replace('*', '[^.]*')
+    if re.search('^%s$' % (host_re,), hostname, re.I):
+      return True
+  return False
+
+
+class CertValidatingHTTPSConnection(httplib.HTTPConnection):
+  """An HTTPConnection that connects over SSL and validates certificates."""
+
+  default_port = httplib.HTTPS_PORT
+
+  def __init__(self, host, port=None, key_file=None, cert_file=None,
+               ca_certs=None, strict=None, **kwargs):
+    """Constructor.
+
+    Args:
+      host: The hostname. Can be in 'host:port' form.
+      port: The port. Defaults to 443.
+      key_file: A file containing the client's private key
+      cert_file: A file containing the client's certificates
+      ca_certs: A file containing a set of concatenated certificate authority
+          certs for validating the server against.
+      strict: When true, causes BadStatusLine to be raised if the status line
+          can't be parsed as a valid HTTP/1.0 or 1.1 status line.
+    """
+    httplib.HTTPConnection.__init__(self, host, port, strict, **kwargs)
+    self.key_file = key_file
+    self.cert_file = cert_file
+    self.ca_certs = ca_certs
+
+  def connect(self):
+    "Connect to a host on a given (SSL) port."
+    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+    sock.connect((self.host, self.port))
+    boto.log.debug("wrapping ssl socket; CA certificate file=%s",
+                   self.ca_certs)
+    self.sock = ssl.wrap_socket(sock, keyfile=self.key_file,
+                                certfile=self.cert_file,
+                                cert_reqs=ssl.CERT_REQUIRED,
+                                ca_certs=self.ca_certs)
+    cert = self.sock.getpeercert()
+    hostname = self.host.split(':')[0]
+    if not ValidateCertificateHostname(cert, hostname):
+      raise InvalidCertificateException(hostname,
+                                        cert,
+                                        'remote hostname "%s" does not match '\
+                                        'certificate' % hostname)
+
+
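`ValidateCertificateHostname` above turns each certificate host entry into a regex in which `*` matches a single DNS label. The matching rule can be exercised in isolation like this (the helper name `hostname_matches` is illustrative; the regex construction mirrors the code above):

```python
import re

def hostname_matches(cert_host, hostname):
    """Return True if hostname matches a certificate host glob.

    '*' is translated to [^.]*, so it matches exactly one DNS label:
    '*.example.com' matches 'www.example.com' but not 'a.b.example.com'.
    """
    host_re = cert_host.replace('.', r'\.').replace('*', '[^.]*')
    return re.search('^%s$' % host_re, hostname, re.I) is not None

assert hostname_matches('*.example.com', 'www.example.com')
assert not hostname_matches('*.example.com', 'a.b.example.com')
assert hostname_matches('example.com', 'EXAMPLE.COM')  # case-insensitive
```

Restricting `*` to a single label is the conventional interpretation of certificate wildcards, which is why the code uses `[^.]*` rather than `.*`.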
diff --git a/boto/iam/connection.py b/boto/iam/connection.py
index 39ab704..ae68f33 100644
--- a/boto/iam/connection.py
+++ b/boto/iam/connection.py
@@ -22,6 +22,7 @@
 
 import boto
 import boto.jsonresponse
+from boto.iam.summarymap import SummaryMap
 from boto.connection import AWSQueryConnection
 
 #boto.set_stream_logger('iam')
@@ -33,12 +34,14 @@
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, host='iam.amazonaws.com',
-                 debug=0, https_connection_factory=None, path='/'):
+                 debug=0, https_connection_factory=None,
+                 path='/'):
         AWSQueryConnection.__init__(self, aws_access_key_id,
                                     aws_secret_access_key,
                                     is_secure, port, proxy,
                                     proxy_port, proxy_user, proxy_pass,
-                                    host, debug, https_connection_factory, path)
+                                    host, debug, https_connection_factory,
+                                    path)
 
     def _required_auth_capability(self):
         return ['iam']
@@ -54,7 +57,8 @@
         body = response.read()
         boto.log.debug(body)
         if response.status == 200:
-            e = boto.jsonresponse.Element(list_marker=list_marker)
+            e = boto.jsonresponse.Element(list_marker=list_marker,
+                                          pythonize_name=True)
             h = boto.jsonresponse.XmlHandler(e, parent)
             h.parse(body)
             return e
@@ -178,7 +182,7 @@
         Add a user to a group
 
         :type group_name: string
-        :param group_name: The name of the new group
+        :param group_name: The name of the group
 
         :type user_name: string
         :param user_name: The user to be added to the group.
@@ -193,7 +197,7 @@
         Remove a user from a group.
 
         :type group_name: string
-        :param group_name: The name of the new group
+        :param group_name: The name of the group
 
         :type user_name: string
         :param user_name: The user to remove from the group.
@@ -493,7 +497,7 @@
         Get all access keys associated with an account.
 
         :type user_name: string
-        :param user_name: The username of the new user
+        :param user_name: The username of the user
 
         :type marker: string
         :param marker: Use this only when paginating results and only in
@@ -524,7 +528,7 @@
         implicitly based on the AWS Access Key ID used to sign the request.
 
         :type user_name: string
-        :param user_name: The username of the new user
+        :param user_name: The username of the user
 
         """
         params = {'UserName' : user_name}
@@ -566,7 +570,7 @@
         :param access_key_id: The ID of the access key to be deleted.
 
         :type user_name: string
-        :param user_name: The username of the new user
+        :param user_name: The username of the user
 
         """
         params = {'AccessKeyId' : access_key_id}
@@ -647,7 +651,7 @@
         :param cert_body: The body of the signing certificate.
 
         :type user_name: string
-        :param user_name: The username of the new user
+        :param user_name: The username of the user
 
         """
         params = {'CertificateBody' : cert_body}
@@ -664,7 +668,7 @@
         on the AWS Access Key ID used to sign the request.
 
         :type user_name: string
-        :param user_name: The username of the new user
+        :param user_name: The username of the user
 
         :type cert_id: string
         :param cert_id: The ID of the certificate.
@@ -906,6 +910,17 @@
     #
     # Login Profiles
     #
+
+    def get_login_profiles(self, user_name):
+        """
+        Retrieves the login profile for the specified user.
+        
+        :type user_name: string
+        :param user_name: The username of the user
+        
+        """
+        params = {'UserName' : user_name}
+        return self.get_response('GetLoginProfile', params)
     
     def create_login_profile(self, user_name, password):
         """
@@ -913,7 +928,7 @@
         ability to access AWS services and the AWS Management Console.
 
         :type user_name: string
-        :param user_name: The name of the new user
+        :param user_name: The name of the user
 
         :type password: string
         :param password: The new password for the user
@@ -985,11 +1000,8 @@
         For more information on account id aliases, please see
         http://goo.gl/ToB7G
         """
-        r = self.get_response('ListAccountAliases', {})
-        response = r.get('ListAccountAliasesResponse')
-        result = response.get('ListAccountAliasesResult')
-        aliases = result.get('AccountAliases')
-        return aliases.get('member', None)
+        return self.get_response('ListAccountAliases', {},
+                                 list_marker='AccountAliases')
 
     def get_signin_url(self, service='ec2'):
         """
@@ -1004,3 +1016,17 @@
             raise Exception('No alias associated with this account.  Please use iam.create_account_alias() first.')
 
         return "https://%s.signin.aws.amazon.com/console/%s" % (alias, service)
+
+    def get_account_summary(self):
+        """
+        Get a summary of account usage and quota information.
+
+        Returns a SummaryMap (a dict subclass) keyed by entity name
+        (e.g. Users, Groups), with values converted to integers
+        where the API returns numbers.
+        """
+        return self.get_object('GetAccountSummary', {}, SummaryMap)
+
+    
diff --git a/boto/tests/__init__.py b/boto/iam/summarymap.py
similarity index 62%
copy from boto/tests/__init__.py
copy to boto/iam/summarymap.py
index 449bd16..0002389 100644
--- a/boto/tests/__init__.py
+++ b/boto/iam/summarymap.py
@@ -1,4 +1,5 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2010, Eucalyptus Systems, Inc.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -14,10 +15,28 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-#
 
+class SummaryMap(dict):
+
+    def __init__(self, parent=None):
+        self.parent = parent
+        dict.__init__(self)
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'key':
+            self._name = value
+        elif name == 'value':
+            try:
+                self[self._name] = int(value)
+            except ValueError:
+                self[self._name] = value
+        else:
+            setattr(self, name, value)
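`SummaryMap` above pairs alternating `<key>`/`<value>` elements from the GetAccountSummary XML into dict entries, coercing numeric values to int. The pairing logic in `endElement` boils down to this sketch (mock element stream; the class and method names here are illustrative):

```python
class SummarySketch(dict):
    """Mimics SummaryMap.endElement's key/value pairing and int coercion."""

    def feed(self, name, value):
        if name == 'key':
            self._name = value            # remember key until value arrives
        elif name == 'value':
            try:
                self[self._name] = int(value)   # numeric quota values
            except ValueError:
                self[self._name] = value        # leave non-numeric as str

s = SummarySketch()
for name, value in [('key', 'Users'), ('value', '5'),
                    ('key', 'AccountMFAEnabled'), ('value', '0')]:
    s.feed(name, value)
assert s == {'Users': 5, 'AccountMFAEnabled': 0}
```

Subclassing dict means callers can use the result directly (`summary['Users']`) while the SAX handler still gets the startElement/endElement hooks it expects.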
 
diff --git a/boto/jsonresponse.py b/boto/jsonresponse.py
index beb50ce..9433815 100644
--- a/boto/jsonresponse.py
+++ b/boto/jsonresponse.py
@@ -55,7 +55,8 @@
 
     def __init__(self, connection=None, element_name=None,
                  stack=None, parent=None, list_marker=('Set',),
-                 item_marker=('member', 'item')):
+                 item_marker=('member', 'item'),
+                 pythonize_name=False):
         dict.__init__(self)
         self.connection = connection
         self.element_name = element_name
@@ -65,6 +66,7 @@
             self.stack = []
         else:
             self.stack = stack
+        self.pythonize_name = pythonize_name
         self.parent = parent
 
     def __getattr__(self, key):
@@ -79,19 +81,25 @@
                     pass
         raise AttributeError
 
+    def get_name(self, name):
+        if self.pythonize_name:
+            name = utils.pythonize_name(name)
+        return name
+
     def startElement(self, name, attrs, connection):
         self.stack.append(name)
         for lm in self.list_marker:
             if name.endswith(lm):
                 l = ListElement(self.connection, name, self.list_marker,
-                                self.item_marker)
-                self[name] = l
+                                self.item_marker, self.pythonize_name)
+                self[self.get_name(name)] = l
                 return l
         if len(self.stack) > 0:
             element_name = self.stack[-1]
             e = Element(self.connection, element_name, self.stack, self,
-                        self.list_marker, self.item_marker)
-            self[element_name] = e
+                        self.list_marker, self.item_marker,
+                        self.pythonize_name)
+            self[self.get_name(element_name)] = e
             return (element_name, e)
         else:
             return None
@@ -102,28 +110,37 @@
         value = value.strip()
         if value:
             if isinstance(self.parent, Element):
-                self.parent[name] = value
+                self.parent[self.get_name(name)] = value
             elif isinstance(self.parent, ListElement):
                 self.parent.append(value)
 
 class ListElement(list):
 
     def __init__(self, connection=None, element_name=None,
-                 list_marker=['Set'], item_marker=('member', 'item')):
+                 list_marker=['Set'], item_marker=('member', 'item'),
+                 pythonize_name=False):
         list.__init__(self)
         self.connection = connection
         self.element_name = element_name
         self.list_marker = list_marker
         self.item_marker = item_marker
+        self.pythonize_name = pythonize_name
+
+    def get_name(self, name):
+        if self.pythonize_name:
+            name = utils.pythonize_name(name)
+        return name
 
     def startElement(self, name, attrs, connection):
         for lm in self.list_marker:
             if name.endswith(lm):
-                l = ListElement(self.connection, name, self.item_marker)
-                setattr(self, name, l)
+                l = ListElement(self.connection, name, self.item_marker,
+                                pythonize_name=self.pythonize_name)
+                setattr(self, self.get_name(name), l)
                 return l
         if name in self.item_marker:
-            e = Element(self.connection, name, parent=self)
+            e = Element(self.connection, name, parent=self,
+                        pythonize_name=self.pythonize_name)
             self.append(e)
             return e
         else:
@@ -140,4 +157,4 @@
                 for e in empty:
                     self.remove(e)
         else:
-            setattr(self, name, value)
+            setattr(self, self.get_name(name), value)
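The `pythonize_name=True` option threaded through Element and ListElement above converts CamelCase XML element names to snake_case dict keys (so `AccountAliases` becomes `account_aliases`). The actual conversion lives in `boto.utils.pythonize_name`; a common two-pass regex re-implementation of the same idea looks like this (illustrative, not boto's exact code):

```python
import re

def pythonize_name(name):
    """Convert a CamelCase element name to snake_case.

    Two passes: the first splits before a capital followed by
    lowercase letters, the second splits a lower/digit-to-upper
    boundary, so acronym runs are handled reasonably.
    """
    s = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', s).lower()

assert pythonize_name('AccountAliases') == 'account_aliases'
assert pythonize_name('ListAccountAliasesResult') == 'list_account_aliases_result'
```

With the flag enabled, response attributes read naturally from Python (`e.account_aliases`) instead of mirroring the AWS wire names.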
diff --git a/boto/manage/cmdshell.py b/boto/manage/cmdshell.py
index cbd2e60..2275fa0 100644
--- a/boto/manage/cmdshell.py
+++ b/boto/manage/cmdshell.py
@@ -72,20 +72,27 @@
                 retry += 1
         print 'Could not establish SSH connection'
 
+    def open_sftp(self):
+        return self._ssh_client.open_sftp()
+
     def get_file(self, src, dst):
-        sftp_client = self._ssh_client.open_sftp()
+        sftp_client = self.open_sftp()
         sftp_client.get(src, dst)
 
     def put_file(self, src, dst):
-        sftp_client = self._ssh_client.open_sftp()
+        sftp_client = self.open_sftp()
         sftp_client.put(src, dst)
 
-    def listdir(self, path):
-        sftp_client = self._ssh_client.open_sftp()
-        return sftp_client.listdir(path)
+    def open(self, filename, mode='r', bufsize=-1):
+        """
+        Open a file on the remote system and return a file-like object.
+        """
+        sftp_client = self.open_sftp()
+        return sftp_client.open(filename, mode, bufsize)
 
-    def open_sftp(self):
-        return self._ssh_client.open_sftp()
+    def listdir(self, path):
+        sftp_client = self.open_sftp()
+        return sftp_client.listdir(path)
 
     def isdir(self, path):
         status = self.run('[ -d %s ] || echo "FALSE"' % path)
@@ -100,24 +107,43 @@
         return 1
 
     def shell(self):
+        """
+        Start an interactive shell session on the remote host.
+        """
         channel = self._ssh_client.invoke_shell()
         interactive_shell(channel)
 
     def run(self, command):
-        boto.log.info('running:%s on %s' % (command, self.server.instance_id))
-        log_fp = StringIO.StringIO()
+        """
+        Execute a command on the remote host.  Return a tuple containing
+        an integer status and two strings, the first containing stdout
+        and the second containing stderr from the command.
+        """
+        boto.log.debug('running:%s on %s' % (command, self.server.instance_id))
         status = 0
         try:
             t = self._ssh_client.exec_command(command)
         except paramiko.SSHException:
             status = 1
-        log_fp.write(t[1].read())
-        log_fp.write(t[2].read())
+        std_out = t[1].read()
+        std_err = t[2].read()
         t[0].close()
         t[1].close()
         t[2].close()
-        boto.log.info('output: %s' % log_fp.getvalue())
-        return (status, log_fp.getvalue())
+        boto.log.debug('stdout: %s' % std_out)
+        boto.log.debug('stderr: %s' % std_err)
+        return (status, std_out, std_err)
+
+    def run_pty(self, command):
+        """
+        Execute a command on the remote host with a pseudo-terminal.
+        Returns up to the first 1024 bytes of output from the command.
+        """
+        boto.log.debug('running:%s on %s' % (command, self.server.instance_id))
+        channel = self._ssh_client.get_transport().open_session()
+        channel.get_pty()
+        channel.exec_command(command)
+        return channel.recv(1024)
 
     def close(self):
         transport = self._ssh_client.get_transport()
@@ -166,9 +192,50 @@
     def close(self):
         pass
 
+class FakeServer(object):
+    """
+    A little class to fake out SSHClient (which is expecting a
+    :class:`boto.manage.server.Server` instance).  This allows us to
+    create an SSHClient directly from an EC2 instance object.
+    """
+    def __init__(self, instance, ssh_key_file):
+        self.instance = instance
+        self.ssh_key_file = ssh_key_file
+        self.hostname = instance.dns_name
+        self.instance_id = self.instance.id
+        
 def start(server):
     instance_id = boto.config.get('Instance', 'instance-id', None)
     if instance_id == server.instance_id:
         return LocalClient(server)
     else:
         return SSHClient(server)
+
+def sshclient_from_instance(instance, ssh_key_file,
+                            host_key_file='~/.ssh/known_hosts',
+                            user_name='root', ssh_pwd=None):
+    """
+    Create and return an SSHClient object given an
+    instance object.
+
+    :type instance: :class:`boto.ec2.instance.Instance`
+    :param instance: The instance object.
+
+    :type ssh_key_file: str
+    :param ssh_key_file: A path to the private key file used
+                         to log into instance.
+
+    :type host_key_file: str
+    :param host_key_file: A path to the known_hosts file used
+                          by the SSH client.
+                          Defaults to ~/.ssh/known_hosts
+    :type user_name: str
+    :param user_name: The username to use when logging into
+                      the instance.  Defaults to root.
+
+    :type ssh_pwd: str
+    :param ssh_pwd: The passphrase, if any, associated with
+                    private key.
+    """
+    s = FakeServer(instance, ssh_key_file)
+    return SSHClient(s, host_key_file, user_name, ssh_pwd)
diff --git a/boto/mashups/iobject.py b/boto/mashups/iobject.py
index a226b5c..de74287 100644
--- a/boto/mashups/iobject.py
+++ b/boto/mashups/iobject.py
@@ -40,7 +40,7 @@
             n = 1
             choices = []
             for item in item_list:
-                if isinstance(item, str):
+                if isinstance(item, basestring):
                     print '[%d] %s' % (n, item)
                     choices.append(item)
                     n += 1
diff --git a/boto/mturk/connection.py b/boto/mturk/connection.py
index 619697f..9e8493f 100644
--- a/boto/mturk/connection.py
+++ b/boto/mturk/connection.py
@@ -41,7 +41,7 @@
     APIVersion = '2008-08-02'
     
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
-                 is_secure=False, port=None, proxy=None, proxy_port=None,
+                 is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None,
                  host=None, debug=0,
                  https_connection_factory=None):
@@ -228,8 +228,9 @@
         Change the HIT type of an existing HIT. Note that the reward associated
         with the new HIT type must match the reward of the current HIT type in
         order for the operation to be valid.
-        \thit_id is a string
-        \thit_type is a string
+        
+        :type hit_id: str
+        :type hit_type: str
         """
         params = {'HITId' : hit_id,
                   'HITTypeId': hit_type}
@@ -515,9 +516,9 @@
         """
         Send a text message to workers.
         """
-        params = {'WorkerId' : worker_ids,
-                  'Subject' : subject,
+        params = {'Subject' : subject,
                   'MessageText': message_text}
+        self.build_list_params(params, worker_ids, 'WorkerId')
 
         return self._process_request('NotifyWorkers', params)
 
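The NotifyWorkers fix above switches from stuffing the whole list into a single `WorkerId` value to `build_list_params`, which serializes each worker as a numbered query parameter. A rough stand-alone imitation of that helper (not boto's actual implementation):

```python
def build_list_params(params, items, label):
    # Each item becomes a numbered query parameter:
    # WorkerId.1, WorkerId.2, ...
    if isinstance(items, str):
        items = [items]
    for i, item in enumerate(items, 1):
        params['%s.%d' % (label, i)] = item

params = {'Subject': 'Hello', 'MessageText': 'Your HIT was approved.'}
build_list_params(params, ['A1WORKER', 'A2WORKER'], 'WorkerId')
```

This is the standard AWS Query API convention for list-valued parameters, which is why the single-key version was wrong for multiple workers.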
@@ -579,7 +580,7 @@
             params['Test'] = test.get_as_xml()
 
         if test_duration is not None:
-            params['TestDuration'] = test_duration
+            params['TestDurationInSeconds'] = test_duration
 
         if answer_key is not None:
             if isinstance(answer_key, basestring):
@@ -671,7 +672,7 @@
                   'MustBeRequestable' : must_be_requestable,
                   'MustBeOwnedByCaller' : must_be_owned_by_caller}
         return self._process_request('SearchQualificationTypes', params,
-                                     [('QualificationType', QualificationType),])
+                    [('QualificationType', QualificationType),])
 
     def get_qualification_requests(self, qualification_type_id,
                                    sort_by='Expiration',
@@ -684,7 +685,7 @@
                   'PageSize' : page_size,
                   'PageNumber' : page_number}
         return self._process_request('GetQualificationRequests', params,
-                                     [('QualificationRequest', QualificationRequest),])
+                    [('QualificationRequest', QualificationRequest),])
 
     def grant_qualification(self, qualification_request_id, integer_value=1):
         """TODO: Document."""
@@ -705,9 +706,24 @@
         params = {'QualificationTypeId' : qualification_type_id,
                   'WorkerId' : worker_id,
                   'IntegerValue' : value,
-                  'SendNotification' : send_notification, }
+                  'SendNotification' : send_notification}
         return self._process_request('AssignQualification', params)
 
+    def get_qualification_score(self, qualification_type_id, worker_id):
+        """TODO: Document."""
+        params = {'QualificationTypeId' : qualification_type_id,
+                  'SubjectId' : worker_id}
+        return self._process_request('GetQualificationScore', params,
+                    [('Qualification', Qualification),])
+
+    def update_qualification_score(self, qualification_type_id, worker_id,
+                                   value):
+        """TODO: Document."""
+        params = {'QualificationTypeId' : qualification_type_id,
+                  'SubjectId' : worker_id,
+                  'IntegerValue' : value}
+        return self._process_request('UpdateQualificationScore', params)
+
     def _process_request(self, request_type, params, marker_elems=None):
         """
         Helper to process the xml response from AWS
@@ -804,6 +820,17 @@
     # are we there yet?
     expired = property(_has_expired)
 
+class Qualification(BaseAutoResultElement):
+    """
+    Class to extract a Qualification structure from a response (used in
+    ResultSet)
+    
+    Will have attributes named as per the Developer Guide, such as
+    QualificationTypeId and IntegerValue. Does not seem to contain GrantTime.
+    """
+    
+    pass
+
 class QualificationType(BaseAutoResultElement):
     """
     Class to extract an QualificationType structure from a response (used in
@@ -881,7 +908,7 @@
     def endElement(self, name, value, connection):
         if name == 'QuestionIdentifier':
             self.qid = value
-        elif name in ['FreeText', 'SelectionIdentifier'] and self.qid:
+        elif name in ['FreeText', 'SelectionIdentifier', 'OtherSelectionText'] and self.qid:
             self.fields.append((self.qid,value))
         elif name == 'Answer':
             self.qid = None
diff --git a/boto/mturk/notification.py b/boto/mturk/notification.py
index 2aa99ca..02c93aa 100644
--- a/boto/mturk/notification.py
+++ b/boto/mturk/notification.py
@@ -74,9 +74,18 @@
     def verify(self, secret_key):
         """
         Verifies the authenticity of a notification message.
+
+        TODO: This is doing a form of authentication and
+              this functionality should really be merged
+              with the pluggable authentication mechanism
+              at some point.
         """
-        verification_input = NotificationMessage.SERVICE_NAME + NotificationMessage.OPERATION_NAME + self.timestamp
-        signature_calc = self._auth_handler.sign_string(verification_input)
+        verification_input = NotificationMessage.SERVICE_NAME
+        verification_input += NotificationMessage.OPERATION_NAME
+        verification_input += self.timestamp
+        h = hmac.new(key=secret_key, digestmod=sha)
+        h.update(verification_input)
+        signature_calc = base64.b64encode(h.digest())
         return self.signature == signature_calc
 
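The rewritten `verify` recomputes an HMAC-SHA1 over the service name, operation name, and timestamp, base64-encodes the digest, and compares it to the signature in the message. A self-contained, modern-Python sketch of that check; the constant values here are placeholders, not necessarily boto's:

```python
import base64
import hashlib
import hmac

# Placeholder constants; the real values live on NotificationMessage.
SERVICE_NAME = 'AWSMechanicalTurkRequesterNotification'
OPERATION_NAME = 'Notify'

def verify(secret_key, timestamp, signature):
    # Recompute HMAC-SHA1 over service name + operation name + timestamp,
    # base64 it, and compare against the signature from the message.
    msg = (SERVICE_NAME + OPERATION_NAME + timestamp).encode()
    digest = hmac.new(secret_key.encode(), msg, hashlib.sha1).digest()
    return hmac.compare_digest(base64.b64encode(digest).decode(), signature)
```

Unlike the 2011 code, this sketch uses `hmac.compare_digest` for the final comparison, which avoids timing side channels that a plain `==` can leak.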
 class Event:
diff --git a/boto/mturk/question.py b/boto/mturk/question.py
index b1556ad..bf16b3e 100644
--- a/boto/mturk/question.py
+++ b/boto/mturk/question.py
@@ -22,7 +22,8 @@
 class Question(object):
     template = "<Question>%(items)s</Question>"
     
-    def __init__(self, identifier, content, answer_spec, is_required=False, display_name=None):
+    def __init__(self, identifier, content, answer_spec,
+                 is_required=False, display_name=None):
         # copy all of the parameters into object attributes
         self.__dict__.update(vars())
         del self.self
@@ -176,27 +177,31 @@
     """
     From the AMT API docs:
     
-    The top-most element of the QuestionForm data structure is a QuestionForm element. This
-    element contains optional Overview elements and one or more Question elements. There can be
-    any number of these two element types listed in any order. The following example structure has an
-    Overview element and a Question element followed by a second Overview element and Question
-    element--all within the same QuestionForm.
+    The top-most element of the QuestionForm data structure is a
+    QuestionForm element. This element contains optional Overview
+    elements and one or more Question elements. There can be any
+    number of these two element types listed in any order. The
+    following example structure has an Overview element and a
+    Question element followed by a second Overview element and
+    Question element--all within the same QuestionForm.
     
-    <QuestionForm xmlns="[the QuestionForm schema URL]">
-        <Overview>
+    ::
+    
+        <QuestionForm xmlns="[the QuestionForm schema URL]">
+            <Overview>
+                [...]
+            </Overview>
+            <Question>
+                [...]
+            </Question>
+            <Overview>
+                [...]
+            </Overview>
+            <Question>
+                [...]
+            </Question>
             [...]
-        </Overview>
-        <Question>
-            [...]
-        </Question>
-        <Overview>
-            [...]
-        </Overview>
-        <Question>
-            [...]
-        </Question>
-        [...]
-    </QuestionForm>
+        </QuestionForm>
     
     QuestionForm is implemented as a list, so to construct a
     QuestionForm, simply append Questions and Overviews (with at least
@@ -291,13 +296,14 @@
 
     def __init__(self, default=None, constraints=None, num_lines=None):
         self.default = default
-        if constraints is None: constraints = Constraints()
-        self.constraints = Constraints(constraints)
+        if constraints is None:
+            self.constraints = Constraints()
+        else:
+            self.constraints = Constraints(constraints)
         self.num_lines = num_lines
     
     def get_as_xml(self):
-        constraints = Constraints()
-        items = [constraints]
+        items = [self.constraints]
         if self.default:
             items.append(SimpleField('DefaultText', self.default))
         if self.num_lines:
diff --git a/boto/provider.py b/boto/provider.py
index c1c8b59..7e9f640 100644
--- a/boto/provider.py
+++ b/boto/provider.py
@@ -1,6 +1,7 @@
 # Copyright (c) 2010 Mitch Garnaat http://garnaat.org/
 # Copyright 2010 Google Inc.
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
+# Copyright (c) 2011, Nexenta Systems Inc.
 # All rights reserved.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
@@ -50,6 +51,7 @@
 SECURITY_TOKEN_HEADER_KEY = 'security-token-header'
 STORAGE_CLASS_HEADER_KEY = 'storage-class'
 MFA_HEADER_KEY = 'mfa-header'
+SERVER_SIDE_ENCRYPTION_KEY = 'server-side-encryption-header'
 VERSION_ID_HEADER_KEY = 'version-id-header'
 
 STORAGE_COPY_ERROR = 'StorageCopyError'
@@ -81,6 +83,14 @@
         'google' : 'gs'
     }
 
+    ChunkedTransferSupport = {
+        'aws' : False,
+        'google' : True
+    }
+
+    # If you update this map please make sure to put "None" for the
+    # right-hand-side for any headers that don't apply to a provider, rather
+    # than simply leaving that header out (which would cause KeyErrors).
     HeaderInfoMap = {
         'aws' : {
             HEADER_PREFIX_KEY : AWS_HEADER_PREFIX,
@@ -96,6 +106,7 @@
                                             'metadata-directive',
             RESUMABLE_UPLOAD_HEADER_KEY : None,
             SECURITY_TOKEN_HEADER_KEY : AWS_HEADER_PREFIX + 'security-token',
+            SERVER_SIDE_ENCRYPTION_KEY : AWS_HEADER_PREFIX + 'server-side-encryption',
             VERSION_ID_HEADER_KEY : AWS_HEADER_PREFIX + 'version-id',
             STORAGE_CLASS_HEADER_KEY : AWS_HEADER_PREFIX + 'storage-class',
             MFA_HEADER_KEY : AWS_HEADER_PREFIX + 'mfa',
@@ -114,6 +125,9 @@
                                             'metadata-directive',
             RESUMABLE_UPLOAD_HEADER_KEY : GOOG_HEADER_PREFIX + 'resumable',
             SECURITY_TOKEN_HEADER_KEY : GOOG_HEADER_PREFIX + 'security-token',
+            SERVER_SIDE_ENCRYPTION_KEY : None,
+            # Note that this version header is not to be confused with
+            # the Google Cloud Storage 'x-goog-api-version' header.
             VERSION_ID_HEADER_KEY : GOOG_HEADER_PREFIX + 'version-id',
             STORAGE_CLASS_HEADER_KEY : None,
             MFA_HEADER_KEY : None,
@@ -137,10 +151,12 @@
         }
     }
 
-    def __init__(self, name, access_key=None, secret_key=None):
+    def __init__(self, name, access_key=None, secret_key=None,
+                 security_token=None):
         self.host = None
         self.access_key = access_key
         self.secret_key = secret_key
+        self.security_token = security_token
         self.name = name
         self.acl_class = self.AclClassMap[self.name]
         self.canned_acls = self.CannedAclsMap[self.name]
@@ -188,6 +204,7 @@
         self.security_token_header = header_info_map[SECURITY_TOKEN_HEADER_KEY]
         self.resumable_upload_header = (
             header_info_map[RESUMABLE_UPLOAD_HEADER_KEY])
+        self.server_side_encryption_header = header_info_map[SERVER_SIDE_ENCRYPTION_KEY]
         self.storage_class_header = header_info_map[STORAGE_CLASS_HEADER_KEY]
         self.version_id = header_info_map[VERSION_ID_HEADER_KEY]
         self.mfa_header = header_info_map[MFA_HEADER_KEY]
@@ -203,6 +220,9 @@
     def get_provider_name(self):
         return self.HostKeyMap[self.name]
 
+    def supports_chunked_transfer(self):
+        return self.ChunkedTransferSupport[self.name]
+
 # Static utility method for getting default Provider.
 def get_default():
     return Provider('aws')
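The comment added to `HeaderInfoMap` describes a deliberate invariant: every provider dict carries every key, with `None` marking "not supported", so lookups never raise `KeyError`. A tiny illustration of that pattern (keys abbreviated relative to the real map):

```python
# Every provider lists every key; None means "not supported by this
# provider", so a lookup can distinguish "unsupported" from a typo
# (which still raises KeyError).
HEADER_INFO = {
    'aws':    {'server-side-encryption': 'x-amz-server-side-encryption',
               'resumable-upload': None},
    'google': {'server-side-encryption': None,
               'resumable-upload': 'x-goog-resumable'},
}

def header_for(provider, key):
    return HEADER_INFO[provider][key]
```

Callers can then test `if header is not None:` before emitting a header, instead of wrapping every lookup in try/except.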
diff --git a/boto/pyami/bootstrap.py b/boto/pyami/bootstrap.py
index c1441fd..cd44682 100644
--- a/boto/pyami/bootstrap.py
+++ b/boto/pyami/bootstrap.py
@@ -24,6 +24,7 @@
 from boto.utils import get_instance_metadata, get_instance_userdata
 from boto.pyami.config import Config, BotoConfigPath
 from boto.pyami.scriptbase import ScriptBase
+import time
 
 class Bootstrap(ScriptBase):
     """
@@ -75,7 +76,15 @@
             self.run('svn update %s %s' % (version, location))
         elif update.startswith('git'):
             location = boto.config.get('Boto', 'boto_location', '/usr/share/python-support/python-boto/boto')
-            self.run('git pull', cwd=location)
+            num_remaining_attempts = 10
+            while num_remaining_attempts > 0:
+                num_remaining_attempts -= 1
+                try:
+                    self.run('git pull', cwd=location)
+                    num_remaining_attempts = 0
+                except Exception, e:
+                    boto.log.info('git pull attempt failed with the following exception. Trying again in a bit. %s', e)
+                    time.sleep(2)
             if update.find(':') >= 0:
                 method, version = update.split(':')
             else:
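The bootstrap change above wraps `git pull` in a bounded retry loop. The same shape, factored into a generic helper; this is a sketch, not boto code, and unlike the loop above it re-raises the last error once attempts are exhausted rather than giving up silently:

```python
import time

def run_with_retries(fn, attempts=10, delay=2):
    # Bounded retry: stop on the first success, sleep between
    # failures, re-raise the last error if every attempt fails.
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as e:
            last_exc = e
            time.sleep(delay)
    raise last_exc
```

A fixed short sleep is fine for transient network hiccups; exponential backoff would be the next refinement if failures cluster.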
diff --git a/boto/pyami/config.py b/boto/pyami/config.py
index f4613ab..d75e791 100644
--- a/boto/pyami/config.py
+++ b/boto/pyami/config.py
@@ -1,4 +1,5 @@
 # Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011 Chris Moyer http://coredumped.org/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -20,18 +21,39 @@
 # IN THE SOFTWARE.
 #
 import StringIO, os, re
+import warnings
 import ConfigParser
 import boto
 
+# If running in Google App Engine there is no "user" and
+# os.path.expanduser() will fail. Attempt to detect this case and use a
+# no-op expanduser function in this case.
+try:
+    os.path.expanduser('~')
+    expanduser = os.path.expanduser
+except (AttributeError, ImportError):
+    # This is probably running on App Engine.
+    expanduser = (lambda x: x)
+
+# By default we use two locations for the boto configurations,
+# /etc/boto.cfg and ~/.boto (which works on Windows and Unix).
 BotoConfigPath = '/etc/boto.cfg'
 BotoConfigLocations = [BotoConfigPath]
+UserConfigPath = os.path.join(expanduser('~'), '.boto')
+BotoConfigLocations.append(UserConfigPath)
+
+# If there's a BOTO_CONFIG environment variable set, we load
+# ONLY the config file it points to
 if 'BOTO_CONFIG' in os.environ:
-    BotoConfigLocations = [os.path.expanduser(os.environ['BOTO_CONFIG'])]
-elif 'HOME' in os.environ:
-    UserConfigPath = os.path.expanduser('~/.boto')
-    BotoConfigLocations.append(UserConfigPath)
-else:
-    UserConfigPath = None
+    BotoConfigLocations = [expanduser(os.environ['BOTO_CONFIG'])]
+
+# If there's a BOTO_PATH variable set, we use anything there
+# as the current configuration locations, split with colons
+elif 'BOTO_PATH' in os.environ:
+    BotoConfigLocations = []
+    for path in os.environ['BOTO_PATH'].split(":"):
+        BotoConfigLocations.append(expanduser(path))
+
 
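The config-location logic above resolves in a fixed order: `BOTO_CONFIG` wins outright, otherwise a colon-separated `BOTO_PATH` replaces the defaults, otherwise the built-in defaults stand. Distilled into a pure function (omitting the `expanduser` step the real code applies to each path):

```python
def config_locations(environ, default_paths):
    # BOTO_CONFIG wins outright; otherwise BOTO_PATH replaces the
    # defaults; otherwise the defaults stand unchanged.
    if 'BOTO_CONFIG' in environ:
        return [environ['BOTO_CONFIG']]
    if 'BOTO_PATH' in environ:
        return environ['BOTO_PATH'].split(':')
    return list(default_paths)
```

Making the precedence explicit like this is the behavioral change in the hunk: previously `~/.boto` was only consulted when `HOME` was set, and `BOTO_PATH` did not exist.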
 class Config(ConfigParser.SafeConfigParser):
 
@@ -46,7 +68,11 @@
             else:
                 self.read(BotoConfigLocations)
             if "AWS_CREDENTIAL_FILE" in os.environ:
-                self.load_credential_file(os.path.expanduser(os.environ['AWS_CREDENTIAL_FILE']))
+                full_path = expanduser(os.environ['AWS_CREDENTIAL_FILE'])
+                try:
+                    self.load_credential_file(full_path)
+                except IOError:
+                    warnings.warn('Unable to load AWS_CREDENTIAL_FILE (%s)' % full_path)
 
     def load_credential_file(self, path):
         """Load a credential file as is setup like the Java utilities"""
@@ -170,7 +196,11 @@
                     fp.write('%s = %s\n' % (option, self.get(section, option)))
     
     def dump_to_sdb(self, domain_name, item_name):
-        import simplejson
+        try:
+            import simplejson as json
+        except ImportError:
+            import json
+
         sdb = boto.connect_sdb()
         domain = sdb.lookup(domain_name)
         if not domain:
@@ -181,18 +211,22 @@
             d = {}
             for option in self.options(section):
                 d[option] = self.get(section, option)
-            item[section] = simplejson.dumps(d)
+            item[section] = json.dumps(d)
         item.save()
 
     def load_from_sdb(self, domain_name, item_name):
-        import simplejson
+        try:
+            import simplejson as json
+        except ImportError:
+            import json
+
         sdb = boto.connect_sdb()
         domain = sdb.lookup(domain_name)
         item = domain.get_item(item_name)
         for section in item.keys():
             if not self.has_section(section):
                 self.add_section(section)
-            d = simplejson.loads(item[section])
+            d = json.loads(item[section])
             for attr_name in d.keys():
                 attr_value = d[attr_name]
                 if attr_value == None:
diff --git a/boto/pyami/installers/ubuntu/ebs.py b/boto/pyami/installers/ubuntu/ebs.py
index 204c9b1..a52549b 100644
--- a/boto/pyami/installers/ubuntu/ebs.py
+++ b/boto/pyami/installers/ubuntu/ebs.py
@@ -45,6 +45,7 @@
 """
 import boto
 from boto.manage.volume import Volume
+from boto.exception import EC2ResponseError
 import os, time
 from boto.pyami.installers.ubuntu.installer import Installer
 from string import Template
@@ -60,7 +61,7 @@
     def main(self):
         try:
             ec2 = boto.connect_ec2()
-            self.run("/usr/sbin/xfs_freeze -f ${mount_point}")
+            self.run("/usr/sbin/xfs_freeze -f ${mount_point}", exit_on_error = True)
             snapshot = ec2.create_snapshot('${volume_id}')
             boto.log.info("Snapshot created: %s " %  snapshot)
         except Exception, e:
@@ -86,6 +87,15 @@
     v.trim_snapshots(True)
 """
     
+TagBasedBackupCleanupScript = """#!/usr/bin/env python
+import boto
+
+# Cleans Backups of EBS volumes
+
+ec2 = boto.connect_ec2()
+ec2.trim_snapshots()
+"""
+
 class EBSInstaller(Installer):
     """
     Set up the EBS stuff
@@ -147,9 +157,12 @@
         fp.close()
         self.run('chmod +x /usr/local/bin/ebs_backup')
 
-    def create_backup_cleanup_script(self):
+    def create_backup_cleanup_script(self, use_tag_based_cleanup=False):
         fp = open('/usr/local/bin/ebs_backup_cleanup', 'w')
-        fp.write(BackupCleanupScript)
+        if use_tag_based_cleanup:
+            fp.write(TagBasedBackupCleanupScript)
+        else:
+            fp.write(BackupCleanupScript)
         fp.close()
         self.run('chmod +x /usr/local/bin/ebs_backup_cleanup')
 
@@ -207,7 +220,12 @@
         minute = boto.config.get('EBS', 'backup_cleanup_cron_minute')
         hour = boto.config.get('EBS', 'backup_cleanup_cron_hour')
         if (minute != None) and (hour != None):
-            self.create_backup_cleanup_script();
+            # Snapshot clean up can either be done via the manage module, or via the new tag based
+            # snapshot code, if the snapshots have been tagged with the name of the associated 
+            # volume. Check for the presence of the new configuration flag, and use the appropriate
+            # cleanup method / script:
+            use_tag_based_cleanup = boto.config.has_option('EBS', 'use_tag_based_snapshot_cleanup')
+            self.create_backup_cleanup_script(use_tag_based_cleanup)
             self.add_cron("ebs_backup_cleanup", "/usr/local/bin/ebs_backup_cleanup", minute=minute, hour=hour)
 
         # Set up the fstab
diff --git a/boto/rds/__init__.py b/boto/rds/__init__.py
index 940815d..f271cf3 100644
--- a/boto/rds/__init__.py
+++ b/boto/rds/__init__.py
@@ -38,19 +38,34 @@
     :return: A list of :class:`boto.rds.regioninfo.RDSRegionInfo`
     """
     return [RDSRegionInfo(name='us-east-1',
-                          endpoint='rds.amazonaws.com'),
+                          endpoint='rds.us-east-1.amazonaws.com'),
             RDSRegionInfo(name='eu-west-1',
-                          endpoint='eu-west-1.rds.amazonaws.com'),
+                          endpoint='rds.eu-west-1.amazonaws.com'),
             RDSRegionInfo(name='us-west-1',
-                          endpoint='us-west-1.rds.amazonaws.com'),
+                          endpoint='rds.us-west-1.amazonaws.com'),
+            RDSRegionInfo(name='ap-northeast-1',
+                          endpoint='rds.ap-northeast-1.amazonaws.com'),
             RDSRegionInfo(name='ap-southeast-1',
-                          endpoint='ap-southeast-1.rds.amazonaws.com')
+                          endpoint='rds.ap-southeast-1.amazonaws.com')
             ]
 
-def connect_to_region(region_name):
+def connect_to_region(region_name, **kw_params):
+    """
+    Given a valid region name, return a 
+    :class:`boto.ec2.connection.EC2Connection`.
+    Any additional parameters after the region_name are passed on to
+    the connect method of the region object.
+
+    :type: str
+    :param region_name: The name of the region to connect to.
+
+    :rtype: :class:`boto.ec2.connection.EC2Connection` or ``None``
+    :return: A connection to the given region, or None if an invalid region
+             name is given
+    """
     for region in regions():
         if region.name == region_name:
-            return region.connect()
+            return region.connect(**kw_params)
     return None
 
 #boto.set_stream_logger('rds')
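`connect_to_region` is a simple linear lookup over `regions()`, returning `None` for unknown names. An isolated sketch of the pattern with dummy region objects (the endpoint strings mirror the corrected ones above; the return value of `connect` is faked for illustration):

```python
class _Region(object):
    # Dummy stand-in for boto.rds.regioninfo.RDSRegionInfo.
    def __init__(self, name, endpoint):
        self.name = name
        self.endpoint = endpoint

    def connect(self, **kw_params):
        # A real region would build an RDSConnection; here we just echo
        # what we were given so the flow is visible.
        return (self.endpoint, kw_params)

_REGIONS = [_Region('us-east-1', 'rds.us-east-1.amazonaws.com'),
            _Region('eu-west-1', 'rds.eu-west-1.amazonaws.com')]

def connect_to_region(region_name, **kw_params):
    # Linear scan; unknown names fall through to None.
    for region in _REGIONS:
        if region.name == region_name:
            return region.connect(**kw_params)
    return None
```

The `**kw_params` pass-through is the actual fix in this hunk: before it, credentials and proxy settings could not be supplied via `connect_to_region`.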
@@ -59,7 +74,7 @@
 
     DefaultRegionName = 'us-east-1'
     DefaultRegionEndpoint = 'rds.amazonaws.com'
-    APIVersion = '2009-10-16'
+    APIVersion = '2011-04-01'
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
@@ -69,9 +84,11 @@
             region = RDSRegionInfo(self, self.DefaultRegionName,
                                    self.DefaultRegionEndpoint)
         self.region = region
-        AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key,
-                                    is_secure, port, proxy, proxy_port, proxy_user,
-                                    proxy_pass, self.region.endpoint, debug,
+        AWSQueryConnection.__init__(self, aws_access_key_id,
+                                    aws_secret_access_key,
+                                    is_secure, port, proxy, proxy_port,
+                                    proxy_user, proxy_pass,
+                                    self.region.endpoint, debug,
                                     https_connection_factory, path)
 
     def _required_auth_capability(self):
@@ -85,9 +102,10 @@
         Retrieve all the DBInstances in your account.
 
         :type instance_id: str
-        :param instance_id: DB Instance identifier.  If supplied, only information
-                            this instance will be returned.  Otherwise, info
-                            about all DB Instances will be returned.
+        :param instance_id: DB Instance identifier.  If supplied, only
+                            information about this instance will be returned.
+                            Otherwise, info about all DB Instances will
+                            be returned.
 
         :type max_records: int
         :param max_records: The maximum number of records to be returned.
@@ -108,7 +126,8 @@
             params['MaxRecords'] = max_records
         if marker:
             params['Marker'] = marker
-        return self.get_list('DescribeDBInstances', params, [('DBInstance', DBInstance)])
+        return self.get_list('DescribeDBInstances', params,
+                             [('DBInstance', DBInstance)])
 
     def create_dbinstance(self, id, allocated_storage, instance_class,
                           master_username, master_password, port=3306,
@@ -134,9 +153,8 @@
                                   Valid values are [5-1024]
 
         :type instance_class: str
-        :param instance_class: The compute and memory capacity of the DBInstance.
-
-                               Valid values are:
+        :param instance_class: The compute and memory capacity of
+                               the DBInstance. Valid values are:
 
                                * db.m1.small
                                * db.m1.large
@@ -235,7 +253,7 @@
             params['AvailabilityZone'] = availability_zone
         if preferred_maintenance_window:
             params['PreferredMaintenanceWindow'] = preferred_maintenance_window
-        if backup_retention_period:
+        if backup_retention_period is not None:
             params['BackupRetentionPeriod'] = backup_retention_period
         if preferred_backup_window:
             params['PreferredBackupWindow'] = preferred_backup_window
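The `is not None` change matters because `BackupRetentionPeriod` can legitimately be `0` (disable automated backups); a bare truthiness test silently drops that value. A minimal demonstration of the difference:

```python
def build_params(backup_retention_period=None):
    params = {}
    # "is not None" keeps an explicit 0 (disable backups);
    # "if backup_retention_period:" would discard it.
    if backup_retention_period is not None:
        params['BackupRetentionPeriod'] = backup_retention_period
    return params
```

The same reasoning applies to the identical fix in `modify_dbinstance` below.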
@@ -406,7 +424,7 @@
             params['AllocatedStorage'] = allocated_storage
         if instance_class:
             params['DBInstanceClass'] = instance_class
-        if backup_retention_period:
+        if backup_retention_period is not None:
             params['BackupRetentionPeriod'] = backup_retention_period
         if preferred_backup_window:
             params['PreferredBackupWindow'] = preferred_backup_window
@@ -428,9 +446,9 @@
         :type skip_final_snapshot: bool
         :param skip_final_snapshot: This parameter determines whether a final
                                     db snapshot is created before the instance
-                                    is deleted.  If True, no snapshot is created.
-                                    If False, a snapshot is created before
-                                    deleting the instance.
+                                    is deleted.  If True, no snapshot
+                                    is created.  If False, a snapshot
+                                    is created before deleting the instance.
 
         :type final_snapshot_id: str
         :param final_snapshot_id: If a final snapshot is requested, this
@@ -568,9 +586,11 @@
         for i in range(0, len(parameters)):
             parameter = parameters[i]
             parameter.merge(params, i+1)
-        return self.get_list('ModifyDBParameterGroup', params, ParameterGroup)
+        return self.get_list('ModifyDBParameterGroup', params,
+                             ParameterGroup, verb='POST')
 
-    def reset_parameter_group(self, name, reset_all_params=False, parameters=None):
+    def reset_parameter_group(self, name, reset_all_params=False,
+                              parameters=None):
         """
         Resets some or all of the parameters of a ParameterGroup to the
         default value
@@ -579,8 +599,8 @@
         :param key_name: The name of the ParameterGroup to reset
 
         :type parameters: list of :class:`boto.rds.parametergroup.Parameter`
-        :param parameters: The parameters to reset.  If not supplied, all parameters
-                           will be reset.
+        :param parameters: The parameters to reset.  If not supplied,
+                           all parameters will be reset.
         """
         params = {'DBParameterGroupName':name}
         if reset_all_params:
@@ -611,7 +631,8 @@
 
         :type groupnames: list
         :param groupnames: A list of the names of security groups to retrieve.
-                           If not provided, all security groups will be returned.
+                           If not provided, all security groups will
+                           be returned.
 
         :type max_records: int
         :param max_records: The maximum number of records to be returned.
@@ -653,7 +674,8 @@
         params = {'DBSecurityGroupName':name}
         if description:
             params['DBSecurityGroupDescription'] = description
-        group = self.get_object('CreateDBSecurityGroup', params, DBSecurityGroup)
+        group = self.get_object('CreateDBSecurityGroup', params,
+                                DBSecurityGroup)
         group.name = name
         group.description = description
         return group
@@ -681,12 +703,13 @@
                            the rule to.
 
         :type ec2_security_group_name: string
-        :param ec2_security_group_name: The name of the EC2 security group you are
-                                        granting access to.
+        :param ec2_security_group_name: The name of the EC2 security group
+                                        you are granting access to.
 
         :type ec2_security_group_owner_id: string
-        :param ec2_security_group_owner_id: The ID of the owner of the EC2 security
-                                            group you are granting access to.
+        :param ec2_security_group_owner_id: The ID of the owner of the EC2
+                                            security group you are granting
+                                            access to.
 
         :type cidr_ip: string
         :param cidr_ip: The CIDR block you are providing access to.
@@ -702,7 +725,8 @@
             params['EC2SecurityGroupOwnerId'] = ec2_security_group_owner_id
         if cidr_ip:
             params['CIDRIP'] = urllib.quote(cidr_ip)
-        return self.get_object('AuthorizeDBSecurityGroupIngress', params, DBSecurityGroup)
+        return self.get_object('AuthorizeDBSecurityGroupIngress', params,
+                               DBSecurityGroup)
 
     def revoke_dbsecurity_group(self, group_name, ec2_security_group_name=None,
                                 ec2_security_group_owner_id=None, cidr_ip=None):
@@ -716,12 +740,13 @@
                            the rule from.
 
         :type ec2_security_group_name: string
-        :param ec2_security_group_name: The name of the EC2 security group from which
-                                        you are removing access.
+        :param ec2_security_group_name: The name of the EC2 security group
+                                        from which you are removing access.
 
         :type ec2_security_group_owner_id: string
-        :param ec2_security_group_owner_id: The ID of the owner of the EC2 security
-                                            from which you are removing access.
+        :param ec2_security_group_owner_id: The ID of the owner of the EC2
+                                            security from which you are
+                                            removing access.
 
         :type cidr_ip: string
         :param cidr_ip: The CIDR block from which you are removing access.
@@ -737,7 +762,8 @@
             params['EC2SecurityGroupOwnerId'] = ec2_security_group_owner_id
         if cidr_ip:
             params['CIDRIP'] = cidr_ip
-        return self.get_object('RevokeDBSecurityGroupIngress', params, DBSecurityGroup)
+        return self.get_object('RevokeDBSecurityGroupIngress', params,
+                               DBSecurityGroup)
 
     # For backwards compatibility.  This method was improperly named
     # in previous versions.  I have renamed it to match the others.
@@ -827,8 +853,8 @@
                               which the snapshot is created.
 
         :type instance_class: str
-        :param instance_class: The compute and memory capacity of the DBInstance.
-                               Valid values are:
+        :param instance_class: The compute and memory capacity of the
+                               DBInstance.  Valid values are:
                                db.m1.small | db.m1.large | db.m1.xlarge |
                                db.m2.2xlarge | db.m2.4xlarge
 
@@ -879,8 +905,8 @@
                              used if use_latest is False.
 
         :type instance_class: str
-        :param instance_class: The compute and memory capacity of the DBInstance.
-                               Valid values are:
+        :param instance_class: The compute and memory capacity of the
+                               DBInstance.  Valid values are:
                                db.m1.small | db.m1.large | db.m1.xlarge |
                                db.m2.2xlarge | db.m2.4xlarge
 
diff --git a/boto/roboto/__init__.py b/boto/roboto/__init__.py
new file mode 100644
index 0000000..792d600
--- /dev/null
+++ b/boto/roboto/__init__.py
@@ -0,0 +1 @@
+#
diff --git a/boto/roboto/awsqueryrequest.py b/boto/roboto/awsqueryrequest.py
new file mode 100644
index 0000000..9e05ac6
--- /dev/null
+++ b/boto/roboto/awsqueryrequest.py
@@ -0,0 +1,504 @@
+# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2010, Eucalyptus Systems, Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+import sys
+import os
+import boto
+import optparse
+import copy
+import boto.exception
+import boto.utils
+import boto.jsonresponse
+import boto.roboto.awsqueryservice
+
+import bdb
+import traceback
+try:
+    import epdb as debugger
+except ImportError:
+    import pdb as debugger
+
+def boto_except_hook(debugger_flag, debug_flag):
+    def excepthook(typ, value, tb):
+        if typ is bdb.BdbQuit:
+            sys.exit(1)
+        sys.excepthook = sys.__excepthook__
+
+        if debugger_flag and sys.stdout.isatty() and sys.stdin.isatty():
+            if debugger.__name__ == 'epdb':
+                debugger.post_mortem(tb, typ, value)
+            else:
+                debugger.post_mortem(tb)
+        elif debug_flag:
+            traceback.print_tb(tb)
+            sys.exit(1)
+        else:
+            print value
+            sys.exit(1)
+
+    return excepthook
+
+class Line(object):
+
+    def __init__(self, fmt, data, label):
+        self.fmt = fmt
+        self.data = data
+        self.label = label
+        self.line = '%s\t' % label
+        self.printed = False
+
+    def append(self, datum):
+        self.line += '%s\t' % datum
+
+    def print_it(self):
+        if not self.printed:
+            print self.line
+            self.printed = True
+
+class RequiredParamError(boto.exception.BotoClientError):
+
+    def __init__(self, required):
+        self.required = required
+        s = 'Required parameters are missing: %s' % self.required
+        boto.exception.BotoClientError.__init__(self, s)
+
+class EncoderError(boto.exception.BotoClientError):
+
+    def __init__(self, error_msg):
+        s = 'Error encoding value (%s)' % error_msg
+        boto.exception.BotoClientError.__init__(self, s)
+        
+class FilterError(boto.exception.BotoClientError):
+
+    def __init__(self, filters):
+        self.filters = filters
+        s = 'Unknown filters: %s' % self.filters
+        boto.exception.BotoClientError.__init__(self, s)
+        
+class Encoder(object):
+
+    @classmethod
+    def encode(cls, p, rp, v, label=None):
+        if p.name.startswith('_'):
+            return
+        try:
+            mthd = getattr(cls, 'encode_'+p.ptype)
+            mthd(p, rp, v, label)
+        except AttributeError:
+            raise EncoderError('Unknown type: %s' % p.ptype)
+        
+    @classmethod
+    def encode_string(cls, p, rp, v, l):
+        if l:
+            label = l
+        else:
+            label = p.name
+        rp[label] = v
+
+    encode_file = encode_string
+    encode_enum = encode_string
+
+    @classmethod
+    def encode_integer(cls, p, rp, v, l):
+        if l:
+            label = l
+        else:
+            label = p.name
+        rp[label] = '%d' % v
+        
+    @classmethod
+    def encode_boolean(cls, p, rp, v, l):
+        if l:
+            label = l
+        else:
+            label = p.name
+        if v:
+            v = 'true'
+        else:
+            v = 'false'
+        rp[label] = v
+        
+    @classmethod
+    def encode_datetime(cls, p, rp, v, l):
+        if l:
+            label = l
+        else:
+            label = p.name
+        rp[label] = v
+        
+    @classmethod
+    def encode_array(cls, p, rp, v, l):
+        v = boto.utils.mklist(v)
+        if l:
+            label = l
+        else:
+            label = p.name
+        label = label + '.%d'
+        for i, value in enumerate(v):
+            rp[label%(i+1)] = value
+            
+class AWSQueryRequest(object):
+
+    ServiceClass = None
+
+    Description = ''
+    Params = []
+    Args = []
+    Filters = []
+    Response = {}
+
+    CLITypeMap = {'string' : 'string',
+                  'integer' : 'int',
+                  'int' : 'int',
+                  'enum' : 'choice',
+                  'datetime' : 'string',
+                  'dateTime' : 'string',
+                  'file' : 'string',
+                  'boolean' : None}
+
+    @classmethod
+    def name(cls):
+        return cls.__name__
+
+    def __init__(self, **args):
+        self.args = args
+        self.parser = None
+        self.cli_options = None
+        self.cli_args = None
+        self.cli_output_format = None
+        self.connection = None
+        self.list_markers = []
+        self.item_markers = []
+        self.request_params = {}
+        self.connection_args = None
+
+    def __repr__(self):
+        return self.name()
+
+    def get_connection(self, **args):
+        if self.connection is None:
+            self.connection = self.ServiceClass(**args)
+        return self.connection
+
+    @property
+    def status(self):
+        retval = None
+        if self.http_response is not None:
+            retval = self.http_response.status
+        return retval
+
+    @property
+    def reason(self):
+        retval = None
+        if self.http_response is not None:
+            retval = self.http_response.reason
+        return retval
+
+    @property
+    def request_id(self):
+        retval = None
+        if self.aws_response is not None:
+            retval = getattr(self.aws_response, 'requestId', None)
+        return retval
+
+    def process_filters(self):
+        filters = self.args.get('filters', [])
+        filter_names = [f['name'] for f in self.Filters]
+        unknown_filters = [f for f in filters if f not in filter_names]
+        if unknown_filters:
+            raise FilterError(unknown_filters)
+        for i, filter in enumerate(self.Filters):
+            name = filter['name']
+            if name in filters:
+                self.request_params['Filter.%d.Name' % (i+1)] = name
+                for j, value in enumerate(boto.utils.mklist(filters[name])):
+                    Encoder.encode(filter, self.request_params, value,
+                                   'Filter.%d.Value.%d' % (i+1,j+1))
+
+    def process_args(self, **args):
+        """
+        Responsible for walking through Params defined for the request and:
+
+        * Matching them with keyword parameters passed to the request
+          constructor or via the command line.
+        * Checking to see if all required parameters have been specified
+          and raising an exception, if not.
+        * Encoding each value into the set of request parameters that will
+          be sent in the request to the AWS service.
+        """
+        self.args.update(args)
+        self.connection_args = copy.copy(self.args)
+        if 'debug' in self.args and self.args['debug'] >= 2:
+            boto.set_stream_logger(self.name())
+        required = [p.name for p in self.Params+self.Args if not p.optional]
+        for param in self.Params+self.Args:
+            if param.long_name:
+                python_name = param.long_name.replace('-', '_')
+            else:
+                python_name = boto.utils.pythonize_name(param.name, '_')
+            value = None
+            if python_name in self.args:
+                value = self.args[python_name]
+            if value is None:
+                value = param.default
+            if value is not None:
+                if param.name in required:
+                    required.remove(param.name)
+                if param.request_param:
+                    if param.encoder:
+                        param.encoder(param, self.request_params, value)
+                    else:
+                        Encoder.encode(param, self.request_params, value)
+            if python_name in self.args:
+                del self.connection_args[python_name]
+        if required:
+            l = []
+            for p in self.Params+self.Args:
+                if p.name in required:
+                    if p.short_name and p.long_name:
+                        l.append('(%s, %s)' % (p.optparse_short_name,
+                                               p.optparse_long_name))
+                    elif p.short_name:
+                        l.append('(%s)' % p.optparse_short_name)
+                    else:
+                        l.append('(%s)' % p.optparse_long_name)
+            raise RequiredParamError(','.join(l))
+        boto.log.debug('request_params: %s' % self.request_params)
+        self.process_markers(self.Response)
+
+    def process_markers(self, fmt, prev_name=None):
+        if fmt and fmt['type'] == 'object':
+            for prop in fmt['properties']:
+                self.process_markers(prop, fmt['name'])
+        elif fmt and fmt['type'] == 'array':
+            self.list_markers.append(prev_name)
+            self.item_markers.append(fmt['name'])
+        
+    def send(self, verb='GET', **args):
+        self.process_args(**args)
+        self.process_filters()
+        conn = self.get_connection(**self.connection_args)
+        self.http_response = conn.make_request(self.name(),
+                                               self.request_params,
+                                               verb=verb)
+        self.body = self.http_response.read()
+        boto.log.debug(self.body)
+        if self.http_response.status == 200:
+            self.aws_response = boto.jsonresponse.Element(list_marker=self.list_markers,
+                                                          item_marker=self.item_markers)
+            h = boto.jsonresponse.XmlHandler(self.aws_response, self)
+            h.parse(self.body)
+            return self.aws_response
+        else:
+            boto.log.error('%s %s' % (self.http_response.status,
+                                      self.http_response.reason))
+            boto.log.error('%s' % self.body)
+            raise conn.ResponseError(self.http_response.status,
+                                     self.http_response.reason,
+                                     self.body)
+
+    def add_standard_options(self):
+        group = optparse.OptionGroup(self.parser, 'Standard Options')
+        # add standard options that all commands get
+        group.add_option('-D', '--debug', action='store_true',
+                         help='Turn on all debugging output')
+        group.add_option('--debugger', action='store_true',
+                         default=False,
+                         help='Enable interactive debugger on error')
+        group.add_option('-U', '--url', action='store',
+                         help='Override service URL with value provided')
+        group.add_option('--region', action='store',
+                         help='Name of the region to connect to')
+        group.add_option('-I', '--access-key-id', action='store',
+                         help='Override access key value')
+        group.add_option('-S', '--secret-key', action='store',
+                         help='Override secret key value')
+        group.add_option('--version', action='store_true',
+                         help='Display version string')
+        if self.Filters:
+            group.add_option('--help-filters', action='store_true',
+                             help='Display list of available filters')
+            group.add_option('--filter', action='append',
+                             metavar=' name=value',
+                             help='A filter for limiting the results')
+        self.parser.add_option_group(group)
+
+    def process_standard_options(self, options, args, d):
+        if hasattr(options, 'help_filters') and options.help_filters:
+            print 'Available filters:'
+            for filter in self.Filters:
+                print '%s\t%s' % (filter.name, filter.doc)
+            sys.exit(0)
+        if options.debug:
+            self.args['debug'] = 2
+        if options.url:
+            self.args['url'] = options.url
+        if options.region:
+            self.args['region'] = options.region
+        if options.access_key_id:
+            self.args['aws_access_key_id'] = options.access_key_id
+        if options.secret_key:
+            self.args['aws_secret_access_key'] = options.secret_key
+        if options.version:
+            # TODO - Where should the version # come from?
+            print 'version x.xx'
+            exit(0)
+        sys.excepthook = boto_except_hook(options.debugger,
+                                          options.debug)
+
+    def get_usage(self):
+        s = 'usage: %prog [options] '
+        l = [ a.long_name for a in self.Args ]
+        s += ' '.join(l)
+        for a in self.Args:
+            if a.doc:
+                s += '\n\n\t%s - %s' % (a.long_name, a.doc)
+        return s
+    
+    def build_cli_parser(self):
+        self.parser = optparse.OptionParser(description=self.Description,
+                                            usage=self.get_usage())
+        self.add_standard_options()
+        for param in self.Params:
+            ptype = action = choices = None
+            if param.ptype in self.CLITypeMap:
+                ptype = self.CLITypeMap[param.ptype]
+                action = 'store'
+            if param.ptype == 'boolean':
+                action = 'store_true'
+            elif param.ptype == 'array':
+                if len(param.items) == 1:
+                    ptype = param.items[0]['type']
+                    action = 'append'
+            elif param.cardinality != 1:
+                action = 'append'
+            if ptype or action == 'store_true':
+                if param.short_name:
+                    self.parser.add_option(param.optparse_short_name,
+                                           param.optparse_long_name,
+                                           action=action, type=ptype,
+                                           choices=param.choices,
+                                           help=param.doc)
+                elif param.long_name:
+                    self.parser.add_option(param.optparse_long_name,
+                                           action=action, type=ptype,
+                                           choices=param.choices,
+                                           help=param.doc)
+
+    def do_cli(self):
+        if not self.parser:
+            self.build_cli_parser()
+        self.cli_options, self.cli_args = self.parser.parse_args()
+        d = {}
+        self.process_standard_options(self.cli_options, self.cli_args, d)
+        for param in self.Params:
+            if param.long_name:
+                p_name = param.long_name.replace('-', '_')
+            else:
+                p_name = boto.utils.pythonize_name(param.name)
+            value = getattr(self.cli_options, p_name)
+            if param.ptype == 'file' and value:
+                if value == '-':
+                    value = sys.stdin.read()
+                else:
+                    path = os.path.expanduser(value)
+                    path = os.path.expandvars(path)
+                    if os.path.isfile(path):
+                        fp = open(path)
+                        value = fp.read()
+                        fp.close()
+                    else:
+                        self.parser.error('Unable to read file: %s' % path)
+            d[p_name] = value
+        for arg in self.Args:
+            if arg.long_name:
+                p_name = arg.long_name.replace('-', '_')
+            else:
+                p_name = boto.utils.pythonize_name(arg.name)
+            value = None
+            if arg.cardinality == 1:
+                if len(self.cli_args) >= 1:
+                    value = self.cli_args[0]
+            else:
+                value = self.cli_args
+            d[p_name] = value
+        self.args.update(d)
+        if hasattr(self.cli_options, 'filter') and self.cli_options.filter:
+            d = {}
+            for filter in self.cli_options.filter:
+                name, value = filter.split('=', 1)
+                d[name] = value
+            if 'filters' in self.args:
+                self.args['filters'].update(d)
+            else:
+                self.args['filters'] = d
+        try:
+            response = self.main()
+            self.cli_formatter(response)
+        except RequiredParamError, e:
+            print e
+            sys.exit(1)
+        except self.ServiceClass.ResponseError, err:
+            print 'Error(%s): %s' % (err.error_code, err.error_message)
+            sys.exit(1)
+        except boto.roboto.awsqueryservice.NoCredentialsError, err:
+            print 'Unable to find credentials.'
+            sys.exit(1)
+        except Exception, e:
+            print e
+            sys.exit(1)
+
+    def _generic_cli_formatter(self, fmt, data, label=''):
+        if fmt['type'] == 'object':
+            for prop in fmt['properties']:
+                if 'name' in fmt:
+                    if fmt['name'] in data:
+                        data = data[fmt['name']]
+                    if fmt['name'] in self.list_markers:
+                        label = fmt['name']
+                        if label[-1] == 's':
+                            label = label[0:-1]
+                        label = label.upper()
+                self._generic_cli_formatter(prop, data, label)
+        elif fmt['type'] == 'array':
+            for item in data:
+                line = Line(fmt, item, label)
+                if isinstance(item, dict):
+                    for field_name in item:
+                        line.append(item[field_name])
+                elif isinstance(item, basestring):
+                    line.append(item)
+                line.print_it()
+
+    def cli_formatter(self, data):
+        """
+        This method is responsible for formatting the output for the
+        command line interface.  The default behavior is to call the
+        generic CLI formatter which attempts to print something
+        reasonable.  If you want specific formatting, you should
+        override this method and do your own thing.
+
+        :type data: dict
+        :param data: The data returned by AWS.
+        """
+        if data:
+            self._generic_cli_formatter(self.Response, data)
+
+
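The numbered-label scheme that `Encoder.encode_array` implements is easiest to see in isolation. A minimal standalone sketch (the function and the parameter names here are illustrative, not part of boto):

```python
# Sketch of how AWS Query APIs flatten list parameters into numbered
# keys, mirroring Encoder.encode_array above (names are illustrative).
def encode_array(request_params, label, values):
    # 'InstanceId' becomes 'InstanceId.1', 'InstanceId.2', ...
    for i, value in enumerate(values):
        request_params['%s.%d' % (label, i + 1)] = value

rp = {}
encode_array(rp, 'InstanceId', ['i-1234', 'i-5678'])
# rp is now {'InstanceId.1': 'i-1234', 'InstanceId.2': 'i-5678'}
```

The same pattern, with a `Filter.N.Name` / `Filter.N.Value.M` label, is what `process_filters` builds for the `--filter` command-line options.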
diff --git a/boto/roboto/awsqueryservice.py b/boto/roboto/awsqueryservice.py
new file mode 100644
index 0000000..0ca78c2
--- /dev/null
+++ b/boto/roboto/awsqueryservice.py
@@ -0,0 +1,121 @@
+import os
+import urlparse
+import boto
+import boto.connection
+import boto.jsonresponse
+import boto.exception
+import awsqueryrequest
+
+class NoCredentialsError(boto.exception.BotoClientError):
+
+    def __init__(self):
+        s = 'Unable to find credentials'
+        boto.exception.BotoClientError.__init__(self, s)
+
+class AWSQueryService(boto.connection.AWSQueryConnection):
+
+    Name = ''
+    Description = ''
+    APIVersion = ''
+    Authentication = 'sign-v2'
+    Path = '/'
+    Port = 443
+    Provider = 'aws'
+    EnvURL = 'AWS_URL'
+
+    Regions = []
+
+    def __init__(self, **args):
+        self.args = args
+        self.check_for_credential_file()
+        self.check_for_env_url()
+        if 'host' not in self.args:
+            if self.Regions:
+                region_name = self.args.get('region_name',
+                                            self.Regions[0]['name'])
+                for region in self.Regions:
+                    if region['name'] == region_name:
+                        self.args['host'] = region['endpoint']
+        if 'path' not in self.args:
+            self.args['path'] = self.Path
+        if 'port' not in self.args:
+            self.args['port'] = self.Port
+        try:
+            boto.connection.AWSQueryConnection.__init__(self, **self.args)
+            self.aws_response = None
+        except boto.exception.NoAuthHandlerFound:
+            raise NoCredentialsError()
+
+    def check_for_credential_file(self):
+        """
+        Checks for the existence of an AWS credential file.
+        If the environment variable AWS_CREDENTIAL_FILE is
+        set and points to a file, that file will be read and
+        searched for credentials.
+        Note that if credentials have been explicitly passed
+        into the class constructor, those values always take
+        precedence.
+        """
+        if 'AWS_CREDENTIAL_FILE' in os.environ:
+            path = os.environ['AWS_CREDENTIAL_FILE']
+            path = os.path.expanduser(path)
+            path = os.path.expandvars(path)
+            if os.path.isfile(path):
+                fp = open(path)
+                lines = fp.readlines()
+                fp.close()
+                for line in lines:
+                    if line[0] != '#':
+                        if '=' in line:
+                            name, value = line.split('=', 1)
+                            if name.strip() == 'AWSAccessKeyId':
+                                if 'aws_access_key_id' not in self.args:
+                                    value = value.strip()
+                                    self.args['aws_access_key_id'] = value
+                            elif name.strip() == 'AWSSecretKey':
+                                if 'aws_secret_access_key' not in self.args:
+                                    value = value.strip()
+                                    self.args['aws_secret_access_key'] = value
+            else:
+                print 'Warning: unable to read AWS_CREDENTIAL_FILE'
+
+    def check_for_env_url(self):
+        """
+        First checks to see if a url argument was explicitly passed
+        in.  If so, that will be used.  If not, it checks for the
+        existence of the environment variable specified in EnvURL.
+        If this is set, it should contain a fully qualified URL to the
+        service you want to use.
+        Note that any values passed explicitly to the class constructor
+        will take precedence.
+        """
+        url = self.args.get('url', None)
+        if url:
+            del self.args['url']
+        if not url and self.EnvURL in os.environ:
+            url = os.environ[self.EnvURL]
+        if url:
+            rslt = urlparse.urlparse(url)
+            if 'is_secure' not in self.args:
+                if rslt.scheme == 'https':
+                    self.args['is_secure'] = True
+                else:
+                    self.args['is_secure'] = False
+
+            host = rslt.netloc
+            port = None
+            l = host.split(':')
+            if len(l) > 1:
+                host = l[0]
+                port = int(l[1])
+            if 'host' not in self.args:
+                self.args['host'] = host
+            if port and 'port' not in self.args:
+                self.args['port'] = port
+
+            if rslt.path and 'path' not in self.args:
+                self.args['path'] = rslt.path
+            
+    def _required_auth_capability(self):
+        return [self.Authentication]
+        
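The URL handling in `check_for_env_url` boils down to a `urlparse` call plus a host:port split. A standalone sketch of that logic (the function name is illustrative; like the real code, it leaves `port` unset when the URL omits one):

```python
try:
    from urlparse import urlparse       # Python 2, as in this codebase
except ImportError:
    from urllib.parse import urlparse   # Python 3

def endpoint_from_url(url):
    # Mirror check_for_env_url: scheme -> is_secure, netloc -> host/port,
    # path carried through when present.
    rslt = urlparse(url)
    args = {'is_secure': rslt.scheme == 'https'}
    host, _, port = rslt.netloc.partition(':')
    args['host'] = host
    if port:
        args['port'] = int(port)
    if rslt.path:
        args['path'] = rslt.path
    return args

args = endpoint_from_url('https://ec2.example.com:8773/services/Eucalyptus')
# {'is_secure': True, 'host': 'ec2.example.com', 'port': 8773,
#  'path': '/services/Eucalyptus'}
```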
diff --git a/boto/roboto/param.py b/boto/roboto/param.py
new file mode 100644
index 0000000..6136400
--- /dev/null
+++ b/boto/roboto/param.py
@@ -0,0 +1,147 @@
+# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2010, Eucalyptus Systems, Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+import os
+
+class Converter(object):
+    
+    @classmethod
+    def convert_string(cls, param, value):
+        # TODO: could do length validation, etc. here
+        if not isinstance(value, basestring):
+            raise ValueError
+        return value
+
+    @classmethod
+    def convert_integer(cls, param, value):
+        # TODO: could do range checking here
+        return int(value)
+        
+    @classmethod
+    def convert_boolean(cls, param, value):
+        """
+        For command line arguments, just the presence
+        of the option means True so just return True
+        """
+        return True
+        
+    @classmethod
+    def convert_file(cls, param, value):
+        if os.path.isfile(value):
+            return value
+        raise ValueError
+        
+    @classmethod
+    def convert_dir(cls, param, value):
+        if os.path.isdir(value):
+            return value
+        raise ValueError
+        
+    @classmethod
+    def convert(cls, param, value):
+        try:
+            if hasattr(cls, 'convert_'+param.ptype):
+                mthd = getattr(cls, 'convert_'+param.ptype)
+            else:
+                mthd = cls.convert_string
+            return mthd(param, value)
+        except (ValueError, TypeError):
+            raise ValueError('Invalid value (%s) for parameter %s' %
+                             (value, param.name))
+        
+class Param(object):
+
+    def __init__(self, name=None, ptype='string', optional=True,
+                 short_name=None, long_name=None, doc='',
+                 metavar=None, cardinality=1, default=None,
+                 choices=None, encoder=None, request_param=True):
+        self.name = name
+        self.ptype = ptype
+        self.optional = optional
+        self.short_name = short_name
+        self.long_name = long_name
+        self.doc = doc
+        self.metavar = metavar
+        self.cardinality = cardinality
+        self.default = default
+        self.choices = choices
+        self.encoder = encoder
+        self.request_param = request_param
+
+    @property
+    def optparse_long_name(self):
+        ln = None
+        if self.long_name:
+            ln = '--%s' % self.long_name
+        return ln
+
+    @property
+    def synopsis_long_name(self):
+        ln = None
+        if self.long_name:
+            ln = '--%s' % self.long_name
+        return ln
+
+    @property
+    def getopt_long_name(self):
+        ln = None
+        if self.long_name:
+            ln = '%s' % self.long_name
+            if self.ptype != 'boolean':
+                ln += '='
+        return ln
+
+    @property
+    def optparse_short_name(self):
+        sn = None
+        if self.short_name:
+            sn = '-%s' % self.short_name
+        return sn
+
+    @property
+    def synopsis_short_name(self):
+        sn = None
+        if self.short_name:
+            sn = '-%s' % self.short_name
+        return sn
+
+    @property
+    def getopt_short_name(self):
+        sn = None
+        if self.short_name:
+            sn = '%s' % self.short_name
+            if self.ptype != 'boolean':
+                sn += ':'
+        return sn
+
+    def convert(self, value):
+        """
+        Convert a string value, as received from the command line
+        tools, to the appropriate type of value.
+        Raise a ValueError if the value can't be converted.
+
+        :type value: str
+        :param value: The value to convert.  This should always
+                      be a string.
+        """
+        return Converter.convert(self, value)
+
+
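The `getopt_*_name` properties encode the usual getopt convention: options that take a value get a trailing `:` (short form) or `=` (long form), while boolean flags get neither. That rule in isolation (a sketch, not boto code):

```python
def getopt_names(short_name, long_name, ptype):
    # Mirrors Param.getopt_short_name / Param.getopt_long_name:
    # boolean options take no value, so no ':' / '=' suffix.
    suffix_short = '' if ptype == 'boolean' else ':'
    suffix_long = '' if ptype == 'boolean' else '='
    sn = short_name + suffix_short if short_name else None
    ln = long_name + suffix_long if long_name else None
    return sn, ln

getopt_names('i', 'instance-id', 'string')   # ('i:', 'instance-id=')
getopt_names('D', 'debug', 'boolean')        # ('D', 'debug')
```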
diff --git a/boto/route53/connection.py b/boto/route53/connection.py
index bbd218c..7c3f1b8 100644
--- a/boto/route53/connection.py
+++ b/boto/route53/connection.py
@@ -45,10 +45,14 @@
 #boto.set_stream_logger('dns')
 
 class Route53Connection(AWSAuthConnection):
-
     DefaultHost = 'route53.amazonaws.com'
-    Version = '2010-10-01'
-    XMLNameSpace = 'https://route53.amazonaws.com/doc/2010-10-01/'
+    """The default Route53 API endpoint to connect to."""
+
+    Version = '2011-05-05'
+    """Route53 API version."""
+
+    XMLNameSpace = 'https://route53.amazonaws.com/doc/2011-05-05/'
+    """XML schema for this Route53 API version."""
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  port=None, proxy=None, proxy_port=None,
@@ -71,12 +75,20 @@
 
     # Hosted Zones
 
-    def get_all_hosted_zones(self):
+    def get_all_hosted_zones(self, start_marker=None, zone_list=None):
         """
         Returns a Python data structure with information about all
         Hosted Zones defined for the AWS account.
+
+        :param str start_marker: start marker to pass when fetching additional
+            results after a truncated list
+        :param list zone_list: a HostedZones list to prepend to results
         """
-        response = self.make_request('GET', '/%s/hostedzone' % self.Version)
+        params = {}
+        if start_marker:
+            params = {'marker': start_marker}
+        response = self.make_request('GET', '/%s/hostedzone' % self.Version,
+                params=params)
         body = response.read()
         boto.log.debug(body)
         if response.status >= 300:
@@ -87,8 +99,14 @@
                                       item_marker=('HostedZone',))
         h = boto.jsonresponse.XmlHandler(e, None)
         h.parse(body)
+        if zone_list:
+            e['ListHostedZonesResponse']['HostedZones'].extend(zone_list)
+        while 'NextMarker' in e['ListHostedZonesResponse']:
+            next_marker = e['ListHostedZonesResponse']['NextMarker']
+            zone_list = e['ListHostedZonesResponse']['HostedZones']
+            e = self.get_all_hosted_zones(next_marker, zone_list)
         return e
-    
+
     def get_hosted_zone(self, hosted_zone_id):
         """
         Get detailed information about a particular Hosted Zone.
@@ -118,28 +136,25 @@
         
         :type domain_name: str
         :param domain_name: The name of the domain. This should be a
-                            fully-specified domain, and should end with
-                            a final period as the last label indication.
-                            If you omit the final period, Amazon Route 53
-                            assumes the domain is relative to the root.
-                            This is the name you have registered with your
-                            DNS registrar. It is also the name you will
-                            delegate from your registrar to the Amazon
-                            Route 53 delegation servers returned in
-                            response to this request.A list of strings
-                            with the image IDs wanted
+            fully-specified domain, and should end with a final period
+            as the last label indication.  If you omit the final period,
+            Amazon Route 53 assumes the domain is relative to the root.
+            This is the name you have registered with your DNS registrar.
+            It is also the name you will delegate from your registrar to
+            the Amazon Route 53 delegation servers returned in
+            response to this request.
 
         :type caller_ref: str
         :param caller_ref: A unique string that identifies the request
-                           and that allows failed CreateHostedZone requests
-                           to be retried without the risk of executing the
-                           operation twice.
-                           If you don't provide a value for this, boto will
-                           generate a Type 4 UUID and use that.
+            and that allows failed CreateHostedZone requests to be retried
+            without the risk of executing the operation twice.  If you don't
+            provide a value for this, boto will generate a Type 4 UUID and
+            use that.
 
         :type comment: str
-        :param comment: Any comments you want to include about the hosted
-                        zone.
+        :param comment: Any comments you want to include about the hosted
+            zone.
 
         """
         if caller_ref is None:
@@ -182,7 +197,7 @@
     # Resource Record Sets
 
     def get_all_rrsets(self, hosted_zone_id, type=None,
-                       name=None, maxitems=None):
+                       name=None, identifier=None, maxitems=None):
         """
         Retrieve the Resource Record Sets defined for this Hosted Zone.
         Returns the raw XML data returned by the Route53 call.
@@ -192,29 +207,50 @@
 
         :type type: str
         :param type: The type of resource record set to begin the record
-                     listing from.  Valid choices are:
+            listing from.  Valid choices are:
 
-                     * A
-                     * AAAA
-                     * CNAME
-                     * MX
-                     * NS
-                     * PTR
-                     * SOA
-                     * SPF
-                     * SRV
-                     * TXT
+                * A
+                * AAAA
+                * CNAME
+                * MX
+                * NS
+                * PTR
+                * SOA
+                * SPF
+                * SRV
+                * TXT
+
+            Valid values for weighted resource record sets:
+
+                * A
+                * AAAA
+                * CNAME
+                * TXT
+
+            Valid values for Zone Apex Aliases:
+
+                * A
+                * AAAA
 
         :type name: str
         :param name: The first name in the lexicographic ordering of domain
                      names to be retrieved
 
+        :type identifier: str
+        :param identifier: In a hosted zone that includes weighted resource
+            record sets (multiple resource record sets with the same DNS
+            name and type that are differentiated only by SetIdentifier),
+            if results were truncated for a given DNS name and type,
+            the value of SetIdentifier for the next resource record
+            set that has the current DNS name and type
+
         :type maxitems: int
         :param maxitems: The maximum number of records
 
         """
         from boto.route53.record import ResourceRecordSets
-        params = {'type': type, 'name': name, 'maxitems': maxitems}
+        params = {'type': type, 'name': name,
+                  'Identifier': identifier, 'maxitems': maxitems}
         uri = '/%s/hostedzone/%s/rrset' % (self.Version, hosted_zone_id)
         response = self.make_request('GET', uri, params=params)
         body = response.read()
@@ -240,7 +276,7 @@
 
         :type xml_body: str
         :param xml_body: The list of changes to be made, defined in the
-                         XML schema defined by the Route53 service.
+            XML schema defined by the Route53 service.
 
         """
         uri = '/%s/hostedzone/%s/rrset' % (self.Version, hosted_zone_id)
@@ -267,8 +303,7 @@
 
         :type change_id: str
         :param change_id: The unique identifier for the set of changes.
-                          This ID is returned in the response to the
-                          change_rrsets method.
+            This ID is returned in the response to the change_rrsets method.
 
         """
         uri = '/%s/change/%s' % (self.Version, change_id)
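The updated `get_all_hosted_zones` above follows `NextMarker` recursively until the listing is no longer truncated. A minimal self-contained sketch of that marker-following pattern (`fetch_page` and its page contents are illustrative stand-ins, not part of boto):

```python
def fetch_page(marker=None):
    # Stand-in for one ListHostedZones API call, keyed by marker token.
    # A page with a "NextMarker" key means the listing was truncated.
    pages = {
        None: {"HostedZones": ["zone-a", "zone-b"], "NextMarker": "m1"},
        "m1": {"HostedZones": ["zone-c"], "NextMarker": "m2"},
        "m2": {"HostedZones": ["zone-d"]},
    }
    return pages[marker]

def get_all_zones(marker=None, acc=None):
    # Accumulate each page, then recurse while the response is truncated,
    # mirroring the NextMarker loop in get_all_hosted_zones.
    acc = acc if acc is not None else []
    page = fetch_page(marker)
    acc.extend(page["HostedZones"])
    if "NextMarker" in page:
        return get_all_zones(page["NextMarker"], acc)
    return acc

print(get_all_zones())  # -> ['zone-a', 'zone-b', 'zone-c', 'zone-d']
```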
diff --git a/boto/route53/record.py b/boto/route53/record.py
index 24f0482..6e91a83 100644
--- a/boto/route53/record.py
+++ b/boto/route53/record.py
@@ -26,7 +26,7 @@
 class ResourceRecordSets(ResultSet):
 
     ChangeResourceRecordSetsBody = """<?xml version="1.0" encoding="UTF-8"?>
-    <ChangeResourceRecordSetsRequest xmlns="https://route53.amazonaws.com/doc/2010-10-01/">
+    <ChangeResourceRecordSetsRequest xmlns="https://route53.amazonaws.com/doc/2011-05-05/">
             <ChangeBatch>
                 <Comment>%(comment)s</Comment>
                 <Changes>%(changes)s</Changes>
@@ -51,9 +51,9 @@
     def __repr__(self):
         return '<ResourceRecordSets: %s>' % self.hosted_zone_id
 
-    def add_change(self, action, name, type, ttl=600):
+    def add_change(self, action, name, type, ttl=600,
+                   alias_hosted_zone_id=None, alias_dns_name=None):
         """Add a change request"""
-        change = Record(name, type, ttl)
+        change = Record(name, type, ttl,
+                        alias_hosted_zone_id=alias_hosted_zone_id,
+                        alias_dns_name=alias_dns_name)
         self.changes.append([action, change])
         return change
 
@@ -104,40 +104,72 @@
     XMLBody = """<ResourceRecordSet>
         <Name>%(name)s</Name>
         <Type>%(type)s</Type>
-        <TTL>%(ttl)s</TTL>
-        <ResourceRecords>%(records)s</ResourceRecords>
+        %(body)s
     </ResourceRecordSet>"""
 
+    ResourceRecordsBody = """
+        <TTL>%(ttl)s</TTL>
+        <ResourceRecords>
+            %(records)s
+        </ResourceRecords>"""
+
     ResourceRecordBody = """<ResourceRecord>
         <Value>%s</Value>
     </ResourceRecord>"""
 
+    AliasBody = """<AliasTarget>
+        <HostedZoneId>%s</HostedZoneId>
+        <DNSName>%s</DNSName>
+    </AliasTarget>"""
 
-    def __init__(self, name=None, type=None, ttl=600, resource_records=None):
+    def __init__(self, name=None, type=None, ttl=600, resource_records=None,
+                 alias_hosted_zone_id=None, alias_dns_name=None):
         self.name = name
         self.type = type
         self.ttl = ttl
         if resource_records == None:
             resource_records = []
         self.resource_records = resource_records
+        self.alias_hosted_zone_id = alias_hosted_zone_id
+        self.alias_dns_name = alias_dns_name
     
     def add_value(self, value):
         """Add a resource record value"""
         self.resource_records.append(value)
 
+    def set_alias(self, alias_hosted_zone_id, alias_dns_name):
+        """Make this an alias resource record set"""
+        self.alias_hosted_zone_id = alias_hosted_zone_id
+        self.alias_dns_name = alias_dns_name
+
     def to_xml(self):
         """Spit this resource record set out as XML"""
-        records = ""
-        for r in self.resource_records:
-            records += self.ResourceRecordBody % r
+        if self.alias_hosted_zone_id is not None and self.alias_dns_name is not None:
+            # Use alias
+            body = self.AliasBody % (self.alias_hosted_zone_id, self.alias_dns_name)
+        else:
+            # Use resource record(s)
+            records = ""
+            for r in self.resource_records:
+                records += self.ResourceRecordBody % r
+            body = self.ResourceRecordsBody % {
+                "ttl": self.ttl,
+                "records": records,
+            }
         params = {
             "name": self.name,
             "type": self.type,
-            "ttl": self.ttl,
-            "records": records
+            "body": body,
         }
         return self.XMLBody % params
 
+    def to_print(self):
+        if self.alias_hosted_zone_id is not None and self.alias_dns_name is not None:
+            # Show alias
+            return 'ALIAS ' + self.alias_hosted_zone_id + ' ' + self.alias_dns_name
+        else:
+            # Show resource record(s)
+            return ",".join(self.resource_records)
+
     def endElement(self, name, value, connection):
         if name == 'Name':
             self.name = value
@@ -147,6 +179,10 @@
             self.ttl = value
         elif name == 'Value':
             self.resource_records.append(value)
+        elif name == 'HostedZoneId':
+            self.alias_hosted_zone_id = value
+        elif name == 'DNSName':
+            self.alias_dns_name = value
 
     def startElement(self, name, attrs, connection):
         return None
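The `Record` changes above select between an `AliasTarget` body and a `TTL`/`ResourceRecords` body depending on whether the alias fields are set. A rough standalone sketch of that selection logic (templates simplified from the ones in `record.py`; names are illustrative):

```python
ALIAS_BODY = ("<AliasTarget><HostedZoneId>%s</HostedZoneId>"
              "<DNSName>%s</DNSName></AliasTarget>")
RECORDS_BODY = "<TTL>%(ttl)s</TTL><ResourceRecords>%(records)s</ResourceRecords>"
RECORD_BODY = "<ResourceRecord><Value>%s</Value></ResourceRecord>"

def rrset_body(ttl, records, alias_zone=None, alias_dns=None):
    # An alias record set carries an AliasTarget instead of a TTL and
    # resource record values, just as Record.to_xml does above.
    if alias_zone is not None and alias_dns is not None:
        return ALIAS_BODY % (alias_zone, alias_dns)
    recs = "".join(RECORD_BODY % r for r in records)
    return RECORDS_BODY % {"ttl": ttl, "records": recs}
```

The two body shapes are mutually exclusive in the Route53 API, which is why `to_xml` branches rather than merging them.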
diff --git a/boto/s3/acl.py b/boto/s3/acl.py
index 2640499..6039683 100644
--- a/boto/s3/acl.py
+++ b/boto/s3/acl.py
@@ -44,7 +44,7 @@
                 elif g.type == 'Group':
                     u = g.uri
                 else:
-                    u = g.email
+                    u = g.email_address
                 grants.append("%s = %s" % (u, g.permission))
         return "<Policy: %s>" % ", ".join(grants)
 
@@ -87,8 +87,8 @@
                       email_address=email_address)
         self.grants.append(grant)
 
-    def add_user_grant(self, permission, user_id):
-        grant = Grant(permission=permission, type='CanonicalUser', id=user_id)
+    def add_user_grant(self, permission, user_id, display_name=None):
+        grant = Grant(permission=permission, type='CanonicalUser',
+                      id=user_id, display_name=display_name)
         self.grants.append(grant)
 
     def startElement(self, name, attrs, connection):
diff --git a/boto/s3/bucket.py b/boto/s3/bucket.py
index c1b38e9..144edbf 100644
--- a/boto/s3/bucket.py
+++ b/boto/s3/bucket.py
@@ -23,13 +23,11 @@
 
 import boto
 from boto import handler
-from boto.provider import Provider
 from boto.resultset import ResultSet
-from boto.s3.acl import ACL, Policy, CannedACLStrings, Grant
+from boto.s3.acl import Policy, CannedACLStrings, Grant
 from boto.s3.key import Key
 from boto.s3.prefix import Prefix
 from boto.s3.deletemarker import DeleteMarker
-from boto.s3.user import User
 from boto.s3.multipart import MultiPartUpload
 from boto.s3.multipart import CompleteMultiPartUpload
 from boto.s3.bucketlistresultset import BucketListResultSet
@@ -48,6 +46,7 @@
 
     trans_region['EU'] = 's3-website-eu-west-1'
     trans_region['us-west-1'] = 's3-website-us-west-1'
+    trans_region['ap-northeast-1'] = 's3-website-ap-northeast-1'
     trans_region['ap-southeast-1'] = 's3-website-ap-southeast-1'
 
     @classmethod
@@ -106,7 +105,7 @@
         return iter(BucketListResultSet(self))
 
     def __contains__(self, key_name):
-       return not (self.get_key(key_name) is None)
+        return not (self.get_key(key_name) is None)
 
     def startElement(self, name, attrs, connection):
         return None
@@ -175,10 +174,19 @@
             k.content_type = response.getheader('content-type')
             k.content_encoding = response.getheader('content-encoding')
             k.last_modified = response.getheader('last-modified')
-            k.size = int(response.getheader('content-length'))
+            # the following machinations are a workaround to the fact that
+            # apache/fastcgi omits the content-length header on HEAD
+            # requests when the content-length is zero.
+            # See http://goo.gl/0Tdax for more details.
+            clen = response.getheader('content-length')
+            if clen:
+                k.size = int(response.getheader('content-length'))
+            else:
+                k.size = 0
             k.cache_control = response.getheader('cache-control')
             k.name = key_name
             k.handle_version_headers(response)
+            k.handle_encryption_headers(response)
             return k
         else:
             if response.status == 404:
@@ -281,7 +289,7 @@
     def _get_all(self, element_map, initial_query_string='',
                  headers=None, **params):
         l = []
-        for k,v in params.items():
+        for k, v in params.items():
             k = k.replace('_', '-')
             if  k == 'maxkeys':
                 k = 'max-keys'
@@ -294,7 +302,8 @@
         else:
             s = initial_query_string
         response = self.connection.make_request('GET', self.name,
-                headers=headers, query_args=s)
+                                                headers=headers,
+                                                query_args=s)
         body = response.read()
         boto.log.debug(body)
         if response.status == 200:
@@ -434,11 +443,12 @@
         """
         return self.key_class(self, key_name)
 
-    def generate_url(self, expires_in, method='GET',
-                     headers=None, force_http=False):
+    def generate_url(self, expires_in, method='GET', headers=None,
+                     force_http=False, response_headers=None):
         return self.connection.generate_url(expires_in, method, self.name,
                                             headers=headers,
-                                            force_http=force_http)
+                                            force_http=force_http,
+                                            response_headers=response_headers)
 
     def delete_key(self, key_name, headers=None,
                    version_id=None, mfa_token=None):
@@ -479,7 +489,8 @@
 
     def copy_key(self, new_key_name, src_bucket_name,
                  src_key_name, metadata=None, src_version_id=None,
-                 storage_class='STANDARD', preserve_acl=False):
+                 storage_class='STANDARD', preserve_acl=False,
+                 encrypt_key=False):
         """
         Create a new key in the bucket by copying another existing key.
 
@@ -524,21 +535,34 @@
                              of False will be significantly more
                              efficient.
 
+        :type encrypt_key: bool
+        :param encrypt_key: If True, the new copy of the object will
+                            be encrypted on the server-side by S3 and
+                            will be stored in an encrypted form while
+                            at rest in S3.
+
         :rtype: :class:`boto.s3.key.Key` or subclass
         :returns: An instance of the newly created key object
         """
+        headers = {}
+        provider = self.connection.provider
+        src_key_name = boto.utils.get_utf8_value(src_key_name)
         if preserve_acl:
-            acl = self.get_xml_acl(src_key_name)
+            if self.name == src_bucket_name:
+                src_bucket = self
+            else:
+                src_bucket = self.connection.get_bucket(src_bucket_name)
+            acl = src_bucket.get_xml_acl(src_key_name)
+        if encrypt_key:
+            headers[provider.server_side_encryption_header] = 'AES256'
         src = '%s/%s' % (src_bucket_name, urllib.quote(src_key_name))
         if src_version_id:
             src += '?version_id=%s' % src_version_id
-        provider = self.connection.provider
-        headers = {provider.copy_source_header : src}
-        if storage_class != 'STANDARD':
-            headers[provider.storage_class_header] = storage_class
+        headers[provider.copy_source_header] = str(src)
+        headers[provider.storage_class_header] = storage_class
         if metadata:
             headers[provider.metadata_directive_header] = 'REPLACE'
-            headers = boto.utils.merge_meta(headers, metadata)
+            headers = boto.utils.merge_meta(headers, metadata, provider)
         else:
             headers[provider.metadata_directive_header] = 'COPY'
         response = self.connection.make_request('PUT', self.name, new_key_name,
@@ -555,7 +579,8 @@
                 self.set_xml_acl(acl, new_key_name)
             return key
         else:
-            raise provider.storage_response_error(response.status, response.reason, body)
+            raise provider.storage_response_error(response.status,
+                                                  response.reason, body)
 
     def set_canned_acl(self, acl_str, key_name='', headers=None,
                        version_id=None):
@@ -566,7 +591,7 @@
         else:
             headers={self.connection.provider.acl_header: acl_str}
 
-        query_args='acl'
+        query_args = 'acl'
         if version_id:
             query_args += '&versionId=%s' % version_id
         response = self.connection.make_request('PUT', self.name, key_name,
@@ -594,7 +619,7 @@
         if version_id:
             query_args += '&versionId=%s' % version_id
         response = self.connection.make_request('PUT', self.name, key_name,
-                                                data=acl_str,
+                                                data=acl_str.encode('ISO-8859-1'),
                                                 query_args=query_args,
                                                 headers=headers)
         body = response.read()
@@ -627,6 +652,80 @@
             raise self.connection.provider.storage_response_error(
                 response.status, response.reason, body)
 
+    def set_subresource(self, subresource, value, key_name='', headers=None,
+                        version_id=None):
+        """
+        Set a subresource for a bucket or key.
+
+        :type subresource: string
+        :param subresource: The subresource to set.
+
+        :type value: string
+        :param value: The value of the subresource.
+
+        :type key_name: string
+        :param key_name: The key to operate on, or None to operate on the
+                         bucket.
+
+        :type headers: dict
+        :param headers: Additional HTTP headers to include in the request.
+
+        :type version_id: string
+        :param version_id: Optional. The version id of the key to operate
+                           on. If not specified, operate on the newest
+                           version.
+        """
+        if not subresource:
+            raise TypeError('set_subresource called with subresource=None')
+        query_args = subresource
+        if version_id:
+            query_args += '&versionId=%s' % version_id
+        response = self.connection.make_request('PUT', self.name, key_name,
+                                                data=value.encode('UTF-8'),
+                                                query_args=query_args,
+                                                headers=headers)
+        body = response.read()
+        if response.status != 200:
+            raise self.connection.provider.storage_response_error(
+                response.status, response.reason, body)
+
+    def get_subresource(self, subresource, key_name='', headers=None,
+                        version_id=None):
+        """
+        Get a subresource for a bucket or key.
+
+        :type subresource: string
+        :param subresource: The subresource to get.
+
+        :type key_name: string
+        :param key_name: The key to operate on, or None to operate on the
+                         bucket.
+
+        :type headers: dict
+        :param headers: Additional HTTP headers to include in the request.
+
+        :type version_id: string
+        :param version_id: Optional. The version id of the key to operate
+                           on. If not specified, operate on the newest
+                           version.
+
+        :rtype: string
+        :returns: The value of the subresource.
+        """
+        if not subresource:
+            raise TypeError('get_subresource called with subresource=None')
+        query_args = subresource
+        if version_id:
+            query_args += '&versionId=%s' % version_id
+        response = self.connection.make_request('GET', self.name, key_name,
+                                                query_args=query_args,
+                                                headers=headers)
+        body = response.read()
+        if response.status != 200:
+            raise self.connection.provider.storage_response_error(
+                response.status, response.reason, body)
+        return body
+
     def make_public(self, recursive=False, headers=None):
         self.set_canned_acl('public-read', headers=headers)
         if recursive:
@@ -668,8 +767,8 @@
             for key in self:
                 key.add_email_grant(permission, email_address, headers=headers)
 
-    def add_user_grant(self, permission, user_id,
-                       recursive=False, headers=None):
+    def add_user_grant(self, permission, user_id, recursive=False,
+                       headers=None, display_name=None):
         """
         Convenience method that provides a quick way to add a canonical
         user grant to a bucket.  This method retrieves the current ACL,
@@ -692,16 +791,22 @@
                           in the bucket and apply the same grant to each key.
                           CAUTION: If you have a lot of keys, this could take
                           a long time!
+
+        :type display_name: string
+        :param display_name: An optional string containing the user's
+                             Display Name.  Only required on Walrus.
         """
         if permission not in S3Permissions:
             raise self.connection.provider.storage_permissions_error(
                 'Unknown Permission: %s' % permission)
         policy = self.get_acl(headers=headers)
-        policy.acl.add_user_grant(permission, user_id)
+        policy.acl.add_user_grant(permission, user_id,
+                                  display_name=display_name)
         self.set_acl(policy, headers=headers)
         if recursive:
             for key in self:
-                key.add_user_grant(permission, user_id, headers=headers)
+                key.add_user_grant(permission, user_id, headers=headers,
+                                   display_name=display_name)
 
     def list_grants(self, headers=None):
         policy = self.get_acl(headers=headers)
@@ -795,9 +900,9 @@
                              mfa_token=None, headers=None):
         """
         Configure versioning for this bucket.
-        Note: This feature is currently in beta release and is available
-              only in the Northern California region.
-
+
+        .. note:: This feature is currently in beta.
+
         :type versioning: bool
         :param versioning: A boolean indicating whether version is
                            enabled (True) or disabled (False).
@@ -909,14 +1014,17 @@
 
         :rtype: dict
         :returns: A dictionary containing a Python representation
-                  of the XML response from S3.  The overall structure is:
+                  of the XML response from S3. The overall structure is:
 
-                   * WebsiteConfiguration
-                     * IndexDocument
-                       * Suffix : suffix that is appended to request that
-                         is for a "directory" on the website endpoint
-                     * ErrorDocument
-                       * Key : name of object to serve when an error occurs
+            * WebsiteConfiguration
+
+              * IndexDocument
+
+                * Suffix : suffix that is appended to a request
+                  for a "directory" on the website endpoint
+
+              * ErrorDocument
+
+                * Key : name of object to serve when an error occurs
         """
         response = self.connection.make_request('GET', self.name,
                 query_args='website', headers=headers)
@@ -978,8 +1086,70 @@
             raise self.connection.provider.storage_response_error(
                 response.status, response.reason, body)
 
-    def initiate_multipart_upload(self, key_name, headers=None):
+    def delete_policy(self, headers=None):
+        response = self.connection.make_request('DELETE', self.name,
+                                                data='/?policy',
+                                                query_args='policy',
+                                                headers=headers)
+        body = response.read()
+        if response.status >= 200 and response.status <= 204:
+            return True
+        else:
+            raise self.connection.provider.storage_response_error(
+                response.status, response.reason, body)
+
+    def initiate_multipart_upload(self, key_name, headers=None,
+                                  reduced_redundancy=False,
+                                  metadata=None, encrypt_key=False):
+        """
+        Start a multipart upload operation.
+
+        :type key_name: string
+        :param key_name: The name of the key that will ultimately result from
+                         this multipart upload operation.  This will be exactly
+                         as the key appears in the bucket after the upload
+                         process has been completed.
+
+        :type headers: dict
+        :param headers: Additional HTTP headers to send and store with the
+                        resulting key in S3.
+
+        :type reduced_redundancy: boolean
+        :param reduced_redundancy: In multipart uploads, the storage class is
+                                   specified when initiating the upload,
+                                   not when uploading individual parts.  So
+                                   if you want the resulting key to use the
+                                   reduced redundancy storage class set this
+                                   flag when you initiate the upload.
+
+        :type metadata: dict
+        :param metadata: Any metadata that you would like to set on the key
+                         that results from the multipart upload.
+
+        :type encrypt_key: bool
+        :param encrypt_key: If True, the new copy of the object will
+                            be encrypted on the server-side by S3 and
+                            will be stored in an encrypted form while
+                            at rest in S3.
+        """
         query_args = 'uploads'
+        provider = self.connection.provider
+        if headers is None:
+            headers = {}
+        if reduced_redundancy:
+            storage_class_header = provider.storage_class_header
+            if storage_class_header:
+                headers[storage_class_header] = 'REDUCED_REDUNDANCY'
+            # TODO: what if the provider doesn't support reduced redundancy?
+            # (see boto.s3.key.Key.set_contents_from_file)
+        if encrypt_key:
+            headers[provider.server_side_encryption_header] = 'AES256'
+        if metadata is None:
+            metadata = {}
+
+        headers = boto.utils.merge_meta(headers, metadata, provider)
         response = self.connection.make_request('POST', self.name, key_name,
                                                 query_args=query_args,
                                                 headers=headers)
@@ -996,6 +1166,9 @@
         
     def complete_multipart_upload(self, key_name, upload_id,
                                   xml_body, headers=None):
+        """
+        Complete a multipart upload operation.
+        """
         query_args = 'uploadId=%s' % upload_id
         if headers is None:
             headers = {}
@@ -1003,9 +1176,15 @@
         response = self.connection.make_request('POST', self.name, key_name,
                                                 query_args=query_args,
                                                 headers=headers, data=xml_body)
+        contains_error = False
         body = response.read()
+        # Some errors will be reported in the body of the response
+        # even though the HTTP response code is 200.  This check
+        # does a quick and dirty peek in the body for an error element.
+        if '<Error>' in body:
+            contains_error = True
         boto.log.debug(body)
-        if response.status == 200:
+        if response.status == 200 and not contains_error:
             resp = CompleteMultiPartUpload(self)
             h = handler.XmlHandler(resp, self)
             xml.sax.parseString(body, h)
@@ -1027,4 +1206,3 @@
         
     def delete(self, headers=None):
         return self.connection.delete_bucket(self.name, headers=headers)
-
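The `complete_multipart_upload` change above peeks into the response body because S3 can report a failure inside an HTTP 200 response. A minimal sketch of that success check (the function name is illustrative, not boto's):

```python
def multipart_upload_succeeded(status, body):
    # S3 may embed an <Error> document in a 200 response to
    # CompleteMultipartUpload, so checking the status code alone
    # is not sufficient; the body must be inspected too.
    return status == 200 and '<Error>' not in body

ok = multipart_upload_succeeded(200, "<CompleteMultipartUploadResult/>")
bad = multipart_upload_succeeded(
    200, "<Error><Code>InternalError</Code></Error>")
```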
diff --git a/boto/s3/bucketlistresultset.py b/boto/s3/bucketlistresultset.py
index 0123663..14b0f5d 100644
--- a/boto/s3/bucketlistresultset.py
+++ b/boto/s3/bucketlistresultset.py
@@ -52,7 +52,8 @@
 
     def __iter__(self):
         return bucket_lister(self.bucket, prefix=self.prefix,
-                             delimiter=self.delimiter, marker=self.marker, headers=self.headers)
+                             delimiter=self.delimiter, marker=self.marker,
+                             headers=self.headers)
 
 def versioned_bucket_lister(bucket, prefix='', delimiter='',
                             key_marker='', version_id_marker='', headers=None):
@@ -64,7 +65,8 @@
     while more_results:
         rs = bucket.get_all_versions(prefix=prefix, key_marker=key_marker,
                                      version_id_marker=version_id_marker,
-                                     delimiter=delimiter, headers=headers)
+                                     delimiter=delimiter, headers=headers,
+                                     max_keys=999)
         for k in rs:
             yield k
         key_marker = rs.next_key_marker
diff --git a/boto/s3/connection.py b/boto/s3/connection.py
index 25ba4ab..80209b7 100644
--- a/boto/s3/connection.py
+++ b/boto/s3/connection.py
@@ -27,7 +27,6 @@
 import boto.utils
 from boto.connection import AWSAuthConnection
 from boto import handler
-from boto.provider import Provider
 from boto.s3.bucket import Bucket
 from boto.s3.key import Key
 from boto.resultset import ResultSet
@@ -64,7 +63,10 @@
         return f(*args, **kwargs)
     return wrapper
 
-class _CallingFormat:
+class _CallingFormat(object):
+
+    def get_bucket_server(self, server, bucket):
+        return ''
 
     def build_url_base(self, connection, protocol, server, bucket, key=''):
         url_base = '%s://' % protocol
@@ -79,12 +81,14 @@
             return self.get_bucket_server(server, bucket)
 
     def build_auth_path(self, bucket, key=''):
+        key = boto.utils.get_utf8_value(key)
         path = ''
         if bucket != '':
             path = '/' + bucket
         return path + '/%s' % urllib.quote(key)
 
     def build_path_base(self, bucket, key=''):
+        key = boto.utils.get_utf8_value(key)
         return '/%s' % urllib.quote(key)
 
 class SubdomainCallingFormat(_CallingFormat):
@@ -105,15 +109,25 @@
         return server
 
     def build_path_base(self, bucket, key=''):
+        key = boto.utils.get_utf8_value(key)
         path_base = '/'
         if bucket:
             path_base += "%s/" % bucket
         return path_base + urllib.quote(key)
 
+class ProtocolIndependentOrdinaryCallingFormat(OrdinaryCallingFormat):
+    
+    def build_url_base(self, connection, protocol, server, bucket, key=''):
+        url_base = '//'
+        url_base += self.build_host(server, bucket)
+        url_base += connection.get_path(self.build_path_base(bucket, key))
+        return url_base
+
 class Location:
     DEFAULT = '' # US Classic Region
     EU = 'EU'
     USWest = 'us-west-1'
+    APNortheast = 'ap-northeast-1'
     APSoutheast = 'ap-southeast-1'
 
 class S3Connection(AWSAuthConnection):
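For reference, the calling-format classes above differ only in where the bucket name lands in the URL. A minimal sketch of the three styles (simplified: no escaping or port handling; these helper names are illustrative, not boto APIs):

```python
def subdomain_url(server, bucket, key=''):
    """Bucket in the hostname, as SubdomainCallingFormat does."""
    host = '%s.%s' % (bucket, server) if bucket else server
    return 'https://%s/%s' % (host, key)

def ordinary_url(server, bucket, key=''):
    """Bucket in the path, as OrdinaryCallingFormat does."""
    path = '/' + bucket if bucket else ''
    return 'https://%s%s/%s' % (server, path, key)

def protocol_independent_url(server, bucket, key=''):
    """Scheme-relative URL, as ProtocolIndependentOrdinaryCallingFormat
    builds with its '//' url_base, so the page's own protocol is reused."""
    path = '/' + bucket if bucket else ''
    return '//%s%s/%s' % (server, path, key)
```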
@@ -125,15 +139,15 @@
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None,
                  host=DefaultHost, debug=0, https_connection_factory=None,
-                 calling_format=SubdomainCallingFormat(), path='/', provider='aws',
-                 bucket_class=Bucket):
+                 calling_format=SubdomainCallingFormat(), path='/',
+                 provider='aws', bucket_class=Bucket, security_token=None):
         self.calling_format = calling_format
         self.bucket_class = bucket_class
         AWSAuthConnection.__init__(self, host,
                 aws_access_key_id, aws_secret_access_key,
                 is_secure, port, proxy, proxy_port, proxy_user, proxy_pass,
                 debug=debug, https_connection_factory=https_connection_factory,
-                path=path, provider=provider)
+                path=path, provider=provider, security_token=security_token)
 
     def _required_auth_capability(self):
         return ['s3']
@@ -143,7 +157,7 @@
             yield bucket
 
     def __contains__(self, bucket_name):
-       return not (self.lookup(bucket_name) is None)
+        return not (self.lookup(bucket_name) is None)
 
     def set_bucket_class(self, bucket_class):
         """
@@ -170,8 +184,10 @@
 
 
     def build_post_form_args(self, bucket_name, key, expires_in = 6000,
-                        acl = None, success_action_redirect = None, max_content_length = None,
-                        http_method = "http", fields=None, conditions=None):
+                             acl = None, success_action_redirect = None,
+                             max_content_length = None,
+                             http_method = "http", fields=None,
+                             conditions=None):
         """
         Taken from the AWS book Python examples and modified for use with boto
         This only returns the arguments required for the post form, not the actual form
@@ -261,13 +277,22 @@
         return {"action": url, "fields": fields}
 
 
-    def generate_url(self, expires_in, method, bucket='', key='',
-                     headers=None, query_auth=True, force_http=False):
+    def generate_url(self, expires_in, method, bucket='', key='', headers=None,
+                     query_auth=True, force_http=False, response_headers=None):
         if not headers:
             headers = {}
         expires = int(time.time() + expires_in)
         auth_path = self.calling_format.build_auth_path(bucket, key)
         auth_path = self.get_path(auth_path)
+        # Arguments to override response headers become part of the canonical
+        # string to be signed.
+        if response_headers:
+            response_hdrs = ["%s=%s" % (k, v) for k, v in
+                             response_headers.items()]
+            delimiter = '?' if '?' not in auth_path else '&'
+            auth_path = "%s%s%s" % (auth_path, delimiter, '&'.join(response_hdrs))
+        else:
+            response_headers = {}
         c_string = boto.utils.canonical_string(method, auth_path, headers,
                                                expires, self.provider)
         b64_hmac = self._auth_handler.sign_string(c_string)
@@ -275,11 +300,13 @@
         self.calling_format.build_path_base(bucket, key)
         if query_auth:
             query_part = '?' + self.QueryString % (encoded_canonical, expires,
-                                             self.aws_access_key_id)
-            sec_hdr = self.provider.security_token_header
-            if sec_hdr in headers:
-                query_part += ('&%s=%s' % (sec_hdr,
-                                           urllib.quote(headers[sec_hdr])));
+                                                   self.aws_access_key_id)
+            # The response headers must also be GET parameters in the URL.
+            headers.update(response_headers)
+            hdrs = [ '%s=%s'%(name, urllib.quote(val)) for name,val in headers.items() ]
+            q_str = '&'.join(hdrs)
+            if q_str:
+                query_part += '&' + q_str
         else:
             query_part = ''
         if force_http:
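The `response_headers` handling above folds the override parameters into the canonical string before signing, so they can't be tampered with after the URL is issued. A simplified sketch of that query-auth signing (HMAC-SHA1 over the canonical string, base64-encoded; the real `canonical_string` also folds in Content-MD5, Content-Type, and `x-amz-` headers):

```python
import base64
import hashlib
import hmac

def sign_url_params(secret_key, method, path, expires, response_headers=None):
    """Sign a query-auth URL the way generate_url() does, simplified."""
    auth_path = path
    if response_headers:
        pairs = sorted('%s=%s' % (k, v) for k, v in response_headers.items())
        sep = '&' if '?' in auth_path else '?'
        auth_path = auth_path + sep + '&'.join(pairs)
    # Canonical string: method, empty Content-MD5 and Content-Type slots,
    # expiry timestamp, then the resource path (with any response overrides).
    c_string = '%s\n\n\n%d\n%s' % (method, expires, auth_path)
    digest = hmac.new(secret_key.encode('utf-8'),
                      c_string.encode('utf-8'), hashlib.sha1).digest()
    return base64.b64encode(digest).decode('ascii')

sig = sign_url_params('secret', 'GET', '/bucket/key', 1234567890)
```

Because the overrides are inside the signed string, changing (say) `response-content-type` in the URL invalidates the signature.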
@@ -304,11 +331,13 @@
 
     def get_canonical_user_id(self, headers=None):
         """
-        Convenience method that returns the "CanonicalUserID" of the user who's credentials
-        are associated with the connection.  The only way to get this value is to do a GET
-        request on the service which returns all buckets associated with the account.  As part
-        of that response, the canonical userid is returned.  This method simply does all of
-        that and then returns just the user id.
+        Convenience method that returns the "CanonicalUserID" of the
+        user whose credentials are associated with the connection.
+        The only way to get this value is to do a GET request on the
+        service, which returns all buckets associated with the account.
+        As part of that response, the canonical user id is returned.
+        This method simply does all of that and then returns just the
+        user id.
 
         :rtype: string
         :return: A string containing the canonical user id.
diff --git a/boto/s3/key.py b/boto/s3/key.py
index c7e77f4..18829c2 100644
--- a/boto/s3/key.py
+++ b/boto/s3/key.py
@@ -1,4 +1,5 @@
 # Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011, Nexenta Systems Inc.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -14,13 +15,14 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
 import mimetypes
 import os
+import re
 import rfc822
 import StringIO
 import base64
@@ -62,6 +64,7 @@
         self.version_id = None
         self.source_version_id = None
         self.delete_marker = False
+        self.encrypted = None
 
     def __repr__(self):
         if self.bucket:
@@ -91,7 +94,7 @@
             if self.bucket.connection:
                 provider = self.bucket.connection.provider
         return provider
-    
+
     def get_md5_from_hexdigest(self, md5_hexdigest):
         """
         A utility function to create the 2-tuple (md5hexdigest, base64md5)
@@ -103,7 +106,14 @@
         if base64md5[-1] == '\n':
             base64md5 = base64md5[0:-1]
         return (md5_hexdigest, base64md5)
-    
+
+    def handle_encryption_headers(self, resp):
+        provider = self.bucket.connection.provider
+        if provider.server_side_encryption_header:
+            self.encrypted = resp.getheader(provider.server_side_encryption_header, None)
+        else:
+            self.encrypted = None
+
     def handle_version_headers(self, resp, force=False):
         provider = self.bucket.connection.provider
         # If the Key object already has a version_id attribute value, it
@@ -113,7 +123,8 @@
         # overwrite the version_id in this Key object.  Comprende?
         if self.version_id is None or force:
             self.version_id = resp.getheader(provider.version_id, None)
-        self.source_version_id = resp.getheader(provider.copy_source_version_id, None)
+        self.source_version_id = resp.getheader(provider.copy_source_version_id,
+                                                None)
         if resp.getheader(provider.delete_marker, 'false') == 'true':
             self.delete_marker = True
         else:
@@ -123,13 +134,13 @@
                   override_num_retries=None, response_headers=None):
         """
         Open this key for reading
-        
+
         :type headers: dict
         :param headers: Headers to pass in the web request
-        
+
         :type query_args: string
         :param query_args: Arguments to pass in the query string (ie, 'torrent')
-        
+
         :type override_num_retries: int
         :param override_num_retries: If not None will override configured
                                      num_retries parameter for underlying GET.
@@ -142,7 +153,7 @@
         """
         if self.resp == None:
             self.mode = 'r'
-            
+
             provider = self.bucket.connection.provider
             self.resp = self.bucket.connection.make_request(
                 'GET', self.bucket.name, self.name, headers,
@@ -156,8 +167,15 @@
             self.metadata = boto.utils.get_aws_metadata(response_headers,
                                                         provider)
             for name,value in response_headers.items():
-                if name.lower() == 'content-length':
+                # To get correct size for Range GETs, use Content-Range
+                # header if one was returned. If not, use Content-Length
+                # header.
+                if (name.lower() == 'content-length' and
+                    'Content-Range' not in response_headers):
                     self.size = int(value)
+                elif name.lower() == 'content-range':
+                    end_range = re.sub('.*/(.*)', '\\1', value)
+                    self.size = int(end_range)
                 elif name.lower() == 'etag':
                     self.etag = value
                 elif name.lower() == 'content-type':
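The Content-Range branch above recovers the total object size from the portion after the slash (`bytes 0-499/1234` gives `1234`). The same extraction in isolation:

```python
import re

def total_size_from_content_range(value):
    """Extract the total size from a Content-Range header value,
    mirroring the re.sub() call in open_read() above."""
    # Greedy '.*/' consumes up to the last '/'; the group keeps the rest.
    return int(re.sub('.*/(.*)', '\\1', value))
```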
@@ -169,12 +187,13 @@
                 elif name.lower() == 'cache-control':
                     self.cache_control = value
             self.handle_version_headers(self.resp)
+            self.handle_encryption_headers(self.resp)
 
     def open_write(self, headers=None, override_num_retries=None):
         """
-        Open this key for writing. 
+        Open this key for writing.
         Not yet implemented
-        
+
         :type headers: dict
         :param headers: Headers to pass in the write request
 
@@ -204,7 +223,7 @@
         self.resp = None
         self.mode = None
         self.closed = True
-    
+
     def next(self):
         """
         By providing a next method, the key object supports use as an iterator.
@@ -223,10 +242,11 @@
         return data
 
     def read(self, size=0):
-        if size == 0:
-            size = self.BufferSize
         self.open_read()
-        data = self.resp.read(size)
+        if size == 0:
+            data = self.resp.read()
+        else:
+            data = self.resp.read(size)
         if not data:
             self.close()
         return data
@@ -250,7 +270,7 @@
         :param dst_bucket: The name of a destination bucket.  If not
                            provided the current bucket of the key
                            will be used.
-                                  
+
         """
         if new_storage_class == 'STANDARD':
             return self.copy(self.bucket.name, self.name,
@@ -263,7 +283,8 @@
                                   new_storage_class)
 
     def copy(self, dst_bucket, dst_key, metadata=None,
-             reduced_redundancy=False, preserve_acl=False):
+             reduced_redundancy=False, preserve_acl=False,
+             encrypt_key=False):
         """
         Copy this Key to another bucket.
 
@@ -272,7 +293,7 @@
 
         :type dst_key: string
         :param dst_key: The name of the destination key
-        
+
         :type metadata: dict
         :param metadata: Metadata to be associated with new key.
                          If metadata is supplied, it will replace the
@@ -303,6 +324,12 @@
                              of False will be significantly more
                              efficient.
 
+        :type encrypt_key: bool
+        :param encrypt_key: If True, the new copy of the object will
+                            be encrypted on the server-side by S3 and
+                            will be stored in an encrypted form while
+                            at rest in S3.
+                            
         :rtype: :class:`boto.s3.key.Key` or subclass
         :returns: An instance of the newly created key object
         """
@@ -314,7 +341,8 @@
         return dst_bucket.copy_key(dst_key, self.bucket.name,
                                    self.name, metadata,
                                    storage_class=storage_class,
-                                   preserve_acl=preserve_acl)
+                                   preserve_acl=preserve_acl,
+                                   encrypt_key=encrypt_key)
 
     def startElement(self, name, attrs, connection):
         if name == 'Owner':
@@ -344,7 +372,7 @@
     def exists(self):
         """
         Returns True if the key exists
-        
+
         :rtype: bool
         :return: Whether the key exists on S3
         """
@@ -364,7 +392,7 @@
 
     def update_metadata(self, d):
         self.metadata.update(d)
-    
+
     # convenience methods for setting/getting ACL
     def set_acl(self, acl_str, headers=None):
         if self.bucket != None:
@@ -384,51 +412,56 @@
 
     def set_canned_acl(self, acl_str, headers=None):
         return self.bucket.set_canned_acl(acl_str, self.name, headers)
-        
+
     def make_public(self, headers=None):
         return self.bucket.set_canned_acl('public-read', self.name, headers)
 
     def generate_url(self, expires_in, method='GET', headers=None,
-                     query_auth=True, force_http=False):
+                     query_auth=True, force_http=False, response_headers=None):
         """
         Generate a URL to access this key.
-        
+
         :type expires_in: int
         :param expires_in: How long the url is valid for, in seconds
-        
+
         :type method: string
-        :param method: The method to use for retrieving the file (default is GET)
-        
+        :param method: The method to use for retrieving the file
+                       (default is GET)
+
         :type headers: dict
         :param headers: Any headers to pass along in the request
-        
+
         :type query_auth: bool
-        :param query_auth: 
-        
+        :param query_auth: If True, embed query-string authentication in the URL
+
         :rtype: string
         :return: The URL to access the key
         """
         return self.bucket.connection.generate_url(expires_in, method,
                                                    self.bucket.name, self.name,
-                                                   headers, query_auth, force_http)
+                                                   headers, query_auth,
+                                                   force_http,
+                                                   response_headers)
 
-    def send_file(self, fp, headers=None, cb=None, num_cb=10, query_args=None):
+    def send_file(self, fp, headers=None, cb=None, num_cb=10,
+                  query_args=None, chunked_transfer=False):
         """
         Upload a file to a key into a bucket on S3.
-        
+
         :type fp: file
         :param fp: The file pointer to upload
-        
+
         :type headers: dict
         :param headers: The headers to pass along with the PUT request
-        
+
         :type cb: function
         :param cb: a callback function that will be called to report
-                    progress on the upload.  The callback should accept two integer
-                    parameters, the first representing the number of bytes that have
-                    been successfully transmitted to S3 and the second representing
-                    the total number of bytes that need to be transmitted.
-                    
+                   progress on the upload.  The callback should accept
+                   two integer parameters, the first representing the
+                   number of bytes that have been successfully
+                   transmitted to S3 and the second representing the
+                   total size of the object to be transmitted.
+
         :type num_cb: int
         :param num_cb: (optional) If a callback is specified with the cb
                        parameter this parameter determines the granularity
@@ -436,7 +469,7 @@
                        times the callback will be called during the file
                        transfer. Providing a negative integer will cause
                        your callback to be called with each buffer read.
-             
+
         """
         provider = self.bucket.connection.provider
 
@@ -445,12 +478,28 @@
             for key in headers:
                 http_conn.putheader(key, headers[key])
             http_conn.endheaders()
-            fp.seek(0)
+            if chunked_transfer:
+                # MD5 for the stream has to be calculated on the fly, as
+                # we don't know the size of the stream beforehand.
+                m = md5()
+            else:
+                fp.seek(0)
+
             save_debug = self.bucket.connection.debug
             self.bucket.connection.debug = 0
-            http_conn.set_debuglevel(0)
+            # If the debuglevel < 3 we don't want to show connection
+            # payload, so turn off HTTP connection-level debug output (to
+            # be restored below).
+            # Use the getattr approach to allow this to work in AppEngine.
+            if getattr(http_conn, 'debuglevel', 0) < 3:
+                http_conn.set_debuglevel(0)
             if cb:
-                if num_cb > 2:
+                if chunked_transfer:
+                    # For chunked transfer, we call the cb for every 1MB
+                    # of data transferred.
+                    cb_count = (1024 * 1024)/self.BufferSize
+                    self.size = 0
+                elif num_cb > 2:
                     cb_count = self.size / self.BufferSize / (num_cb-2)
                 elif num_cb < 0:
                     cb_count = -1
@@ -460,24 +509,39 @@
                 cb(total_bytes, self.size)
             l = fp.read(self.BufferSize)
             while len(l) > 0:
-                http_conn.send(l)
+                if chunked_transfer:
+                    http_conn.send('%x;\r\n' % len(l))
+                    http_conn.send(l)
+                    http_conn.send('\r\n')
+                else:
+                    http_conn.send(l)
                 if cb:
                     total_bytes += len(l)
                     i += 1
                     if i == cb_count or cb_count == -1:
                         cb(total_bytes, self.size)
                         i = 0
+                if chunked_transfer:
+                    m.update(l)
                 l = fp.read(self.BufferSize)
+            if chunked_transfer:
+                http_conn.send('0\r\n')
+                http_conn.send('\r\n')
+                if cb:
+                    self.size = total_bytes
+                # Get the md5 which is calculated on the fly.
+                self.md5 = m.hexdigest()
+            else:
+                fp.seek(0)
             if cb:
                 cb(total_bytes, self.size)
             response = http_conn.getresponse()
             body = response.read()
-            fp.seek(0)
             http_conn.set_debuglevel(save_debug)
             self.bucket.connection.debug = save_debug
-            if response.status == 500 or response.status == 503 or \
-                    response.getheader('location'):
-                # we'll try again
+            if ((response.status == 500 or response.status == 503 or
+                    response.getheader('location')) and not chunked_transfer):
+                # we'll try again.
                 return response
             elif response.status >= 200 and response.status <= 299:
                 self.etag = response.getheader('etag')
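The chunked-transfer branch above emits each buffer in HTTP/1.1 chunked format (hex chunk size, CRLF, data, CRLF, then a terminating zero-length chunk) while folding the payload into an MD5 on the fly. A standalone sketch of that encoding, operating on an in-memory string rather than a live connection:

```python
import hashlib

def chunk_encode(data, buffer_size=4):
    """Chunk-encode `data` the way send_file does for chunked_transfer,
    computing the payload MD5 on the fly. Returns (wire_text, hexdigest)."""
    m = hashlib.md5()
    out = []
    for i in range(0, len(data), buffer_size):
        piece = data[i:i + buffer_size]
        out.append('%x;\r\n' % len(piece))   # hex chunk size
        out.append(piece)
        out.append('\r\n')
        m.update(piece.encode('utf-8'))
    out.append('0\r\n\r\n')                  # terminating zero-length chunk
    return ''.join(out), m.hexdigest()

wire, digest = chunk_encode('hello world')
```

The hash is identical to hashing the whole payload at once, which is why `send_file` can set `self.md5` after the final chunk even though the stream size was unknown up front.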
@@ -494,7 +558,8 @@
         else:
             headers = headers.copy()
         headers['User-Agent'] = UserAgent
-        headers['Content-MD5'] = self.base64md5
+        if self.base64md5:
+            headers['Content-MD5'] = self.base64md5
         if self.storage_class != 'STANDARD':
             headers[provider.storage_class_header] = self.storage_class
         if headers.has_key('Content-Encoding'):
@@ -508,7 +573,8 @@
             headers['Content-Type'] = self.content_type
         else:
             headers['Content-Type'] = self.content_type
-        headers['Content-Length'] = str(self.size)
+        if not chunked_transfer:
+            headers['Content-Length'] = str(self.size)
         headers['Expect'] = '100-Continue'
         headers = boto.utils.merge_meta(headers, self.metadata, provider)
         resp = self.bucket.connection.make_request('PUT', self.bucket.name,
@@ -520,9 +586,10 @@
     def compute_md5(self, fp):
         """
         :type fp: file
-        :param fp: File pointer to the file to MD5 hash.  The file pointer will be
-                   reset to the beginning of the file before the method returns.
-        
+        :param fp: File pointer to the file to MD5 hash.  The file pointer
+                   will be reset to the beginning of the file before the
+                   method returns.
+
         :rtype: tuple
         :return: A tuple containing the hex digest version of the MD5 hash
                  as the first element and the base64 encoded version of the
@@ -542,19 +609,102 @@
         fp.seek(0)
         return (hex_md5, base64md5)
 
+    def set_contents_from_stream(self, fp, headers=None, replace=True,
+                                 cb=None, num_cb=10, policy=None,
+                                 reduced_redundancy=False, query_args=None):
+        """
+        Store an object using the name of the Key object as the key in
+        the cloud and the contents of the data stream pointed to by 'fp'
+        as the contents.
+        The stream object is not seekable and the total size is not known.
+        This means we can't specify the Content-Length and Content-MD5
+        headers up front. So for huge uploads the delay of computing the
+        MD5 is avoided, at the cost of being unable to verify the
+        integrity of the uploaded data.
+
+        :type fp: file
+        :param fp: the file whose contents are to be uploaded
+
+        :type headers: dict
+        :param headers: additional HTTP headers to be sent with the PUT request.
+
+        :type replace: bool
+        :param replace: If this parameter is False, the method will first check
+            to see if an object exists in the bucket with the same key. If it
+            does, it won't overwrite it. The default value is True which will
+            overwrite the object.
+
+        :type cb: function
+        :param cb: a callback function that will be called to report
+            progress on the upload. The callback should accept two integer
+            parameters, the first representing the number of bytes that have
+            been successfully transmitted to S3 and the second representing the
+            total number of bytes that need to be transmitted.
+
+        :type num_cb: int
+        :param num_cb: (optional) If a callback is specified with the cb
+            parameter, this parameter determines the granularity of the callback
+            by defining the maximum number of times the callback will be called
+            during the file transfer.
+
+        :type policy: :class:`boto.s3.acl.CannedACLStrings`
+        :param policy: A canned ACL policy that will be applied to the new key
+            in S3.
+
+        :type reduced_redundancy: bool
+        :param reduced_redundancy: If True, this will set the storage
+                                   class of the new Key to be
+                                   REDUCED_REDUNDANCY. The Reduced Redundancy
+                                   Storage (RRS) feature of S3 provides lower
+                                   redundancy at lower storage cost.
+        """
+
+        provider = self.bucket.connection.provider
+        if not provider.supports_chunked_transfer():
+            raise BotoClientError('%s does not support chunked transfer'
+                % provider.get_provider_name())
+
+        # The object name must be specified explicitly for streams.
+        if not self.name or self.name == '':
+            raise BotoClientError('Cannot determine the destination '
+                                'object name for the given stream')
+
+        if headers is None:
+            headers = {}
+        if policy:
+            headers[provider.acl_header] = policy
+
+        # Set the Transfer Encoding for Streams.
+        headers['Transfer-Encoding'] = 'chunked'
+
+        if reduced_redundancy:
+            self.storage_class = 'REDUCED_REDUNDANCY'
+            if provider.storage_class_header:
+                headers[provider.storage_class_header] = self.storage_class
+
+        if self.bucket != None:
+            if not replace:
+                k = self.bucket.lookup(self.name)
+                if k:
+                    return
+            self.send_file(fp, headers, cb, num_cb, query_args,
+                                            chunked_transfer=True)
+
     def set_contents_from_file(self, fp, headers=None, replace=True,
                                cb=None, num_cb=10, policy=None, md5=None,
-                               reduced_redundancy=False, query_args=None):
+                               reduced_redundancy=False, query_args=None,
+                               encrypt_key=False):
         """
         Store an object in S3 using the name of the Key object as the
         key in S3 and the contents of the file pointed to by 'fp' as the
         contents.
-        
+
         :type fp: file
         :param fp: the file whose contents to upload
-        
+
         :type headers: dict
-        :param headers: additional HTTP headers that will be sent with the PUT request.
+        :param headers: Additional HTTP headers that will be sent with
+                        the PUT request.
 
         :type replace: bool
         :param replace: If this parameter is False, the method
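`compute_md5()` above (and the on-the-fly MD5 in `send_file`) produce the hexdigest/base64 pair that the Content-MD5 header needs. The equivalent computation for an in-memory buffer, as a sketch:

```python
import base64
import hashlib

def compute_md5_pair(data):
    """Return (hex_md5, base64_md5), as compute_md5 does for a file."""
    m = hashlib.md5()
    m.update(data)
    hex_md5 = m.hexdigest()
    # Content-MD5 wants the base64 of the raw digest, not of the hex form.
    base64_md5 = base64.b64encode(m.digest()).decode('ascii')
    return hex_md5, base64_md5

hex_md5, base64_md5 = compute_md5_pair(b'hello world')
```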
@@ -562,29 +712,36 @@
                         bucket with the same key.  If it does, it won't
                         overwrite it.  The default value is True which will
                         overwrite the object.
-                    
+
         :type cb: function
         :param cb: a callback function that will be called to report
-                    progress on the upload.  The callback should accept two integer
-                    parameters, the first representing the number of bytes that have
-                    been successfully transmitted to S3 and the second representing
-                    the total number of bytes that need to be transmitted.
-                    
+                   progress on the upload.  The callback should accept
+                   two integer parameters, the first representing the
+                   number of bytes that have been successfully
+                   transmitted to S3 and the second representing the
+                   total size of the object to be transmitted.
+
         :type cb: int
-        :param num_cb: (optional) If a callback is specified with the cb parameter
-             this parameter determines the granularity of the callback by defining
-             the maximum number of times the callback will be called during the file transfer.
+        :param num_cb: (optional) If a callback is specified with the cb
+                       parameter this parameter determines the granularity
+                       of the callback by defining the maximum number of
+                       times the callback will be called during the
+                       file transfer.
 
         :type policy: :class:`boto.s3.acl.CannedACLStrings`
-        :param policy: A canned ACL policy that will be applied to the new key in S3.
-             
-        :type md5: A tuple containing the hexdigest version of the MD5 checksum of the
-                   file as the first element and the Base64-encoded version of the plain
-                   checksum as the second element.  This is the same format returned by
+        :param policy: A canned ACL policy that will be applied to the
+                       new key in S3.
+
+        :type md5: A tuple containing the hexdigest version of the MD5
+                   checksum of the file as the first element and the
+                   Base64-encoded version of the plain checksum as the
+                   second element.  This is the same format returned by
                    the compute_md5 method.
-        :param md5: If you need to compute the MD5 for any reason prior to upload,
-                    it's silly to have to do it twice so this param, if present, will be
-                    used as the MD5 values of the file.  Otherwise, the checksum will be computed.
+        :param md5: If you need to compute the MD5 for any reason prior
+                    to upload, it's silly to have to do it twice so this
+                    param, if present, will be used as the MD5 values of
+                    the file.  Otherwise, the checksum will be computed.
+
         :type reduced_redundancy: bool
         :param reduced_redundancy: If True, this will set the storage
                                    class of the new Key to be
@@ -592,17 +749,25 @@
                                    Storage (RRS) feature of S3, provides lower
                                    redundancy at lower storage cost.
 
+        :type encrypt_key: bool
+        :param encrypt_key: If True, the new copy of the object will
+                            be encrypted on the server-side by S3 and
+                            will be stored in an encrypted form while
+                            at rest in S3.
         """
         provider = self.bucket.connection.provider
         if headers is None:
             headers = {}
         if policy:
             headers[provider.acl_header] = policy
+        if encrypt_key:
+            headers[provider.server_side_encryption_header] = 'AES256'
+
         if reduced_redundancy:
             self.storage_class = 'REDUCED_REDUNDANCY'
             if provider.storage_class_header:
                 headers[provider.storage_class_header] = self.storage_class
-                # TODO - What if the provider doesn't support reduced reduncancy?
+                # TODO - What if provider doesn't support reduced redundancy?
                 # What if different providers provide different classes?
         if hasattr(fp, 'name'):
             self.path = fp.name
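The `encrypt_key` handling above amounts to adding a single request header. A minimal stand-alone sketch of that assembly (the header names are hard-coded here to the AWS S3 values; boto actually looks them up on the connection's provider object):

```python
# Sketch of the header assembly in the hunk above -- illustrative only.
def build_put_headers(headers=None, policy=None, encrypt_key=False,
                      acl_header='x-amz-acl',
                      sse_header='x-amz-server-side-encryption'):
    headers = dict(headers or {})
    if policy:
        # Canned ACL policy applied to the new key.
        headers[acl_header] = policy
    if encrypt_key:
        # Server-side encryption: S3 stores the object encrypted at rest.
        headers[sse_header] = 'AES256'
    return headers
```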
@@ -626,105 +791,137 @@
 
     def set_contents_from_filename(self, filename, headers=None, replace=True,
                                    cb=None, num_cb=10, policy=None, md5=None,
-                                   reduced_redundancy=False):
+                                   reduced_redundancy=False,
+                                   encrypt_key=False):
         """
         Store an object in S3 using the name of the Key object as the
         key in S3 and the contents of the file named by 'filename'.
         See set_contents_from_file method for details about the
         parameters.
-        
+
         :type filename: string
         :param filename: The name of the file that you want to put onto S3
-        
+
         :type headers: dict
-        :param headers: Additional headers to pass along with the request to AWS.
-        
+        :param headers: Additional headers to pass along with the
+                        request to AWS.
+
         :type replace: bool
-        :param replace: If True, replaces the contents of the file if it already exists.
-        
+        :param replace: If True, replaces the contents of the file
+                        if it already exists.
+
         :type cb: function
-        :param cb: (optional) a callback function that will be called to report
-             progress on the download.  The callback should accept two integer
-             parameters, the first representing the number of bytes that have
-             been successfully transmitted from S3 and the second representing
-             the total number of bytes that need to be transmitted.        
-                    
+        :param cb: a callback function that will be called to report
+                   progress on the upload.  The callback should accept
+                   two integer parameters, the first representing the
+                   number of bytes that have been successfully
+                   transmitted to S3 and the second representing the
+                   total size of the object to be transmitted.
+
         :type cb: int
-        :param num_cb: (optional) If a callback is specified with the cb parameter
-             this parameter determines the granularity of the callback by defining
-             the maximum number of times the callback will be called during the file transfer.  
-             
+        :param num_cb: (optional) If a callback is specified with
+                       the cb parameter this parameter determines the
+                       granularity of the callback by defining
+                       the maximum number of times the callback will
+                       be called during the file transfer.
+
         :type policy: :class:`boto.s3.acl.CannedACLStrings`
-        :param policy: A canned ACL policy that will be applied to the new key in S3.
-             
-        :type md5: A tuple containing the hexdigest version of the MD5 checksum of the
-                   file as the first element and the Base64-encoded version of the plain
-                   checksum as the second element.  This is the same format returned by
+        :param policy: A canned ACL policy that will be applied to the
+                       new key in S3.
+
+        :type md5: A tuple containing the hexdigest version of the MD5
+                   checksum of the file as the first element and the
+                   Base64-encoded version of the plain checksum as the
+                   second element.  This is the same format returned by
                    the compute_md5 method.
-        :param md5: If you need to compute the MD5 for any reason prior to upload,
-                    it's silly to have to do it twice so this param, if present, will be
-                    used as the MD5 values of the file.  Otherwise, the checksum will be computed.
-                    
+        :param md5: If you need to compute the MD5 for any reason prior
+                    to upload, it's silly to have to do it twice, so this
+                    param, if present, will be used as the MD5 value
+                    of the file.  Otherwise, the checksum will be computed.
+
         :type reduced_redundancy: bool
         :param reduced_redundancy: If True, this will set the storage
                                    class of the new Key to be
                                    REDUCED_REDUNDANCY. The Reduced Redundancy
                                    Storage (RRS) feature of S3, provides lower
                                    redundancy at lower storage cost.
+        :type encrypt_key: bool
+        :param encrypt_key: If True, the new copy of the object will
+                            be encrypted on the server-side by S3 and
+                            will be stored in an encrypted form while
+                            at rest in S3.
         """
         fp = open(filename, 'rb')
         self.set_contents_from_file(fp, headers, replace, cb, num_cb,
-                                    policy, md5, reduced_redundancy)
+                                    policy, md5, reduced_redundancy,
+                                    encrypt_key=encrypt_key)
         fp.close()
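The `(hexdigest, base64)` tuple the `md5` parameter expects can be produced without boto. A hedged sketch of that format (boto's real `compute_md5` additionally streams the file in chunks and rewinds the file pointer):

```python
import base64
import hashlib

def md5_tuple(data):
    """Return (hex_md5, base64_md5) for a bytes payload, matching the
    tuple shape described for the md5 parameter above."""
    digest = hashlib.md5(data)
    return (digest.hexdigest(),
            base64.b64encode(digest.digest()).decode('ascii'))
```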
 
     def set_contents_from_string(self, s, headers=None, replace=True,
                                  cb=None, num_cb=10, policy=None, md5=None,
-                                 reduced_redundancy=False):
+                                 reduced_redundancy=False,
+                                 encrypt_key=False):
         """
         Store an object in S3 using the name of the Key object as the
         key in S3 and the string 's' as the contents.
         See set_contents_from_file method for details about the
         parameters.
-        
+
         :type headers: dict
-        :param headers: Additional headers to pass along with the request to AWS.
-        
+        :param headers: Additional headers to pass along with the
+                        request to AWS.
+
         :type replace: bool
-        :param replace: If True, replaces the contents of the file if it already exists.
-        
+        :param replace: If True, replaces the contents of the file if
+                        it already exists.
+
         :type cb: function
-        :param cb: (optional) a callback function that will be called to report
-             progress on the download.  The callback should accept two integer
-             parameters, the first representing the number of bytes that have
-             been successfully transmitted from S3 and the second representing
-             the total number of bytes that need to be transmitted.        
-                    
+        :param cb: a callback function that will be called to report
+                   progress on the upload.  The callback should accept
+                   two integer parameters, the first representing the
+                   number of bytes that have been successfully
+                   transmitted to S3 and the second representing the
+                   total size of the object to be transmitted.
+
         :type cb: int
-        :param num_cb: (optional) If a callback is specified with the cb parameter
-             this parameter determines the granularity of the callback by defining
-             the maximum number of times the callback will be called during the file transfer.  
-             
+        :param num_cb: (optional) If a callback is specified with
+                       the cb parameter this parameter determines the
+                       granularity of the callback by defining
+                       the maximum number of times the callback will
+                       be called during the file transfer.
+
         :type policy: :class:`boto.s3.acl.CannedACLStrings`
-        :param policy: A canned ACL policy that will be applied to the new key in S3.
-             
-        :type md5: A tuple containing the hexdigest version of the MD5 checksum of the
-                   file as the first element and the Base64-encoded version of the plain
-                   checksum as the second element.  This is the same format returned by
+        :param policy: A canned ACL policy that will be applied to the
+                       new key in S3.
+
+        :type md5: A tuple containing the hexdigest version of the MD5
+                   checksum of the file as the first element and the
+                   Base64-encoded version of the plain checksum as the
+                   second element.  This is the same format returned by
                    the compute_md5 method.
-        :param md5: If you need to compute the MD5 for any reason prior to upload,
-                    it's silly to have to do it twice so this param, if present, will be
-                    used as the MD5 values of the file.  Otherwise, the checksum will be computed.
-                    
+        :param md5: If you need to compute the MD5 for any reason prior
+                    to upload, it's silly to have to do it twice, so this
+                    param, if present, will be used as the MD5 value
+                    of the file.  Otherwise, the checksum will be computed.
+
         :type reduced_redundancy: bool
         :param reduced_redundancy: If True, this will set the storage
                                    class of the new Key to be
                                    REDUCED_REDUNDANCY. The Reduced Redundancy
                                    Storage (RRS) feature of S3, provides lower
                                    redundancy at lower storage cost.
+        :type encrypt_key: bool
+        :param encrypt_key: If True, the new copy of the object will
+                            be encrypted on the server-side by S3 and
+                            will be stored in an encrypted form while
+                            at rest in S3.
         """
+        if isinstance(s, unicode):
+            s = s.encode("utf-8")
         fp = StringIO.StringIO(s)
         r = self.set_contents_from_file(fp, headers, replace, cb, num_cb,
-                                        policy, md5, reduced_redundancy)
+                                        policy, md5, reduced_redundancy,
+                                        encrypt_key=encrypt_key)
         fp.close()
         return r
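The `isinstance(s, unicode)` guard added above is Python 2 idiom. The same idea in Python 3 terms (`str` standing in for `unicode`), as an illustrative sketch rather than boto's actual code:

```python
import io

def string_to_fp(s):
    # Text must be UTF-8 encoded before it can be wrapped in a
    # binary file-like object, mirroring the guard added to
    # set_contents_from_string in the diff.
    if isinstance(s, str):
        s = s.encode('utf-8')
    return io.BytesIO(s)
```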
 
@@ -733,26 +930,28 @@
                  response_headers=None):
         """
         Retrieves a file from an S3 Key
-        
+
         :type fp: file
         :param fp: File pointer to put the data into
-        
+
         :type headers: string
         :param headers: headers to send when retrieving the file
-        
+
         :type cb: function
-        :param cb: (optional) a callback function that will be called to report
-             progress on the download.  The callback should accept two integer
-             parameters, the first representing the number of bytes that have
-             been successfully transmitted from S3 and the second representing
-             the total number of bytes that need to be transmitted.
-        
-                    
+        :param cb: a callback function that will be called to report
+                   progress on the download.  The callback should accept
+                   two integer parameters, the first representing the
+                   number of bytes that have been successfully
+                   transmitted from S3 and the second representing the
+                   total number of bytes that need to be transmitted.
+
         :type cb: int
-        :param num_cb: (optional) If a callback is specified with the cb parameter
-             this parameter determines the granularity of the callback by defining
-             the maximum number of times the callback will be called during the file transfer.  
-             
+        :param num_cb: (optional) If a callback is specified with
+                       the cb parameter this parameter determines the
+                       granularity of the callback by defining
+                       the maximum number of times the callback will
+                       be called during the file transfer.
+
         :type torrent: bool
         :param torrent: Flag for whether to get a torrent for the file
 
@@ -778,7 +977,7 @@
         save_debug = self.bucket.connection.debug
         if self.bucket.connection.debug == 1:
             self.bucket.connection.debug = 0
-        
+
         query_args = []
         if torrent:
             query_args.append('torrent')
@@ -811,31 +1010,31 @@
     def get_torrent_file(self, fp, headers=None, cb=None, num_cb=10):
         """
         Get a torrent file (see to get_file)
-        
+
         :type fp: file
         :param fp: The file pointer of where to put the torrent
-        
+
         :type headers: dict
         :param headers: Headers to be passed
-        
-        :type cb: function
-        :param cb: (optional) a callback function that will be called to
-                   report progress on the download.  The callback should
-                   accept two integer parameters, the first representing
-                   the number of bytes that have been successfully
-                   transmitted from S3 and the second representing the
-                   total number of bytes that need to be transmitted.
 
-        :type num_cb: int
-        :param num_cb: (optional) If a callback is specified with the
-                       cb parameter this parameter determines the
-                       granularity of the callback by defining the
-                       maximum number of times the callback will be
-                       called during the file transfer.  
-             
+        :type cb: function
+        :param cb: a callback function that will be called to report
+                   progress on the download.  The callback should accept
+                   two integer parameters, the first representing the
+                   number of bytes that have been successfully
+                   transmitted from S3 and the second representing the
+                   total number of bytes that need to be transmitted.
+
+        :type num_cb: int
+        :param num_cb: (optional) If a callback is specified with
+                       the cb parameter this parameter determines the
+                       granularity of the callback by defining
+                       the maximum number of times the callback will
+                       be called during the file transfer.
+
         """
         return self.get_file(fp, headers, cb, num_cb, torrent=True)
-    
+
     def get_contents_to_file(self, fp, headers=None,
                              cb=None, num_cb=10,
                              torrent=False,
@@ -846,29 +1045,29 @@
         Retrieve an object from S3 using the name of the Key object as the
         key in S3.  Write the contents of the object to the file pointed
         to by 'fp'.
-        
+
         :type fp: File -like object
         :param fp:
-        
+
         :type headers: dict
         :param headers: additional HTTP headers that will be sent with
                         the GET request.
-        
-        :type cb: function
-        :param cb: (optional) a callback function that will be called to
-                   report progress on the download.  The callback should
-                   accept two integer parameters, the first representing
-                   the number of bytes that have been successfully
-                   transmitted from S3 and the second representing the
-                   total number of bytes that need to be transmitted.
 
-        :type num_cb: int
-        :param num_cb: (optional) If a callback is specified with the
-                       cb parameter this parameter determines the
-                       granularity of the callback by defining the
-                       maximum number of times the callback will be
-                       called during the file transfer.  
-             
+        :type cb: function
+        :param cb: a callback function that will be called to report
+                   progress on the download.  The callback should accept
+                   two integer parameters, the first representing the
+                   number of bytes that have been successfully
+                   transmitted from S3 and the second representing the
+                   total number of bytes that need to be transmitted.
+
+        :type num_cb: int
+        :param num_cb: (optional) If a callback is specified with
+                       the cb parameter this parameter determines the
+                       granularity of the callback by defining
+                       the maximum number of times the callback will
+                       be called during the file transfer.
+
         :type torrent: bool
         :param torrent: If True, returns the contents of a torrent
                         file as a string.
@@ -904,28 +1103,28 @@
         key in S3.  Store contents of the object to a file named by 'filename'.
         See get_contents_to_file method for details about the
         parameters.
-        
+
         :type filename: string
         :param filename: The filename of where to put the file contents
-        
+
         :type headers: dict
         :param headers: Any additional headers to send in the request
-        
-        :type cb: function
-        :param cb: (optional) a callback function that will be called to
-                   report progress on the download.  The callback should
-                   accept two integer parameters, the first representing
-                   the number of bytes that have been successfully
-                   transmitted from S3 and the second representing the
-                   total number of bytes that need to be transmitted.
 
-        :type num_cb: int
-        :param num_cb: (optional) If a callback is specified with the
-                       cb parameter this parameter determines the
-                       granularity of the callback by defining the
-                       maximum number of times the callback will be
-                       called during the file transfer.  
-             
+        :type cb: function
+        :param cb: a callback function that will be called to report
+                   progress on the download.  The callback should accept
+                   two integer parameters, the first representing the
+                   number of bytes that have been successfully
+                   transmitted from S3 and the second representing the
+                   total number of bytes that need to be transmitted.
+
+        :type num_cb: int
+        :param num_cb: (optional) If a callback is specified with
+                       the cb parameter this parameter determines the
+                       granularity of the callback by defining
+                       the maximum number of times the callback will
+                       be called during the file transfer.
+
         :type torrent: bool
         :param torrent: If True, returns the contents of a torrent file
                         as a string.
@@ -964,35 +1163,35 @@
         key in S3.  Return the contents of the object as a string.
         See get_contents_to_file method for details about the
         parameters.
-        
+
         :type headers: dict
         :param headers: Any additional headers to send in the request
-        
-        :type cb: function
-        :param cb: (optional) a callback function that will be called to
-                   report progress on the download.  The callback should
-                   accept two integer parameters, the first representing
-                   the number of bytes that have been successfully
-                   transmitted from S3 and the second representing the
-                   total number of bytes that need to be transmitted.
 
-        :type num_cb: int
-        :param num_cb: (optional) If a callback is specified with the
-                       cb parameter this parameter determines the
-                       granularity of the callback by defining the
-                       maximum number of times the callback will be
-                       called during the file transfer.  
-             
+        :type cb: function
+        :param cb: a callback function that will be called to report
+                   progress on the download.  The callback should accept
+                   two integer parameters, the first representing the
+                   number of bytes that have been successfully
+                   transmitted from S3 and the second representing the
+                   total number of bytes that need to be transmitted.
+
+        :type num_cb: int
+        :param num_cb: (optional) If a callback is specified with
+                       the cb parameter this parameter determines the
+                       granularity of the callback by defining
+                       the maximum number of times the callback will
+                       be called during the file transfer.
+
         :type torrent: bool
         :param torrent: If True, returns the contents of a torrent file
                         as a string.
-        
+
         :type response_headers: dict
         :param response_headers: A dictionary containing HTTP headers/values
                                  that will override any headers associated with
                                  the stored object in the response.
                                  See http://goo.gl/EWOPb for details.
-                                 
+
         :rtype: string
         :returns: The contents of the file as a string
         """
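The `cb`/`num_cb` contract described repeatedly in these docstrings is a callback taking `(bytes_transferred, total_bytes)`. A small sketch of such a callback (names here are hypothetical, not boto API):

```python
def make_progress_cb(report):
    # Returns a callback matching the documented cb signature:
    # cb(bytes_transferred, total_bytes).
    def cb(transmitted, total):
        pct = 100.0 * transmitted / total if total else 0.0
        report('%d/%d bytes (%.0f%%)' % (transmitted, total, pct))
    return cb
```

The result would be passed as the `cb` argument; `num_cb` only caps how many times the transfer code invokes it.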
@@ -1008,15 +1207,15 @@
         to a key. This method retrieves the current ACL, creates a new
         grant based on the parameters passed in, adds that grant to the ACL
         and then PUT's the new ACL back to S3.
-        
+
         :type permission: string
         :param permission: The permission being granted. Should be one of:
                            (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).
-        
+
         :type email_address: string
         :param email_address: The email address associated with the AWS
                              account you are granting the permission to.
-        
+
         :type recursive: boolean
         :param recursive: A boolean value to controls whether the command
                           will apply the grant to all keys within the bucket
@@ -1030,30 +1229,27 @@
         policy.acl.add_email_grant(permission, email_address)
         self.set_acl(policy, headers=headers)
 
-    def add_user_grant(self, permission, user_id, headers=None):
+    def add_user_grant(self, permission, user_id, headers=None,
+                       display_name=None):
         """
         Convenience method that provides a quick way to add a canonical
         user grant to a key.  This method retrieves the current ACL,
         creates a new grant based on the parameters passed in, adds that
         grant to the ACL and then PUT's the new ACL back to S3.
-        
+
         :type permission: string
         :param permission: The permission being granted. Should be one of:
                            (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).
-        
+
         :type user_id: string
         :param user_id:     The canonical user id associated with the AWS
                            account you are granting the permission to.
-                            
-        :type recursive: boolean
-        :param recursive: A boolean value to controls whether the command
-                          will apply the grant to all keys within the bucket
-                          or not.  The default value is False.  By passing a
-                          True value, the call will iterate through all keys
-                          in the bucket and apply the same grant to each key.
-                          CAUTION: If you have a lot of keys, this could take
-                          a long time!
+
+        :type display_name: string
+        :param display_name: An optional string containing the user's
+                             Display Name.  Only required on Walrus.
         """
         policy = self.get_acl()
-        policy.acl.add_user_grant(permission, user_id)
+        policy.acl.add_user_grant(permission, user_id,
+                                  display_name=display_name)
         self.set_acl(policy, headers=headers)
diff --git a/boto/s3/multipart.py b/boto/s3/multipart.py
index f68540a..8befc5e 100644
--- a/boto/s3/multipart.py
+++ b/boto/s3/multipart.py
@@ -102,7 +102,7 @@
         else:
             setattr(self, name, value)
         
-def part_lister(mpupload, part_number_marker=''):
+def part_lister(mpupload, part_number_marker=None):
     """
     A generator function for listing parts of a multipart upload.
     """
@@ -139,7 +139,7 @@
         return '<MultiPartUpload %s>' % self.key_name
 
     def __iter__(self):
-        return part_lister(self, part_number_marker=self.part_number_marker)
+        return part_lister(self)
 
     def to_xml(self):
         self.get_all_parts()
@@ -212,8 +212,7 @@
             return self._parts
 
     def upload_part_from_file(self, fp, part_num, headers=None, replace=True,
-                               cb=None, num_cb=10, policy=None, md5=None,
-                               reduced_redundancy=False):
+                               cb=None, num_cb=10, policy=None, md5=None):
         """
         Upload another part of this MultiPart Upload.
         
@@ -231,7 +230,8 @@
         query_args = 'uploadId=%s&partNumber=%d' % (self.id, part_num)
         key = self.bucket.new_key(self.key_name)
         key.set_contents_from_file(fp, headers, replace, cb, num_cb, policy,
-                                   md5, reduced_redundancy, query_args)
+                                   md5, reduced_redundancy=False,
+                                   query_args=query_args)
 
     def complete_upload(self):
         """
@@ -243,8 +243,8 @@
         :returns: An object representing the completed upload.
         """
         xml = self.to_xml()
-        self.bucket.complete_multipart_upload(self.key_name,
-                                              self.id, xml)
+        return self.bucket.complete_multipart_upload(self.key_name,
+                                                     self.id, xml)
 
     def cancel_upload(self):
         """
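`part_lister` above is a generator over paginated results. A generic sketch of that marker-driven pattern (`fetch_page` is a hypothetical stand-in for the `get_all_parts` call boto makes per page):

```python
def paged_lister(fetch_page, marker=None):
    # Yield items page by page, resuming from the marker the previous
    # page reported, until the service says the listing is complete.
    truncated = True
    while truncated:
        items, marker, truncated = fetch_page(marker)
        for item in items:
            yield item
```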
diff --git a/boto/s3/prefix.py b/boto/s3/prefix.py
index fc0f26a..0b0196c 100644
--- a/boto/s3/prefix.py
+++ b/boto/s3/prefix.py
@@ -19,7 +19,7 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
-class Prefix:
+class Prefix(object):
     def __init__(self, bucket=None, name=None):
         self.bucket = bucket
         self.name = name
diff --git a/boto/s3/resumable_download_handler.py b/boto/s3/resumable_download_handler.py
index 0d01477..5492e14 100644
--- a/boto/s3/resumable_download_handler.py
+++ b/boto/s3/resumable_download_handler.py
@@ -65,7 +65,7 @@
 
     def call(self, total_bytes_uploaded, total_size):
         self.proxied_cb(self.download_start_point + total_bytes_uploaded,
-                        self.download_start_point + total_size)
+                        total_size)
 
 
 def get_cur_file_size(fp, position_to_eof=False):
@@ -294,12 +294,28 @@
             except self.RETRYABLE_EXCEPTIONS, e:
                 if debug >= 1:
                     print('Caught exception (%s)' % e.__repr__())
+                if isinstance(e, IOError) and e.errno == errno.EPIPE:
+                    # Broken pipe error causes httplib to immediately
+                    # close the socket (http://bugs.python.org/issue5542),
+                    # so we need to close and reopen the key before resuming
+                    # the download.
+                    key.get_file(fp, headers, cb, num_cb, torrent, version_id,
+                                 override_num_retries=0)
             except ResumableDownloadException, e:
-                if e.disposition == ResumableTransferDisposition.ABORT:
+                if (e.disposition ==
+                    ResumableTransferDisposition.ABORT_CUR_PROCESS):
                     if debug >= 1:
                         print('Caught non-retryable ResumableDownloadException '
                               '(%s)' % e.message)
                     raise
+                elif (e.disposition ==
+                    ResumableTransferDisposition.ABORT):
+                    if debug >= 1:
+                        print('Caught non-retryable ResumableDownloadException '
+                              '(%s); aborting and removing tracker file' %
+                              e.message)
+                    self._remove_tracker_file()
+                    raise
                 else:
                     if debug >= 1:
                         print('Caught ResumableDownloadException (%s) - will '
@@ -316,11 +332,18 @@
                 raise ResumableDownloadException(
                     'Too many resumable download attempts failed without '
                     'progress. You might try this download again later',
-                    ResumableTransferDisposition.ABORT)
+                    ResumableTransferDisposition.ABORT_CUR_PROCESS)
 
             # Close the key, in case a previous download died partway
             # through and left data in the underlying key HTTP buffer.
-            key.close()
+            # Do this within a try/except block in case the connection is
+            # closed (since key.close() attempts to do a final read, in which
+            # case this read attempt would get an IncompleteRead exception,
+            # which we can safely ignore).
+            try:
+                key.close()
+            except httplib.IncompleteRead:
+                pass
 
             sleep_time_secs = 2**progress_less_iterations
             if debug >= 1:
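The retry pacing at the end of this hunk is plain exponential backoff keyed to how many attempts have failed without making progress; sketched in isolation:

```python
def backoff_seconds(progress_less_iterations):
    # 1s, 2s, 4s, 8s, ... between retries that made no progress.
    # In the handler above, iterations that do make progress reset
    # the counter, so the wait only grows while the download is stuck.
    return 2 ** progress_less_iterations
```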
diff --git a/boto/sdb/__init__.py b/boto/sdb/__init__.py
index f5642c1..ef2b9ce 100644
--- a/boto/sdb/__init__.py
+++ b/boto/sdb/__init__.py
@@ -35,22 +35,42 @@
                           endpoint='sdb.eu-west-1.amazonaws.com'),
             SDBRegionInfo(name='us-west-1',
                           endpoint='sdb.us-west-1.amazonaws.com'),
+            SDBRegionInfo(name='ap-northeast-1',
+                          endpoint='sdb.ap-northeast-1.amazonaws.com'),
             SDBRegionInfo(name='ap-southeast-1',
                           endpoint='sdb.ap-southeast-1.amazonaws.com')
             ]
 
-def connect_to_region(region_name):
+def connect_to_region(region_name, **kw_params):
     """
     Given a valid region name, return a 
     :class:`boto.sdb.connection.SDBConnection`.
-    
-    :param str region_name: The name of the region to connect to.
+
+    :type region_name: str
+    :param region_name: The name of the region to connect to.
     
     :rtype: :class:`boto.sdb.connection.SDBConnection` or ``None``
     :return: A connection to the given region, or None if an invalid region
-        name is given
+             name is given
     """
     for region in regions():
         if region.name == region_name:
-            return region.connect()
+            return region.connect(**kw_params)
+    return None
+
+def get_region(region_name, **kw_params):
+    """
+    Find and return a :class:`boto.sdb.regioninfo.RegionInfo` object
+    given a region name.
+
+    :type region_name: str
+    :param region_name: The name of the region.
+
+    :rtype: :class:`boto.sdb.regioninfo.RegionInfo`
+    :return: The RegionInfo object for the given region or None if
+             an invalid region name is provided.
+    """
+    for region in regions(**kw_params):
+        if region.name == region_name:
+            return region
     return None
diff --git a/boto/sdb/connection.py b/boto/sdb/connection.py
index b5a45b8..f043193 100644
--- a/boto/sdb/connection.py
+++ b/boto/sdb/connection.py
@@ -21,6 +21,7 @@
 
 import xml.sax
 import threading
+import boto
 from boto import handler
 from boto.connection import AWSQueryConnection
 from boto.sdb.domain import Domain, DomainMetaData
@@ -32,12 +33,10 @@
     """
     A threaded :class:`Item <boto.sdb.item.Item>` retriever utility class. 
     Retrieved :class:`Item <boto.sdb.item.Item>` objects are stored in the
-    ``items`` instance variable after 
-    :py:meth:`run() <run>` is called. 
+    ``items`` instance variable after :py:meth:`run() <run>` is called.
     
-    .. tip:: 
-        The item retrieval will not start until the 
-        :func:`run() <boto.sdb.connection.ItemThread.run>` method is called.
+    .. tip:: The item retrieval will not start until
+        the :func:`run() <boto.sdb.connection.ItemThread.run>` method is called.
     """
     def __init__(self, name, domain_name, item_names):
         """
@@ -87,7 +86,7 @@
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, debug=0,
                  https_connection_factory=None, region=None, path='/',
-                 converter=None):
+                 converter=None, security_token=None):
         """
         For any keywords that aren't documented, refer to the parent class,
         :py:class:`boto.connection.AWSAuthConnection`. You can avoid having
@@ -95,19 +94,30 @@
         via :py:func:`boto.connect_sdb`.
     
         :type region: :class:`boto.sdb.regioninfo.SDBRegionInfo`
-        :keyword region: Explicitly specify a region. Defaults to ``us-east-1`` 
-            if not specified.
+        :keyword region: Explicitly specify a region. Defaults to ``us-east-1``
+            if not specified. You may also specify the region in your ``boto.cfg``:
+
+            .. code-block:: cfg
+
+                [SDB]
+                region = eu-west-1
+
         """
         if not region:
-            region = SDBRegionInfo(self, self.DefaultRegionName,
-                                   self.DefaultRegionEndpoint)
+            region_name = boto.config.get('SDB', 'region', self.DefaultRegionName)
+            for reg in boto.sdb.regions():
+                if reg.name == region_name:
+                    region = reg
+                    break
+
         self.region = region
         AWSQueryConnection.__init__(self, aws_access_key_id,
                                     aws_secret_access_key,
                                     is_secure, port, proxy,
                                     proxy_port, proxy_user, proxy_pass,
                                     self.region.endpoint, debug,
-                                    https_connection_factory, path)
+                                    https_connection_factory, path,
+                                    security_token=security_token)
         self.box_usage = 0.0
         self.converter = converter
         self.item_cls = Item
diff --git a/boto/sdb/db/blob.py b/boto/sdb/db/blob.py
index 45a3624..b50794c 100644
--- a/boto/sdb/db/blob.py
+++ b/boto/sdb/db/blob.py
@@ -37,17 +37,24 @@
         return f
 
     def __str__(self):
+        return unicode(self).encode('utf-8')
+
+    def __unicode__(self):
         if hasattr(self.file, "get_contents_as_string"):
             value = self.file.get_contents_as_string()
         else:
             value = self.file.getvalue()
-        try:
-            return str(value)
-        except:
-            return unicode(value)
+        if isinstance(value, unicode):
+            return value
+        else:
+            return value.decode('utf-8')
+
 
     def read(self):
-        return self.file.read()
+        if hasattr(self.file, "get_contents_as_string"):
+            return self.file.get_contents_as_string()
+        else:
+            return self.file.read()
 
     def readline(self):
         return self.file.readline()
diff --git a/boto/sdb/db/manager/__init__.py b/boto/sdb/db/manager/__init__.py
index 0777796..55b32a4 100644
--- a/boto/sdb/db/manager/__init__.py
+++ b/boto/sdb/db/manager/__init__.py
@@ -66,6 +66,9 @@
         db_port = boto.config.getint(db_section, 'db_port', db_port)
         enable_ssl = boto.config.getint(db_section, 'enable_ssl', enable_ssl)
         debug = boto.config.getint(db_section, 'debug', debug)
+    elif hasattr(cls, "_db_name") and cls._db_name is not None:
+        # Any _db_name class property is more specific than the generic DB config
+        db_name = cls._db_name
     elif hasattr(cls.__bases__[0], "_manager"):
         return cls.__bases__[0]._manager
     if db_type == 'SimpleDB':
diff --git a/boto/sdb/db/manager/pgmanager.py b/boto/sdb/db/manager/pgmanager.py
index 73a93f0..31f27ca 100644
--- a/boto/sdb/db/manager/pgmanager.py
+++ b/boto/sdb/db/manager/pgmanager.py
@@ -361,7 +361,7 @@
     def _find_calculated_props(self, obj):
         return [p for p in obj.properties() if hasattr(p, 'calculated_type')]
 
-    def save_object(self, obj):
+    def save_object(self, obj, expected_value=None):
         obj._auto_update = False
         calculated = self._find_calculated_props(obj)
         if not obj.id:
diff --git a/boto/sdb/db/manager/sdbmanager.py b/boto/sdb/db/manager/sdbmanager.py
index 6aac568..8218f81 100644
--- a/boto/sdb/db/manager/sdbmanager.py
+++ b/boto/sdb/db/manager/sdbmanager.py
@@ -27,13 +27,15 @@
 from boto.sdb.db.model import Model
 from boto.sdb.db.blob import Blob
 from boto.sdb.db.property import ListProperty, MapProperty
-from datetime import datetime, date
-from boto.exception import SDBPersistenceError
+from datetime import datetime, date, time
+from boto.exception import SDBPersistenceError, S3ResponseError
 
 ISO8601 = '%Y-%m-%dT%H:%M:%SZ'
 
+class TimeDecodeError(Exception):
+    pass
 
-class SDBConverter:
+class SDBConverter(object):
     """
     Responsible for converting base Python types to format compatible with underlying
     database.  For SimpleDB, that means everything needs to be converted to a string
@@ -55,7 +57,9 @@
                           Key : (self.encode_reference, self.decode_reference),
                           datetime : (self.encode_datetime, self.decode_datetime),
                           date : (self.encode_date, self.decode_date),
+                          time : (self.encode_time, self.decode_time),
                           Blob: (self.encode_blob, self.decode_blob),
+                          str: (self.encode_string, self.decode_string),
                       }
 
     def encode(self, item_type, value):
@@ -93,6 +97,7 @@
         return self.encode_map(prop, values)
 
     def encode_map(self, prop, value):
+        import urllib
         if value == None:
             return None
         if not isinstance(value, dict):
@@ -104,7 +109,7 @@
                 item_type = Model
             encoded_value = self.encode(item_type, value[key])
             if encoded_value != None:
-                new_value.append('%s:%s' % (key, encoded_value))
+                new_value.append('%s:%s' % (urllib.quote(key), encoded_value))
         return new_value
 
     def encode_prop(self, prop, value):
@@ -144,9 +149,11 @@
 
     def decode_map_element(self, item_type, value):
         """Decode a single element for a map"""
+        import urllib
         key = value
         if ":" in value:
             key, value = value.split(':',1)
+            key = urllib.unquote(key)
         if Model in item_type.mro():
             value = item_type(id=value)
         else:
@@ -270,6 +277,25 @@
         except:
             return None
 
+    encode_time = encode_date
+
+    def decode_time(self, value):
+        """ Converts strings in the form of HH:MM:SS.mmmmmm
+            (created by datetime.time.isoformat()) to
+            datetime.time objects.
+
+            Timezone-aware strings ("HH:MM:SS.mmmmmm+HH:MM") are not
+            handled and will raise TimeDecodeError.
+        """
+        if '-' in value or '+' in value:
+            # TODO: Handle tzinfo
+            raise TimeDecodeError("Can't handle timezone aware objects: %r" % value)
+        tmp = value.split('.')
+        arg = map(int, tmp[0].split(':'))
+        if len(tmp) == 2:
+            arg.append(int(tmp[1]))
+        return time(*arg)
+
     def encode_reference(self, value):
         if value in (None, 'None', '', ' '):
             return None
@@ -314,7 +340,12 @@
         if match:
             s3 = self.manager.get_s3_connection()
             bucket = s3.get_bucket(match.group(1), validate=False)
-            key = bucket.get_key(match.group(2))
+            try:
+                key = bucket.get_key(match.group(2))
+            except S3ResponseError, e:
+                if e.reason != "Forbidden":
+                    raise
+                return None
         else:
             return None
         if key:
@@ -322,6 +353,24 @@
         else:
             return None
 
+    def encode_string(self, value):
+        """Convert ASCII, Latin-1 or UTF-8 to pure Unicode"""
+        if not isinstance(value, str): return value
+        try:
+            return unicode(value, 'utf-8')
+        except: # really, this should raise an exception;
+                # in the interest of not breaking current
+                # systems, however:
+            arr = []
+            for ch in value:
+                arr.append(unichr(ord(ch)))
+            return u"".join(arr)
+
+    def decode_string(self, value):
+        """Decoding a string is a no-op;
+        just return the value as-is"""
+        return value
+
 class SDBManager(object):
     
     def __init__(self, cls, db_name, db_user, db_passwd,
@@ -357,9 +406,15 @@
         return self._domain
 
     def _connect(self):
-        self._sdb = boto.connect_sdb(aws_access_key_id=self.db_user,
-                                    aws_secret_access_key=self.db_passwd,
-                                    is_secure=self.enable_ssl)
+        args = dict(aws_access_key_id=self.db_user,
+                    aws_secret_access_key=self.db_passwd,
+                    is_secure=self.enable_ssl)
+        try:
+            region = [x for x in boto.sdb.regions() if x.endpoint == self.db_host][0]
+            args['region'] = region
+        except IndexError:
+            pass
+        self._sdb = boto.connect_sdb(**args)
         # This assumes that the domain has already been created
         # It's much more efficient to do it this way rather than
         # having this make a roundtrip each time to validate.
@@ -486,16 +541,26 @@
         """
         import types
         query_parts = []
+
         order_by_filtered = False
+
         if order_by:
             if order_by[0] == "-":
                 order_by_method = "DESC";
                 order_by = order_by[1:]
             else:
                 order_by_method = "ASC";
+
+        if select:
+            if order_by and order_by in select:
+                order_by_filtered = True
+            query_parts.append("(%s)" % select)
+
         if isinstance(filters, str) or isinstance(filters, unicode):
-            query = "WHERE `__type__` = '%s' AND %s" % (cls.__name__, filters)
-            if order_by != None:
+            query = "WHERE %s AND `__type__` = '%s'" % (filters, cls.__name__)
+            if order_by in ["__id__", "itemName()"]:
+                query += " ORDER BY itemName() %s" % order_by_method
+            elif order_by != None:
                 query += " ORDER BY `%s` %s" % (order_by, order_by_method)
             return query
 
@@ -537,13 +602,14 @@
         query_parts.append(type_query)
 
         order_by_query = ""
+
         if order_by:
             if not order_by_filtered:
                 query_parts.append("`%s` LIKE '%%'" % order_by)
-            order_by_query = " ORDER BY `%s` %s" % (order_by, order_by_method)
-
-        if select:
-            query_parts.append("(%s)" % select)
+            if order_by in ["__id__", "itemName()"]:
+                order_by_query = " ORDER BY itemName() %s" % order_by_method
+            else:
+                order_by_query = " ORDER BY `%s` %s" % (order_by, order_by_method)
 
         if len(query_parts) > 0:
             return "WHERE %s %s" % (" AND ".join(query_parts), order_by_query)
@@ -562,7 +628,7 @@
     def query_gql(self, query_string, *args, **kwds):
         raise NotImplementedError, "GQL queries not supported in SimpleDB"
 
-    def save_object(self, obj):
+    def save_object(self, obj, expected_value=None):
         if not obj.id:
             obj.id = str(uuid.uuid4())
 
@@ -588,7 +654,14 @@
                         raise SDBPersistenceError("Error: %s must be unique!" % property.name)
                 except(StopIteration):
                     pass
-        self.domain.put_attributes(obj.id, attrs, replace=True)
+        # Convert the Expected value to SDB format
+        if expected_value:
+            prop = obj.find_property(expected_value[0])
+            v = expected_value[1]
+            if v is not None and not type(v) == bool:
+                v = self.encode_value(prop, v)
+            expected_value[1] = v
+        self.domain.put_attributes(obj.id, attrs, replace=True, expected_value=expected_value)
         if len(del_attrs) > 0:
             self.domain.delete_attributes(obj.id, del_attrs)
         return obj
@@ -597,6 +670,7 @@
         self.domain.delete_attributes(obj.id)
 
     def set_property(self, prop, obj, name, value):
+        setattr(obj, name, value)
         value = prop.get_value_for_datastore(obj)
         value = self.encode_value(prop, value)
         if prop.unique:
diff --git a/boto/sdb/db/manager/xmlmanager.py b/boto/sdb/db/manager/xmlmanager.py
index 9765df1..3608b2c 100644
--- a/boto/sdb/db/manager/xmlmanager.py
+++ b/boto/sdb/db/manager/xmlmanager.py
@@ -409,7 +409,7 @@
                 text_node = doc.createTextNode(item)
                 item_node.appendChild(text_node)
 
-    def save_object(self, obj):
+    def save_object(self, obj, expected_value=None):
         """
         Marshal the object and do a PUT
         """
diff --git a/boto/sdb/db/model.py b/boto/sdb/db/model.py
index 18bec4b..eab8276 100644
--- a/boto/sdb/db/model.py
+++ b/boto/sdb/db/model.py
@@ -187,10 +187,56 @@
             self._loaded = False
             self._manager.load_object(self)
 
-    def put(self):
-        self._manager.save_object(self)
+    def put(self, expected_value=None):
+        """
+        Save this object as it is, with an optional expected value
+
+        :param expected_value: Optional tuple of attribute name and value that
+            must match in order to save this object. If this
+            condition is not met, an SDBResponseError will be raised with a
+            Conflict status code.
+        :type expected_value: tuple or list
+        :return: This object
+        :rtype: :class:`boto.sdb.db.model.Model`
+        """
+        self._manager.save_object(self, expected_value)
+        return self
 
     save = put
+
+    def put_attributes(self, attrs):
+        """
+        Save just these few attributes, not the whole object
+
+        :param attrs: Attributes to save, key->value dict
+        :type attrs: dict
+        :return: self
+        :rtype: :class:`boto.sdb.db.model.Model`
+        """
+        assert(isinstance(attrs, dict)), "Argument must be a dict of key->values to save"
+        for prop_name in attrs:
+            value = attrs[prop_name]
+            prop = self.find_property(prop_name)
+            assert(prop), "Property not found: %s" % prop_name
+            self._manager.set_property(prop, self, prop_name, value)
+        self.reload()
+        return self
+
+    def delete_attributes(self, attrs):
+        """
+        Delete just these attributes, not the whole object.
+
+        :param attrs: Attributes to delete, as a list of string names
+        :type attrs: list
+        :return: self
+        :rtype: :class:`boto.sdb.db.model.Model`
+        """
+        assert(isinstance(attrs, list)), "Argument must be a list of names of keys to delete."
+        self._manager.domain.delete_attributes(self.id, attrs)
+        self.reload()
+        return self
+
+    save_attributes = put_attributes
         
     def delete(self):
         self._manager.delete_object(self)
diff --git a/boto/sdb/db/property.py b/boto/sdb/db/property.py
index ab4f7a8..1929a02 100644
--- a/boto/sdb/db/property.py
+++ b/boto/sdb/db/property.py
@@ -75,7 +75,7 @@
         self.slot_name = '_' + self.name
 
     def default_validator(self, value):
-        if value == self.default_value():
+        if isinstance(value, basestring) or value == self.default_value():
             return
         if not isinstance(value, self.data_type):
             raise TypeError, 'Validation Error, expecting %s, got %s' % (self.data_type, type(value))
@@ -135,6 +135,7 @@
         self.max_length = max_length
 
     def validate(self, value):
+        value = super(TextProperty, self).validate(value)
         if not isinstance(value, str) and not isinstance(value, unicode):
             raise TypeError, 'Expecting Text, got %s' % type(value)
         if self.max_length and len(value) > self.max_length:
@@ -142,18 +143,67 @@
 
 class PasswordProperty(StringProperty):
     """
-    Hashed property who's original value can not be
-    retrieved, but still can be compaired.
+
+    Hashed property whose original value can not be
+    retrieved, but still can be compared.
+
+    Works by storing a hash of the original value instead
+    of the original value.  Once that's done all that
+    can be retrieved is the hash.
+
+    The comparison
+
+       obj.password == 'foo' 
+
+    generates a hash of 'foo' and compares it to the
+    stored hash.
+
+    Underlying data type for hashing, storing, and comparing
+    is boto.utils.Password.  The default hash function is
+    defined there (currently sha512 in most cases, md5
+    where sha512 is not available).
+
+    It's unlikely you'll ever need to use a different hash
+    function, but if you do, you can control the behavior 
+    in one of two ways:
+
+      1) Specifying hashfunc in PasswordProperty constructor
+
+         import hashlib
+
+         class MyModel(model):
+             password = PasswordProperty(hashfunc=hashlib.sha224)
+
+      2) Subclassing Password and PasswordProperty
+        
+         class SHA224Password(Password):
+             hashfunc=hashlib.sha224
+
+         class SHA224PasswordProperty(PasswordProperty):
+             data_type=SHA224Password
+             type_name="SHA224Password"
+
+         class MyModel(Model):
+             password = SHA224PasswordProperty()
+
     """
     data_type = Password
     type_name = 'Password'
 
     def __init__(self, verbose_name=None, name=None, default='', required=False,
-                 validator=None, choices=None, unique=False):
+                 validator=None, choices=None, unique=False, hashfunc=None):
+
+        """
+           The hashfunc parameter overrides the default hashfunc in
+           boto.utils.Password.  The remaining parameters are passed
+           through to StringProperty.__init__."""
+
+
         StringProperty.__init__(self, verbose_name, name, default, required, validator, choices, unique)
+        self.hashfunc=hashfunc
 
     def make_value_from_datastore(self, value):
-        p = Password(value)
+        p = self.data_type(value, hashfunc=self.hashfunc)
         return p
 
     def get_value_for_datastore(self, model_instance):
@@ -164,22 +214,22 @@
             return None
 
     def __set__(self, obj, value):
-        if not isinstance(value, Password):
-            p = Password()
+        if not isinstance(value, self.data_type):
+            p = self.data_type(hashfunc=self.hashfunc)
             p.set(value)
             value = p
         Property.__set__(self, obj, value)
 
     def __get__(self, obj, objtype):
-        return Password(StringProperty.__get__(self, obj, objtype))
+        return self.data_type(StringProperty.__get__(self, obj, objtype), hashfunc=self.hashfunc)
 
     def validate(self, value):
         value = Property.validate(self, value)
-        if isinstance(value, Password):
+        if isinstance(value, self.data_type):
             if len(value) > 1024:
                 raise ValueError, 'Length of value greater than maxlength'
         else:
-            raise TypeError, 'Expecting Password, got %s' % type(value)
+            raise TypeError, 'Expecting %s, got %s' % (self.data_type, type(value))
 
 class BlobProperty(Property):
     data_type = Blob
@@ -208,6 +258,7 @@
                           validator, choices, unique)
 
     def validate(self, value):
+        value = super(S3KeyProperty, self).validate(value)
         if value == self.default_value() or value == str(self.default_value()):
             return self.default_value()
         if isinstance(value, self.data_type):
@@ -340,6 +391,7 @@
         return Property.default_value(self)
 
     def validate(self, value):
+        value = super(DateTimeProperty, self).validate(value)
         if value == None:
             return
         if not isinstance(value, self.data_type):
@@ -370,6 +422,7 @@
         return Property.default_value(self)
 
     def validate(self, value):
+        value = super(DateProperty, self).validate(value)
         if value == None:
             return
         if not isinstance(value, self.data_type):
@@ -386,6 +439,23 @@
     def now(self):
         return datetime.date.today()
 
+
+class TimeProperty(Property):
+    data_type = datetime.time
+    type_name = 'Time'
+
+    def __init__(self, verbose_name=None, name=None,
+                 default=None, required=False, validator=None, choices=None, unique=False):
+        Property.__init__(self, verbose_name, name, default, required, validator, choices, unique)
+
+    def validate(self, value):
+        value = super(TimeProperty, self).validate(value)
+        if value is None:
+            return
+        if not isinstance(value, self.data_type):
+            raise TypeError, 'Validation Error, expecting %s, got %s' % (self.data_type, type(value))
+
+
 class ReferenceProperty(Property):
 
     data_type = Key
@@ -443,6 +513,8 @@
             raise ValueError, '%s is not a Model' % value
             
     def validate(self, value):
+        if self.validator:
+            self.validator(value)
         if self.required and value==None:
             raise ValueError, '%s is a required property' % self.name
         if value == self.default_value():
@@ -528,6 +600,8 @@
         Property.__init__(self, verbose_name, name, default=default, required=True, **kwds)
 
     def validate(self, value):
+        if self.validator:
+            self.validator(value)
         if value is not None:
             if not isinstance(value, list):
                 value = [value]
@@ -581,6 +655,7 @@
         Property.__init__(self, verbose_name, name, default=default, required=True, **kwds)
 
     def validate(self, value):
+        value = super(MapProperty, self).validate(value)
         if value is not None:
             if not isinstance(value, dict):
                 raise ValueError, 'Value must of type dict'
diff --git a/boto/sdb/db/sequence.py b/boto/sdb/db/sequence.py
index be79c56..8d10b17 100644
--- a/boto/sdb/db/sequence.py
+++ b/boto/sdb/db/sequence.py
@@ -197,9 +197,9 @@
     def _connect(self):
         """Connect to our domain"""
         if not self._db:
+            import boto
+            sdb = boto.connect_sdb()
             if not self.domain_name:
-                import boto
-                sdb = boto.connect_sdb()
                 self.domain_name = boto.config.get("DB", "sequence_db", boto.config.get("DB", "db_name", "default"))
             try:
                 self._db = sdb.get_domain(self.domain_name)
@@ -219,6 +219,3 @@
     def delete(self):
         """Remove this sequence"""
         self.db.delete_attributes(self.id)
-
-    def __del__(self):
-        self.delete()
diff --git a/boto/sdb/domain.py b/boto/sdb/domain.py
index e809124..f348c8a 100644
--- a/boto/sdb/domain.py
+++ b/boto/sdb/domain.py
@@ -276,7 +276,7 @@
         """
         Delete this domain, and all items under it
         """
-        return self.connection.delete(self)
+        return self.connection.delete_domain(self)
 
 
 class DomainMetaData:
diff --git a/boto/sdb/item.py b/boto/sdb/item.py
index 4705b31..86bc70c 100644
--- a/boto/sdb/item.py
+++ b/boto/sdb/item.py
@@ -31,11 +31,9 @@
     The keys on instances of this object correspond to attributes that are
     stored on the SDB item. 
     
-    .. tip::
-        While it is possible to instantiate this class directly, you may want
-        to use the convenience methods on :py:class:`boto.sdb.domain.Domain`
-        for that purpose. For example, 
-        :py:meth:`boto.sdb.domain.Domain.get_item`.
+    .. tip:: While it is possible to instantiate this class directly, you may
+        want to use the convenience methods on :py:class:`boto.sdb.domain.Domain`
+        for that purpose. For example, :py:meth:`boto.sdb.domain.Domain.get_item`.
     """
     def __init__(self, domain, name='', active=False):
         """
diff --git a/boto/sdb/persist/__init__.py b/boto/sdb/persist/__init__.py
deleted file mode 100644
index 2f2b0c1..0000000
--- a/boto/sdb/persist/__init__.py
+++ /dev/null
@@ -1,83 +0,0 @@
-# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/
-#
-# Permission is hereby granted, free of charge, to any person obtaining a
-# copy of this software and associated documentation files (the
-# "Software"), to deal in the Software without restriction, including
-# without limitation the rights to use, copy, modify, merge, publish, dis-
-# tribute, sublicense, and/or sell copies of the Software, and to permit
-# persons to whom the Software is furnished to do so, subject to the fol-
-# lowing conditions:
-#
-# The above copyright notice and this permission notice shall be included
-# in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
-# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
-# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
-# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
-# IN THE SOFTWARE.
-
-import boto
-from boto.utils import find_class
-
-class Manager(object):
-
-    DefaultDomainName = boto.config.get('Persist', 'default_domain', None)
-
-    def __init__(self, domain_name=None, aws_access_key_id=None, aws_secret_access_key=None, debug=0):
-        self.domain_name = domain_name
-        self.aws_access_key_id = aws_access_key_id
-        self.aws_secret_access_key = aws_secret_access_key
-        self.domain = None
-        self.sdb = None
-        self.s3 = None
-        if not self.domain_name:
-            self.domain_name = self.DefaultDomainName
-            if self.domain_name:
-                boto.log.info('No SimpleDB domain set, using default_domain: %s' % self.domain_name)
-            else:
-                boto.log.warning('No SimpleDB domain set, persistance is disabled')
-        if self.domain_name:
-            self.sdb = boto.connect_sdb(aws_access_key_id=self.aws_access_key_id,
-                                        aws_secret_access_key=self.aws_secret_access_key,
-                                        debug=debug)
-            self.domain = self.sdb.lookup(self.domain_name)
-            if not self.domain:
-                self.domain = self.sdb.create_domain(self.domain_name)
-
-    def get_s3_connection(self):
-        if not self.s3:
-            self.s3 = boto.connect_s3(self.aws_access_key_id, self.aws_secret_access_key)
-        return self.s3
-
-def get_manager(domain_name=None, aws_access_key_id=None, aws_secret_access_key=None, debug=0):
-    return Manager(domain_name, aws_access_key_id, aws_secret_access_key, debug=debug)
-
-def set_domain(domain_name):
-    Manager.DefaultDomainName = domain_name
-
-def get_domain():
-    return Manager.DefaultDomainName
-
-def revive_object_from_id(id, manager):
-    if not manager.domain:
-        return None
-    attrs = manager.domain.get_attributes(id, ['__module__', '__type__', '__lineage__'])
-    try:
-        cls = find_class(attrs['__module__'], attrs['__type__'])
-        return cls(id, manager=manager)
-    except ImportError:
-        return None
-
-def object_lister(cls, query_lister, manager):
-    for item in query_lister:
-        if cls:
-            yield cls(item.name)
-        else:
-            o = revive_object_from_id(item.name, manager)
-            if o:
-                yield o
-                
-
diff --git a/boto/sdb/persist/checker.py b/boto/sdb/persist/checker.py
deleted file mode 100644
index e2146c9..0000000
--- a/boto/sdb/persist/checker.py
+++ /dev/null
@@ -1,302 +0,0 @@
-# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/
-#
-# Permission is hereby granted, free of charge, to any person obtaining a
-# copy of this software and associated documentation files (the
-# "Software"), to deal in the Software without restriction, including
-# without limitation the rights to use, copy, modify, merge, publish, dis-
-# tribute, sublicense, and/or sell copies of the Software, and to permit
-# persons to whom the Software is furnished to do so, subject to the fol-
-# lowing conditions:
-#
-# The above copyright notice and this permission notice shall be included
-# in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
-# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
-# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
-# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
-# IN THE SOFTWARE.
-
-from datetime import datetime
-from boto.s3.key import Key
-from boto.s3.bucket import Bucket
-from boto.sdb.persist import revive_object_from_id
-from boto.exception import SDBPersistenceError
-from boto.utils import Password
-
-ISO8601 = '%Y-%m-%dT%H:%M:%SZ'
-
-class ValueChecker:
-
-    def check(self, value):
-        """
-        Checks a value to see if it is of the right type.
-
-        Should raise a TypeError exception if an in appropriate value is passed in.
-        """
-        raise TypeError
-
-    def from_string(self, str_value, obj):
-        """
-        Takes a string as input and returns the type-specific value represented by that string.
-
-        Should raise a ValueError if the value cannot be converted to the appropriate type.
-        """
-        raise ValueError
-
-    def to_string(self, value):
-        """
-        Convert a value to it's string representation.
-
-        Should raise a ValueError if the value cannot be converted to a string representation.
-        """
-        raise ValueError
-    
-class StringChecker(ValueChecker):
-
-    def __init__(self, **params):
-        if params.has_key('maxlength'):
-            self.maxlength = params['maxlength']
-        else:
-            self.maxlength = 1024
-        if params.has_key('default'):
-            self.check(params['default'])
-            self.default = params['default']
-        else:
-            self.default = ''
-
-    def check(self, value):
-        if isinstance(value, str) or isinstance(value, unicode):
-            if len(value) > self.maxlength:
-                raise ValueError, 'Length of value greater than maxlength'
-        else:
-            raise TypeError, 'Expecting String, got %s' % type(value)
-
-    def from_string(self, str_value, obj):
-        return str_value
-
-    def to_string(self, value):
-        self.check(value)
-        return value
-
-class PasswordChecker(StringChecker):
-    def check(self, value):
-        if isinstance(value, str) or isinstance(value, unicode) or isinstance(value, Password):
-            if len(value) > self.maxlength:
-                raise ValueError, 'Length of value greater than maxlength'
-        else:
-            raise TypeError, 'Expecting String, got %s' % type(value)
-
-class IntegerChecker(ValueChecker):
-
-    __sizes__ = { 'small' : (65535, 32767, -32768, 5),
-                  'medium' : (4294967295, 2147483647, -2147483648, 10),
-                  'large' : (18446744073709551615, 9223372036854775807, -9223372036854775808, 20)}
-
-    def __init__(self, **params):
-        self.size = params.get('size', 'medium')
-        if self.size not in self.__sizes__.keys():
-            raise ValueError, 'size must be one of %s' % self.__sizes__.keys()
-        self.signed = params.get('signed', True)
-        self.default = params.get('default', 0)
-        self.format_string = '%%0%dd' % self.__sizes__[self.size][-1]
-
-    def check(self, value):
-        if not isinstance(value, int) and not isinstance(value, long):
-            raise TypeError, 'Expecting int or long, got %s' % type(value)
-        if self.signed:
-            min = self.__sizes__[self.size][2]
-            max = self.__sizes__[self.size][1]
-        else:
-            min = 0
-            max = self.__sizes__[self.size][0]
-        if value > max:
-            raise ValueError, 'Maximum value is %d' % max
-        if value < min:
-            raise ValueError, 'Minimum value is %d' % min
-
-    def from_string(self, str_value, obj):
-        val = int(str_value)
-        if self.signed:
-            val = val + self.__sizes__[self.size][2]
-        return val
-
-    def to_string(self, value):
-        self.check(value)
-        if self.signed:
-            value += -self.__sizes__[self.size][2]
-        return self.format_string % value
-    
-class BooleanChecker(ValueChecker):
-
-    def __init__(self, **params):
-        if params.has_key('default'):
-            self.default = params['default']
-        else:
-            self.default = False
-
-    def check(self, value):
-        if not isinstance(value, bool):
-            raise TypeError, 'Expecting bool, got %s' % type(value)
-
-    def from_string(self, str_value, obj):
-        if str_value.lower() == 'true':
-            return True
-        else:
-            return False
-        
-    def to_string(self, value):
-        self.check(value)
-        if value == True:
-            return 'true'
-        else:
-            return 'false'
-    
-class DateTimeChecker(ValueChecker):
-
-    def __init__(self, **params):
-        if params.has_key('maxlength'):
-            self.maxlength = params['maxlength']
-        else:
-            self.maxlength = 1024
-        if params.has_key('default'):
-            self.default = params['default']
-        else:
-            self.default = datetime.now()
-
-    def check(self, value):
-        if not isinstance(value, datetime):
-            raise TypeError, 'Expecting datetime, got %s' % type(value)
-
-    def from_string(self, str_value, obj):
-        try:
-            return datetime.strptime(str_value, ISO8601)
-        except:
-            raise ValueError, 'Unable to convert %s to DateTime' % str_value
-
-    def to_string(self, value):
-        self.check(value)
-        return value.strftime(ISO8601)
-    
-class ObjectChecker(ValueChecker):
-
-    def __init__(self, **params):
-        self.default = None
-        self.ref_class = params.get('ref_class', None)
-        if self.ref_class == None:
-            raise SDBPersistenceError('ref_class parameter is required')
-
-    def check(self, value):
-        if value == None:
-            return
-        if isinstance(value, str) or isinstance(value, unicode):
-            # ugly little hack - sometimes I want to just stick a UUID string
-            # in here rather than instantiate an object. 
-            # This does a bit of hand waving to "type check" the string
-            t = value.split('-')
-            if len(t) != 5:
-                raise ValueError
-        else:
-            try:
-                obj_lineage = value.get_lineage()
-                cls_lineage = self.ref_class.get_lineage()
-                if obj_lineage.startswith(cls_lineage):
-                    return
-                raise TypeError, '%s not instance of %s' % (obj_lineage, cls_lineage)
-            except:
-                raise ValueError, '%s is not an SDBObject' % value
-
-    def from_string(self, str_value, obj):
-        if not str_value:
-            return None
-        try:
-            return revive_object_from_id(str_value, obj._manager)
-        except:
-            raise ValueError, 'Unable to convert %s to Object' % str_value
-
-    def to_string(self, value):
-        self.check(value)
-        if isinstance(value, str) or isinstance(value, unicode):
-            return value
-        if value == None:
-            return ''
-        else:
-            return value.id
-
-class S3KeyChecker(ValueChecker):
-
-    def __init__(self, **params):
-        self.default = None
-
-    def check(self, value):
-        if value == None:
-            return
-        if isinstance(value, str) or isinstance(value, unicode):
-            try:
-                bucket_name, key_name = value.split('/', 1)
-            except:
-                raise ValueError
-        elif not isinstance(value, Key):
-            raise TypeError, 'Expecting Key, got %s' % type(value)
-
-    def from_string(self, str_value, obj):
-        if not str_value:
-            return None
-        if str_value == 'None':
-            return None
-        try:
-            bucket_name, key_name = str_value.split('/', 1)
-            if obj:
-                s3 = obj._manager.get_s3_connection()
-                bucket = s3.get_bucket(bucket_name)
-                key = bucket.get_key(key_name)
-                if not key:
-                    key = bucket.new_key(key_name)
-                return key
-        except:
-            raise ValueError, 'Unable to convert %s to S3Key' % str_value
-
-    def to_string(self, value):
-        self.check(value)
-        if isinstance(value, str) or isinstance(value, unicode):
-            return value
-        if value == None:
-            return ''
-        else:
-            return '%s/%s' % (value.bucket.name, value.name)
-
-class S3BucketChecker(ValueChecker):
-
-    def __init__(self, **params):
-        self.default = None
-
-    def check(self, value):
-        if value == None:
-            return
-        if isinstance(value, str) or isinstance(value, unicode):
-            return
-        elif not isinstance(value, Bucket):
-            raise TypeError, 'Expecting Bucket, got %s' % type(value)
-
-    def from_string(self, str_value, obj):
-        if not str_value:
-            return None
-        if str_value == 'None':
-            return None
-        try:
-            if obj:
-                s3 = obj._manager.get_s3_connection()
-                bucket = s3.get_bucket(str_value)
-                return bucket
-        except:
-            raise ValueError, 'Unable to convert %s to S3Bucket' % str_value
-
-    def to_string(self, value):
-        self.check(value)
-        if value == None:
-            return ''
-        else:
-            return '%s' % value.name
-
diff --git a/boto/sdb/persist/object.py b/boto/sdb/persist/object.py
deleted file mode 100644
index 993df1e..0000000
--- a/boto/sdb/persist/object.py
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/
-#
-# Permission is hereby granted, free of charge, to any person obtaining a
-# copy of this software and associated documentation files (the
-# "Software"), to deal in the Software without restriction, including
-# without limitation the rights to use, copy, modify, merge, publish, dis-
-# tribute, sublicense, and/or sell copies of the Software, and to permit
-# persons to whom the Software is furnished to do so, subject to the fol-
-# lowing conditions:
-#
-# The above copyright notice and this permission notice shall be included
-# in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
-# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
-# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
-# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
-# IN THE SOFTWARE.
-
-from boto.exception import SDBPersistenceError
-from boto.sdb.persist import get_manager, object_lister
-from boto.sdb.persist.property import Property, ScalarProperty
-import uuid
-
-class SDBBase(type):
-    "Metaclass for all SDBObjects"
-    def __init__(cls, name, bases, dict):
-        super(SDBBase, cls).__init__(name, bases, dict)
-        # Make sure this is a subclass of SDBObject - mainly copied from django ModelBase (thanks!)
-        try:
-            if filter(lambda b: issubclass(b, SDBObject), bases):
-                # look for all of the Properties and set their names
-                for key in dict.keys():
-                    if isinstance(dict[key], Property):
-                        property = dict[key]
-                        property.set_name(key)
-                prop_names = []
-                props = cls.properties()
-                for prop in props:
-                    prop_names.append(prop.name)
-                setattr(cls, '_prop_names', prop_names)
-        except NameError:
-            # 'SDBObject' isn't defined yet, meaning we're looking at our own
-            # SDBObject class, defined below.
-            pass
-        
-class SDBObject(object):
-    __metaclass__ = SDBBase
-
-    _manager = get_manager()
-
-    @classmethod
-    def get_lineage(cls):
-        l = [c.__name__ for c in cls.mro()]
-        l.reverse()
-        return '.'.join(l)
-    
-    @classmethod
-    def get(cls, id=None, **params):
-        if params.has_key('manager'):
-            manager = params['manager']
-        else:
-            manager = cls._manager
-        if manager.domain and id:
-            a = cls._manager.domain.get_attributes(id, '__type__')
-            if a.has_key('__type__'):
-                return cls(id, manager)
-            else:
-                raise SDBPersistenceError('%s object with id=%s does not exist' % (cls.__name__, id))
-        else:
-            rs = cls.find(**params)
-            try:
-                obj = rs.next()
-            except StopIteration:
-                raise SDBPersistenceError('%s object matching query does not exist' % cls.__name__)
-            try:
-                rs.next()
-            except StopIteration:
-                return obj
-            raise SDBPersistenceError('Query matched more than 1 item')
-
-    @classmethod
-    def find(cls, **params):
-        if params.has_key('manager'):
-            manager = params['manager']
-            del params['manager']
-        else:
-            manager = cls._manager
-        keys = params.keys()
-        if len(keys) > 4:
-            raise SDBPersistenceError('Too many fields, max is 4')
-        parts = ["['__type__'='%s'] union ['__lineage__'starts-with'%s']" % (cls.__name__, cls.get_lineage())]
-        properties = cls.properties()
-        for key in keys:
-            found = False
-            for property in properties:
-                if property.name == key:
-                    found = True
-                    if isinstance(property, ScalarProperty):
-                        checker = property.checker
-                        parts.append("['%s' = '%s']" % (key, checker.to_string(params[key])))
-                    else:
-                        raise SDBPersistenceError('%s is not a searchable field' % key)
-            if not found:
-                raise SDBPersistenceError('%s is not a valid field' % key)
-        query = ' intersection '.join(parts)
-        if manager.domain:
-            rs = manager.domain.query(query)
-        else:
-            rs = []
-        return object_lister(None, rs, manager)
-
-    @classmethod
-    def list(cls, max_items=None, manager=None):
-        if not manager:
-            manager = cls._manager
-        if manager.domain:
-            rs = manager.domain.query("['__type__' = '%s']" % cls.__name__, max_items=max_items)
-        else:
-            rs = []
-        return object_lister(cls, rs, manager)
-
-    @classmethod
-    def properties(cls):
-        properties = []
-        while cls:
-            for key in cls.__dict__.keys():
-                if isinstance(cls.__dict__[key], Property):
-                    properties.append(cls.__dict__[key])
-            if len(cls.__bases__) > 0:
-                cls = cls.__bases__[0]
-            else:
-                cls = None
-        return properties
-
-    # for backwards compatibility
-    find_properties = properties
-
-    def __init__(self, id=None, manager=None):
-        if manager:
-            self._manager = manager
-        self.id = id
-        if self.id:
-            self._auto_update = True
-            if self._manager.domain:
-                attrs = self._manager.domain.get_attributes(self.id, '__type__')
-                if len(attrs.keys()) == 0:
-                    raise SDBPersistenceError('Object %s: not found' % self.id)
-        else:
-            self.id = str(uuid.uuid4())
-            self._auto_update = False
-
-    def __setattr__(self, name, value):
-        if name in self._prop_names:
-            object.__setattr__(self, name, value)
-        elif name.startswith('_'):
-            object.__setattr__(self, name, value)
-        elif name == 'id':
-            object.__setattr__(self, name, value)
-        else:
-            self._persist_attribute(name, value)
-            object.__setattr__(self, name, value)
-
-    def __getattr__(self, name):
-        if not name.startswith('_'):
-            a = self._manager.domain.get_attributes(self.id, name)
-            if a.has_key(name):
-                object.__setattr__(self, name, a[name])
-                return a[name]
-        raise AttributeError
-
-    def __repr__(self):
-        return '%s<%s>' % (self.__class__.__name__, self.id)
-
-    def _persist_attribute(self, name, value):
-        if self.id:
-            self._manager.domain.put_attributes(self.id, {name : value}, replace=True)
-
-    def _get_sdb_item(self):
-        return self._manager.domain.get_item(self.id)
-
-    def save(self):
-        attrs = {'__type__' : self.__class__.__name__,
-                 '__module__' : self.__class__.__module__,
-                 '__lineage__' : self.get_lineage()}
-        for property in self.properties():
-            attrs[property.name] = property.to_string(self)
-        if self._manager.domain:
-            self._manager.domain.put_attributes(self.id, attrs, replace=True)
-            self._auto_update = True
-        
-    def delete(self):
-        if self._manager.domain:
-            self._manager.domain.delete_attributes(self.id)
-
-    def get_related_objects(self, ref_name, ref_cls=None):
-        if self._manager.domain:
-            query = "['%s' = '%s']" % (ref_name, self.id)
-            if ref_cls:
-                query += " intersection ['__type__'='%s']" % ref_cls.__name__
-            rs = self._manager.domain.query(query)
-        else:
-            rs = []
-        return object_lister(ref_cls, rs, self._manager)
-
diff --git a/boto/sdb/persist/property.py b/boto/sdb/persist/property.py
deleted file mode 100644
index 4776d35..0000000
--- a/boto/sdb/persist/property.py
+++ /dev/null
@@ -1,371 +0,0 @@
-# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/
-#
-# Permission is hereby granted, free of charge, to any person obtaining a
-# copy of this software and associated documentation files (the
-# "Software"), to deal in the Software without restriction, including
-# without limitation the rights to use, copy, modify, merge, publish, dis-
-# tribute, sublicense, and/or sell copies of the Software, and to permit
-# persons to whom the Software is furnished to do so, subject to the fol-
-# lowing conditions:
-#
-# The above copyright notice and this permission notice shall be included
-# in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
-# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
-# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
-# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
-# IN THE SOFTWARE.
-
-from boto.exception import SDBPersistenceError
-from boto.sdb.persist.checker import StringChecker, PasswordChecker, IntegerChecker, BooleanChecker
-from boto.sdb.persist.checker import DateTimeChecker, ObjectChecker, S3KeyChecker, S3BucketChecker
-from boto.utils import Password
-
-class Property(object):
-
-    def __init__(self, checker_class, **params):
-        self.name = ''
-        self.checker = checker_class(**params)
-        self.slot_name = '__'
-        
-    def set_name(self, name):
-        self.name = name
-        self.slot_name = '__' + self.name
-
-class ScalarProperty(Property):
-
-    def save(self, obj):
-        domain = obj._manager.domain
-        domain.put_attributes(obj.id, {self.name : self.to_string(obj)}, replace=True)
-
-    def to_string(self, obj):
-        return self.checker.to_string(getattr(obj, self.name))
-
-    def load(self, obj):
-        domain = obj._manager.domain
-        a = domain.get_attributes(obj.id, self.name)
-        # try to get the attribute value from SDB
-        if self.name in a:
-            value = self.checker.from_string(a[self.name], obj)
-            setattr(obj, self.slot_name, value)
-        # if it's not there, set the value to the default value
-        else:
-            self.__set__(obj, self.checker.default)
-
-    def __get__(self, obj, objtype):
-        if obj:
-            try:
-                value = getattr(obj, self.slot_name)
-            except AttributeError:
-                if obj._auto_update:
-                    self.load(obj)
-                    value = getattr(obj, self.slot_name)
-                else:
-                    value = self.checker.default
-                    setattr(obj, self.slot_name, self.checker.default)
-        return value
-
-    def __set__(self, obj, value):
-        self.checker.check(value)
-        try:
-            old_value = getattr(obj, self.slot_name)
-        except:
-            old_value = self.checker.default
-        setattr(obj, self.slot_name, value)
-        if obj._auto_update:
-            try:
-                self.save(obj)
-            except:
-                setattr(obj, self.slot_name, old_value)
-                raise
-                                      
-class StringProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        ScalarProperty.__init__(self, StringChecker, **params)
-
-class PasswordProperty(ScalarProperty):
-    """
-    Hashed password
-    """
-
-    def __init__(self, **params):
-        ScalarProperty.__init__(self, PasswordChecker, **params)
-
-    def __set__(self, obj, value):
-        p = Password()
-        p.set(value)
-        ScalarProperty.__set__(self, obj, p)
-
-    def __get__(self, obj, objtype):
-        return Password(ScalarProperty.__get__(self, obj, objtype))
-
-class SmallPositiveIntegerProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'small'
-        params['signed'] = False
-        ScalarProperty.__init__(self, IntegerChecker, **params)
-
-class SmallIntegerProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'small'
-        params['signed'] = True
-        ScalarProperty.__init__(self, IntegerChecker, **params)
-
-class PositiveIntegerProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'medium'
-        params['signed'] = False
-        ScalarProperty.__init__(self, IntegerChecker, **params)
-
-class IntegerProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'medium'
-        params['signed'] = True
-        ScalarProperty.__init__(self, IntegerChecker, **params)
-
-class LargePositiveIntegerProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'large'
-        params['signed'] = False
-        ScalarProperty.__init__(self, IntegerChecker, **params)
-
-class LargeIntegerProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'large'
-        params['signed'] = True
-        ScalarProperty.__init__(self, IntegerChecker, **params)
-
-class BooleanProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        ScalarProperty.__init__(self, BooleanChecker, **params)
-
-class DateTimeProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        ScalarProperty.__init__(self, DateTimeChecker, **params)
-
-class ObjectProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        ScalarProperty.__init__(self, ObjectChecker, **params)
-
-class S3KeyProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        ScalarProperty.__init__(self, S3KeyChecker, **params)
-        
-    def __set__(self, obj, value):
-        self.checker.check(value)
-        try:
-            old_value = getattr(obj, self.slot_name)
-        except:
-            old_value = self.checker.default
-        if isinstance(value, str):
-            value = self.checker.from_string(value, obj)
-        setattr(obj, self.slot_name, value)
-        if obj._auto_update:
-            try:
-                self.save(obj)
-            except:
-                setattr(obj, self.slot_name, old_value)
-                raise
-                                      
-class S3BucketProperty(ScalarProperty):
-
-    def __init__(self, **params):
-        ScalarProperty.__init__(self, S3BucketChecker, **params)
-        
-    def __set__(self, obj, value):
-        self.checker.check(value)
-        try:
-            old_value = getattr(obj, self.slot_name)
-        except:
-            old_value = self.checker.default
-        if isinstance(value, str):
-            value = self.checker.from_string(value, obj)
-        setattr(obj, self.slot_name, value)
-        if obj._auto_update:
-            try:
-                self.save(obj)
-            except:
-                setattr(obj, self.slot_name, old_value)
-                raise
-
-class MultiValueProperty(Property):
-
-    def __init__(self, checker_class, **params):
-        Property.__init__(self, checker_class, **params)
-
-    def __get__(self, obj, objtype):
-        if obj:
-            try:
-                value = getattr(obj, self.slot_name)
-            except AttributeError:
-                if obj._auto_update:
-                    self.load(obj)
-                    value = getattr(obj, self.slot_name)
-                else:
-                    value = MultiValue(self, obj, [])
-                    setattr(obj, self.slot_name, value)
-        return value
-
-    def load(self, obj):
-        if obj != None:
-            _list = []
-            domain = obj._manager.domain
-            a = domain.get_attributes(obj.id, self.name)
-            if self.name in a:
-                lst = a[self.name]
-                if not isinstance(lst, list):
-                    lst = [lst]
-                for value in lst:
-                    value = self.checker.from_string(value, obj)
-                    _list.append(value)
-        setattr(obj, self.slot_name, MultiValue(self, obj, _list))
-
-    def __set__(self, obj, value):
-        if not isinstance(value, list):
-            raise SDBPersistenceError('Value must be a list')
-        setattr(obj, self.slot_name, MultiValue(self, obj, value))
-        str_list = self.to_string(obj)
-        domain = obj._manager.domain
-        if obj._auto_update:
-            if len(str_list) == 1:
-                domain.put_attributes(obj.id, {self.name : str_list[0]}, replace=True)
-            else:
-                try:
-                    self.__delete__(obj)
-                except:
-                    pass
-                domain.put_attributes(obj.id, {self.name : str_list}, replace=True)
-                setattr(obj, self.slot_name, MultiValue(self, obj, value))
-
-    def __delete__(self, obj):
-        if obj._auto_update:
-            domain = obj._manager.domain
-            domain.delete_attributes(obj.id, [self.name])
-        setattr(obj, self.slot_name, MultiValue(self, obj, []))
-
-    def to_string(self, obj):
-        str_list = []
-        for value in self.__get__(obj, type(obj)):
-            str_list.append(self.checker.to_string(value))
-        return str_list
-
-class StringListProperty(MultiValueProperty):
-
-    def __init__(self, **params):
-        MultiValueProperty.__init__(self, StringChecker, **params)
-
-class SmallIntegerListProperty(MultiValueProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'small'
-        params['signed'] = True
-        MultiValueProperty.__init__(self, IntegerChecker, **params)
-
-class SmallPositiveIntegerListProperty(MultiValueProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'small'
-        params['signed'] = False
-        MultiValueProperty.__init__(self, IntegerChecker, **params)
-
-class IntegerListProperty(MultiValueProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'medium'
-        params['signed'] = True
-        MultiValueProperty.__init__(self, IntegerChecker, **params)
-
-class PositiveIntegerListProperty(MultiValueProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'medium'
-        params['signed'] = False
-        MultiValueProperty.__init__(self, IntegerChecker, **params)
-
-class LargeIntegerListProperty(MultiValueProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'large'
-        params['signed'] = True
-        MultiValueProperty.__init__(self, IntegerChecker, **params)
-
-class LargePositiveIntegerListProperty(MultiValueProperty):
-
-    def __init__(self, **params):
-        params['size'] = 'large'
-        params['signed'] = False
-        MultiValueProperty.__init__(self, IntegerChecker, **params)
-
-class BooleanListProperty(MultiValueProperty):
-
-    def __init__(self, **params):
-        MultiValueProperty.__init__(self, BooleanChecker, **params)
-
-class ObjectListProperty(MultiValueProperty):
-
-    def __init__(self, **params):
-        MultiValueProperty.__init__(self, ObjectChecker, **params)
-        
-class HasManyProperty(Property):
-
-    def set_name(self, name):
-        self.name = name
-        self.slot_name = '__' + self.name
-
-    def __get__(self, obj, objtype):
-        return self
-
-
-class MultiValue:
-    """
-    Special Multi Value for boto persistence layer to allow us to do 
-    obj.list.append(foo)
-    """
-    def __init__(self, property, obj, _list):
-        self.checker = property.checker
-        self.name = property.name
-        self.object = obj
-        self._list = _list
-
-    def __repr__(self):
-        return repr(self._list)
-
-    def __getitem__(self, key):
-        return self._list.__getitem__(key)
-
-    def __delitem__(self, key):
-        item = self[key]
-        self._list.__delitem__(key)
-        domain = self.object._manager.domain
-        domain.delete_attributes(self.object.id, {self.name: [self.checker.to_string(item)]})
-
-    def __len__(self):
-        return len(self._list)
-
-    def append(self, value):
-        self.checker.check(value)
-        self._list.append(value)
-        domain = self.object._manager.domain
-        domain.put_attributes(self.object.id, {self.name: self.checker.to_string(value)}, replace=False)
-
-    def index(self, value):
-        for x in self._list:
-            if x.id == value.id:
-                return self._list.index(x)
-
-    def remove(self, value):
-        del(self[self.index(value)])
diff --git a/boto/sdb/persist/test_persist.py b/boto/sdb/persist/test_persist.py
deleted file mode 100644
index 080935d..0000000
--- a/boto/sdb/persist/test_persist.py
+++ /dev/null
@@ -1,141 +0,0 @@
-from boto.sdb.persist.object import SDBObject
-from boto.sdb.persist.property import StringProperty, PositiveIntegerProperty, IntegerProperty
-from boto.sdb.persist.property import BooleanProperty, DateTimeProperty, S3KeyProperty
-from boto.sdb.persist.property import ObjectProperty, StringListProperty
-from boto.sdb.persist.property import PositiveIntegerListProperty, BooleanListProperty, ObjectListProperty
-from boto.sdb.persist import Manager
-from datetime import datetime
-import time
-
-#
-# This will eventually be moved to the boto.tests module and become a real unit test
-# but for now it will live here.  It shows examples of each of the Property types in
-# use and tests the basic operations.
-#
-class TestScalar(SDBObject):
-
-    name = StringProperty()
-    description = StringProperty()
-    size = PositiveIntegerProperty()
-    offset = IntegerProperty()
-    foo = BooleanProperty()
-    date = DateTimeProperty()
-    file = S3KeyProperty()
-
-class TestRef(SDBObject):
-
-    name = StringProperty()
-    ref = ObjectProperty(ref_class=TestScalar)
-
-class TestSubClass1(TestRef):
-
-    answer = PositiveIntegerProperty()
-
-class TestSubClass2(TestScalar):
-
-    flag = BooleanProperty()
-
-class TestList(SDBObject):
-
-    names = StringListProperty()
-    numbers = PositiveIntegerListProperty()
-    bools = BooleanListProperty()
-    objects = ObjectListProperty(ref_class=TestScalar)
-    
-def test1():
-    s = TestScalar()
-    s.name = 'foo'
-    s.description = 'This is foo'
-    s.size = 42
-    s.offset = -100
-    s.foo = True
-    s.date = datetime.now()
-    s.save()
-    return s
-
-def test2(ref_name):
-    s = TestRef()
-    s.name = 'testref'
-    rs = TestScalar.find(name=ref_name)
-    s.ref = rs.next()
-    s.save()
-    return s
-
-def test3():
-    s = TestScalar()
-    s.name = 'bar'
-    s.description = 'This is bar'
-    s.size = 24
-    s.foo = False
-    s.date = datetime.now()
-    s.save()
-    return s
-
-def test4(ref1, ref2):
-    s = TestList()
-    s.names.append(ref1.name)
-    s.names.append(ref2.name)
-    s.numbers.append(ref1.size)
-    s.numbers.append(ref2.size)
-    s.bools.append(ref1.foo)
-    s.bools.append(ref2.foo)
-    s.objects.append(ref1)
-    s.objects.append(ref2)
-    s.save()
-    return s
-
-def test5(ref):
-    s = TestSubClass1()
-    s.answer = 42
-    s.ref = ref
-    s.save()
-    # test out free form attribute
-    s.fiddlefaddle = 'this is fiddlefaddle'
-    s._fiddlefaddle = 'this is not fiddlefaddle'
-    return s
-
-def test6():
-    s = TestSubClass2()
-    s.name = 'fie'
-    s.description = 'This is fie'
-    s.size = 4200
-    s.offset = -820
-    s.foo = False
-    s.date = datetime.now()
-    s.flag = True
-    s.save()
-    return s
-
-def test(domain_name):
-    print 'Initialize the Persistence system'
-    Manager.DefaultDomainName = domain_name
-    print 'Call test1'
-    s1 = test1()
-    # now create a new instance and read the saved data from SDB
-    print 'Now sleep to wait for things to converge'
-    time.sleep(5)
-    print 'Now lookup the object and compare the fields'
-    s2 = TestScalar(s1.id)
-    assert s1.name == s2.name
-    assert s1.description == s2.description
-    assert s1.size == s2.size
-    assert s1.offset == s2.offset
-    assert s1.foo == s2.foo
-    #assert s1.date == s2.date
-    print 'Call test2'
-    s2 = test2(s1.name)
-    print 'Call test3'
-    s3 = test3()
-    print 'Call test4'
-    s4 = test4(s1, s3)
-    print 'Call test5'
-    s6 = test6()
-    s5 = test5(s6)
-    domain = s5._manager.domain
-    item1 = domain.get_item(s1.id)
-    item2 = domain.get_item(s2.id)
-    item3 = domain.get_item(s3.id)
-    item4 = domain.get_item(s4.id)
-    item5 = domain.get_item(s5.id)
-    item6 = domain.get_item(s6.id)
-    return [(s1, item1), (s2, item2), (s3, item3), (s4, item4), (s5, item5), (s6, item6)]
diff --git a/boto/ses/__init__.py b/boto/ses/__init__.py
index 167080b..f893423 100644
--- a/boto/ses/__init__.py
+++ b/boto/ses/__init__.py
@@ -21,4 +21,49 @@
 # IN THE SOFTWARE.
 
 from connection import SESConnection
+from boto.regioninfo import RegionInfo
 
+def regions():
+    """
+    Get all available regions for the SES service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo` instances
+    """
+    return [RegionInfo(name='us-east-1',
+                       endpoint='email.us-east-1.amazonaws.com',
+                       connection_cls=SESConnection)]
+
+def connect_to_region(region_name, **kw_params):
+    """
+    Given a valid region name, return an
+    :class:`boto.ses.connection.SESConnection`.
+
+    :type region_name: str
+    :param region_name: The name of the region to connect to.
+
+    :rtype: :class:`boto.ses.connection.SESConnection` or ``None``
+    :return: A connection to the given region, or ``None`` if an invalid
+             region name is given.
+    """
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
+
+def get_region(region_name, **kw_params):
+    """
+    Find and return a :class:`boto.regioninfo.RegionInfo` object
+    given a region name.
+
+    :type region_name: str
+    :param region_name: The name of the region.
+
+    :rtype: :class:`boto.regioninfo.RegionInfo`
+    :return: The RegionInfo object for the given region or None if
+             an invalid region name is provided.
+    """
+    for region in regions():
+        if region.name == region_name:
+            return region
+    return None
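The `regions()`/`get_region()` helpers added above are a simple linear lookup that returns `None` for unknown names. A minimal standalone sketch of the pattern (the `RegionInfo` class here is an illustrative stand-in, not boto's actual `boto.regioninfo.RegionInfo`):

```python
class RegionInfo:
    """Illustrative stand-in for boto.regioninfo.RegionInfo."""
    def __init__(self, name, endpoint):
        self.name = name
        self.endpoint = endpoint

def regions():
    # One entry per supported region; SES offered only us-east-1 at this point.
    return [RegionInfo('us-east-1', 'email.us-east-1.amazonaws.com')]

def get_region(region_name):
    # Linear scan; unknown names yield None rather than raising.
    for region in regions():
        if region.name == region_name:
            return region
    return None
```

`connect_to_region` follows the same scan, calling `region.connect(**kw_params)` on the match instead of returning the `RegionInfo` itself.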
diff --git a/boto/ses/connection.py b/boto/ses/connection.py
index 57a2c7e..8cd80c3 100644
--- a/boto/ses/connection.py
+++ b/boto/ses/connection.py
@@ -15,13 +15,14 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
 from boto.connection import AWSAuthConnection
 from boto.exception import BotoServerError
+from boto.regioninfo import RegionInfo
 import boto
 import boto.jsonresponse
 
@@ -32,15 +33,23 @@
 class SESConnection(AWSAuthConnection):
 
     ResponseError = BotoServerError
-    DefaultHost = 'email.us-east-1.amazonaws.com'
+    DefaultRegionName = 'us-east-1'
+    DefaultRegionEndpoint = 'email.us-east-1.amazonaws.com'
     APIVersion = '2010-12-01'
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
-                 port=None, proxy=None, proxy_port=None,
-                 host=DefaultHost, debug=0):
-        AWSAuthConnection.__init__(self, host, aws_access_key_id,
-                                   aws_secret_access_key, True, port, proxy,
-                                   proxy_port, debug=debug)
+                 is_secure=True, port=None, proxy=None, proxy_port=None,
+                 proxy_user=None, proxy_pass=None, debug=0,
+                 https_connection_factory=None, region=None, path='/'):
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint)
+        self.region = region
+        AWSAuthConnection.__init__(self, self.region.endpoint,
+                                   aws_access_key_id, aws_secret_access_key,
+                                   is_secure, port, proxy, proxy_port,
+                                   proxy_user, proxy_pass, debug,
+                                   https_connection_factory, path)
 
     def _required_auth_capability(self):
         return ['ses']
@@ -57,7 +66,7 @@
         :type label: string
         :param label: The parameter list's name
         """
-        if isinstance(items, str):
+        if isinstance(items, basestring):
             items = [items]
         for i in range(1, len(items) + 1):
             params['%s.%d' % (label, i)] = items[i - 1]
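The `_build_list_params` helper above expands a list (or a bare string) into the 1-based, dot-numbered query parameters the AWS query APIs expect. A self-contained sketch of that expansion:

```python
def build_list_params(params, items, label):
    # A bare string counts as a one-element list.
    if isinstance(items, str):
        items = [items]
    # AWS query APIs number list members starting at 1, not 0.
    for i in range(1, len(items) + 1):
        params['%s.%d' % (label, i)] = items[i - 1]
    return params
```

For example, two recipients under `Destination.ToAddresses.member` become the keys `...member.1` and `...member.2`.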
@@ -73,9 +82,15 @@
         :param params: Parameters that will be sent as POST data with the API
                        call.
         """
-        headers = {'Content-Type': 'application/x-www-form-urlencoded'}
+        ct = 'application/x-www-form-urlencoded; charset=UTF-8'
+        headers = {'Content-Type': ct}
         params = params or {}
         params['Action'] = action
+
+        for k, v in params.items():
+            if isinstance(v, unicode):  # UTF-8 encode only if it's Unicode
+                params[k] = v.encode('utf-8')
+
         response = super(SESConnection, self).make_request(
             'POST',
             '/',
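The hunk above UTF-8-encodes `unicode` parameter values before POSTing (a Python 2 concern, where `str` and `unicode` coexist). The same idea in modern Python, sketched as a small helper that encodes text values to bytes and passes everything else through:

```python
def encode_params(params):
    # UTF-8 encode text values; leave non-text values untouched.
    return {k: (v.encode('utf-8') if isinstance(v, str) else v)
            for k, v in params.items()}
```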
@@ -96,7 +111,8 @@
 
 
     def send_email(self, source, subject, body, to_addresses, cc_addresses=None,
-                   bcc_addresses=None, format='text'):
+                   bcc_addresses=None, format='text', reply_addresses=None,
+                   return_path=None, text_body=None, html_body=None):
         """Composes an email message based on input data, and then immediately
         queues the message for sending.
 
@@ -123,20 +139,57 @@
         :param format: The format of the message's body, must be either "text"
                        or "html".
 
+        :type reply_addresses: list of strings or string
+        :param reply_addresses: The reply-to email address(es) for the
+                                message. If the recipient replies to the
+                                message, each reply-to address will
+                                receive the reply.
+
+        :type return_path: string
+        :param return_path: The email address to which bounce notifications are
+                            to be forwarded. If the message cannot be delivered
+                            to the recipient, then an error message will be
+                            returned from the recipient's ISP; this message will
+                            then be forwarded to the email address specified by
+                            the ReturnPath parameter.
+
+        :type text_body: string
+        :param text_body: The text body to send with this email.
+
+        :type html_body: string
+        :param html_body: The html body to send with this email.
+
         """
+        format = format.lower().strip()
+        if body is not None:
+            if format == "text":
+                if text_body is not None:
+                    raise Warning("You've passed in both a body and a text_body; please choose one or the other.")
+                text_body = body
+            else:
+                if html_body is not None:
+                    raise Warning("You've passed in both a body and an html_body; please choose one or the other.")
+                html_body = body
+
         params = {
             'Source': source,
             'Message.Subject.Data': subject,
         }
 
-        format = format.lower().strip()
-        if format == 'html':
-            params['Message.Body.Html.Data'] = body
-        elif format == 'text':
-            params['Message.Body.Text.Data'] = body
-        else:
+        if return_path:
+            params['ReturnPath'] = return_path
+
+        if html_body is not None:
+            params['Message.Body.Html.Data'] = html_body
+        if text_body is not None:
+            params['Message.Body.Text.Data'] = text_body
+
+        if format not in ("text", "html"):
             raise ValueError("'format' argument must be 'text' or 'html'")
 
+        if not (html_body or text_body):
+            raise ValueError("No text or html body found for mail")
+
         self._build_list_params(params, to_addresses,
                                'Destination.ToAddresses.member')
         if cc_addresses:
@@ -147,9 +200,13 @@
             self._build_list_params(params, bcc_addresses,
                                    'Destination.BccAddresses.member')
 
+        if reply_addresses:
+            self._build_list_params(params, reply_addresses,
+                                   'ReplyToAddresses.member')
+
         return self._make_request('SendEmail', params)
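The new `send_email` logic routes the legacy `body` argument into `text_body` or `html_body` based on `format`, and rejects ambiguous or empty input. A condensed sketch of that decision logic (the diff raises `Warning` for the both-passed case; `ValueError` is used here for simplicity, and `resolve_bodies` is a hypothetical name):

```python
def resolve_bodies(body=None, format='text', text_body=None, html_body=None):
    format = format.lower().strip()
    if format not in ('text', 'html'):
        raise ValueError("'format' argument must be 'text' or 'html'")
    if body is not None:
        # Legacy single-body argument: route it by format, but refuse
        # to silently overwrite an explicitly passed body.
        if format == 'text':
            if text_body is not None:
                raise ValueError('pass body or text_body, not both')
            text_body = body
        else:
            if html_body is not None:
                raise ValueError('pass body or html_body, not both')
            html_body = body
    if not (html_body or text_body):
        raise ValueError('No text or html body found for mail')
    return text_body, html_body
```

Passing both `text_body` and `html_body` (no `body`) is allowed and yields a multipart text+HTML message.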
 
-    def send_raw_email(self, source, raw_message, destinations=None):
+    def send_raw_email(self, raw_message, source=None, destinations=None):
         """Sends an email message, with header and content specified by the
         client. The SendRawEmail action is useful for sending multipart MIME
         emails, with attachments or inline content. The raw text of the message
@@ -157,7 +214,12 @@
         cannot be sent.
 
         :type source: string
-        :param source: The sender's email address.
+        :param source: The sender's email address. Amazon's docs say:
+
+          If you specify the Source parameter, then bounce notifications and
+          complaints will be sent to this email address. This takes precedence
+          over any Return-Path header that you might include in the raw text of
+          the message.
 
         :type raw_message: string
         :param raw_message: The raw text of the message. The client is
@@ -175,12 +237,15 @@
 
         """
         params = {
-            'Source': source,
             'RawMessage.Data': base64.b64encode(raw_message),
         }
+        
+        if source:
+            params['Source'] = source
 
-        self._build_list_params(params, destinations,
-                               'Destinations.member')
+        if destinations:
+            self._build_list_params(params, destinations,
+                                   'Destinations.member')
 
         return self._make_request('SendRawEmail', params)
 
@@ -245,4 +310,3 @@
         return self._make_request('VerifyEmailAddress', {
             'EmailAddress': email_address,
         })
-
diff --git a/boto/sns/__init__.py b/boto/sns/__init__.py
index 9c5a7d7..c748661 100644
--- a/boto/sns/__init__.py
+++ b/boto/sns/__init__.py
@@ -23,3 +23,62 @@
 # this is here for backward compatibility
 # originally, the SNSConnection class was defined here
 from connection import SNSConnection
+from boto.regioninfo import RegionInfo
+
+def regions():
+    """
+    Get all available regions for the SNS service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo` instances
+    """
+    return [RegionInfo(name='us-east-1',
+                       endpoint='sns.us-east-1.amazonaws.com',
+                       connection_cls=SNSConnection),
+            RegionInfo(name='eu-west-1',
+                       endpoint='sns.eu-west-1.amazonaws.com',
+                       connection_cls=SNSConnection),
+            RegionInfo(name='us-west-1',
+                       endpoint='sns.us-west-1.amazonaws.com',
+                       connection_cls=SNSConnection),
+            RegionInfo(name='ap-northeast-1',
+                       endpoint='sns.ap-northeast-1.amazonaws.com',
+                       connection_cls=SNSConnection),
+            RegionInfo(name='ap-southeast-1',
+                       endpoint='sns.ap-southeast-1.amazonaws.com',
+                       connection_cls=SNSConnection),
+            ]
+
+def connect_to_region(region_name, **kw_params):
+    """
+    Given a valid region name, return a 
+    :class:`boto.sns.connection.SNSConnection`.
+
+    :type region_name: str
+    :param region_name: The name of the region to connect to.
+    
+    :rtype: :class:`boto.sns.connection.SNSConnection` or ``None``
+    :return: A connection to the given region, or None if an invalid region
+             name is given
+    """
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
+
+def get_region(region_name, **kw_params):
+    """
+    Find and return a :class:`boto.regioninfo.RegionInfo` object
+    given a region name.
+
+    :type region_name: str
+    :param region_name: The name of the region.
+
+    :rtype: :class:`boto.regioninfo.RegionInfo`
+    :return: The RegionInfo object for the given region or None if
+             an invalid region name is provided.
+    """
+    for region in regions():
+        if region.name == region_name:
+            return region
+    return None
diff --git a/boto/sns/connection.py b/boto/sns/connection.py
index 2a49adb..6ce4ff1 100644
--- a/boto/sns/connection.py
+++ b/boto/sns/connection.py
@@ -20,15 +20,13 @@
 # IN THE SOFTWARE.
 
 from boto.connection import AWSQueryConnection
-from boto.sdb.regioninfo import SDBRegionInfo
+from boto.regioninfo import RegionInfo
 import boto
 import uuid
 try:
-    import json
-except ImportError:
     import simplejson as json
-
-#boto.set_stream_logger('sns')
+except ImportError:
+    import json
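The import swap above reverses the fallback order to prefer `simplejson` (which shipped faster C speedups on 2011-era Pythons) and fall back to the stdlib `json` module. The pattern in isolation:

```python
try:
    import simplejson as json  # preferred when installed
except ImportError:
    import json  # stdlib fallback, available since Python 2.6

# Callers are insulated from the choice; both modules expose loads/dumps.
topic = json.loads('{"TopicArn": "arn:aws:sns:us-east-1:123456789012:mytopic"}')
```

Because the chosen module is bound to the name `json`, no other line in the module needs to know which implementation won.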
 
 class SNSConnection(AWSQueryConnection):
 
@@ -39,13 +37,20 @@
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, debug=0,
-                 https_connection_factory=None, region=None, path='/', converter=None):
+                 https_connection_factory=None, region=None, path='/',
+                 security_token=None):
         if not region:
-            region = SDBRegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint)
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint,
+                                connection_cls=SNSConnection)
         self.region = region
-        AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key,
-                                    is_secure, port, proxy, proxy_port, proxy_user, proxy_pass,
-                                    self.region.endpoint, debug, https_connection_factory, path)
+        AWSQueryConnection.__init__(self, aws_access_key_id,
+                                    aws_secret_access_key,
+                                    is_secure, port, proxy, proxy_port,
+                                    proxy_user, proxy_pass,
+                                    self.region.endpoint, debug,
+                                    https_connection_factory, path,
+                                    security_token=security_token)
 
     def _required_auth_capability(self):
         return ['sns']
@@ -88,6 +93,35 @@
             boto.log.error('%s' % body)
             raise self.ResponseError(response.status, response.reason, body)
         
+    def set_topic_attributes(self, topic, attr_name, attr_value):
+        """
+        Set an attribute of a Topic
+
+        :type topic: string
+        :param topic: The ARN of the topic.
+
+        :type attr_name: string
+        :param attr_name: The name of the attribute you want to set.
+                          Only a subset of the topic's attributes are mutable.
+                          Valid values: Policy | DisplayName
+
+        :type attr_value: string
+        :param attr_value: The new value for the attribute.
+
+        """
+        params = {'ContentType' : 'JSON',
+                  'TopicArn' : topic,
+                  'AttributeName' : attr_name,
+                  'AttributeValue' : attr_value}
+        response = self.make_request('SetTopicAttributes', params, '/', 'GET')
+        body = response.read()
+        if response.status == 200:
+            return json.loads(body)
+        else:
+            boto.log.error('%s %s' % (response.status, response.reason))
+            boto.log.error('%s' % body)
+            raise self.ResponseError(response.status, response.reason, body)
+        
     def add_permission(self, topic, label, account_ids, actions):
         """
         Adds a statement to a topic's access control policy, granting
@@ -238,8 +272,6 @@
                          * For https, this would be a URL beginning with https
                          * For sqs, this would be the ARN of an SQS Queue
 
-        :rtype: :class:`boto.sdb.domain.Domain` object
-        :return: The newly created domain
         """
         params = {'ContentType' : 'JSON',
                   'TopicArn' : topic,
@@ -387,7 +419,8 @@
                   'TopicArn' : topic}
         if next_token:
             params['NextToken'] = next_token
-        response = self.make_request('ListSubscriptions', params, '/', 'GET')
+        response = self.make_request('ListSubscriptionsByTopic', params,
+                                     '/', 'GET')
         body = response.read()
         if response.status == 200:
             return json.loads(body)
diff --git a/boto/sqs/__init__.py b/boto/sqs/__init__.py
index 463c42c..d5a9e4c 100644
--- a/boto/sqs/__init__.py
+++ b/boto/sqs/__init__.py
@@ -35,12 +35,14 @@
                           endpoint='eu-west-1.queue.amazonaws.com'),
             SQSRegionInfo(name='us-west-1',
                           endpoint='us-west-1.queue.amazonaws.com'),
+            SQSRegionInfo(name='ap-northeast-1',
+                          endpoint='ap-northeast-1.queue.amazonaws.com'),
             SQSRegionInfo(name='ap-southeast-1',
                           endpoint='ap-southeast-1.queue.amazonaws.com')
             ]
 
-def connect_to_region(region_name):
+def connect_to_region(region_name, **kw_params):
     for region in regions():
         if region.name == region_name:
-            return region.connect()
+            return region.connect(**kw_params)
     return None
diff --git a/boto/sqs/connection.py b/boto/sqs/connection.py
index 240fc72..56ddc43 100644
--- a/boto/sqs/connection.py
+++ b/boto/sqs/connection.py
@@ -40,13 +40,20 @@
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, debug=0,
-                 https_connection_factory=None, region=None, path='/'):
+                 https_connection_factory=None, region=None, path='/',
+                 security_token=None):
         if not region:
-            region = SQSRegionInfo(self, self.DefaultRegionName, self.DefaultRegionEndpoint)
+            region = SQSRegionInfo(self, self.DefaultRegionName,
+                                   self.DefaultRegionEndpoint)
         self.region = region
-        AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key,
-                                    is_secure, port, proxy, proxy_port, proxy_user, proxy_pass,
-                                    self.region.endpoint, debug, https_connection_factory, path)
+        AWSQueryConnection.__init__(self, aws_access_key_id,
+                                    aws_secret_access_key,
+                                    is_secure, port,
+                                    proxy, proxy_port,
+                                    proxy_user, proxy_pass,
+                                    self.region.endpoint, debug,
+                                    https_connection_factory, path,
+                                    security_token=security_token)
 
     def _required_auth_capability(self):
         return ['sqs']
@@ -56,17 +63,21 @@
         Create an SQS Queue.
 
         :type queue_name: str or unicode
-        :param queue_name: The name of the new queue.  Names are scoped to an account and need to
-                           be unique within that account.  Calling this method on an existing
-                           queue name will not return an error from SQS unless the value for
-                           visibility_timeout is different than the value of the existing queue
-                           of that name.  This is still an expensive operation, though, and not
-                           the preferred way to check for the existence of a queue.  See the
+        :param queue_name: The name of the new queue.  Names are scoped to
+                           an account and need to be unique within that
+                           account.  Calling this method on an existing
+                           queue name will not return an error from SQS
+                           unless the value for visibility_timeout is
+                           different than the value of the existing queue
+                           of that name.  This is still an expensive operation,
+                           though, and not the preferred way to check for
+                           the existence of a queue.  See the
                            :func:`boto.sqs.connection.SQSConnection.lookup` method.
 
         :type visibility_timeout: int
-        :param visibility_timeout: The default visibility timeout for all messages written in the
-                                   queue.  This can be overridden on a per-message.
+        :param visibility_timeout: The default visibility timeout for all
+                                   messages written in the queue.  This can
+                                   be overridden on a per-message basis.
 
         :rtype: :class:`boto.sqs.queue.Queue`
         :return: The newly created queue.
@@ -85,10 +96,12 @@
         :param queue: The SQS queue to be deleted
         
         :type force_deletion: Boolean
-        :param force_deletion: Normally, SQS will not delete a queue that contains messages.
-                               However, if the force_deletion argument is True, the
-                               queue will be deleted regardless of whether there are messages in
-                               the queue or not.  USE WITH CAUTION.  This will delete all
+        :param force_deletion: Normally, SQS will not delete a queue that
+                               contains messages.  However, if the
+                               force_deletion argument is True, the
+                               queue will be deleted regardless of whether
+                               there are messages in the queue or not.
+                               USE WITH CAUTION.  This will delete all
                                messages in the queue as well.
                                
         :rtype: bool
@@ -104,27 +117,30 @@
         :param queue: The SQS queue to be deleted
 
         :type attribute: str
-        :type attribute: The specific attribute requested.  If not supplied, the default
-                         is to return all attributes.  Valid attributes are:
-                         ApproximateNumberOfMessages,
-                         ApproximateNumberOfMessagesNotVisible,
-                         VisibilityTimeout,
-                         CreatedTimestamp,
-                         LastModifiedTimestamp,
+        :param attribute: The specific attribute requested.  If not supplied,
+                         the default is to return all attributes.
+                         Valid attributes are:
+                         
+                         ApproximateNumberOfMessages|
+                         ApproximateNumberOfMessagesNotVisible|
+                         VisibilityTimeout|
+                         CreatedTimestamp|
+                         LastModifiedTimestamp|
                          Policy
                          
         :rtype: :class:`boto.sqs.attributes.Attributes`
         :return: An Attributes object containing request value(s).
         """
         params = {'AttributeName' : attribute}
-        return self.get_object('GetQueueAttributes', params, Attributes, queue.id)
+        return self.get_object('GetQueueAttributes', params,
+                               Attributes, queue.id)
 
     def set_queue_attribute(self, queue, attribute, value):
         params = {'Attribute.Name' : attribute, 'Attribute.Value' : value}
         return self.get_status('SetQueueAttributes', params, queue.id)
 
-    def receive_message(self, queue, number_messages=1, visibility_timeout=None,
-                        attributes=None):
+    def receive_message(self, queue, number_messages=1,
+                        visibility_timeout=None, attributes=None):
         """
         Read messages from an SQS Queue.
 
@@ -132,20 +148,22 @@
         :param queue: The Queue from which messages are read.
         
         :type number_messages: int
-        :param number_messages: The maximum number of messages to read (default=1)
+        :param number_messages: The maximum number of messages to read
+                                (default=1)
         
         :type visibility_timeout: int
-        :param visibility_timeout: The number of seconds the message should remain invisible
-                                   to other queue readers (default=None which uses the Queues default)
+        :param visibility_timeout: The number of seconds the message should
+                                   remain invisible to other queue readers
+                                   (default=None, which uses the queue's default)
 
         :type attributes: str
-        :param attributes: The name of additional attribute to return with response
-                           or All if you want all attributes.  The default is to
-                           return no additional attributes.  Valid values:
-                           All
-                           SenderId
-                           SentTimestamp
-                           ApproximateReceiveCount
+        :param attributes: The name of an additional attribute to return
+                           with response or All if you want all attributes.
+                           The default is to return no additional attributes.
+                           Valid values:
+                           
+                           All|SenderId|SentTimestamp|
+                           ApproximateReceiveCount|
                            ApproximateFirstReceiveTimestamp
         
         :rtype: list
@@ -156,7 +174,8 @@
             params['VisibilityTimeout'] = visibility_timeout
         if attributes:
             self.build_list_params(params, attributes, 'AttributeName')
-        return self.get_list('ReceiveMessage', params, [('Message', queue.message_class)],
+        return self.get_list('ReceiveMessage', params,
+                             [('Message', queue.message_class)],
                              queue.id, queue)
 
     def delete_message(self, queue, message):
@@ -193,12 +212,14 @@
 
     def send_message(self, queue, message_content):
         params = {'MessageBody' : message_content}
-        return self.get_object('SendMessage', params, Message, queue.id, verb='POST')
+        return self.get_object('SendMessage', params, Message,
+                               queue.id, verb='POST')
 
-    def change_message_visibility(self, queue, receipt_handle, visibility_timeout):
+    def change_message_visibility(self, queue, receipt_handle,
+                                  visibility_timeout):
         """
-        Extends the read lock timeout for the specified message from the specified queue
-        to the specified value.
+        Extends the read lock timeout for the specified message from
+        the specified queue to the specified value.
 
         :type queue: A :class:`boto.sqs.queue.Queue` object
         :param queue: The Queue from which messages are read.
@@ -208,8 +229,8 @@
                       visibility timeout will be changed.
         
         :type visibility_timeout: int
-        :param visibility_timeout: The new value of the message's visibility timeout
-                                   in seconds.
+        :param visibility_timeout: The new value of the message's visibility
+                                   timeout in seconds.
         """
         params = {'ReceiptHandle' : receipt_handle,
                   'VisibilityTimeout' : visibility_timeout}
@@ -247,9 +268,10 @@
                       Example, AliceSendMessage
 
         :type aws_account_id: str or unicode
-        :param principal_id: The AWS account number of the principal who will be given
-                             permission.  The principal must have an AWS account, but
-                             does not need to be signed up for Amazon SQS. For information
+        :param aws_account_id: The AWS account number of the principal who will
+                             be given permission.  The principal must have
+                             an AWS account, but does not need to be signed
+                             up for Amazon SQS. For information
                              about locating the AWS account identification.
 
         :type action_name: str or unicode
@@ -274,7 +296,8 @@
         :param queue: The queue object
 
         :type label: str or unicode
-        :param label: The unique label associated with the permission being removed.
+        :param label: The unique label associated with the permission
+                      being removed.
 
         :rtype: bool
         :return: True if successful, False otherwise.
diff --git a/boto/sqs/jsonmessage.py b/boto/sqs/jsonmessage.py
index 24a3be2..fb0a4c3 100644
--- a/boto/sqs/jsonmessage.py
+++ b/boto/sqs/jsonmessage.py
@@ -23,9 +23,9 @@
 from boto.exception import SQSDecodeError
 import base64
 try:
-    import json
-except ImportError:
     import simplejson as json
+except ImportError:
+    import json
 
 class JSONMessage(MHMessage):
     """
diff --git a/boto/sqs/queue.py b/boto/sqs/queue.py
index 9965e43..fe25f58 100644
--- a/boto/sqs/queue.py
+++ b/boto/sqs/queue.py
@@ -405,7 +405,7 @@
     def load_from_filename(self, file_name, sep='\n'):
         """Utility function to load messages from a local filename to a queue"""
         fp = open(file_name, 'rb')
-        n = self.load_file_file(fp, sep)
+        n = self.load_from_file(fp, sep)
         fp.close()
         return n
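The hunk above fixes a typo: `load_from_filename` previously called the nonexistent `load_file_file` instead of `load_from_file`. A standalone sketch of the call chain it repairs, reading `sep`-delimited message bodies from a file (illustrative helpers, not the full `Queue` API, which would also write each message to the queue):

```python
def split_messages(text, sep='\n'):
    # Each sep-delimited, non-empty chunk becomes one message body.
    return [chunk for chunk in text.split(sep) if chunk]

def load_from_filename(file_name, sep='\n'):
    # The fixed call chain: open the file, delegate to the loader, close.
    with open(file_name) as fp:
        return split_messages(fp.read(), sep)
```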
 
diff --git a/boto/storage_uri.py b/boto/storage_uri.py
index 9c051a4..96b8eca 100755
--- a/boto/storage_uri.py
+++ b/boto/storage_uri.py
@@ -1,4 +1,5 @@
 # Copyright 2010 Google Inc.
+# Copyright (c) 2011, Nexenta Systems Inc.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -19,6 +20,7 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
+import boto
 import os
 from boto.exception import BotoClientError
 from boto.exception import InvalidUriError
@@ -34,6 +36,10 @@
     """
 
     connection = None
+    # Optional args that can be set from one of the concrete subclass
+    # constructors, to change connection behavior (e.g., to override
+    # https_connection_factory).
+    connection_args = None
 
     def __init__(self):
         """Uncallable constructor on abstract base StorageUri class.
@@ -66,15 +72,28 @@
         @return: A connection to storage service provider of the given URI.
         """
 
+        connection_args = dict(self.connection_args or ())
+        # Use OrdinaryCallingFormat instead of boto-default
+        # SubdomainCallingFormat because the latter changes the hostname
+        # that's checked during cert validation for HTTPS connections,
+        # which will fail cert validation (when cert validation is enabled).
+        # Note: the following import can't be moved up to the start of
+        # this file else it causes a config import failure when run from
+        # the resumable upload/download tests.
+        from boto.s3.connection import OrdinaryCallingFormat
+        connection_args['calling_format'] = OrdinaryCallingFormat()
+        connection_args.update(kwargs)
         if not self.connection:
             if self.scheme == 's3':
                 from boto.s3.connection import S3Connection
                 self.connection = S3Connection(access_key_id,
-                                               secret_access_key, **kwargs)
+                                               secret_access_key,
+                                               **connection_args)
             elif self.scheme == 'gs':
                 from boto.gs.connection import GSConnection
                 self.connection = GSConnection(access_key_id,
-                                               secret_access_key, **kwargs)
+                                               secret_access_key,
+                                               **connection_args)
             elif self.scheme == 'file':
                 from boto.file.connection import FileConnection
                 self.connection = FileConnection(self)
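Reviewer note: the precedence in the `connect()` hunk above is worth spelling out: per-instance `connection_args` form the base, the calling format is then forced to `OrdinaryCallingFormat`, and call-time kwargs win over both. A minimal sketch of that merge order (names here are illustrative, not boto API):

```python
def merge_connection_args(instance_args, **kwargs):
    """Mirror the argument precedence in StorageUri.connect():
    instance-level connection_args first, then the forced calling
    format, then call-time kwargs, which take priority over both."""
    args = dict(instance_args or ())
    # Stand-in string for the real OrdinaryCallingFormat() object.
    args['calling_format'] = 'OrdinaryCallingFormat'
    args.update(kwargs)
    return args

# A call-time kwarg overrides the instance-level value.
merged = merge_connection_args({'port': 8080, 'is_secure': False},
                               is_secure=True)
```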
@@ -156,7 +175,7 @@
     """
 
     def __init__(self, scheme, bucket_name=None, object_name=None,
-                 debug=0):
+                 debug=0, connection_args=None):
         """Instantiate a BucketStorageUri from scheme,bucket,object tuple.
 
         @type scheme: string
@@ -167,6 +186,10 @@
         @param object_name: object name
         @type debug: int
         @param debug: debug level to pass in to connection (range 0..2)
+        @type connection_args: map
+        @param connection_args: optional map containing args to be
+            passed to {S3,GS}Connection constructor (e.g., to override
+            https_connection_factory).
 
         After instantiation the components are available in the following
         fields: uri, scheme, bucket_name, object_name.
@@ -175,6 +198,8 @@
         self.scheme = scheme
         self.bucket_name = bucket_name
         self.object_name = object_name
+        if connection_args:
+            self.connection_args = connection_args
         if self.bucket_name and self.object_name:
             self.uri = ('%s://%s/%s' % (self.scheme, self.bucket_name,
                                         self.object_name))
@@ -207,6 +232,22 @@
         self.check_response(acl, 'acl', self.uri)
         return acl
 
+    def get_location(self, validate=True, headers=None):
+        if not self.bucket_name:
+            raise InvalidUriError('get_location on bucket-less URI (%s)' %
+                                  self.uri)
+        bucket = self.get_bucket(validate, headers)
+        return bucket.get_location()
+
+    def get_subresource(self, subresource, validate=True, headers=None,
+                        version_id=None):
+        if not self.bucket_name:
+            raise InvalidUriError(
+                'get_subresource on bucket-less URI (%s)' % self.uri)
+        bucket = self.get_bucket(validate, headers)
+        return bucket.get_subresource(subresource, self.object_name, headers,
+                                      version_id)
+
     def add_group_email_grant(self, permission, email_address, recursive=False,
                               validate=True, headers=None):
         if self.scheme != 'gs':
@@ -318,6 +359,15 @@
         self.check_response(key, 'key', self.uri)
         key.set_canned_acl(acl_str, headers, version_id)
 
+    def set_subresource(self, subresource, value, validate=True, headers=None,
+                        version_id=None):
+        if not self.bucket_name:
+            raise InvalidUriError(
+                'set_subresource on bucket-less URI (%s)' % self.uri)
+        bucket = self.get_bucket(validate, headers)
+        bucket.set_subresource(subresource, value, self.object_name, headers,
+                               version_id)
+
     def set_contents_from_string(self, s, headers=None, replace=True,
                                  cb=None, num_cb=10, policy=None, md5=None,
                                  reduced_redundancy=False):
@@ -325,6 +375,23 @@
         key.set_contents_from_string(s, headers, replace, cb, num_cb, policy,
                                      md5, reduced_redundancy)
 
+    def enable_logging(self, target_bucket, target_prefix=None,
+                       canned_acl=None, validate=True, headers=None,
+                       version_id=None):
+        if not self.bucket_name:
+            raise InvalidUriError(
+                'enable_logging on bucket-less URI (%s)' % self.uri)
+        bucket = self.get_bucket(validate, headers)
+        bucket.enable_logging(target_bucket, target_prefix, headers=headers,
+                              canned_acl=canned_acl)
+
+    def disable_logging(self, validate=True, headers=None, version_id=None):
+        if not self.bucket_name:
+            raise InvalidUriError(
+                'disable_logging on bucket-less URI (%s)' % self.uri)
+        bucket = self.get_bucket(validate, headers)
+        bucket.disable_logging(headers=headers)
+
 
 
 class FileStorageUri(StorageUri):
@@ -335,7 +402,7 @@
     See file/README about how we map StorageUri operations onto a file system.
     """
 
-    def __init__(self, object_name, debug):
+    def __init__(self, object_name, debug, is_stream=False):
         """Instantiate a FileStorageUri from a path name.
 
         @type object_name: string
@@ -353,6 +420,7 @@
         self.object_name = object_name
         self.uri = 'file://' + object_name
         self.debug = debug
+        self.stream = is_stream
 
     def clone_replace_name(self, new_name):
         """Instantiate a FileStorageUri from the current FileStorageUri,
@@ -361,20 +429,33 @@
         @type new_name: string
         @param new_name: new object name
         """
-        return FileStorageUri(new_name, self.debug)
+        return FileStorageUri(new_name, self.debug, self.stream)
 
     def names_container(self):
-        """Returns True if this URI names a directory.
+        """Returns True if this URI names a directory and does not
+        represent an input/output stream.
         """
-        return os.path.isdir(self.object_name)
+        if not self.stream:
+            return os.path.isdir(self.object_name)
+        else:
+            return False
 
     def names_singleton(self):
-        """Returns True if this URI names a file.
+        """Returns True if this URI names a file or represents an
+        input/output stream.
         """
-        return os.path.isfile(self.object_name)
+        if self.stream:
+            return True
+        else:
+            return os.path.isfile(self.object_name)
 
     def is_file_uri(self):
         return True
 
     def is_cloud_uri(self):
         return False
+
+    def is_stream(self):
+        """Returns True if this URI represents an input/output stream.
+        """
+        return self.stream
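Reviewer note: the new `is_stream` flag changes how FileStorageUri classifies itself, so a stream URI is always a singleton and never a container. A minimal stand-in class (illustrative only, not the boto class) showing the added logic:

```python
import os

class FileUriSketch(object):
    """Tiny stand-in for FileStorageUri demonstrating the stream
    handling added above."""
    def __init__(self, object_name, is_stream=False):
        self.object_name = object_name
        self.stream = is_stream

    def names_container(self):
        # A stream URI never names a directory.
        return False if self.stream else os.path.isdir(self.object_name)

    def names_singleton(self):
        # A stream URI always behaves like a single object.
        return True if self.stream else os.path.isfile(self.object_name)

    def is_stream(self):
        return self.stream

stdin_uri = FileUriSketch('-', is_stream=True)
print(stdin_uri.names_singleton(), stdin_uri.names_container())  # True False
```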
diff --git a/boto/sts/__init__.py b/boto/sts/__init__.py
new file mode 100644
index 0000000..7ee10b4
--- /dev/null
+++ b/boto/sts/__init__.py
@@ -0,0 +1,70 @@
+# Copyright (c) 2010-2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+from connection import STSConnection
+from boto.regioninfo import RegionInfo
+
+def regions():
+    """
+    Get all available regions for the STS service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo` instances
+    """
+    return [RegionInfo(name='us-east-1',
+                       endpoint='sts.amazonaws.com',
+                       connection_cls=STSConnection)
+            ]
+
+def connect_to_region(region_name, **kw_params):
+    """
+    Given a valid region name, return a 
+    :class:`boto.sts.connection.STSConnection`.
+
+    :type region_name: str
+    :param region_name: The name of the region to connect to.
+    
+    :rtype: :class:`boto.sts.connection.STSConnection` or ``None``
+    :return: A connection to the given region, or None if an invalid region
+             name is given
+    """
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
+
+def get_region(region_name, **kw_params):
+    """
+    Find and return a :class:`boto.regioninfo.RegionInfo` object
+    given a region name.
+
+    :type region_name: str
+    :param region_name: The name of the region.
+
+    :rtype: :class:`boto.regioninfo.RegionInfo`
+    :return: The RegionInfo object for the given region or None if
+             an invalid region name is provided.
+    """
+    for region in regions():
+        if region.name == region_name:
+            return region
+    return None
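Reviewer note: `connect_to_region` and `get_region` above share the same lookup pattern: scan the known regions, return the match, or `None` for an unrecognized name. A self-contained sketch (the `Region` class is a stand-in, not `boto.regioninfo.RegionInfo`):

```python
class Region(object):
    """Illustrative stand-in for boto.regioninfo.RegionInfo."""
    def __init__(self, name, endpoint):
        self.name = name
        self.endpoint = endpoint

def regions():
    # STS currently exposes a single region in this module.
    return [Region('us-east-1', 'sts.amazonaws.com')]

def get_region(region_name):
    # Linear scan; None signals an invalid region name, as above.
    for region in regions():
        if region.name == region_name:
            return region
    return None

print(get_region('us-east-1').endpoint)  # sts.amazonaws.com
```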
diff --git a/boto/sts/connection.py b/boto/sts/connection.py
new file mode 100644
index 0000000..6761327
--- /dev/null
+++ b/boto/sts/connection.py
@@ -0,0 +1,90 @@
+# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011, Eucalyptus Systems, Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+from boto.connection import AWSQueryConnection
+from boto.regioninfo import RegionInfo
+from credentials import Credentials, FederationToken
+import boto
+
+class STSConnection(AWSQueryConnection):
+
+    DefaultRegionName = 'us-east-1'
+    DefaultRegionEndpoint = 'sts.amazonaws.com'
+    APIVersion = '2011-06-15'
+
+    def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
+                 is_secure=True, port=None, proxy=None, proxy_port=None,
+                 proxy_user=None, proxy_pass=None, debug=0,
+                 https_connection_factory=None, region=None, path='/',
+                 converter=None):
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint,
+                                connection_cls=STSConnection)
+        self.region = region
+        AWSQueryConnection.__init__(self, aws_access_key_id,
+                                    aws_secret_access_key,
+                                    is_secure, port, proxy, proxy_port,
+                                    proxy_user, proxy_pass,
+                                    self.region.endpoint, debug,
+                                    https_connection_factory, path)
+
+    def _required_auth_capability(self):
+        return ['sign-v2']
+
+    def get_session_token(self, duration=None):
+        """
+        :type duration: int
+        :param duration: The number of seconds the credentials should
+                         remain valid.
+
+        :rtype: :class:`boto.sts.credentials.Credentials`
+        :return: The temporary session credentials.
+        """
+        params = {}
+        if duration:
+            params['Duration'] = duration
+        return self.get_object('GetSessionToken', params,
+                                Credentials, verb='POST')
+        
+        
+    def get_federation_token(self, name, duration=None, policy=None):
+        """
+        :type name: str
+        :param name: The name of the Federated user associated with
+                     the credentials.
+                     
+        :type duration: int
+        :param duration: The number of seconds the credentials should
+                         remain valid.
+
+        :type policy: str
+        :param policy: A JSON policy to associate with these credentials.
+
+        :rtype: :class:`boto.sts.credentials.FederationToken`
+        :return: The temporary credentials for the federated user.
+        """
+        params = {'Name' : name}
+        if duration:
+            params['Duration'] = duration
+        if policy:
+            params['Policy'] = policy
+        return self.get_object('GetFederationToken', params,
+                                FederationToken, verb='POST')
+        
+        
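Reviewer note: `get_federation_token` above only adds the optional `Duration` and `Policy` request parameters when they are supplied. A standalone sketch of that parameter assembly (the helper name is illustrative):

```python
def build_federation_params(name, duration=None, policy=None):
    """Mirror the request-parameter assembly in
    STSConnection.get_federation_token(): Name is required, while
    Duration and Policy are only included when given."""
    params = {'Name': name}
    if duration:
        params['Duration'] = duration
    if policy:
        params['Policy'] = policy
    return params

print(build_federation_params('bob', duration=3600))
```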
diff --git a/boto/sts/credentials.py b/boto/sts/credentials.py
new file mode 100644
index 0000000..daf4c78
--- /dev/null
+++ b/boto/sts/credentials.py
@@ -0,0 +1,90 @@
+# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011, Eucalyptus Systems, Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+class Credentials(object):
+    """
+    :ivar access_key: The AccessKeyID.
+    :ivar secret_key: The SecretAccessKey.
+    :ivar session_token: The session token that must be passed with
+                         requests to use the temporary credentials
+    :ivar expiration: The timestamp for when the credentials will expire
+    """
+
+    def __init__(self, parent=None):
+        self.parent = parent
+        self.access_key = None
+        self.secret_key = None
+        self.session_token = None
+        self.expiration = None
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'AccessKeyId':
+            self.access_key = value
+        elif name == 'SecretAccessKey':
+            self.secret_key = value
+        elif name == 'SessionToken':
+            self.session_token = value
+        elif name == 'Expiration':
+            self.expiration = value
+        elif name == 'RequestId':
+            self.request_id = value
+        else:
+            pass
+    
+class FederationToken(object):
+    """
+    :ivar credentials: A Credentials object containing the credentials.
+    :ivar federated_user_arn: ARN specifying federated user using credentials.
+    :ivar federated_user_id: The ID of the federated user using credentials.
+    :ivar packed_policy_size: A percentage value indicating the size of
+                              the policy in packed form.
+    """
+
+    def __init__(self, parent=None):
+        self.parent = parent
+        self.credentials = None
+        self.federated_user_arn = None
+        self.federated_user_id = None
+        self.packed_policy_size = None
+
+    def startElement(self, name, attrs, connection):
+        if name == 'Credentials':
+            self.credentials = Credentials()
+            return self.credentials
+        else:
+            return None
+
+    def endElement(self, name, value, connection):
+        if name == 'Arn':
+            self.federated_user_arn = value
+        elif name == 'FederatedUserId':
+            self.federated_user_id = value
+        elif name == 'PackedPolicySize':
+            self.packed_policy_size = int(value)
+        elif name == 'RequestId':
+            self.request_id = value
+        else:
+            pass
+        
diff --git a/boto/tests/test.py b/boto/tests/test.py
deleted file mode 100755
index 8648e70..0000000
--- a/boto/tests/test.py
+++ /dev/null
@@ -1,90 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
-#
-# Permission is hereby granted, free of charge, to any person obtaining a
-# copy of this software and associated documentation files (the
-# "Software"), to deal in the Software without restriction, including
-# without limitation the rights to use, copy, modify, merge, publish, dis-
-# tribute, sublicense, and/or sell copies of the Software, and to permit
-# persons to whom the Software is furnished to do so, subject to the fol-
-# lowing conditions:
-#
-# The above copyright notice and this permission notice shall be included
-# in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
-# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
-# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
-# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
-# IN THE SOFTWARE.
-
-"""
-do the unit tests!
-"""
-
-import sys
-import unittest
-import getopt
-
-from boto.tests.test_sqsconnection import SQSConnectionTest
-from boto.tests.test_s3connection import S3ConnectionTest
-from boto.tests.test_s3versioning import S3VersionTest
-from boto.tests.test_gsconnection import GSConnectionTest
-from boto.tests.test_ec2connection import EC2ConnectionTest
-from boto.tests.test_sdbconnection import SDBConnectionTest
-
-def usage():
-    print 'test.py  [-t testsuite] [-v verbosity]'
-    print '    -t   run specific testsuite (s3|s3ver|s3nover|gs|sqs|ec2|sdb|all)'
-    print '    -v   verbosity (0|1|2)'
-  
-def main():
-    try:
-        opts, args = getopt.getopt(sys.argv[1:], 'ht:v:',
-                                   ['help', 'testsuite', 'verbosity'])
-    except:
-        usage()
-        sys.exit(2)
-    testsuite = 'all'
-    verbosity = 1
-    for o, a in opts:
-        if o in ('-h', '--help'):
-            usage()
-            sys.exit()
-        if o in ('-t', '--testsuite'):
-            testsuite = a
-        if o in ('-v', '--verbosity'):
-            verbosity = int(a)
-    if len(args) != 0:
-        usage()
-        sys.exit()
-    suite = unittest.TestSuite()
-    if testsuite == 'all':
-        suite.addTest(unittest.makeSuite(SQSConnectionTest))
-        suite.addTest(unittest.makeSuite(S3ConnectionTest))
-        suite.addTest(unittest.makeSuite(EC2ConnectionTest))
-        suite.addTest(unittest.makeSuite(SDBConnectionTest))
-    elif testsuite == 's3':
-        suite.addTest(unittest.makeSuite(S3ConnectionTest))
-        suite.addTest(unittest.makeSuite(S3VersionTest))
-    elif testsuite == 's3ver':
-        suite.addTest(unittest.makeSuite(S3VersionTest))
-    elif testsuite == 's3nover':
-        suite.addTest(unittest.makeSuite(S3ConnectionTest))
-    elif testsuite == 'gs':
-        suite.addTest(unittest.makeSuite(GSConnectionTest))
-    elif testsuite == 'sqs':
-        suite.addTest(unittest.makeSuite(SQSConnectionTest))
-    elif testsuite == 'ec2':
-        suite.addTest(unittest.makeSuite(EC2ConnectionTest))
-    elif testsuite == 'sdb':
-        suite.addTest(unittest.makeSuite(SDBConnectionTest))
-    else:
-        usage()
-        sys.exit()
-    unittest.TextTestRunner(verbosity=verbosity).run(suite)
-
-if __name__ == "__main__":
-    main()
diff --git a/boto/utils.py b/boto/utils.py
index 6bad25d..9a4ff31 100644
--- a/boto/utils.py
+++ b/boto/utils.py
@@ -54,6 +54,8 @@
 from email.MIMEText import MIMEText
 from email.Utils import formatdate
 from email import Encoders
+import gzip
+
 
 try:
     import hashlib
@@ -149,12 +151,15 @@
     for hkey in headers.keys():
         if hkey.lower().startswith(metadata_prefix):
             val = urllib.unquote_plus(headers[hkey])
-            metadata[hkey[len(metadata_prefix):]] = unicode(val, 'utf-8')
+            try:
+                metadata[hkey[len(metadata_prefix):]] = unicode(val, 'utf-8')
+            except UnicodeDecodeError:
+                metadata[hkey[len(metadata_prefix):]] = val
             del headers[hkey]
     return metadata
 
-def retry_url(url, retry_on_404=True):
-    for i in range(0, 10):
+def retry_url(url, retry_on_404=True, num_retries=10):
+    for i in range(0, num_retries):
         try:
             req = urllib2.Request(url)
             resp = urllib2.urlopen(req)
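Reviewer note: the hunk above parameterizes the hard-coded retry count in `retry_url()`. The same idea as a generic, self-contained helper (illustrative, not boto API):

```python
import time

def retry(fn, num_retries=10, delay=0):
    """Call fn() up to num_retries times (assumed >= 1), sleeping
    `delay` seconds between attempts; re-raise the last error if
    every attempt fails."""
    last_error = None
    for _ in range(num_retries):
        try:
            return fn()
        except Exception as e:
            last_error = e
            time.sleep(delay)
    raise last_error

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise IOError("transient failure")
    return "ok"

print(retry(flaky, num_retries=5))  # succeeds on the third attempt
```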
@@ -196,7 +201,7 @@
                 d[key] = val
     return d
 
-def get_instance_metadata(version='latest'):
+def get_instance_metadata(version='latest', url='http://169.254.169.254'):
     """
     Returns the instance metadata as a nested Python dictionary.
     Simple values (e.g. local_hostname, hostname, etc.) will be
@@ -204,12 +209,12 @@
     be stored in the dict as a list of string values.  More complex
     fields such as public-keys and will be stored as nested dicts.
     """
-    url = 'http://169.254.169.254/%s/meta-data/' % version
-    return _get_instance_metadata(url)
+    return _get_instance_metadata('%s/%s/meta-data/' % (url, version))
 
-def get_instance_userdata(version='latest', sep=None):
-    url = 'http://169.254.169.254/%s/user-data' % version
-    user_data = retry_url(url, retry_on_404=False)
+def get_instance_userdata(version='latest', sep=None,
+                          url='http://169.254.169.254'):
+    ud_url = '%s/%s/user-data' % (url, version)
+    user_data = retry_url(ud_url, retry_on_404=False)
     if user_data:
         if sep:
             l = user_data.split(sep)
@@ -220,6 +225,7 @@
     return user_data
 
 ISO8601 = '%Y-%m-%dT%H:%M:%SZ'
+ISO8601_MS = '%Y-%m-%dT%H:%M:%S.%fZ'
     
 def get_ts(ts=None):
     if not ts:
@@ -227,7 +233,12 @@
     return time.strftime(ISO8601, ts)
 
 def parse_ts(ts):
-    return datetime.datetime.strptime(ts, ISO8601)
+    try:
+        dt = datetime.datetime.strptime(ts, ISO8601)
+        return dt
+    except ValueError:
+        dt = datetime.datetime.strptime(ts, ISO8601_MS)
+        return dt
 
 def find_class(module_name, class_name=None):
     if class_name:
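Reviewer note: the `parse_ts` change above is needed because some services return timestamps with fractional seconds. A standalone copy of the fallback logic, runnable as-is:

```python
import datetime

ISO8601 = '%Y-%m-%dT%H:%M:%SZ'
ISO8601_MS = '%Y-%m-%dT%H:%M:%S.%fZ'

def parse_ts(ts):
    """Same fallback as the patched boto.utils.parse_ts: try the plain
    ISO 8601 form first, then the variant with fractional seconds."""
    try:
        return datetime.datetime.strptime(ts, ISO8601)
    except ValueError:
        return datetime.datetime.strptime(ts, ISO8601_MS)

print(parse_ts('2011-07-14T12:00:00Z'))
print(parse_ts('2011-07-14T12:00:00.123Z'))
```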
@@ -502,16 +513,20 @@
 
 class Password(object):
     """
-    Password object that stores itself as SHA512 hashed.
+    Password object that stores itself as hashed.
+    Hash defaults to SHA512 if available, MD5 otherwise.
     """
-    def __init__(self, str=None):
+    hashfunc = _hashfn
+    def __init__(self, str=None, hashfunc=None):
         """
-        Load the string from an initial value, this should be the raw SHA512 hashed password
+        Load the string from an initial value; this should be the raw hashed password.
         """
         self.str = str
+        if hashfunc:
+            self.hashfunc = hashfunc
 
     def set(self, value):
-        self.str = _hashfn(value).hexdigest()
+        self.str = self.hashfunc(value).hexdigest()
    
     def __str__(self):
         return str(self.str)
@@ -519,7 +534,7 @@
     def __eq__(self, other):
         if other == None:
             return False
-        return str(_hashfn(other).hexdigest()) == str(self.str)
+        return str(self.hashfunc(other).hexdigest()) == str(self.str)
 
     def __len__(self):
         if self.str:
@@ -527,7 +542,8 @@
         else:
             return 0
 
-def notify(subject, body=None, html_body=None, to_string=None, attachments=[], append_instance_id=True):
+def notify(subject, body=None, html_body=None, to_string=None, attachments=None, append_instance_id=True):
+    attachments = attachments or []
     if append_instance_id:
         subject = "[%s] %s" % (boto.config.get_value("Instance", "instance-id"), subject)
     if not to_string:
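Reviewer note: the `Password` change above makes the hash function pluggable instead of hard-coding SHA-512. A self-contained sketch of the patched class (using hashlib directly; boto's `_hashfn` falls back to MD5 when hashlib is unavailable):

```python
import hashlib

class Password(object):
    """Sketch of the patched boto.utils.Password: the hash function is
    a class attribute that a per-instance hashfunc argument can
    override."""
    hashfunc = hashlib.sha512

    def __init__(self, str=None, hashfunc=None):
        self.str = str
        if hashfunc:
            self.hashfunc = hashfunc

    def set(self, value):
        self.str = self.hashfunc(value).hexdigest()

    def __eq__(self, other):
        if other is None:
            return False
        return str(self.hashfunc(other).hexdigest()) == str(self.str)

p = Password(hashfunc=hashlib.md5)  # per-instance override
p.set(b'secret')
print(p == b'secret')  # True
```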
@@ -603,5 +619,73 @@
             s += c
     return s
 
-def awsify_name(name):
-    return name[0:1].upper()+name[1:]
+def write_mime_multipart(content, compress=False, deftype='text/plain', delimiter=':'):
+    """Build a MIME multipart document from a list of named parts.
+
+    :type content: list of tuples
+    :param content: A list of (name, content) pairs. A list is used
+        instead of a dict to ensure that the scripts run in order.
+
+    :type compress: bool
+    :param compress: Use gzip to compress the scripts; defaults to no
+        compression.
+
+    :type deftype: str
+    :param deftype: The MIME type to assume if nothing else can be
+        figured out.
+
+    :type delimiter: str
+    :param delimiter: MIME delimiter.
+
+    :rtype: str
+    :return: The final MIME multipart document.
+    """
+    wrapper = MIMEMultipart()
+    for name, con in content:
+        definite_type = guess_mime_type(con, deftype)
+        maintype, subtype = definite_type.split('/', 1)
+        if maintype == 'text':
+            mime_con = MIMEText(con, _subtype=subtype)
+        else:
+            mime_con = MIMEBase(maintype, subtype)
+            mime_con.set_payload(con)
+            # Encode the payload using Base64
+            Encoders.encode_base64(mime_con)
+        mime_con.add_header('Content-Disposition', 'attachment', filename=name)
+        wrapper.attach(mime_con)
+    rcontent = wrapper.as_string()
+
+    if compress:
+        buf = StringIO.StringIO()
+        gz = gzip.GzipFile(mode='wb', fileobj=buf)
+        try:
+            gz.write(rcontent)
+        finally:
+            gz.close()
+        rcontent = buf.getvalue()
+
+    return rcontent
+
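Reviewer note: the `compress=True` branch of `write_mime_multipart()` above gzips the assembled document into an in-memory buffer. The same step isolated as a runnable sketch (using `io.BytesIO` here, where the boto 2.x code uses `StringIO`):

```python
import gzip
import io

def maybe_compress(payload, compress=False):
    """Gzip `payload` (bytes) into an in-memory buffer when requested,
    mirroring the compression step in write_mime_multipart()."""
    if not compress:
        return payload
    buf = io.BytesIO()
    gz = gzip.GzipFile(mode='wb', fileobj=buf)
    try:
        gz.write(payload)
    finally:
        gz.close()  # must close before reading the buffer back
    return buf.getvalue()

data = maybe_compress(b'#!/bin/sh\necho hello\n', compress=True)
# Round-trip: decompressing recovers the original payload.
print(gzip.GzipFile(fileobj=io.BytesIO(data)).read())
```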
+def guess_mime_type(content, deftype):
+    """Guess the MIME type of a block of text.
+
+    :type content: str
+    :param content: The content whose MIME type we are guessing.
+
+    :type deftype: str
+    :param deftype: The default MIME type to fall back on.
+
+    :rtype: str
+    :return: The guessed MIME type.
+    """
+    # Mappings recognized by cloud-init
+    starts_with_mappings = {
+        '#include': 'text/x-include-url',
+        '#!': 'text/x-shellscript',
+        '#cloud-config': 'text/cloud-config',
+        '#upstart-job': 'text/upstart-job',
+        '#part-handler': 'text/part-handler',
+        '#cloud-boothook': 'text/cloud-boothook'
+    }
+    rtype = deftype
+    for possible_type, mimetype in starts_with_mappings.items():
+        if content.startswith(possible_type):
+            rtype = mimetype
+            break
+    return rtype
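Reviewer note: a standalone copy of the prefix-matching logic in `guess_mime_type()` above, runnable as-is (no two cloud-init prefixes overlap, so iteration order does not matter):

```python
def guess_mime_type(content, deftype):
    """Return the cloud-init MIME type whose marker prefixes
    `content`, falling back to `deftype`."""
    starts_with_mappings = {
        '#include': 'text/x-include-url',
        '#!': 'text/x-shellscript',
        '#cloud-config': 'text/cloud-config',
        '#upstart-job': 'text/upstart-job',
        '#part-handler': 'text/part-handler',
        '#cloud-boothook': 'text/cloud-boothook',
    }
    for prefix, mimetype in starts_with_mappings.items():
        if content.startswith(prefix):
            return mimetype
    return deftype

print(guess_mime_type('#!/bin/sh\necho hi', 'text/plain'))  # text/x-shellscript
print(guess_mime_type('just some text', 'text/plain'))      # text/plain
```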
diff --git a/boto/vpc/__init__.py b/boto/vpc/__init__.py
index 76eea82..ae55a26 100644
--- a/boto/vpc/__init__.py
+++ b/boto/vpc/__init__.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -24,8 +24,11 @@
 """
 
 from boto.ec2.connection import EC2Connection
+from boto.resultset import ResultSet
 from boto.vpc.vpc import VPC
 from boto.vpc.customergateway import CustomerGateway
+from boto.vpc.routetable import RouteTable
+from boto.vpc.internetgateway import InternetGateway
 from boto.vpc.vpngateway import VpnGateway, Attachment
 from boto.vpc.dhcpoptions import DhcpOptions
 from boto.vpc.subnet import Subnet
@@ -34,22 +37,22 @@
 class VPCConnection(EC2Connection):
 
     # VPC methods
-        
+
     def get_all_vpcs(self, vpc_ids=None, filters=None):
         """
         Retrieve information about your VPCs.  You can filter results to
         return information only about those VPCs that match your search
         parameters.  Otherwise, all VPCs associated with your account
         are returned.
-        
+
         :type vpc_ids: list
         :param vpc_ids: A list of strings with the desired VPC ID's
-        
+
         :type filters: list of tuples
         :param filters: A list of tuples containing filters.  Each tuple
                         consists of a filter key and a filter value.
                         Possible filter keys are:
-                        
+
                         - *state*, the state of the VPC (pending or available)
                         - *cidrBlock*, CIDR block of the VPC
                         - *dhcpOptionsId*, the ID of a set of DHCP options
@@ -63,8 +66,8 @@
         if filters:
             i = 1
             for filter in filters:
-                params[('Filter.%d.Key' % i)] = filter[0]
-                params[('Filter.%d.Value.1')] = filter[1]
+                params[('Filter.%d.Name' % i)] = filter[0]
+                params[('Filter.%d.Value.1' % i)] = filter[1]
                 i += 1
         return self.get_list('DescribeVpcs', params, [('item', VPC)])
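Reviewer note: the hunk above fixes two bugs in the EC2 filter serialization: the key must be `Filter.N.Name` (not `.Key`), and the value key was missing its `% i` interpolation, so every filter's value clobbered the literal key `'Filter.%d.Value.1'`. A standalone sketch of the corrected parameter building:

```python
def build_filter_params(filters):
    """Serialize (name, value) filter tuples into the
    Filter.N.Name / Filter.N.Value.1 query parameters the fixed
    code above now produces, with N interpolated in both keys."""
    params = {}
    for i, (name, value) in enumerate(filters, start=1):
        params['Filter.%d.Name' % i] = name
        params['Filter.%d.Value.1' % i] = value
    return params

print(build_filter_params([('state', 'available'),
                           ('cidrBlock', '10.0.0.0/16')]))
```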
 
@@ -80,7 +83,7 @@
         """
         params = {'CidrBlock' : cidr_block}
         return self.get_object('CreateVpc', params, VPC)
-        
+
     def delete_vpc(self, vpc_id):
         """
         Delete a Virtual Private Cloud.
@@ -94,6 +97,235 @@
         params = {'VpcId': vpc_id}
         return self.get_status('DeleteVpc', params)
 
+    # Route Tables
+
+    def get_all_route_tables(self, route_table_ids=None, filters=None):
+        """
+        Retrieve information about your routing tables. You can filter results
+        to return information only about those route tables that match your
+        search parameters. Otherwise, all route tables associated with your
+        account are returned.
+
+        :type route_table_ids: list
+        :param route_table_ids: A list of strings with the desired route table
+                                IDs.
+
+        :type filters: list of tuples
+        :param filters: A list of tuples containing filters. Each tuple
+                        consists of a filter key and a filter value.
+
+        :rtype: list
+        :return: A list of :class:`boto.vpc.routetable.RouteTable`
+        """
+        params = {}
+        if route_table_ids:
+            self.build_list_params(params, route_table_ids, "RouteTableId")
+        if filters:
+            self.build_filter_params(params, dict(filters))
+        return self.get_list('DescribeRouteTables', params, [('item', RouteTable)])
+
+    def associate_route_table(self, route_table_id, subnet_id):
+        """
+        Associates a route table with a specific subnet.
+
+        :type route_table_id: str
+        :param route_table_id: The ID of the route table to associate.
+
+        :type subnet_id: str
+        :param subnet_id: The ID of the subnet to associate with.
+
+        :rtype: str
+        :return: The ID of the association created
+        """
+        params = {
+            'RouteTableId': route_table_id,
+            'SubnetId': subnet_id
+        }
+
+        result = self.get_object('AssociateRouteTable', params, ResultSet)
+        return result.associationId
+
+    def disassociate_route_table(self, association_id):
+        """
+        Removes an association from a route table. This will cause all subnets
+        that would have used this association to fall back to the main route
+        table instead.
+
+        :type association_id: str
+        :param association_id: The ID of the association to disassociate.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = { 'AssociationId': association_id }
+        return self.get_status('DisassociateRouteTable', params)
+
+    def create_route_table(self, vpc_id):
+        """
+        Creates a new route table.
+
+        :type vpc_id: str
+        :param vpc_id: The VPC ID to associate this route table with.
+
+        :rtype: The newly created route table
+        :return: A :class:`boto.vpc.routetable.RouteTable` object
+        """
+        params = { 'VpcId': vpc_id }
+        return self.get_object('CreateRouteTable', params, RouteTable)
+
+    def delete_route_table(self, route_table_id):
+        """
+        Delete a route table.
+
+        :type route_table_id: str
+        :param route_table_id: The ID of the route table to delete.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = { 'RouteTableId': route_table_id }
+        return self.get_status('DeleteRouteTable', params)
+
+    def create_route(self, route_table_id, destination_cidr_block, gateway_id=None, instance_id=None):
+        """
+        Creates a new route in the route table within a VPC. The route's target
+        can be either a gateway attached to the VPC or a NAT instance in the
+        VPC.
+
+        :type route_table_id: str
+        :param route_table_id: The ID of the route table for the route.
+
+        :type destination_cidr_block: str
+        :param destination_cidr_block: The CIDR address block used for the
+                                       destination match.
+
+        :type gateway_id: str
+        :param gateway_id: The ID of the gateway attached to your VPC.
+
+        :type instance_id: str
+        :param instance_id: The ID of a NAT instance in your VPC.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = {
+            'RouteTableId': route_table_id,
+            'DestinationCidrBlock': destination_cidr_block
+        }
+
+        if gateway_id is not None:
+            params['GatewayId'] = gateway_id
+        elif instance_id is not None:
+            params['InstanceId'] = instance_id
+
+        return self.get_status('CreateRoute', params)
+
+    def delete_route(self, route_table_id, destination_cidr_block):
+        """
+        Deletes a route from a route table within a VPC.
+
+        :type route_table_id: str
+        :param route_table_id: The ID of the route table with the route.
+
+        :type destination_cidr_block: str
+        :param destination_cidr_block: The CIDR address block used for
+                                       destination match.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = {
+            'RouteTableId': route_table_id,
+            'DestinationCidrBlock': destination_cidr_block
+        }
+
+        return self.get_status('DeleteRoute', params)
+
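The gateway-vs-instance branch in ``create_route`` decides which Query API parameter is sent; a standalone mirror of that logic (a hypothetical helper with placeholder IDs, shown only to make the behaviour checkable) looks like this:

```python
def create_route_params(route_table_id, destination_cidr_block,
                        gateway_id=None, instance_id=None):
    # Same branch as create_route above: a route targets either a gateway
    # attached to the VPC or a NAT instance, the gateway taking priority.
    params = {
        'RouteTableId': route_table_id,
        'DestinationCidrBlock': destination_cidr_block,
    }
    if gateway_id is not None:
        params['GatewayId'] = gateway_id
    elif instance_id is not None:
        params['InstanceId'] = instance_id
    return params

# A default route to the internet via an internet gateway (placeholder IDs):
default_route = create_route_params('rtb-11111111', '0.0.0.0/0',
                                    gateway_id='igw-22222222')
```

Note that when both targets are supplied, the ``elif`` means the gateway silently wins and the instance ID is never sent.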
+    # Internet Gateways
+
+    def get_all_internet_gateways(self, internet_gateway_ids=None, filters=None):
+        """
+        Get a list of internet gateways. You can filter results to return information
+        about only those gateways that you're interested in.
+
+        :type internet_gateway_ids: list
+        :param internet_gateway_ids: A list of strings with the desired gateway IDs.
+
+        :type filters: list of tuples
+        :param filters: A list of tuples containing filters.  Each tuple
+                        consists of a filter key and a filter value.
+        """
+        params = {}
+
+        if internet_gateway_ids:
+            self.build_list_params(params, internet_gateway_ids, 'InternetGatewayId')
+        if filters:
+            self.build_filter_params(params, dict(filters))
+
+        return self.get_list('DescribeInternetGateways', params, [('item', InternetGateway)])
+
+    def create_internet_gateway(self):
+        """
+        Creates an internet gateway for your VPC.
+
+        :rtype: The newly created internet gateway
+        :return: A :class:`boto.vpc.internetgateway.InternetGateway` object
+        """
+        return self.get_object('CreateInternetGateway', {}, InternetGateway)
+
+    def delete_internet_gateway(self, internet_gateway_id):
+        """
+        Deletes an internet gateway from the VPC.
+
+        :type internet_gateway_id: str
+        :param internet_gateway_id: The ID of the internet gateway to delete.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = { 'InternetGatewayId': internet_gateway_id }
+        return self.get_status('DeleteInternetGateway', params)
+
+    def attach_internet_gateway(self, internet_gateway_id, vpc_id):
+        """
+        Attach an internet gateway to a specific VPC.
+
+        :type internet_gateway_id: str
+        :param internet_gateway_id: The ID of the internet gateway to attach.
+
+        :type vpc_id: str
+        :param vpc_id: The ID of the VPC to attach to.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = {
+            'InternetGatewayId': internet_gateway_id,
+            'VpcId': vpc_id
+        }
+
+        return self.get_status('AttachInternetGateway', params)
+
+    def detach_internet_gateway(self, internet_gateway_id, vpc_id):
+        """
+        Detach an internet gateway from a specific VPC.
+
+        :type internet_gateway_id: str
+        :param internet_gateway_id: The ID of the internet gateway to detach.
+
+        :type vpc_id: str
+        :param vpc_id: The ID of the VPC to detach from.
+
+        :rtype: bool
+        :return: True if successful
+        """
+        params = {
+            'InternetGatewayId': internet_gateway_id,
+            'VpcId': vpc_id
+        }
+
+        return self.get_status('DetachInternetGateway', params)
+
     # Customer Gateways
 
     def get_all_customer_gateways(self, customer_gateway_ids=None, filters=None):
@@ -102,15 +334,15 @@
         return information only about those CustomerGateways that match your search
         parameters.  Otherwise, all CustomerGateways associated with your account
         are returned.
-        
+
         :type customer_gateway_ids: list
         :param customer_gateway_ids: A list of strings with the desired CustomerGateway IDs
-        
+
         :type filters: list of tuples
         :param filters: A list of tuples containing filters.  Each tuple
                         consists of a filter key and a filter value.
                         Possible filter keys are:
-                        
+
                          - *state*, the state of the CustomerGateway
                            (pending,available,deleting,deleted)
                          - *type*, the type of customer gateway (ipsec.1)
@@ -126,7 +358,7 @@
         if filters:
             i = 1
             for filter in filters:
-                params[('Filter.%d.Key' % i)] = filter[0]
-                params[('Filter.%d.Value.1')] = filter[1]
+                params[('Filter.%d.Name' % i)] = filter[0]
+                params[('Filter.%d.Value.1' % i)] = filter[1]
                 i += 1
         return self.get_list('DescribeCustomerGateways', params, [('item', CustomerGateway)])
@@ -153,7 +385,7 @@
                   'IpAddress' : ip_address,
                   'BgpAsn' : bgp_asn}
         return self.get_object('CreateCustomerGateway', params, CustomerGateway)
-        
+
     def delete_customer_gateway(self, customer_gateway_id):
         """
         Delete a Customer Gateway.
@@ -178,12 +410,12 @@
 
         :type vpn_gateway_ids: list
         :param vpn_gateway_ids: A list of strings with the desired VpnGateway IDs
-        
+
         :type filters: list of tuples
         :param filters: A list of tuples containing filters.  Each tuple
                         consists of a filter key and a filter value.
                         Possible filter keys are:
-                        
+
                         - *state*, the state of the VpnGateway
                           (pending,available,deleting,deleted)
                         - *type*, the type of VPN gateway (ipsec.1)
@@ -199,7 +431,7 @@
         if filters:
             i = 1
             for filter in filters:
-                params[('Filter.%d.Key' % i)] = filter[0]
-                params[('Filter.%d.Value.1')] = filter[1]
+                params[('Filter.%d.Name' % i)] = filter[0]
+                params[('Filter.%d.Value.1' % i)] = filter[1]
                 i += 1
         return self.get_list('DescribeVpnGateways', params, [('item', VpnGateway)])
@@ -221,7 +453,7 @@
         if availability_zone:
             params['AvailabilityZone'] = availability_zone
         return self.get_object('CreateVpnGateway', params, VpnGateway)
-        
+
     def delete_vpn_gateway(self, vpn_gateway_id):
         """
         Delete a Vpn Gateway.
@@ -260,15 +492,15 @@
         return information only about those Subnets that match your search
         parameters.  Otherwise, all Subnets associated with your account
         are returned.
-        
+
         :type subnet_ids: list
         :param subnet_ids: A list of strings with the desired Subnet IDs
-        
+
         :type filters: list of tuples
         :param filters: A list of tuples containing filters.  Each tuple
                         consists of a filter key and a filter value.
                         Possible filter keys are:
-                        
+
                         - *state*, the state of the Subnet
                           (pending,available)
                         - *vpcId*, the ID of the VPC the subnet is in.
@@ -286,7 +518,7 @@
         if filters:
             i = 1
             for filter in filters:
-                params[('Filter.%d.Key' % i)] = filter[0]
+                params[('Filter.%d.Name' % i)] = filter[0]
                 params[('Filter.%d.Value.1' % i)] = filter[1]
                 i += 1
         return self.get_list('DescribeSubnets', params, [('item', Subnet)])
@@ -312,7 +544,7 @@
         if availability_zone:
             params['AvailabilityZone'] = availability_zone
         return self.get_object('CreateSubnet', params, Subnet)
-        
+
     def delete_subnet(self, subnet_id):
         """
         Delete a subnet.
@@ -326,16 +558,16 @@
         params = {'SubnetId': subnet_id}
         return self.get_status('DeleteSubnet', params)
 
-    
+
     # DHCP Options
 
     def get_all_dhcp_options(self, dhcp_options_ids=None):
         """
         Retrieve information about your DhcpOptions.
-        
+
         :type dhcp_options_ids: list
         :param dhcp_options_ids: A list of strings with the desired DhcpOptions IDs
-        
+
         :rtype: list
         :return: A list of :class:`boto.vpc.dhcpoptions.DhcpOptions`
         """
@@ -365,7 +597,7 @@
         if availability_zone:
             params['AvailabilityZone'] = availability_zone
         return self.get_object('CreateDhcpOption', params, DhcpOptions)
-        
+
     def delete_dhcp_options(self, dhcp_options_id):
         """
         Delete a set of DHCP Options
@@ -382,13 +614,13 @@
     def associate_dhcp_options(self, dhcp_options_id, vpc_id):
         """
         Associate a set of Dhcp Options with a VPC.
-        
+
         :type dhcp_options_id: str
         :param dhcp_options_id: The ID of the Dhcp Options
-        
+
         :type vpc_id: str
         :param vpc_id: The ID of the VPC.
-        
+
         :rtype: bool
         :return: True if successful
         """
@@ -404,15 +636,15 @@
         return information only about those VPN_CONNECTIONs that match your search
         parameters.  Otherwise, all VPN_CONNECTIONs associated with your account
         are returned.
-        
+
         :type vpn_connection_ids: list
         :param vpn_connection_ids: A list of strings with the desired VPN_CONNECTION IDs
-        
+
         :type filters: list of tuples
         :param filters: A list of tuples containing filters.  Each tuple
                         consists of a filter key and a filter value.
                         Possible filter keys are:
-                        
+
                         - *state*, the state of the VPN_CONNECTION
                           (pending,available,deleting,deleted)
                         - *type*, the type of connection, currently 'ipsec.1'
@@ -430,7 +662,7 @@
         if filters:
             i = 1
             for filter in filters:
-                params[('Filter.%d.Key' % i)] = filter[0]
-                params[('Filter.%d.Value.1')] = filter[1]
+                params[('Filter.%d.Name' % i)] = filter[0]
+                params[('Filter.%d.Value.1' % i)] = filter[1]
                 i += 1
         return self.get_list('DescribeVpnConnections', params, [('item', VpnConnection)])
@@ -456,7 +688,7 @@
                   'CustomerGatewayId' : customer_gateway_id,
                   'VpnGatewayId' : vpn_gateway_id}
         return self.get_object('CreateVpnConnection', params, VpnConnection)
-        
+
     def delete_vpn_connection(self, vpn_connection_id):
         """
         Delete a VPN Connection.
@@ -470,4 +702,4 @@
         params = {'VpnConnectionId': vpn_connection_id}
         return self.get_status('DeleteVpnConnection', params)
 
-    
+
diff --git a/boto/vpc/internetgateway.py b/boto/vpc/internetgateway.py
new file mode 100644
index 0000000..011fdee
--- /dev/null
+++ b/boto/vpc/internetgateway.py
@@ -0,0 +1,72 @@
+# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Represents an Internet Gateway
+"""
+
+from boto.ec2.ec2object import TaggedEC2Object
+from boto.resultset import ResultSet
+
+class InternetGateway(TaggedEC2Object):
+    def __init__(self, connection=None):
+        TaggedEC2Object.__init__(self, connection)
+        self.id = None
+        self.attachments = []
+
+    def __repr__(self):
+        return 'InternetGateway:%s' % self.id
+
+    def startElement(self, name, attrs, connection):
+        result = super(InternetGateway, self).startElement(name, attrs, connection)
+
+        if result is not None:
+            # Parent found an interested element, just return it
+            return result
+
+        if name == 'attachmentSet':
+            self.attachments = ResultSet([('item', InternetGatewayAttachment)])
+            return self.attachments
+        else:
+            return None
+
+    def endElement(self, name, value, connection):
+        if name == 'internetGatewayId':
+            self.id = value
+        else:
+            setattr(self, name, value)
+
+class InternetGatewayAttachment(object):
+    def __init__(self, connection=None):
+        self.vpc_id = None
+        self.state = None
+
+    def __repr__(self):
+        return 'InternetGatewayAttachment:%s' % self.vpc_id
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'vpcId':
+            self.vpc_id = value
+        elif name == 'state':
+            self.state = value
diff --git a/boto/vpc/routetable.py b/boto/vpc/routetable.py
new file mode 100644
index 0000000..b3f0055
--- /dev/null
+++ b/boto/vpc/routetable.py
@@ -0,0 +1,109 @@
+# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Represents a Route Table
+"""
+
+from boto.ec2.ec2object import TaggedEC2Object
+from boto.resultset import ResultSet
+
+class RouteTable(TaggedEC2Object):
+
+    def __init__(self, connection=None):
+        TaggedEC2Object.__init__(self, connection)
+        self.id = None
+        self.vpc_id = None
+        self.routes = []
+        self.associations = []
+
+    def __repr__(self):
+        return 'RouteTable:%s' % self.id
+
+    def startElement(self, name, attrs, connection):
+        result = super(RouteTable, self).startElement(name, attrs, connection)
+
+        if result is not None:
+            # Parent found an interested element, just return it
+            return result
+
+        if name == 'routeSet':
+            self.routes = ResultSet([('item', Route)])
+            return self.routes
+        elif name == 'associationSet':
+            self.associations = ResultSet([('item', RouteAssociation)])
+            return self.associations
+        else:
+            return None
+
+    def endElement(self, name, value, connection):
+        if name == 'routeTableId':
+            self.id = value
+        elif name == 'vpcId':
+            self.vpc_id = value
+        else:
+            setattr(self, name, value)
+
+class Route(object):
+    def __init__(self, connection=None):
+        self.destination_cidr_block = None
+        self.gateway_id = None
+        self.instance_id = None
+        self.state = None
+
+    def __repr__(self):
+        return 'Route:%s' % self.destination_cidr_block
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'destinationCidrBlock':
+            self.destination_cidr_block = value
+        elif name == 'gatewayId':
+            self.gateway_id = value
+        elif name == 'instanceId':
+            self.instance_id = value
+        elif name == 'state':
+            self.state = value
+
+class RouteAssociation(object):
+    def __init__(self, connection=None):
+        self.id = None
+        self.route_table_id = None
+        self.subnet_id = None
+        self.main = False
+
+    def __repr__(self):
+        return 'RouteAssociation:%s' % self.id
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'routeTableAssociationId':
+            self.id = value
+        elif name == 'routeTableId':
+            self.route_table_id = value
+        elif name == 'subnetId':
+            self.subnet_id = value
+        elif name == 'main':
+            self.main = value == 'true'
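The element names consumed by the handlers above (``routeTableId``, ``vpcId``, ``routeSet``, and the per-route children) can be exercised against a canned response fragment. This sketch uses ``xml.etree.ElementTree`` rather than boto's SAX plumbing, and the sample XML is hand-written for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written DescribeRouteTables-style fragment.
SAMPLE = """\
<routeTable>
  <routeTableId>rtb-13ad487a</routeTableId>
  <vpcId>vpc-11ad4878</vpcId>
  <routeSet>
    <item>
      <destinationCidrBlock>10.0.0.0/22</destinationCidrBlock>
      <gatewayId>local</gatewayId>
      <state>active</state>
    </item>
  </routeSet>
</routeTable>
"""

root = ET.fromstring(SAMPLE)
table_id = root.findtext('routeTableId')
# Each <item> under <routeSet> corresponds to one Route object.
routes = [(item.findtext('destinationCidrBlock'), item.findtext('state'))
          for item in root.find('routeSet')]
```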
diff --git a/boto/vpc/subnet.py b/boto/vpc/subnet.py
index 135e1a2..f87d72c 100644
--- a/boto/vpc/subnet.py
+++ b/boto/vpc/subnet.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -30,6 +30,7 @@
     def __init__(self, connection=None):
         TaggedEC2Object.__init__(self, connection)
         self.id = None
+        self.vpc_id = None
         self.state = None
         self.cidr_block = None
         self.available_ip_address_count = 0
@@ -37,10 +38,12 @@
 
     def __repr__(self):
         return 'Subnet:%s' % self.id
-    
+
     def endElement(self, name, value, connection):
         if name == 'subnetId':
             self.id = value
+        elif name == 'vpcId':
+            self.vpc_id = value
         elif name == 'state':
             self.state = value
         elif name == 'cidrBlock':
diff --git a/docs/source/autoscale_tut.rst b/docs/source/autoscale_tut.rst
index 9f9d399..a99bc3e 100644
--- a/docs/source/autoscale_tut.rst
+++ b/docs/source/autoscale_tut.rst
@@ -42,12 +42,12 @@
 default the US endpoint is used. To choose a specific region, instantiate the
 AutoScaleConnection object with that region's endpoint.
 
->>> ec2 = boto.connect_autoscale(host='eu-west-1.autoscaling.amazonaws.com')
+>>> ec2 = boto.connect_autoscale(host='autoscaling.eu-west-1.amazonaws.com')
 
 Alternatively, edit your boto.cfg with the default Autoscale endpoint to use::
 
     [Boto]
-    autoscale_endpoint = eu-west-1.autoscaling.amazonaws.com
+    autoscale_endpoint = autoscaling.eu-west-1.amazonaws.com
 
 Getting Existing AutoScale Groups
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/source/elb_tut.rst b/docs/source/elb_tut.rst
index b873578..7440b08 100644
--- a/docs/source/elb_tut.rst
+++ b/docs/source/elb_tut.rst
@@ -54,14 +54,35 @@
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 Like EC2 the ELB service has a different endpoint for each region. By default
 the US endpoint is used. To choose a specific region, instantiate the
-ELBConnection object with that region's endpoint.
+ELBConnection object with that region's information.
 
->>> ec2 = boto.connect_elb(host='eu-west-1.elasticloadbalancing.amazonaws.com')
+>>> from boto.regioninfo import RegionInfo
+>>> reg = RegionInfo(name='eu-west-1', endpoint='elasticloadbalancing.eu-west-1.amazonaws.com')
+>>> elb = boto.connect_elb(region=reg)
+
+Another way to connect to an alternative region is like this:
+
+>>> import boto.ec2.elb
+>>> elb = boto.ec2.elb.connect_to_region('eu-west-1')
+
+Here's yet another way to discover what regions are available and then
+connect to one:
+
+>>> import boto.ec2.elb
+>>> regions = boto.ec2.elb.regions()
+>>> regions
+[RegionInfo:us-east-1,
+ RegionInfo:ap-northeast-1,
+ RegionInfo:us-west-1,
+ RegionInfo:ap-southeast-1,
+ RegionInfo:eu-west-1]
+>>> elb = regions[-1].connect()
 
 Alternatively, edit your boto.cfg with the default ELB endpoint to use::
 
     [Boto]
-    elb_endpoint = eu-west-1.elasticloadbalancing.amazonaws.com
+    elb_region_name = eu-west-1
+    elb_region_endpoint = elasticloadbalancing.eu-west-1.amazonaws.com
 
 Getting Existing Load Balancers
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 2ecd1d6..bb76ff4 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -9,24 +9,56 @@
 
 Currently, this includes:
 
-- Simple Storage Service (S3)
-- Simple Queue Service (SQS)
-- Elastic Compute Cloud (EC2)
+* Compute
 
- - Elastic Load Balancer (ELB)
- - CloudWatch
- - AutoScale
- 
-- Mechanical Turk
-- SimpleDB (SDB) - See SimpleDbPage for details
-- CloudFront
-- Virtual Private Cloud (VPC)
-- Relational Data Services (RDS)
-- Elastic Map Reduce (EMR)
-- Flexible Payment Service (FPS)
-- Identity and Access Management (IAM)
+  * Elastic Compute Cloud (EC2)
+  * Elastic MapReduce (EMR)
+  * Auto Scaling
 
-The boto project page is at http://boto.googlecode.com/
+* Content Delivery
+
+  * CloudFront
+
+* Database
+
+  * SimpleDB
+  * Relational Data Services (RDS)
+
+* Deployment and Management
+
+  * CloudFormation
+
+* Identity & Access
+
+  * Identity and Access Management (IAM)
+
+* Messaging
+
+  * Simple Queue Service (SQS)
+  * Simple Notification Service (SNS)
+  * Simple Email Service (SES)
+
+* Monitoring
+
+  * CloudWatch
+
+* Networking
+
+  * Route 53
+  * Virtual Private Cloud (VPC)
+  * Elastic Load Balancing (ELB)
+
+* Payments & Billing
+
+  * Flexible Payments Service (FPS)
+
+* Storage
+
+  * Simple Storage Service (S3)
+
+* Workforce
+
+  * Mechanical Turk
 
 The boto source repository is at http://github.com/boto
 
diff --git a/docs/source/ref/cloudformation.rst b/docs/source/ref/cloudformation.rst
new file mode 100644
index 0000000..447f487
--- /dev/null
+++ b/docs/source/ref/cloudformation.rst
@@ -0,0 +1,27 @@
+.. ref-cloudformation
+
+==============
+cloudformation
+==============
+
+boto.cloudformation
+-------------------
+
+.. automodule:: boto.cloudformation
+   :members:   
+   :undoc-members:
+
+boto.cloudformation.stack
+----------------------------
+
+.. automodule:: boto.cloudformation.stack
+   :members:
+   :undoc-members:
+
+boto.cloudformation.template
+----------------------------
+
+.. automodule:: boto.cloudformation.template
+   :members:
+   :undoc-members:
+
diff --git a/docs/source/ref/cloudfront.rst b/docs/source/ref/cloudfront.rst
index 5cb80be..51f9455 100644
--- a/docs/source/ref/cloudfront.rst
+++ b/docs/source/ref/cloudfront.rst
@@ -9,11 +9,11 @@
 
 This new boto module provides an interface to Amazon's new Content Service, CloudFront.
 
-Caveats:
+.. warning::
 
-This module is not well tested.  Paging of distributions is not yet
-supported.  CNAME support is completely untested.  Use with caution.
-Feedback and bug reports are greatly appreciated.
+    This module is not well tested.  Paging of distributions is not yet
+    supported.  CNAME support is completely untested.  Use with caution.
+    Feedback and bug reports are greatly appreciated.
 
 The following shows the main features of the cloudfront module from an interactive shell:
 
@@ -34,7 +34,7 @@
 >>> d.config.comment
 u'My new distribution'
 >>> d.config.origin
-u'mybucket.s3.amazonaws.com'
+<S3Origin: mybucket.s3.amazonaws.com>
 >>> d.config.caller_reference
 u'31b8d9cf-a623-4a28-b062-a91856fac6d0'
 >>> d.config.enabled
@@ -97,7 +97,14 @@
 ----------------------------
 
 .. automodule:: boto.cloudfront.distribution
-   :members:   
+   :members:
+   :undoc-members:
+
+boto.cloudfront.origin
+----------------------------
+
+.. automodule:: boto.cloudfront.origin
+   :members:
    :undoc-members:
 
 boto.cloudfront.exception
diff --git a/docs/source/ref/ec2.rst b/docs/source/ref/ec2.rst
index e6215d7..edc3bc2 100644
--- a/docs/source/ref/ec2.rst
+++ b/docs/source/ref/ec2.rst
@@ -54,6 +54,13 @@
    :members:   
    :undoc-members:
 
+boto.ec2.autoscale.policy
+--------------------------
+
+.. automodule:: boto.ec2.autoscale.policy
+   :members:   
+   :undoc-members:
+
 boto.ec2.autoscale.request
 --------------------------
 
@@ -61,10 +68,10 @@
    :members:   
    :undoc-members:
 
-boto.ec2.autoscale.trigger
---------------------------
+boto.ec2.autoscale.scheduled
+----------------------------
 
-.. automodule:: boto.ec2.autoscale.trigger
+.. automodule:: boto.ec2.autoscale.scheduled
    :members:   
    :undoc-members:
 
diff --git a/docs/source/ref/ecs.rst b/docs/source/ref/ecs.rst
new file mode 100644
index 0000000..97613b4
--- /dev/null
+++ b/docs/source/ref/ecs.rst
@@ -0,0 +1,19 @@
+.. ref-ecs
+
+===
+ECS
+===
+
+boto.ecs
+--------
+
+.. automodule:: boto.ecs
+   :members:   
+   :undoc-members:
+
+boto.ecs.item
+----------------------------
+
+.. automodule:: boto.ecs.item
+   :members:   
+   :undoc-members:
diff --git a/docs/source/ref/iam.rst b/docs/source/ref/iam.rst
index ace6170..73e825e 100644
--- a/docs/source/ref/iam.rst
+++ b/docs/source/ref/iam.rst
@@ -11,6 +11,13 @@
    :members:   
    :undoc-members:
 
+boto.iam.connection
+-------------------
+
+.. automodule:: boto.iam.connection
+   :members:   
+   :undoc-members:
+
 boto.iam.response
 -----------------
 
diff --git a/docs/source/ref/index.rst b/docs/source/ref/index.rst
index 08b8ef2..e6bee79 100644
--- a/docs/source/ref/index.rst
+++ b/docs/source/ref/index.rst
@@ -8,23 +8,28 @@
    :maxdepth: 4
 
    boto
+   cloudformation
    cloudfront
    contrib
    ec2
+   ecs
+   emr
+   file
    fps
+   gs
+   iam
    manage
    mashups
    mturk
    pyami
    rds
+   route53 
    s3
-   gs
-   file
    sdb
    services
+   ses
    sns
    sqs
+   sts
    vpc
-   emr
-   iam
-   route53
+ 
diff --git a/docs/source/ref/route53.rst b/docs/source/ref/route53.rst
index 3100801..e267d9b 100644
--- a/docs/source/ref/route53.rst
+++ b/docs/source/ref/route53.rst
@@ -5,22 +5,22 @@
 =======
 
 
-boto.route53
-------------
+boto.route53.connection
+-----------------------
 
-.. automodule:: boto.route53
+.. automodule:: boto.route53.connection
    :members:   
    :undoc-members:
 
 boto.route53.hostedzone
-----------------------------
+------------------------
 
 .. automodule:: boto.route53.hostedzone
    :members:   
    :undoc-members:
 
 boto.route53.exception
--------------------------
+----------------------
 
 .. automodule:: boto.route53.exception
    :members:   
diff --git a/docs/source/ref/ses.rst b/docs/source/ref/ses.rst
new file mode 100644
index 0000000..d59126a
--- /dev/null
+++ b/docs/source/ref/ses.rst
@@ -0,0 +1,21 @@
+.. ref-ses
+
+===
+SES
+===
+
+
+boto.ses
+------------
+
+.. automodule:: boto.ses
+   :members:   
+   :undoc-members:
+
+boto.ses.connection
+---------------------
+
+.. automodule:: boto.ses.connection
+   :members:   
+   :undoc-members:
+
diff --git a/docs/source/ref/sts.rst b/docs/source/ref/sts.rst
new file mode 100644
index 0000000..e3ce581
--- /dev/null
+++ b/docs/source/ref/sts.rst
@@ -0,0 +1,25 @@
+.. ref-sts
+
+===
+STS
+===
+
+boto.sts
+--------
+
+.. automodule:: boto.sts
+   :members:   
+   :undoc-members:
+
+.. autoclass:: boto.sts.STSConnection
+   :members:
+   :undoc-members:
+
+boto.sts.credentials
+--------------------
+
+.. automodule:: boto.sts.credentials
+   :members:   
+   :undoc-members:
+   
+
diff --git a/setup.py b/setup.py
index 36af722..c30f12c 100644
--- a/setup.py
+++ b/setup.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
 
 # Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
@@ -25,46 +25,48 @@
 
 try:
     from setuptools import setup
+    extra = dict(test_suite="tests.test.suite", include_package_data=True)
 except ImportError:
     from distutils.core import setup
+    extra = {}
 
 import sys
 
-from boto import Version
+from boto import __version__
 
-install_requires = []
-maj, min, micro, rel, serial = sys.version_info
-if (maj, min) == (2, 4):
-    # boto needs hashlib module which is not in py2.4
-    install_requires.append("hashlib")
+if sys.version_info < (2, 5):
+    error = "ERROR: boto requires Python Version 2.5 or above...exiting."
+    print >> sys.stderr, error
+    sys.exit(1)
 
 setup(name = "boto",
-      version = Version,
+      version = __version__,
       description = "Amazon Web Services Library",
-      long_description="Python interface to Amazon's Web Services.",
+      long_description = "Python interface to Amazon's Web Services.",
       author = "Mitch Garnaat",
       author_email = "mitch@garnaat.com",
       scripts = ["bin/sdbadmin", "bin/elbadmin", "bin/cfadmin",
                  "bin/s3put", "bin/fetch_file", "bin/launch_instance",
                  "bin/list_instances", "bin/taskadmin", "bin/kill_instance",
                  "bin/bundle_image", "bin/pyami_sendmail", "bin/lss3",
-                 "bin/cq", "bin/route53"],
-      install_requires=install_requires,
+                 "bin/cq", "bin/route53", "bin/s3multiput", "bin/cwutil"],
       url = "http://code.google.com/p/boto/",
-      packages = [ 'boto', 'boto.sqs', 'boto.s3', 'boto.gs', 'boto.file',
-                   'boto.ec2', 'boto.ec2.cloudwatch', 'boto.ec2.autoscale',
-                   'boto.ec2.elb', 'boto.sdb', 'boto.sdb.persist',
-                   'boto.sdb.db', 'boto.sdb.db.manager', 'boto.mturk',
-                   'boto.pyami', 'boto.mashups', 'boto.contrib', 'boto.manage',
-                   'boto.services', 'boto.tests', 'boto.cloudfront',
-                   'boto.rds', 'boto.vpc', 'boto.fps', 'boto.emr', 'boto.sns',
-                   'boto.ecs', 'boto.iam', 'boto.route53', 'boto.ses'],
-      license = 'MIT',
-      platforms = 'Posix; MacOS X; Windows',
-      classifiers = [ 'Development Status :: 5 - Production/Stable',
-                      'Intended Audience :: Developers',
-                      'License :: OSI Approved :: MIT License',
-                      'Operating System :: OS Independent',
-                      'Topic :: Internet',
-                      ],
+      packages = ["boto", "boto.sqs", "boto.s3", "boto.gs", "boto.file",
+                  "boto.ec2", "boto.ec2.cloudwatch", "boto.ec2.autoscale",
+                  "boto.ec2.elb", "boto.sdb", "boto.cacerts",
+                  "boto.sdb.db", "boto.sdb.db.manager", "boto.mturk",
+                  "boto.pyami", "boto.mashups", "boto.contrib", "boto.manage",
+                  "boto.services", "boto.cloudfront", "boto.roboto",
+                  "boto.rds", "boto.vpc", "boto.fps", "boto.emr", "boto.sns",
+                  "boto.ecs", "boto.iam", "boto.route53", "boto.ses",
+                  "boto.cloudformation", "boto.sts"],
+      package_data = {"boto.cacerts": ["cacerts.txt"]},
+      license = "MIT",
+      platforms = "Posix; MacOS X; Windows",
+      classifiers = ["Development Status :: 5 - Production/Stable",
+                     "Intended Audience :: Developers",
+                     "License :: OSI Approved :: MIT License",
+                     "Operating System :: OS Independent",
+                     "Topic :: Internet"],
+      **extra
       )
diff --git a/boto/tests/__init__.py b/tests/__init__.py
similarity index 93%
copy from boto/tests/__init__.py
copy to tests/__init__.py
index 449bd16..b3fc3a0 100644
--- a/boto/tests/__init__.py
+++ b/tests/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -18,6 +18,3 @@
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-#
-
-
diff --git a/boto/tests/__init__.py b/tests/autoscale/__init__.py
similarity index 93%
copy from boto/tests/__init__.py
copy to tests/autoscale/__init__.py
index 449bd16..4a55c4b 100644
--- a/boto/tests/__init__.py
+++ b/tests/autoscale/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011 Reza Lotun http://reza.lotun.name
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -14,10 +14,8 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-#
-
 
diff --git a/tests/autoscale/test_connection.py b/tests/autoscale/test_connection.py
new file mode 100644
index 0000000..921fe43
--- /dev/null
+++ b/tests/autoscale/test_connection.py
@@ -0,0 +1,95 @@
+# Copyright (c) 2011 Reza Lotun http://reza.lotun.name
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Some unit tests for the AutoscaleConnection
+"""
+
+import unittest
+import time
+from boto.ec2.autoscale import AutoScaleConnection
+from boto.ec2.autoscale.activity import Activity
+from boto.ec2.autoscale.group import AutoScalingGroup, ProcessType
+from boto.ec2.autoscale.launchconfig import LaunchConfiguration
+from boto.ec2.autoscale.policy import AdjustmentType, MetricCollectionTypes, ScalingPolicy
+from boto.ec2.autoscale.scheduled import ScheduledUpdateGroupAction
+from boto.ec2.autoscale.instance import Instance
+
+class AutoscaleConnectionTest(unittest.TestCase):
+
+    def test_basic(self):
+        # NB: as it says on the tin these are really basic tests that only
+        # (lightly) exercise read-only behaviour - and that's only if you
+        # have any autoscale groups to introspect. It's useful, however, to
+        # catch simple errors
+
+        print '--- running %s tests ---' % self.__class__.__name__
+        c = AutoScaleConnection()
+
+        self.assertTrue(repr(c).startswith('AutoScaleConnection'))
+
+        groups = c.get_all_groups()
+        for group in groups:
+            self.assertEqual(type(group), AutoScalingGroup)
+
+            # get activities
+            activities = group.get_activities()
+
+            for activity in activities:
+                self.assertEqual(type(activity), Activity)
+
+        # get launch configs
+        configs = c.get_all_launch_configurations()
+        for config in configs:
+            self.assertEqual(type(config), LaunchConfiguration)
+
+        # get policies
+        policies = c.get_all_policies()
+        for policy in policies:
+            self.assertEqual(type(policy), ScalingPolicy)
+
+        # get scheduled actions
+        actions = c.get_all_scheduled_actions()
+        for action in actions:
+            self.assertEqual(type(action), ScheduledUpdateGroupAction)
+
+        # get instances
+        instances = c.get_all_autoscaling_instances()
+        for instance in instances:
+            self.assertEqual(type(instance), Instance)
+
+        # get all scaling process types
+        ptypes = c.get_all_scaling_process_types()
+        for ptype in ptypes:
+            self.assertEqual(type(ptype), ProcessType)
+
+        # get adjustment types
+        adjustments = c.get_all_adjustment_types()
+        for adjustment in adjustments:
+            self.assertEqual(type(adjustment), AdjustmentType)
+
+        # get metrics collection types
+        types = c.get_all_metric_collection_types()
+        self.assertEqual(type(types), MetricCollectionTypes)
+
+        print '--- tests completed ---'
+
diff --git a/tests/cloudfront/__init__.py b/tests/cloudfront/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/cloudfront/__init__.py
diff --git a/tests/cloudfront/test_signed_urls.py b/tests/cloudfront/test_signed_urls.py
new file mode 100644
index 0000000..118117c
--- /dev/null
+++ b/tests/cloudfront/test_signed_urls.py
@@ -0,0 +1,311 @@
+
+import unittest
+import json
+from textwrap import dedent
+from boto.cloudfront.distribution import Distribution
+
+class CloudfrontSignedUrlsTest(unittest.TestCase):
+    def setUp(self):
+        self.pk_str = dedent("""
+            -----BEGIN RSA PRIVATE KEY-----
+            MIICXQIBAAKBgQDA7ki9gI/lRygIoOjV1yymgx6FYFlzJ+z1ATMaLo57nL57AavW
+            hb68HYY8EA0GJU9xQdMVaHBogF3eiCWYXSUZCWM/+M5+ZcdQraRRScucmn6g4EvY
+            2K4W2pxbqH8vmUikPxir41EeBPLjMOzKvbzzQy9e/zzIQVREKSp/7y1mywIDAQAB
+            AoGABc7mp7XYHynuPZxChjWNJZIq+A73gm0ASDv6At7F8Vi9r0xUlQe/v0AQS3yc
+            N8QlyR4XMbzMLYk3yjxFDXo4ZKQtOGzLGteCU2srANiLv26/imXA8FVidZftTAtL
+            viWQZBVPTeYIA69ATUYPEq0a5u5wjGyUOij9OWyuy01mbPkCQQDluYoNpPOekQ0Z
+            WrPgJ5rxc8f6zG37ZVoDBiexqtVShIF5W3xYuWhW5kYb0hliYfkq15cS7t9m95h3
+            1QJf/xI/AkEA1v9l/WN1a1N3rOK4VGoCokx7kR2SyTMSbZgF9IWJNOugR/WZw7HT
+            njipO3c9dy1Ms9pUKwUF46d7049ck8HwdQJARgrSKuLWXMyBH+/l1Dx/I4tXuAJI
+            rlPyo+VmiOc7b5NzHptkSHEPfR9s1OK0VqjknclqCJ3Ig86OMEtEFBzjZQJBAKYz
+            470hcPkaGk7tKYAgP48FvxRsnzeooptURW5E+M+PQ2W9iDPPOX9739+Xi02hGEWF
+            B0IGbQoTRFdE4VVcPK0CQQCeS84lODlC0Y2BZv2JxW3Osv/WkUQ4dslfAQl1T303
+            7uwwr7XTroMv8dIFQIPreoPhRKmd/SbJzbiKfS/4QDhU
+            -----END RSA PRIVATE KEY-----
+            """)
+        self.pk_id = "PK123456789754"
+        self.dist = Distribution()
+        self.canned_policy = (
+            '{"Statement":[{"Resource":'
+            '"http://d604721fxaaqy9.cloudfront.net/horizon.jpg'
+            '?large=yes&license=yes",'
+            '"Condition":{"DateLessThan":{"AWS:EpochTime":1258237200}}}]}')
+        self.custom_policy_1 = (
+            '{ \n'
+            '   "Statement": [{ \n'
+            '      "Resource":"http://d604721fxaaqy9.cloudfront.net/training/*", \n'
+            '      "Condition":{ \n'
+            '         "IpAddress":{"AWS:SourceIp":"145.168.143.0/24"}, \n'
+            '         "DateLessThan":{"AWS:EpochTime":1258237200}      \n'
+            '      } \n'
+            '   }] \n'
+            '}\n')
+        self.custom_policy_2 = (
+            '{ \n'
+            '   "Statement": [{ \n'
+            '      "Resource":"http://*", \n'
+            '      "Condition":{ \n'
+            '         "IpAddress":{"AWS:SourceIp":"216.98.35.1/32"},\n'
+            '         "DateGreaterThan":{"AWS:EpochTime":1241073790},\n'
+            '         "DateLessThan":{"AWS:EpochTime":1255674716}\n'
+            '      } \n'
+            '   }] \n'
+            '}\n')
+
+    def test_encode_custom_policy_1(self):
+        """
+        Test base64 encoding custom policy 1 from Amazon's documentation.
+        """
+        expected = ("eyAKICAgIlN0YXRlbWVudCI6IFt7IAogICAgICAiUmVzb3VyY2Ui"
+                    "OiJodHRwOi8vZDYwNDcyMWZ4YWFxeTkuY2xvdWRmcm9udC5uZXQv"
+                    "dHJhaW5pbmcvKiIsIAogICAgICAiQ29uZGl0aW9uIjp7IAogICAg"
+                    "ICAgICAiSXBBZGRyZXNzIjp7IkFXUzpTb3VyY2VJcCI6IjE0NS4x"
+                    "NjguMTQzLjAvMjQifSwgCiAgICAgICAgICJEYXRlTGVzc1RoYW4i"
+                    "OnsiQVdTOkVwb2NoVGltZSI6MTI1ODIzNzIwMH0gICAgICAKICAg"
+                    "ICAgfSAKICAgfV0gCn0K")
+        encoded = self.dist._url_base64_encode(self.custom_policy_1)
+        self.assertEqual(expected, encoded)
+
+    def test_encode_custom_policy_2(self):
+        """
+        Test base64 encoding custom policy 2 from Amazon's documentation.
+        """
+        expected = ("eyAKICAgIlN0YXRlbWVudCI6IFt7IAogICAgICAiUmVzb3VyY2Ui"
+                    "OiJodHRwOi8vKiIsIAogICAgICAiQ29uZGl0aW9uIjp7IAogICAg"
+                    "ICAgICAiSXBBZGRyZXNzIjp7IkFXUzpTb3VyY2VJcCI6IjIxNi45"
+                    "OC4zNS4xLzMyIn0sCiAgICAgICAgICJEYXRlR3JlYXRlclRoYW4i"
+                    "OnsiQVdTOkVwb2NoVGltZSI6MTI0MTA3Mzc5MH0sCiAgICAgICAg"
+                    "ICJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTI1NTY3"
+                    "NDcxNn0KICAgICAgfSAKICAgfV0gCn0K")
+        encoded = self.dist._url_base64_encode(self.custom_policy_2)
+        self.assertEqual(expected, encoded)
+
+    def test_sign_canned_policy(self):
+        """
+        Test signing the canned policy from Amazon's CloudFront documentation.
+        """
+        expected = ("Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDN"
+                    "v0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6td"
+                    "Nx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5j"
+                    "t9w2EOwi6sIIqrg_")
+        sig = self.dist._sign_string(self.canned_policy, private_key_string=self.pk_str)
+        encoded_sig = self.dist._url_base64_encode(sig)
+        self.assertEqual(expected, encoded_sig)
+
+    def test_sign_canned_policy_unicode(self):
+        """
+        Test signing the canned policy from Amazon's CloudFront
+        documentation, passed as a unicode string.
+        """
+        expected = ("Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDN"
+                    "v0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6td"
+                    "Nx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5j"
+                    "t9w2EOwi6sIIqrg_")
+        unicode_policy = unicode(self.canned_policy)
+        sig = self.dist._sign_string(unicode_policy, private_key_string=self.pk_str)
+        encoded_sig = self.dist._url_base64_encode(sig)
+        self.assertEqual(expected, encoded_sig)
+
+    def test_sign_custom_policy_1(self):
+        """
+        Test signing custom policy 1 from Amazon's CloudFront documentation.
+        """
+        expected = ("cPFtRKvUfYNYmxek6ZNs6vgKEZP6G3Cb4cyVt~FjqbHOnMdxdT7e"
+                    "T6pYmhHYzuDsFH4Jpsctke2Ux6PCXcKxUcTIm8SO4b29~1QvhMl-"
+                    "CIojki3Hd3~Unxjw7Cpo1qRjtvrimW0DPZBZYHFZtiZXsaPt87yB"
+                    "P9GWnTQoaVysMxQ_")
+        sig = self.dist._sign_string(self.custom_policy_1, private_key_string=self.pk_str)
+        encoded_sig = self.dist._url_base64_encode(sig)
+        self.assertEqual(expected, encoded_sig)
+
+    def test_sign_custom_policy_2(self):
+        """
+        Test signing custom policy 2 from Amazon's CloudFront documentation.
+        """
+        expected = ("rc~5Qbbm8EJXjUTQ6Cn0LAxR72g1DOPrTmdtfbWVVgQNw0q~KHUA"
+                    "mBa2Zv1Wjj8dDET4XSL~Myh44CLQdu4dOH~N9huH7QfPSR~O4tIO"
+                    "S1WWcP~2JmtVPoQyLlEc8YHRCuN3nVNZJ0m4EZcXXNAS-0x6Zco2"
+                    "SYx~hywTRxWR~5Q_")
+        sig = self.dist._sign_string(self.custom_policy_2, private_key_string=self.pk_str)
+        encoded_sig = self.dist._url_base64_encode(sig)
+        self.assertEqual(expected, encoded_sig)
+
+    def test_create_canned_policy(self):
+        """
+        Test that a canned policy is generated correctly.
+        """
+        url = "http://1234567.cloudfront.com/test_resource.mp3?dog=true"
+        expires = 999999
+        policy = self.dist._canned_policy(url, expires)
+        policy = json.loads(policy)
+
+        self.assertEqual(1, len(policy.keys()))
+        statements = policy["Statement"]
+        self.assertEqual(1, len(statements))
+        statement = statements[0]
+        resource = statement["Resource"]
+        self.assertEqual(url, resource)
+        condition = statement["Condition"]
+        self.assertEqual(1, len(condition.keys()))
+        date_less_than = condition["DateLessThan"]
+        self.assertEqual(1, len(date_less_than.keys()))
+        aws_epoch_time = date_less_than["AWS:EpochTime"]
+        self.assertEqual(expires, aws_epoch_time)
+        
+    def test_custom_policy_expires_and_policy_url(self):
+        """
+        Test that a custom policy can be created with an expire time and an
+        arbitrary URL.
+        """
+        url = "http://1234567.cloudfront.com/*"
+        expires = 999999
+        policy = self.dist._custom_policy(url, expires=expires)
+        policy = json.loads(policy)
+
+        self.assertEqual(1, len(policy.keys()))
+        statements = policy["Statement"]
+        self.assertEqual(1, len(statements))
+        statement = statements[0]
+        resource = statement["Resource"]
+        self.assertEqual(url, resource)
+        condition = statement["Condition"]
+        self.assertEqual(1, len(condition.keys()))
+        date_less_than = condition["DateLessThan"]
+        self.assertEqual(1, len(date_less_than.keys()))
+        aws_epoch_time = date_less_than["AWS:EpochTime"]
+        self.assertEqual(expires, aws_epoch_time)
+
+    def test_custom_policy_valid_after(self):
+        """
+        Test that a custom policy can be created with a valid-after time and
+        an arbitrary URL.
+        """
+        url = "http://1234567.cloudfront.com/*"
+        valid_after = 999999
+        policy = self.dist._custom_policy(url, valid_after=valid_after)
+        policy = json.loads(policy)
+
+        self.assertEqual(1, len(policy.keys()))
+        statements = policy["Statement"]
+        self.assertEqual(1, len(statements))
+        statement = statements[0]
+        resource = statement["Resource"]
+        self.assertEqual(url, resource)
+        condition = statement["Condition"]
+        self.assertEqual(1, len(condition.keys()))
+        date_greater_than = condition["DateGreaterThan"]
+        self.assertEqual(1, len(date_greater_than.keys()))
+        aws_epoch_time = date_greater_than["AWS:EpochTime"]
+        self.assertEqual(valid_after, aws_epoch_time)
+
+    def test_custom_policy_ip_address(self):
+        """
+        Test that a custom policy can be created with an IP address and
+        an arbitrary URL.
+        """
+        url = "http://1234567.cloudfront.com/*"
+        ip_range = "192.168.0.1"
+        policy = self.dist._custom_policy(url, ip_address=ip_range)
+        policy = json.loads(policy)
+
+        self.assertEqual(1, len(policy.keys()))
+        statements = policy["Statement"]
+        self.assertEqual(1, len(statements))
+        statement = statements[0]
+        resource = statement["Resource"]
+        self.assertEqual(url, resource)
+        condition = statement["Condition"]
+        self.assertEqual(1, len(condition.keys()))
+        ip_address = condition["IpAddress"]
+        self.assertEqual(1, len(ip_address.keys()))
+        source_ip = ip_address["AWS:SourceIp"]
+        self.assertEqual("%s/32" % ip_range, source_ip)
+
+    def test_custom_policy_ip_range(self):
+        """
+        Test that a custom policy can be created with an IP range and
+        an arbitrary URL.
+        """
+        url = "http://1234567.cloudfront.com/*"
+        ip_range = "192.168.0.0/24"
+        policy = self.dist._custom_policy(url, ip_address=ip_range)
+        policy = json.loads(policy)
+
+        self.assertEqual(1, len(policy.keys()))
+        statements = policy["Statement"]
+        self.assertEqual(1, len(statements))
+        statement = statements[0]
+        resource = statement["Resource"]
+        self.assertEqual(url, resource)
+        condition = statement["Condition"]
+        self.assertEqual(1, len(condition.keys()))
+        ip_address = condition["IpAddress"]
+        self.assertEqual(1, len(ip_address.keys()))
+        source_ip = ip_address["AWS:SourceIp"]
+        self.assertEqual(ip_range, source_ip)
+
+    def test_custom_policy_all(self):
+        """
+        Test that a custom policy can be created with an expire time,
+        a valid-after time, an IP range, and an arbitrary URL all at once.
+        """
+        url = "http://1234567.cloudfront.com/test.txt"
+        expires = 999999
+        valid_after = 111111
+        ip_range = "192.168.0.0/24"
+        policy = self.dist._custom_policy(url, expires=expires,
+                                          valid_after=valid_after,
+                                          ip_address=ip_range)
+        policy = json.loads(policy)
+
+        self.assertEqual(1, len(policy.keys()))
+        statements = policy["Statement"]
+        self.assertEqual(1, len(statements))
+        statement = statements[0]
+        resource = statement["Resource"]
+        self.assertEqual(url, resource)
+        condition = statement["Condition"]
+        self.assertEqual(3, len(condition.keys()))
+        # check expires condition
+        date_less_than = condition["DateLessThan"]
+        self.assertEqual(1, len(date_less_than.keys()))
+        aws_epoch_time = date_less_than["AWS:EpochTime"]
+        self.assertEqual(expires, aws_epoch_time)
+        # check valid_after condition
+        date_greater_than = condition["DateGreaterThan"]
+        self.assertEqual(1, len(date_greater_than.keys()))
+        aws_epoch_time = date_greater_than["AWS:EpochTime"]
+        self.assertEqual(valid_after, aws_epoch_time)
+        # check source ip address condition
+        ip_address = condition["IpAddress"]
+        self.assertEqual(1, len(ip_address.keys()))
+        source_ip = ip_address["AWS:SourceIp"]
+        self.assertEqual(ip_range, source_ip)
+
+    def test_params_canned_policy(self):
+        """
+        Test the correct params are generated for a canned policy.
+        """
+        url = "http://d604721fxaaqy9.cloudfront.net/horizon.jpg?large=yes&license=yes"
+        expire_time = 1258237200
+        expected_sig = ("Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyE"
+                        "XPDNv0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4"
+                        "kXAJK6tdNx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCM"
+                        "IYHIaiOB6~5jt9w2EOwi6sIIqrg_")
+        signed_url_params = self.dist._create_signing_params(
+            url, self.pk_id, expire_time, private_key_string=self.pk_str)
+        self.assertEqual(3, len(signed_url_params))
+        self.assertEqual(signed_url_params["Expires"], "1258237200")
+        self.assertEqual(signed_url_params["Signature"], expected_sig)
+        self.assertEqual(signed_url_params["Key-Pair-Id"], "PK123456789754")
+
+    def test_canned_policy(self):
+        """
+        Generate signed url from the Example Canned Policy in Amazon's
+        documentation.
+        """
+        url = "http://d604721fxaaqy9.cloudfront.net/horizon.jpg?large=yes&license=yes"
+        expire_time = 1258237200
+        expected_url = "http://d604721fxaaqy9.cloudfront.net/horizon.jpg?large=yes&license=yes&Expires=1258237200&Signature=Nql641NHEUkUaXQHZINK1FZ~SYeUSoBJMxjdgqrzIdzV2gyEXPDNv0pYdWJkflDKJ3xIu7lbwRpSkG98NBlgPi4ZJpRRnVX4kXAJK6tdNx6FucDB7OVqzcxkxHsGFd8VCG1BkC-Afh9~lOCMIYHIaiOB6~5jt9w2EOwi6sIIqrg_&Key-Pair-Id=PK123456789754"
+        signed_url = self.dist.create_signed_url(
+            url, self.pk_id, expire_time, private_key_string=self.pk_str)
+        self.assertEqual(expected_url, signed_url)
+
diff --git a/tests/db/test_password.py b/tests/db/test_password.py
new file mode 100644
index 0000000..a0c1424
--- /dev/null
+++ b/tests/db/test_password.py
@@ -0,0 +1,128 @@
+# Copyright (c) 2010 Robert Mela
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+ 
+
+import unittest
+import logging
+import time
+
+log = logging.getLogger('password_property_test')
+log.setLevel(logging.DEBUG)
+
+class PasswordPropertyTest(unittest.TestCase):
+    """Test the PasswordProperty"""
+
+    def tearDown(self):
+        cls = self.test_model()
+        for obj in cls.all(): obj.delete()
+
+    def hmac_hashfunc(self):
+        import hmac
+        def hashfunc(msg):
+            return hmac.new('mysecret', msg)
+        return hashfunc
+
+    def test_model(self, hashfunc=None):
+        from boto.utils import Password
+        from boto.sdb.db.model import Model
+        from boto.sdb.db.property import PasswordProperty
+        import hashlib
+        class MyModel(Model):
+            password = PasswordProperty(hashfunc=hashfunc)
+        return MyModel
+
+    def test_custom_password_class(self):
+        from boto.utils import Password
+        from boto.sdb.db.model import Model
+        from boto.sdb.db.property import PasswordProperty
+        import hmac, hashlib
+
+
+        myhashfunc = hashlib.md5
+        # Define a new Password class
+        class MyPassword(Password):
+            hashfunc = myhashfunc
+
+        # Define a custom password property using the new Password class
+
+        class MyPasswordProperty(PasswordProperty):
+            data_type = MyPassword
+            type_name = MyPassword.__name__
+
+        # Define a model using the new password property
+
+        class MyModel(Model):
+            password = MyPasswordProperty()
+
+        obj = MyModel()
+        obj.password = 'bar'
+        expected = myhashfunc('bar').hexdigest()
+        log.debug("\npassword=%s\nexpected=%s" % (obj.password, expected))
+        self.assertEquals(obj.password, 'bar')
+        obj.save()
+        id = obj.id
+        time.sleep(5)
+        obj = MyModel.get_by_id(id)
+        self.assertEquals(obj.password,'bar')
+        self.assertEquals(str(obj.password), expected)
+ 
+        
+    def test_aaa_default_password_property(self):
+        cls = self.test_model()
+        obj = cls(id='passwordtest')
+        obj.password = 'foo'
+        self.assertEquals('foo', obj.password)
+        obj.save()
+        time.sleep(5)
+        obj = cls.get_by_id('passwordtest')
+        self.assertEquals('foo', obj.password)
+
+    def test_password_constructor_hashfunc(self):
+        import hmac
+        myhashfunc = lambda msg: hmac.new('mysecret', msg)
+        cls = self.test_model(hashfunc=myhashfunc)
+        obj = cls()
+        obj.password = 'hello'
+        expected = myhashfunc('hello').hexdigest()
+        self.assertEquals(obj.password, 'hello')
+        self.assertEquals(str(obj.password), expected)
+        obj.save()
+        id = obj.id
+        time.sleep(5)
+        obj = cls.get_by_id(id)
+        log.debug("\npassword=%s" % obj.password)
+        self.assertTrue(obj.password == 'hello')
+
+       
+ 
+if __name__ == '__main__':
+    import sys, os
+    curdir = os.path.dirname(os.path.abspath(__file__))
+    srcroot = curdir + "/../.."
+    sys.path = [srcroot] + sys.path
+    logging.basicConfig()
+    log.setLevel(logging.INFO)
+    suite = unittest.TestLoader().loadTestsFromTestCase(PasswordPropertyTest)
+    unittest.TextTestRunner(verbosity=2).run(suite)
+
+    import boto
+ 
diff --git a/tests/devpay/__init__.py b/tests/devpay/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/devpay/__init__.py
diff --git a/boto/tests/devpay_s3.py b/tests/devpay/test_s3.py
similarity index 100%
rename from boto/tests/devpay_s3.py
rename to tests/devpay/test_s3.py
diff --git a/boto/tests/__init__.py b/tests/ec2/__init__.py
similarity index 93%
copy from boto/tests/__init__.py
copy to tests/ec2/__init__.py
index 449bd16..b3fc3a0 100644
--- a/boto/tests/__init__.py
+++ b/tests/ec2/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -18,6 +18,3 @@
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-#
-
-
diff --git a/boto/tests/__init__.py b/tests/ec2/cloudwatch/__init__.py
similarity index 93%
copy from boto/tests/__init__.py
copy to tests/ec2/cloudwatch/__init__.py
index 449bd16..b3fc3a0 100644
--- a/boto/tests/__init__.py
+++ b/tests/ec2/cloudwatch/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -18,6 +18,3 @@
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-#
-
-
diff --git a/tests/ec2/cloudwatch/test_connection.py b/tests/ec2/cloudwatch/test_connection.py
new file mode 100644
index 0000000..c549c1d
--- /dev/null
+++ b/tests/ec2/cloudwatch/test_connection.py
@@ -0,0 +1,133 @@
+# Copyright (c) 2010 Hunter Blanks http://artifex.org/~hblanks/
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Initial, and very limited, unit tests for CloudWatchConnection.
+"""
+
+import datetime
+import time
+import unittest
+
+from boto.ec2.cloudwatch import CloudWatchConnection
+from boto.ec2.cloudwatch.metric import Metric
+
+class CloudWatchConnectionTest(unittest.TestCase):
+
+    def test_build_list_params(self):
+        c = CloudWatchConnection()
+        params = {}
+        c.build_list_params(
+            params, ['thing1', 'thing2', 'thing3'], 'ThingName%d')
+        expected_params = {
+            'ThingName1': 'thing1',
+            'ThingName2': 'thing2',
+            'ThingName3': 'thing3'
+            }
+        self.assertEqual(params, expected_params)
+
+    def test_build_put_params_one(self):
+        c = CloudWatchConnection()
+        params = {}
+        c.build_put_params(params, name="N", value=1, dimensions={"D": "V"})
+        expected_params = {
+            'MetricData.member.1.MetricName': 'N',
+            'MetricData.member.1.Value': 1,
+            'MetricData.member.1.Dimensions.member.1.Name': 'D',
+            'MetricData.member.1.Dimensions.member.1.Value': 'V',
+            }
+        self.assertEqual(params, expected_params)
+
+    def test_build_put_params_multiple_metrics(self):
+        c = CloudWatchConnection()
+        params = {}
+        c.build_put_params(params, name=["N", "M"], value=[1, 2], dimensions={"D": "V"})
+        expected_params = {
+            'MetricData.member.1.MetricName': 'N',
+            'MetricData.member.1.Value': 1,
+            'MetricData.member.1.Dimensions.member.1.Name': 'D',
+            'MetricData.member.1.Dimensions.member.1.Value': 'V',
+            'MetricData.member.2.MetricName': 'M',
+            'MetricData.member.2.Value': 2,
+            'MetricData.member.2.Dimensions.member.1.Name': 'D',
+            'MetricData.member.2.Dimensions.member.1.Value': 'V',
+            }
+        self.assertEqual(params, expected_params)
+
+    def test_build_put_params_multiple_dimensions(self):
+        c = CloudWatchConnection()
+        params = {}
+        c.build_put_params(params, name="N", value=[1, 2], dimensions=[{"D": "V"}, {"D": "W"}])
+        expected_params = {
+            'MetricData.member.1.MetricName': 'N',
+            'MetricData.member.1.Value': 1,
+            'MetricData.member.1.Dimensions.member.1.Name': 'D',
+            'MetricData.member.1.Dimensions.member.1.Value': 'V',
+            'MetricData.member.2.MetricName': 'N',
+            'MetricData.member.2.Value': 2,
+            'MetricData.member.2.Dimensions.member.1.Name': 'D',
+            'MetricData.member.2.Dimensions.member.1.Value': 'W',
+            }
+        self.assertEqual(params, expected_params)
+
+    def test_build_put_params_invalid(self):
+        c = CloudWatchConnection()
+        params = {}
+        try:
+            c.build_put_params(params, name=["N", "M"], value=[1, 2, 3])
+        except Exception:
+            pass
+        else:
+            self.fail("Should not accept lists of different lengths.")
+
+    def test_get_metric_statistics(self):
+        c = CloudWatchConnection()
+        m = c.list_metrics()[0]
+        end = datetime.datetime.now()
+        start = end - datetime.timedelta(days=14)
+        c.get_metric_statistics(
+            3600*24, start, end, m.name, m.namespace, ['Average', 'Sum'])
+
+    def test_put_metric_data(self):
+        c = CloudWatchConnection()
+        now = datetime.datetime.now()
+        name, namespace = 'unit-test-metric', 'boto-unit-test'
+        c.put_metric_data(namespace, name, 5, now, 'Bytes')
+
+        # Uncomment the following lines for a slower but more thorough
+        # test. (Hurrah for eventual consistency...)
+        #
+        # metric = Metric(connection=c)
+        # metric.name = name
+        # metric.namespace = namespace
+        # time.sleep(60)
+        # l = metric.query(
+        #     now - datetime.timedelta(seconds=60),
+        #     datetime.datetime.now(),
+        #     'Average')
+        # assert l
+        # for row in l:
+        #     self.assertEqual(row['Unit'], 'Bytes')
+        #     self.assertEqual(row['Average'], 5.0)
+
+if __name__ == '__main__':
+    unittest.main()
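The `build_list_params` tests above exercise boto's flattening of a Python list into numbered query-string parameters. A minimal standalone sketch of that flattening (a hypothetical reimplementation for illustration, not boto's actual code):

```python
# Flatten a list into numbered query parameters (ThingName1, ThingName2, ...).
# Hypothetical sketch of the pattern exercised by test_build_list_params.
def build_list_params(params, items, label):
    for i, item in enumerate(items, start=1):
        params[label % i] = item

params = {}
build_list_params(params, ['thing1', 'thing2', 'thing3'], 'ThingName%d')
print(params)
# {'ThingName1': 'thing1', 'ThingName2': 'thing2', 'ThingName3': 'thing3'}
```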
diff --git a/tests/ec2/elb/test_connection.py b/tests/ec2/elb/test_connection.py
new file mode 100644
index 0000000..4b6b7bb
--- /dev/null
+++ b/tests/ec2/elb/test_connection.py
@@ -0,0 +1,104 @@
+# Copyright (c) 2010 Hunter Blanks http://artifex.org/~hblanks/
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Initial, and very limited, unit tests for ELBConnection.
+"""
+
+import unittest
+from boto.ec2.elb import ELBConnection
+
+class ELBConnectionTest(unittest.TestCase):
+
+    def tearDown(self):
+        """ Deletes all load balancers after every test. """
+        for lb in ELBConnection().get_all_load_balancers():
+            lb.delete()
+
+    def test_build_list_params(self):
+        c = ELBConnection()
+        params = {}
+        c.build_list_params(
+            params, ['thing1', 'thing2', 'thing3'], 'ThingName%d')
+        expected_params = {
+            'ThingName1': 'thing1',
+            'ThingName2': 'thing2',
+            'ThingName3': 'thing3'
+            }
+        self.assertEqual(params, expected_params)
+
+    # TODO: for these next tests, consider sleeping until our load
+    # balancer comes up, then testing for connectivity to
+    # balancer.dns_name, along the lines of the existing EC2 unit tests.
+
+    def test_create_load_balancer(self):
+        c = ELBConnection()
+        name = 'elb-boto-unit-test'
+        availability_zones = ['us-east-1a']
+        listeners = [(80, 8000, 'HTTP')]
+        balancer = c.create_load_balancer(name, availability_zones, listeners)
+        self.assertEqual(balancer.name, name)
+        self.assertEqual(balancer.availability_zones, availability_zones)
+        self.assertEqual(balancer.listeners, listeners)
+
+        balancers = c.get_all_load_balancers()
+        self.assertEqual([lb.name for lb in balancers], [name])
+
+    def test_create_load_balancer_listeners(self):
+        c = ELBConnection()
+        name = 'elb-boto-unit-test'
+        availability_zones = ['us-east-1a']
+        listeners = [(80, 8000, 'HTTP')]
+        balancer = c.create_load_balancer(name, availability_zones, listeners)
+
+        more_listeners = [(443, 8001, 'HTTP')]
+        c.create_load_balancer_listeners(name, more_listeners)
+        balancers = c.get_all_load_balancers()
+        self.assertEqual([lb.name for lb in balancers], [name])
+        self.assertEqual(
+            sorted(l.get_tuple() for l in balancers[0].listeners),
+            sorted(listeners + more_listeners)
+            )
+
+    def test_delete_load_balancer_listeners(self):
+        c = ELBConnection()
+        name = 'elb-boto-unit-test'
+        availability_zones = ['us-east-1a']
+        listeners = [(80, 8000, 'HTTP'), (443, 8001, 'HTTP')]
+        balancer = c.create_load_balancer(name, availability_zones, listeners)
+
+        balancers = c.get_all_load_balancers()
+        self.assertEqual([lb.name for lb in balancers], [name])
+        self.assertEqual(
+            [l.get_tuple() for l in balancers[0].listeners], listeners)
+
+        c.delete_load_balancer_listeners(name, [443])
+        balancers = c.get_all_load_balancers()
+        self.assertEqual([lb.name for lb in balancers], [name])
+        self.assertEqual(
+            [l.get_tuple() for l in balancers[0].listeners],
+            listeners[:1]
+            )
+
+
+if __name__ == '__main__':
+    unittest.main()
\ No newline at end of file
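The ELB tests above pass each listener as a `(load_balancer_port, instance_port, protocol)` tuple. A small client-side sanity check of that shape (hypothetical; the real validation happens in the ELB service, and the accepted protocol set here is an assumption):

```python
# Validate a listener tuple of the form used by create_load_balancer.
def validate_listener(listener):
    lb_port, instance_port, protocol = listener
    if protocol not in ('HTTP', 'HTTPS', 'TCP', 'SSL'):
        raise ValueError('unsupported protocol: %s' % protocol)
    for port in (lb_port, instance_port):
        if not 1 <= port <= 65535:
            raise ValueError('port out of range: %d' % port)
    return listener

print(validate_listener((80, 8000, 'HTTP')))  # (80, 8000, 'HTTP')
```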
diff --git a/boto/tests/test_ec2connection.py b/tests/ec2/test_connection.py
similarity index 98%
rename from boto/tests/test_ec2connection.py
rename to tests/ec2/test_connection.py
index 046ff92..6b7ece1 100644
--- a/boto/tests/test_ec2connection.py
+++ b/tests/ec2/test_connection.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python
-
 # Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2009, Eucalyptus Systems, Inc.
 # All rights reserved.
@@ -55,6 +53,7 @@
         # now remove that permission
         status = image.remove_launch_permissions(group_names=['all'])
         assert status
+        time.sleep(10)
         d = image.get_launch_permissions()
         assert not d.has_key('groups')
         
diff --git a/boto/tests/__init__.py b/tests/s3/__init__.py
similarity index 93%
copy from boto/tests/__init__.py
copy to tests/s3/__init__.py
index 449bd16..b3fc3a0 100644
--- a/boto/tests/__init__.py
+++ b/tests/s3/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -18,6 +18,3 @@
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-#
-
-
diff --git a/boto/tests/cb_test_harnass.py b/tests/s3/cb_test_harnass.py
similarity index 100%
rename from boto/tests/cb_test_harnass.py
rename to tests/s3/cb_test_harnass.py
diff --git a/boto/tests/mock_storage_service.py b/tests/s3/mock_storage_service.py
similarity index 78%
rename from boto/tests/mock_storage_service.py
rename to tests/s3/mock_storage_service.py
index 10b5253..0f3ea7b 100644
--- a/boto/tests/mock_storage_service.py
+++ b/tests/s3/mock_storage_service.py
@@ -92,7 +92,7 @@
                                cb=NOT_IMPL, num_cb=NOT_IMPL,
                                policy=NOT_IMPL, md5=NOT_IMPL,
                                res_upload_handler=NOT_IMPL):
-        self.data = fp.readlines()
+        self.data = fp.read()
         self.size = len(self.data)
         self._handle_headers(headers)
 
@@ -103,13 +103,31 @@
         self.size = len(s)
         self._handle_headers(headers)
 
+    def set_contents_from_filename(self, filename, headers=None, replace=NOT_IMPL,
+                                   cb=NOT_IMPL, num_cb=NOT_IMPL,
+                                   policy=NOT_IMPL, md5=NOT_IMPL,
+                                   res_upload_handler=NOT_IMPL):
+        fp = open(filename, 'rb')
+        self.set_contents_from_file(fp, headers, replace, cb, num_cb,
+                                    policy, md5, res_upload_handler)
+        fp.close()
+    
+    def copy(self, dst_bucket_name, dst_key, metadata=NOT_IMPL,
+             reduced_redundancy=NOT_IMPL, preserve_acl=NOT_IMPL):
+        dst_bucket = self.bucket.connection.get_bucket(dst_bucket_name)
+        return dst_bucket.copy_key(dst_key, self.bucket.name,
+                                   self.name, metadata)
+
 
 class MockBucket(object):
 
-    def __init__(self, connection=NOT_IMPL, name=None, key_class=NOT_IMPL):
+    def __init__(self, connection=None, name=None, key_class=NOT_IMPL):
         self.name = name
         self.keys = {}
         self.acls = {name: MockAcl()}
+        self.subresources = {}
+        self.connection = connection
+        self.logging = False
 
     def copy_key(self, new_key_name, src_bucket_name,
                  src_key_name, metadata=NOT_IMPL, src_version_id=NOT_IMPL,
@@ -119,6 +137,13 @@
             src_bucket_name).get_key(src_key_name)
         new_key.data = copy.copy(src_key.data)
         new_key.size = len(new_key.data)
+        return new_key
+
+    def disable_logging(self):
+        self.logging = False
+
+    def enable_logging(self, target_bucket_prefix):
+        self.logging = True
 
     def get_acl(self, key_name='', headers=NOT_IMPL, version_id=NOT_IMPL):
         if key_name:
@@ -128,6 +153,13 @@
             # Return ACL for the bucket.
             return self.acls[self.name]
 
+    def get_subresource(self, subresource, key_name=NOT_IMPL, headers=NOT_IMPL,
+                        version_id=NOT_IMPL):
+        if subresource in self.subresources:
+            return self.subresources[subresource]
+        else:
+            return '<Subresource/>'
+
     def new_key(self, key_name=None):
         mock_key = MockKey(self, key_name)
         self.keys[key_name] = mock_key
@@ -173,6 +205,10 @@
             # Set ACL for the bucket.
             self.acls[self.name] = acl_or_str
 
+    def set_subresource(self, subresource, value, key_name=NOT_IMPL,
+                        headers=NOT_IMPL, version_id=NOT_IMPL):
+        self.subresources[subresource] = value
+
 
 class MockConnection(object):
 
@@ -191,15 +227,17 @@
                       policy=NOT_IMPL):
         if bucket_name in self.buckets:
             raise boto.exception.StorageCreateError(
-                409, 'BucketAlreadyOwnedByYou', 'bucket already exists')
-        mock_bucket = MockBucket(name=bucket_name)
+                409, 'BucketAlreadyOwnedByYou',
+                "<Message>Your previous request to create the named bucket "
+                "succeeded and you already own it.</Message>")
+        mock_bucket = MockBucket(name=bucket_name, connection=self)
         self.buckets[bucket_name] = mock_bucket
         return mock_bucket
 
     def delete_bucket(self, bucket, headers=NOT_IMPL):
         if bucket not in self.buckets:
-            raise boto.exception.StorageResponseError(404, 'NoSuchBucket',
-                                                'no such bucket')
+            raise boto.exception.StorageResponseError(
+                404, 'NoSuchBucket', '<Message>no such bucket</Message>')
         del self.buckets[bucket]
 
     def get_bucket(self, bucket_name, validate=NOT_IMPL, headers=NOT_IMPL):
@@ -258,12 +296,25 @@
                    version_id=NOT_IMPL, mfa_token=NOT_IMPL):
         self.get_bucket().delete_key(self.object_name)
 
+    def disable_logging(self, validate=NOT_IMPL, headers=NOT_IMPL,
+                        version_id=NOT_IMPL):
+        self.get_bucket().disable_logging()
+
+    def enable_logging(self, target_bucket, target_prefix, canned_acl=NOT_IMPL,
+                       validate=NOT_IMPL, headers=NOT_IMPL,
+                       version_id=NOT_IMPL):
+        self.get_bucket().enable_logging(target_bucket)
+
     def equals(self, uri):
         return self.uri == uri.uri
 
     def get_acl(self, validate=NOT_IMPL, headers=NOT_IMPL, version_id=NOT_IMPL):
         return self.get_bucket().get_acl(self.object_name)
 
+    def get_subresource(self, subresource, validate=NOT_IMPL, headers=NOT_IMPL,
+                        version_id=NOT_IMPL):
+        return self.get_bucket().get_subresource(subresource, self.object_name)
+
     def get_all_buckets(self, headers=NOT_IMPL):
         return self.connect().get_all_buckets()
 
@@ -296,3 +347,7 @@
     def set_acl(self, acl_or_str, key_name='', validate=NOT_IMPL,
                 headers=NOT_IMPL, version_id=NOT_IMPL):
         self.get_bucket().set_acl(acl_or_str, key_name)
+
+    def set_subresource(self, subresource, value, validate=NOT_IMPL,
+                        headers=NOT_IMPL, version_id=NOT_IMPL):
+        self.get_bucket().set_subresource(subresource, value, self.object_name)
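The mock above stores key data in plain dicts, and `copy_key` now returns the new key so callers can use the copy directly. A condensed, standalone sketch of that pattern (hypothetical, independent of boto's mock classes):

```python
# Minimal in-memory bucket/key pair illustrating copy_key returning
# the freshly created key.
class Key:
    def __init__(self, name, data=b''):
        self.name = name
        self.data = data

class Bucket:
    def __init__(self):
        self.keys = {}

    def new_key(self, name, data=b''):
        key = Key(name, data)
        self.keys[name] = key
        return key

    def copy_key(self, dst_name, src_name):
        src = self.keys[src_name]
        # Copy the payload and hand back the new key object.
        return self.new_key(dst_name, bytes(src.data))

bucket = Bucket()
bucket.new_key('a', b'payload')
print(bucket.copy_key('b', 'a').data)  # b'payload'
```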
diff --git a/tests/s3/other_cacerts.txt b/tests/s3/other_cacerts.txt
new file mode 100644
index 0000000..360954a
--- /dev/null
+++ b/tests/s3/other_cacerts.txt
@@ -0,0 +1,70 @@
+# Certificate Authority certificates for validating SSL connections.
+#
+# This file contains PEM format certificates generated from
+# http://mxr.mozilla.org/seamonkey/source/security/nss/lib/ckfw/builtins/certdata.txt
+#
+# ***** BEGIN LICENSE BLOCK *****
+# Version: MPL 1.1/GPL 2.0/LGPL 2.1
+#
+# The contents of this file are subject to the Mozilla Public License Version
+# 1.1 (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+# http://www.mozilla.org/MPL/
+#
+# Software distributed under the License is distributed on an "AS IS" basis,
+# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
+# for the specific language governing rights and limitations under the
+# License.
+#
+# The Original Code is the Netscape security libraries.
+#
+# The Initial Developer of the Original Code is
+# Netscape Communications Corporation.
+# Portions created by the Initial Developer are Copyright (C) 1994-2000
+# the Initial Developer. All Rights Reserved.
+#
+# Contributor(s):
+#
+# Alternatively, the contents of this file may be used under the terms of
+# either the GNU General Public License Version 2 or later (the "GPL"), or
+# the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
+# in which case the provisions of the GPL or the LGPL are applicable instead
+# of those above. If you wish to allow use of your version of this file only
+# under the terms of either the GPL or the LGPL, and not to allow others to
+# use your version of this file under the terms of the MPL, indicate your
+# decision by deleting the provisions above and replace them with the notice
+# and other provisions required by the GPL or the LGPL. If you do not delete
+# the provisions above, a recipient may use your version of this file under
+# the terms of any one of the MPL, the GPL or the LGPL.
+#
+# ***** END LICENSE BLOCK *****
+
+
+Comodo CA Limited, CN=Trusted Certificate Services
+==================================================
+
+-----BEGIN CERTIFICATE-----
+MIIEQzCCAyugAwIBAgIBATANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJHQjEb
+MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow
+GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDElMCMGA1UEAwwcVHJ1c3RlZCBDZXJ0
+aWZpY2F0ZSBTZXJ2aWNlczAeFw0wNDAxMDEwMDAwMDBaFw0yODEyMzEyMzU5NTla
+MH8xCzAJBgNVBAYTAkdCMRswGQYDVQQIDBJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAO
+BgNVBAcMB1NhbGZvcmQxGjAYBgNVBAoMEUNvbW9kbyBDQSBMaW1pdGVkMSUwIwYD
+VQQDDBxUcnVzdGVkIENlcnRpZmljYXRlIFNlcnZpY2VzMIIBIjANBgkqhkiG9w0B
+AQEFAAOCAQ8AMIIBCgKCAQEA33FvNlhTWvI2VFeAxHQIIO0Yfyod5jWaHiWsnOWW
+fnJSoBVC21ndZHoa0Lh73TkVvFVIxO06AOoxEbrycXQaZ7jPM8yoMa+j49d/vzMt
+TGo87IvDktJTdyR0nAducPy9C1t2ul/y/9c3S0pgePfw+spwtOpZqqPOSC+pw7IL
+fhdyFgymBwwbOM/JYrc/oJOlh0Hyt3BAd9i+FHzjqMB6juljatEPmsbS9Is6FARW
+1O24zG71++IsWL1/T2sr92AkWCTOJu80kTrV44HQsvAEAtdbtz6SrGsSivnkBbA7
+kUlcsutT6vifR4buv5XAwAaf0lteERv0xwQ1KdJVXOTt6wIDAQABo4HJMIHGMB0G
+A1UdDgQWBBTFe1i97doladL3WRaoszLAeydb9DAOBgNVHQ8BAf8EBAMCAQYwDwYD
+VR0TAQH/BAUwAwEB/zCBgwYDVR0fBHwwejA8oDqgOIY2aHR0cDovL2NybC5jb21v
+ZG9jYS5jb20vVHJ1c3RlZENlcnRpZmljYXRlU2VydmljZXMuY3JsMDqgOKA2hjRo
+dHRwOi8vY3JsLmNvbW9kby5uZXQvVHJ1c3RlZENlcnRpZmljYXRlU2VydmljZXMu
+Y3JsMA0GCSqGSIb3DQEBBQUAA4IBAQDIk4E7ibSvuIQSTI3S8NtwuleGFTQQuS9/
+HrCoiWChisJ3DFBKmwCL2Iv0QeLQg4pKHBQGsKNoBXAxMKdTmw7pSqBYaWcOrp32
+pSxBvzwGa+RZzG0Q8ZZvH9/0BAKkn0U+yNj6NkZEUD+Cl5EfKNsYEYwq5GWDVxIS
+jBc/lDb+XbDABHcTuPQV1T84zJQ6VdCsmPW6AF/ghhmBeC8owH7TzEIK9a5QoNE+
+xqFx7D+gIIxmOom0jtTYsU0lR+4viMi14QVFwL4Ucd56/Y57fU0IlqUSc/Atyjcn
+dBInTMu2l+nZrghtWjlA3QVHdWpaIbOjGM9O9y5Xt5hwXsjEeLBi
+-----END CERTIFICATE-----
diff --git a/boto/tests/test_s3connection.py b/tests/s3/test_connection.py
similarity index 96%
rename from boto/tests/test_s3connection.py
rename to tests/s3/test_connection.py
index 3dd936f..4c209bd 100644
--- a/boto/tests/test_s3connection.py
+++ b/tests/s3/test_connection.py
@@ -1,6 +1,5 @@
-#!/usr/bin/env python
 # -*- coding: utf-8 -*-
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -68,6 +67,9 @@
         url = k.generate_url(3600, force_http=True)
         file = urllib.urlopen(url)
         assert s1 == file.read(), 'invalid URL %s' % url
+        url = k.generate_url(3600, force_http=True, headers={'x-amz-x-token' : 'XYZ'})
+        file = urllib.urlopen(url)
+        assert s1 == file.read(), 'invalid URL %s' % url
         bucket.delete_key(k)
         # test a few variations on get_all_keys - first load some data
         # for the first one, let's override the content type
diff --git a/tests/s3/test_encryption.py b/tests/s3/test_encryption.py
new file mode 100644
index 0000000..91ef71c
--- /dev/null
+++ b/tests/s3/test_encryption.py
@@ -0,0 +1,114 @@
+# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2010, Eucalyptus Systems, Inc.
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Some unit tests for S3 server-side encryption
+"""
+
+import unittest
+import time
+from boto.s3.connection import S3Connection
+from boto.exception import S3ResponseError
+
+json_policy = """{
+   "Version":"2008-10-17",
+   "Id":"PutObjPolicy",
+   "Statement":[{
+         "Sid":"DenyUnEncryptedObjectUploads",
+         "Effect":"Deny",
+         "Principal":{
+            "AWS":"*"
+         },
+         "Action":"s3:PutObject",
+         "Resource":"arn:aws:s3:::%s/*",
+         "Condition":{
+            "StringNotEquals":{
+               "s3:x-amz-server-side-encryption":"AES256"
+            }
+         }
+      }
+   ]
+}"""
+
+class S3EncryptionTest(unittest.TestCase):
+
+    def test_1_encrypted_objects(self):
+        print '--- running S3Encryption tests ---'
+        c = S3Connection()
+        # create a new, empty bucket
+        bucket_name = 'encryption-%d' % int(time.time())
+        bucket = c.create_bucket(bucket_name)
+        
+        # now try a get_bucket call and see if it's really there
+        bucket = c.get_bucket(bucket_name)
+        
+        # create an unencrypted key
+        k = bucket.new_key('foobar')
+        s1 = 'This is unencrypted data'
+        s2 = 'This is encrypted data'
+        k.set_contents_from_string(s1)
+        time.sleep(5)
+        
+        # now get the contents from s3 
+        o = k.get_contents_as_string()
+        
+        # check to make sure content read from s3 is identical to original
+        assert o == s1
+        
+        # now overwrite that same key with encrypted data
+        k.set_contents_from_string(s2, encrypt_key=True)
+        time.sleep(5)
+        
+        # now retrieve the contents as a string and compare
+        o = k.get_contents_as_string()
+        assert o == s2
+        
+        # now set bucket policy to require encrypted objects
+        bucket.set_policy(json_policy % bucket.name)
+        time.sleep(5)
+        
+        # now try to write unencrypted key
+        write_failed = False
+        try:
+            k.set_contents_from_string(s1)
+        except S3ResponseError:
+            write_failed = True
+
+        assert write_failed
+        
+        # now write an encrypted key, which the policy should allow
+        write_failed = False
+        try:
+            k.set_contents_from_string(s1, encrypt_key=True)
+        except S3ResponseError:
+            write_failed = True
+
+        assert not write_failed
+        
+        # Now do regular delete
+        k.delete()
+        time.sleep(5)
+
+        # now delete bucket
+        bucket.delete()
+        print '--- tests completed ---'
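The deny-unencrypted policy above is a template with a single `%s` slot for the bucket name. A quick check that the substitution yields valid JSON with the expected resource ARN (using a trimmed-down template here; the full policy text is the one in the test):

```python
import json

# Trimmed-down version of the test's json_policy template; the one %s
# slot receives the bucket name.
policy_template = """{
   "Id": "PutObjPolicy",
   "Statement": [{"Effect": "Deny",
                  "Action": "s3:PutObject",
                  "Resource": "arn:aws:s3:::%s/*"}]
}"""

policy = json.loads(policy_template % 'my-bucket')
print(policy['Statement'][0]['Resource'])  # arn:aws:s3:::my-bucket/*
```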
diff --git a/boto/tests/test_gsconnection.py b/tests/s3/test_gsconnection.py
similarity index 78%
rename from boto/tests/test_gsconnection.py
rename to tests/s3/test_gsconnection.py
index 5c324fa..e38f442 100644
--- a/boto/tests/test_gsconnection.py
+++ b/tests/s3/test_gsconnection.py
@@ -1,7 +1,7 @@
-#!/usr/bin/env python
 # -*- coding: utf-8 -*-
-# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
+# Copyright (c) 2011, Nexenta Systems, Inc.
 # All rights reserved.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
@@ -18,7 +18,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -76,6 +76,20 @@
         md5 = k.md5
         k.set_contents_from_string(s2)
         assert k.md5 != md5
+        # Test for stream API
+        fp2 = open('foobar', 'rb')
+        k.md5 = None
+        k.base64md5 = None
+        k.set_contents_from_stream(fp2, headers=headers)
+        fp = open('foobar1', 'wb')
+        k.get_contents_to_file(fp)
+        fp.close()
+        fp2.seek(0,0)
+        fp = open('foobar1', 'rb')
+        assert (fp2.read() == fp.read()), 'Chunked Transfer corrupted the Data'
+        fp.close()
+        fp2.close()
+        os.unlink('foobar1')
         os.unlink('foobar')
         all = bucket.get_all_keys()
         assert len(all) == 6
@@ -101,12 +115,12 @@
         mdval2 = 'This is the second metadata value'
         k.set_metadata(mdkey2, mdval2)
         # try a unicode metadata value
-        
+
         mdval3 = u'föö'
         mdkey3 = 'meta3'
         k.set_metadata(mdkey3, mdval3)
         k.set_contents_from_string(s1)
-        
+
         k = bucket.lookup('has_metadata')
         assert k.get_metadata(mdkey1) == mdval1
         assert k.get_metadata(mdkey2) == mdval2
@@ -140,6 +154,22 @@
         k.set_acl('private')
         acl = k.get_acl()
         assert len(acl.entries.entry_list) == 1
+        # try set/get raw logging subresource
+        empty_logging_str = "<?xml version='1.0' encoding='UTF-8'?><Logging/>"
+        logging_str = (
+            "<?xml version='1.0' encoding='UTF-8'?><Logging>"
+            "<LogBucket>log-bucket</LogBucket>"
+            "<LogObjectPrefix>example</LogObjectPrefix>"
+            "<PredefinedAcl>bucket-owner-full-control</PredefinedAcl>"
+            "</Logging>")
+        bucket.set_subresource('logging', logging_str)
+        assert bucket.get_subresource('logging') == logging_str
+        # try disable/enable logging
+        bucket.disable_logging()
+        assert bucket.get_subresource('logging') == empty_logging_str
+        bucket.enable_logging('log-bucket', 'example',
+                              canned_acl='bucket-owner-full-control')
+        assert bucket.get_subresource('logging') == logging_str
         # now delete all keys in bucket
         for k in bucket:
             bucket.delete_key(k)
diff --git a/tests/s3/test_https_cert_validation.py b/tests/s3/test_https_cert_validation.py
new file mode 100644
index 0000000..c8babb5
--- /dev/null
+++ b/tests/s3/test_https_cert_validation.py
@@ -0,0 +1,137 @@
+# Copyright 2011 Google Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Tests to validate correct validation of SSL server certificates.
+
+Note that this test assumes two external dependencies are available:
+  - An HTTP proxy, which by default is assumed to be at host 'cache' and port
+    3128.  This can be overridden with environment variables PROXY_HOST and
+    PROXY_PORT, respectively.
+  - An SSL-enabled web server that returns a valid certificate signed by one
+    of the bundled CAs, and which can be reached by an alternate hostname that
+    does not match the CN in that certificate.  By default, this test uses host
+    'www' (without fully qualified domain). This can be overridden with
+    environment variable INVALID_HOSTNAME_HOST. If no suitable host is already
+    available, such a mapping can be established by temporarily adding an IP
+    address for, say, www.google.com or www.amazon.com to /etc/hosts.
+"""
+
+import os
+import ssl
+import unittest
+
+import boto
+from boto import exception, https_connection
+from boto.gs.connection import GSConnection
+from boto.s3.connection import S3Connection
+
+# File 'other_cacerts.txt' contains a valid CA certificate of a CA that is used
+# by neither S3 nor Google Cloud Storage. Validation against this CA cert should
+# result in a certificate error.
+DEFAULT_CA_CERTS_FILE = os.path.join(
+        os.path.dirname(os.path.abspath(__file__)), 'other_cacerts.txt')
+
+
+PROXY_HOST = os.environ.get('PROXY_HOST', 'cache')
+PROXY_PORT = os.environ.get('PROXY_PORT', '3128')
+
+# This test assumes that this host returns a certificate signed by one of the
+# trusted CAs, but with a Common Name that won't match host name 'www' (i.e.,
+# the server should return a certificate with CN 'www.<somedomain>.com').
+INVALID_HOSTNAME_HOST = os.environ.get('INVALID_HOSTNAME_HOST', 'www')
+
+class CertValidationTest(unittest.TestCase):
+
+    def setUp(self):
+        # Clear config
+        for section in boto.config.sections():
+            boto.config.remove_section(section)
+
+        # Enable https_validate_certificates.
+        boto.config.add_section('Boto')
+        boto.config.setbool('Boto', 'https_validate_certificates', True)
+
+        # Set up bogus credentials so that the auth module is willing to go
+        # ahead and make a request; the request should fail with a service-level
+        # error if it does get to the service (S3 or GS).
+        boto.config.add_section('Credentials')
+        boto.config.set('Credentials', 'gs_access_key_id', 'xyz')
+        boto.config.set('Credentials', 'gs_secret_access_key', 'xyz')
+        boto.config.set('Credentials', 'aws_access_key_id', 'xyz')
+        boto.config.set('Credentials', 'aws_secret_access_key', 'xyz')
+
+    def enableProxy(self):
+        boto.config.set('Boto', 'proxy', PROXY_HOST)
+        boto.config.set('Boto', 'proxy_port', PROXY_PORT)
+
+    def assertConnectionThrows(self, connection_class, error):
+        conn = connection_class()
+        self.assertRaises(error, conn.get_all_buckets)
+
+    def do_test_valid_cert(self):
+        # When connecting to actual servers with bundled root certificates, no
+        # cert errors should be thrown; instead we will get "invalid
+        # credentials" errors since the config used does not contain any
+        # credentials.
+        self.assertConnectionThrows(S3Connection, exception.S3ResponseError)
+        self.assertConnectionThrows(GSConnection, exception.GSResponseError)
+
+    def test_valid_cert(self):
+        self.do_test_valid_cert()
+
+    def test_valid_cert_with_proxy(self):
+        self.enableProxy()
+        self.do_test_valid_cert()
+
+    def do_test_invalid_signature(self):
+        boto.config.set('Boto', 'ca_certificates_file', DEFAULT_CA_CERTS_FILE)
+        self.assertConnectionThrows(S3Connection, ssl.SSLError)
+        self.assertConnectionThrows(GSConnection, ssl.SSLError)
+
+    def test_invalid_signature(self):
+        self.do_test_invalid_signature()
+
+    def test_invalid_signature_with_proxy(self):
+        self.enableProxy()
+        self.do_test_invalid_signature()
+
+    def do_test_invalid_host(self):
+        # A hostname mismatch should surface as an InvalidCertificateException
+        # from boto's certificate-validating HTTPS connection.
+        boto.config.set('Credentials', 'gs_host', INVALID_HOSTNAME_HOST)
+        boto.config.set('Credentials', 's3_host', INVALID_HOSTNAME_HOST)
+        self.assertConnectionThrows(
+                S3Connection, https_connection.InvalidCertificateException)
+        self.assertConnectionThrows(
+                GSConnection, https_connection.InvalidCertificateException)
+
+    def test_invalid_host(self):
+        self.do_test_invalid_host()
+
+    def test_invalid_host_with_proxy(self):
+        self.enableProxy()
+        self.do_test_invalid_host()
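The hostname-mismatch scenario these tests exercise can be sketched with a toy matcher (a hypothetical helper, not boto's actual validation code, which is stricter — wildcards span only one DNS label — and also inspects subjectAltName entries):

```python
import fnmatch

def hostname_matches(cert_cn, hostname):
    """Toy check: does the certificate Common Name cover the hostname?"""
    return fnmatch.fnmatchcase(hostname.lower(), cert_cn.lower())

# A cert issued for 'www.example.com' does not cover the bare alias 'www',
# which is exactly the mismatch INVALID_HOSTNAME_HOST relies on.
assert hostname_matches('www.example.com', 'www.example.com')
assert not hostname_matches('www.example.com', 'www')
assert hostname_matches('*.example.com', 'api.example.com')
```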
+
diff --git a/tests/s3/test_pool.py b/tests/s3/test_pool.py
new file mode 100644
index 0000000..ebb68c8
--- /dev/null
+++ b/tests/s3/test_pool.py
@@ -0,0 +1,246 @@
+# Copyright (c) 2011 Brian Beach
+# All rights reserved.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Some multi-threading tests of boto, run via a greenlet-style spawn() API
+backed by plain threads.
+"""
+
+import boto
+import time
+import uuid
+
+from StringIO import StringIO
+
+from threading import Thread
+
+def spawn(function, *args, **kwargs):
+    """
+    Spawns a new thread.  API is the same as
+    gevent.greenlet.Greenlet.spawn.
+    """
+    t = Thread(target = function, args = args, kwargs = kwargs)
+    t.start()
+    return t
+
+def put_object(bucket, name):
+    bucket.new_key(name).set_contents_from_string(name)
+
+def get_object(bucket, name):
+    assert bucket.get_key(name).get_contents_as_string() == name
+
+def test_close_connections():
+    """
+    A test that exposes the problem where connections are returned to the
+    connection pool (and closed) before the caller reads the response.
+    
+    I couldn't think of a way to test it without greenlets, so this test
+    doesn't run as part of the standard test suite.  That way, no more
+    dependencies are added to the test suite.
+    """
+    
+    print "Running test_close_connections"
+
+    # Connect to S3
+    s3 = boto.connect_s3()
+
+    # Clean previous tests.
+    for b in s3.get_all_buckets():
+        if b.name.startswith('test-'):
+            for key in b.get_all_keys():
+                key.delete()
+            b.delete()
+
+    # Make a test bucket
+    bucket = s3.create_bucket('test-%d' % int(time.time()))
+
+    # Create 30 threads that each create an object in S3.  The number
+    # 30 is chosen because it is larger than the connection pool size
+    # (20). 
+    names = [str(uuid.uuid4()) for _ in range(30)]
+    threads = [
+        spawn(put_object, bucket, name)
+        for name in names
+        ]
+    for t in threads:
+        t.join()
+
+    # Create 30 threads to read the contents of the new objects.  This
+    # is where closing the connection early is a problem, because
+    # there is a response that needs to be read, and it can't be read
+    # if the connection has already been closed.
+    threads = [
+        spawn(get_object, bucket, name)
+        for name in names
+        ]
+    for t in threads:
+        t.join()
+
+# test_reuse_connections needs to read a file that is big enough that
+# one read() call on the socket won't read the whole thing.  
+BIG_SIZE = 10000
+
+class WriteAndCount(object):
+
+    """
+    A file-like object that counts the number of characters written.
+    """
+
+    def __init__(self):
+        self.size = 0
+
+    def write(self, data):
+        self.size += len(data)
+        time.sleep(0) # yield to other threads
+
+def read_big_object(s3, bucket, name, count):
+    for _ in range(count):
+        key = bucket.get_key(name)
+        out = WriteAndCount()
+        key.get_contents_to_file(out)
+        if out.size != BIG_SIZE:
+            print out.size, BIG_SIZE
+        assert out.size == BIG_SIZE
+        print "    pool size:", s3._pool.size()
+
+class LittleQuerier(object):
+
+    """
+    An object that manages a thread that keeps pulling down small
+    objects from S3 and checking the answers until told to stop.
+    """
+
+    def __init__(self, bucket, small_names):
+        self.running = True
+        self.bucket = bucket
+        self.small_names = small_names
+        self.thread = spawn(self.run)
+
+    def stop(self):
+        self.running = False
+        self.thread.join()
+
+    def run(self):
+        count = 0
+        while self.running:
+            i = count % 4
+            key = self.bucket.get_key(self.small_names[i])
+            expected = str(i)
+            rh = { 'response-content-type' : 'small/' + str(i) }
+            actual = key.get_contents_as_string(response_headers = rh)
+            if expected != actual:
+                print "AHA:", repr(expected), repr(actual)
+            assert expected == actual
+            count += 1
+
+def test_reuse_connections():
+    """
+    This test is an attempt to expose problems caused by boto
+    returning connections to the connection pool before reading the
+    response.  The strategy is to start a couple of big reads
+    from S3, where it will take time to read the response, and then
+    start other requests that will reuse the same connection from the
+    pool while the big response is still being read.
+
+    The test passes because of an interesting combination of factors.
+    I was expecting that it would fail because two threads would be
+    reading the same connection at the same time.  That doesn't happen
+    because httplib catches the problem before it happens and raises
+    an exception.
+
+    Here's the sequence of events:
+
+       - Thread 1: Send a request to read a big S3 object.
+       - Thread 1: Returns connection to pool.
+       - Thread 1: Start reading the body of the response.
+
+       - Thread 2: Get the same connection from the pool.
+       - Thread 2: Send another request on the same connection.
+       - Thread 2: Try to read the response, but
+                   HTTPConnection.get_response notices that the
+                   previous response isn't done reading yet, and
+                   raises a ResponseNotReady exception.
+       - Thread 2: _mexe catches the exception, does not return the
+                   connection to the pool, gets a new connection, and
+                   retries.
+
+       - Thread 1: Finish reading the body of its response.
+       
+       - Server:   Gets the second request on the connection, and
+                   sends a response.  This response is ignored because
+                   the connection has been dropped on the client end.
+
+    If you add a print statement in HTTPConnection.get_response at the
+    point where it raises ResponseNotReady, and then run this test,
+    you can see that it's happening.
+    """
+
+    print "Running test_reuse_connections"
+
+    # Connect to S3
+    s3 = boto.connect_s3()
+
+    # Make a test bucket
+    bucket = s3.create_bucket('test-%d' % int(time.time()))
+
+    # Create some small objects in S3.
+    small_names = [str(uuid.uuid4()) for _ in range(4)]
+    for (i, name) in enumerate(small_names):
+        bucket.new_key(name).set_contents_from_string(str(i))
+
+    # Wait, clean the connection pool, and make sure it's empty.
+    print "    waiting for all connections to become stale"
+    time.sleep(s3._pool.STALE_DURATION + 1)
+    s3._pool.clean()
+    assert s3._pool.size() == 0
+    print "    pool is empty"
+    
+    # Create a big object in S3.
+    big_name = str(uuid.uuid4())
+    contents = "-" * BIG_SIZE
+    bucket.new_key(big_name).set_contents_from_string(contents)
+
+    # Start some threads to read it and check that they are reading
+    # the correct thing.  Each thread will read the object 20 times.
+    threads = [
+        spawn(read_big_object, s3, bucket, big_name, 20)
+        for _ in range(5)
+        ]
+
+    # Do some other things that may (incorrectly) re-use the same
+    # connections while the big objects are being read.
+    queriers = [
+        LittleQuerier(bucket, small_names)
+        for _ in range(5)
+        ]
+
+    # Clean up.
+    for t in threads:
+        t.join()
+    for q in queriers:
+        q.stop()
+
+def main():
+    test_close_connections()
+    test_reuse_connections()
+
+if __name__ == '__main__':
+    main()
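The pool surface poked at above (`size()`, `clean()`, `STALE_DURATION`) can be sketched as a toy pool. This is a simplified illustration of the idea, not boto's actual `ConnectionPool` implementation; the point the tests make is that a pooled connection must not be handed out again until its previous response has been fully read.

```python
import time
from collections import deque

STALE_DURATION = 60.0  # seconds a pooled connection may sit idle

class ToyConnectionPool(object):
    """Simplified pool mirroring the size()/clean()/STALE_DURATION
    surface used by the tests."""

    def __init__(self):
        self._queue = deque()  # (connection, time_returned) pairs

    def put(self, conn):
        # Only safe once the response on conn has been read to the end.
        self._queue.append((conn, time.time()))

    def get(self):
        while self._queue:
            conn, returned = self._queue.popleft()
            if time.time() - returned < STALE_DURATION:
                return conn  # still fresh
        return None  # caller should open a new connection

    def clean(self):
        # Drop connections that have been idle too long.
        now = time.time()
        self._queue = deque(
            (c, t) for (c, t) in self._queue if now - t < STALE_DURATION)

    def size(self):
        return len(self._queue)
```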
diff --git a/boto/tests/test_resumable_downloads.py b/tests/s3/test_resumable_downloads.py
similarity index 91%
rename from boto/tests/test_resumable_downloads.py
rename to tests/s3/test_resumable_downloads.py
index d7ced7f..4e3e6ba 100755
--- a/boto/tests/test_resumable_downloads.py
+++ b/tests/s3/test_resumable_downloads.py
@@ -45,7 +45,16 @@
 from boto.exception import ResumableTransferDisposition
 from boto.exception import ResumableDownloadException
 from boto.exception import StorageResponseError
-from boto.tests.cb_test_harnass import CallbackTestHarnass
+from cb_test_harnass import CallbackTestHarnass
+
+# We don't use the OAuth2 authentication plugin directly; importing it here
+# ensures that it's loaded and available by default.
+try:
+    from oauth2_plugin import oauth2_plugin
+except ImportError:
+    # Do nothing - if user doesn't have OAuth2 configured it doesn't matter;
+    # and if they do, the tests will fail (as they should in that case).
+    pass
 
 
 class ResumableDownloadTests(unittest.TestCase):
@@ -113,9 +122,9 @@
 
         # Create the test bucket.
         hostname = socket.gethostname().split('.')[0]
-        uri_base_str = 'gs://res_download_test_%s_%s_%s' % (
+        uri_base_str = 'gs://res-download-test-%s-%s-%s' % (
             hostname, os.getpid(), int(time.time()))
-        cls.src_bucket_uri = storage_uri('%s_dst' % uri_base_str)
+        cls.src_bucket_uri = storage_uri('%s-dst' % uri_base_str)
         cls.src_bucket_uri.create_bucket()
 
         # Create test source objects.
@@ -212,7 +221,8 @@
             # We'll get a ResumableDownloadException at this point because
             # of CallbackTestHarnass (above). Check that the tracker file was
             # created correctly.
-            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            self.assertEqual(e.disposition,
+                             ResumableTransferDisposition.ABORT_CUR_PROCESS)
             self.assertTrue(os.path.exists(self.tracker_file_name))
             f = open(self.tracker_file_name)
             etag_line = f.readline()
@@ -237,6 +247,22 @@
         self.assertEqual(self.small_src_key_as_string,
                          self.small_src_key.get_contents_as_string())
 
+    def test_broken_pipe_recovery(self):
+        """
+        Tests handling of a Broken Pipe (which interacts with an httplib bug)
+        """
+        exception = IOError(errno.EPIPE, "Broken pipe")
+        harnass = CallbackTestHarnass(exception=exception)
+        res_download_handler = ResumableDownloadHandler(num_retries=1)
+        self.small_src_key.get_contents_to_file(
+            self.dst_fp, cb=harnass.call,
+            res_download_handler=res_download_handler)
+        # Ensure downloaded object has correct content.
+        self.assertEqual(self.small_src_key_size,
+                         get_cur_file_size(self.dst_fp))
+        self.assertEqual(self.small_src_key_as_string,
+                         self.small_src_key.get_contents_as_string())
+
     def test_non_retryable_exception_handling(self):
         """
         Tests resumable download that fails with a non-retryable exception
@@ -303,7 +329,8 @@
                 res_download_handler=res_download_handler)
             self.fail('Did not get expected ResumableDownloadException')
         except ResumableDownloadException, e:
-            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            self.assertEqual(e.disposition,
+                             ResumableTransferDisposition.ABORT_CUR_PROCESS)
             # Ensure a tracker file survived.
             self.assertTrue(os.path.exists(self.tracker_file_name))
         # Try it one more time; this time should succeed.
@@ -370,7 +397,9 @@
                 res_download_handler=res_download_handler)
             self.fail('Did not get expected ResumableDownloadException')
         except ResumableDownloadException, e:
-            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            # First abort (from harnass-forced failure) should be
+            # ABORT_CUR_PROCESS.
+            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT_CUR_PROCESS)
             # Ensure a tracker file survived.
             self.assertTrue(os.path.exists(self.tracker_file_name))
         # Try it again, this time with different src key (simulating an
@@ -380,6 +409,8 @@
                 self.dst_fp, res_download_handler=res_download_handler)
             self.fail('Did not get expected ResumableDownloadException')
         except ResumableDownloadException, e:
+            # This abort should be a hard abort (object size changing during
+            # transfer).
             self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
             self.assertNotEqual(
                 e.message.find('md5 signature doesn\'t match etag'), -1)
@@ -403,7 +434,10 @@
                 res_download_handler=res_download_handler)
             self.fail('Did not get expected ResumableDownloadException')
         except ResumableDownloadException, e:
-            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            # First abort (from harnass-forced failure) should be
+            # ABORT_CUR_PROCESS.
+            self.assertEqual(e.disposition,
+                             ResumableTransferDisposition.ABORT_CUR_PROCESS)
             # Ensure a tracker file survived.
             self.assertTrue(os.path.exists(self.tracker_file_name))
         # Before trying again change the first byte of the file fragment
@@ -419,6 +453,8 @@
                 res_download_handler=res_download_handler)
             self.fail('Did not get expected ResumableDownloadException')
         except ResumableDownloadException, e:
+            # This abort should be a hard abort (file content changing during
+            # transfer).
             self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
             self.assertNotEqual(
                 e.message.find('md5 signature doesn\'t match etag'), -1)
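The disposition changes in the hunks above distinguish a recoverable per-process abort from a hard abort. A hypothetical sketch of that decision (`classify_failure` is illustrative only, not boto's API; the two constants mirror `ResumableTransferDisposition`):

```python
class Disposition(object):
    # Mirrors the two constants asserted on in the tests above.
    ABORT = 'ABORT'                          # hard abort: start over
    ABORT_CUR_PROCESS = 'ABORT_CUR_PROCESS'  # keep tracker; resume later

def classify_failure(retries_exhausted, content_changed):
    """A source object/file that changed mid-transfer is unrecoverable
    (hard ABORT); merely exhausting retries only aborts the current
    process, so a later process can resume from the tracker file."""
    if content_changed:
        return Disposition.ABORT
    if retries_exhausted:
        return Disposition.ABORT_CUR_PROCESS
    return None  # keep retrying in this process

assert classify_failure(True, False) == Disposition.ABORT_CUR_PROCESS
assert classify_failure(False, True) == Disposition.ABORT
```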
diff --git a/boto/tests/test_resumable_uploads.py b/tests/s3/test_resumable_uploads.py
similarity index 88%
rename from boto/tests/test_resumable_uploads.py
rename to tests/s3/test_resumable_uploads.py
index da7b086..bb0f7a9 100755
--- a/boto/tests/test_resumable_uploads.py
+++ b/tests/s3/test_resumable_uploads.py
@@ -22,7 +22,7 @@
 # IN THE SOFTWARE.
 
 """
-Tests of resumable uploads.
+Tests of Google Cloud Storage resumable uploads.
 """
 
 import errno
@@ -41,10 +41,20 @@
 import boto
 from boto.exception import GSResponseError
 from boto.gs.resumable_upload_handler import ResumableUploadHandler
+from boto.exception import InvalidUriError
 from boto.exception import ResumableTransferDisposition
 from boto.exception import ResumableUploadException
 from boto.exception import StorageResponseError
-from boto.tests.cb_test_harnass import CallbackTestHarnass
+from cb_test_harnass import CallbackTestHarnass
+
+# We don't use the OAuth2 authentication plugin directly; importing it here
+# ensures that it's loaded and available by default.
+try:
+    from oauth2_plugin import oauth2_plugin
+except ImportError:
+    # Do nothing - if user doesn't have OAuth2 configured it doesn't matter;
+    # and if they do, the tests will fail (as they should in that case).
+    pass
 
 
 class ResumableUploadTests(unittest.TestCase):
@@ -55,31 +65,29 @@
     def get_suite_description(self):
         return 'Resumable upload test suite'
 
-    @classmethod
-    def setUp(cls):
+    def setUp(self):
         """
         Creates dst_key needed by all tests.
 
         This method's namingCase is required by the unittest framework.
         """
-        cls.dst_key = cls.dst_key_uri.new_key(validate=False)
+        self.dst_key = self.dst_key_uri.new_key(validate=False)
 
-    @classmethod
-    def tearDown(cls):
+    def tearDown(self):
         """
         Deletes any objects or files created by last test run.
 
         This method's namingCase is required by the unittest framework.
         """
         try:
-            cls.dst_key_uri.delete_key()
+            self.dst_key_uri.delete_key()
         except GSResponseError:
             # Ignore possible not-found error.
             pass
         # Recursively delete dst dir and then re-create it, so in effect we
         # remove all dirs and files under that directory.
-        shutil.rmtree(cls.tmp_dir)
-        os.mkdir(cls.tmp_dir)
+        shutil.rmtree(self.tmp_dir)
+        os.mkdir(self.tmp_dir)
 
     @staticmethod
     def build_test_input_file(size):
@@ -95,6 +103,31 @@
         return (file_as_string, StringIO.StringIO(file_as_string))
 
     @classmethod
+    def get_dst_bucket_uri(cls, debug):
+        """A unique bucket to test."""
+        hostname = socket.gethostname().split('.')[0]
+        uri_base_str = 'gs://res-upload-test-%s-%s-%s' % (
+            hostname, os.getpid(), int(time.time()))
+        return boto.storage_uri('%s-dst' % uri_base_str, debug=debug)
+
+    @classmethod
+    def get_dst_key_uri(cls):
+        """A key to test."""
+        return cls.dst_bucket_uri.clone_replace_name('obj')
+
+    @classmethod
+    def get_staged_host(cls):
+        """URL of an existing bucket."""
+        return 'pub.commondatastorage.googleapis.com'
+
+    @classmethod
+    def get_invalid_upload_id(cls):
+        return (
+            'http://%s/?upload_id='
+            'AyzB2Uo74W4EYxyi5dp_-r68jz8rtbvshsv4TX7srJVkJ57CxTY5Dw2' % (
+                cls.get_staged_host()))
+
+    @classmethod
     def set_up_class(cls, debug):
         """
         Initializes test suite.
@@ -122,13 +155,9 @@
         cls.tmp_dir = tempfile.mkdtemp(prefix=cls.tmpdir_prefix)
 
         # Create the test bucket.
-        hostname = socket.gethostname().split('.')[0]
-        cls.uri_base_str = 'gs://res_upload_test_%s_%s_%s' % (
-            hostname, os.getpid(), int(time.time()))
-        cls.dst_bucket_uri = boto.storage_uri('%s_dst' %
-                                              cls.uri_base_str, debug=debug)
+        cls.dst_bucket_uri = cls.get_dst_bucket_uri(debug)
         cls.dst_bucket_uri.create_bucket()
-        cls.dst_key_uri = cls.dst_bucket_uri.clone_replace_name('obj')
+        cls.dst_key_uri = cls.get_dst_key_uri()
 
         cls.tracker_file_name = '%s%suri_tracker' % (cls.tmp_dir, os.sep)
 
@@ -138,9 +167,7 @@
         f.write('ftp://example.com')
         f.close()
 
-        cls.invalid_upload_id = (
-            'http://pub.commondatastorage.googleapis.com/?upload_id='
-            'AyzB2Uo74W4EYxyi5dp_-r68jz8rtbvshsv4TX7srJVkJ57CxTY5Dw2')
+        cls.invalid_upload_id = cls.get_invalid_upload_id()
         cls.invalid_upload_id_tracker_file_name = (
             '%s%sinvalid_upload_id_tracker' % (cls.tmp_dir, os.sep))
         f = open(cls.invalid_upload_id_tracker_file_name, 'w')
@@ -156,9 +183,6 @@
         """
         if not hasattr(cls, 'created_test_data'):
             return
-        # Call cls.tearDown() in case the tests got interrupted, to ensure
-        # dst objects get deleted.
-        cls.tearDown()
 
         # Retry (for up to 2 minutes) the bucket gets deleted (it may not
         # the first time round, due to eventual consistency of bucket delete
@@ -210,7 +234,8 @@
             # We'll get a ResumableUploadException at this point because
             # of CallbackTestHarnass (above). Check that the tracker file was
             # created correctly.
-            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            self.assertEqual(e.disposition,
+                             ResumableTransferDisposition.ABORT_CUR_PROCESS)
             self.assertTrue(os.path.exists(self.tracker_file_name))
             f = open(self.tracker_file_name)
             uri_from_file = f.readline().strip()
@@ -234,6 +259,21 @@
         self.assertEqual(self.small_src_file_as_string,
                          self.dst_key.get_contents_as_string())
 
+    def test_broken_pipe_recovery(self):
+        """
+        Tests handling of a Broken Pipe (which interacts with an httplib bug)
+        """
+        exception = IOError(errno.EPIPE, "Broken pipe")
+        harnass = CallbackTestHarnass(exception=exception)
+        res_upload_handler = ResumableUploadHandler(num_retries=1)
+        self.dst_key.set_contents_from_file(
+            self.small_src_file, cb=harnass.call,
+            res_upload_handler=res_upload_handler)
+        # Ensure uploaded object has correct content.
+        self.assertEqual(self.small_src_file_size, self.dst_key.size)
+        self.assertEqual(self.small_src_file_as_string,
+                         self.dst_key.get_contents_as_string())
+
     def test_non_retryable_exception_handling(self):
         """
         Tests a resumable upload that fails with a non-retryable exception
@@ -298,7 +338,8 @@
                 res_upload_handler=res_upload_handler)
             self.fail('Did not get expected ResumableUploadException')
         except ResumableUploadException, e:
-            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            self.assertEqual(e.disposition,
+                             ResumableTransferDisposition.ABORT_CUR_PROCESS)
             # Ensure a tracker file survived.
             self.assertTrue(os.path.exists(self.tracker_file_name))
         # Try it one more time; this time should succeed.
@@ -388,7 +429,9 @@
                 res_upload_handler=res_upload_handler)
             self.fail('Did not get expected ResumableUploadException')
         except ResumableUploadException, e:
-            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
+            # First abort (from harnass-forced failure) should be
+            # ABORT_CUR_PROCESS.
+            self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT_CUR_PROCESS)
             # Ensure a tracker file survived.
             self.assertTrue(os.path.exists(self.tracker_file_name))
         # Try it again, this time with different size source file.
@@ -401,9 +444,10 @@
                 self.largest_src_file, res_upload_handler=res_upload_handler)
             self.fail('Did not get expected ResumableUploadException')
         except ResumableUploadException, e:
+            # This abort should be a hard abort (file size changing during
+            # transfer).
             self.assertEqual(e.disposition, ResumableTransferDisposition.ABORT)
-            self.assertNotEqual(
-                e.message.find('attempt to upload a different size file'), -1)
+            self.assertNotEqual(e.message.find('file size changed'), -1, e.message) 
 
     def test_upload_with_file_size_change_during_upload(self):
         """
@@ -453,8 +497,11 @@
             self.assertNotEqual(
                 e.message.find('md5 signature doesn\'t match etag'), -1)
             # Ensure the bad data wasn't left around.
-            all_keys = self.dst_key_uri.get_all_keys()
-            self.assertEqual(0, len(all_keys))
+            try:
+                self.dst_key_uri.get_key()
+                self.fail('Did not get expected InvalidUriError')
+            except InvalidUriError, e:
+                pass
 
     def test_upload_with_content_length_header_set(self):
         """
diff --git a/boto/tests/test_s3versioning.py b/tests/s3/test_versioning.py
similarity index 98%
rename from boto/tests/test_s3versioning.py
rename to tests/s3/test_versioning.py
index b778db0..7a84b99 100644
--- a/boto/tests/test_s3versioning.py
+++ b/tests/s3/test_versioning.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
 # Copyright (c) 2010 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
 # All rights reserved.
@@ -49,7 +47,7 @@
         d = bucket.get_versioning_status()
         assert not d.has_key('Versioning')
         bucket.configure_versioning(versioning=True)
-        time.sleep(5)
+        time.sleep(15)
         d = bucket.get_versioning_status()
         assert d['Versioning'] == 'Enabled'
         
diff --git a/boto/tests/__init__.py b/tests/sdb/__init__.py
similarity index 93%
copy from boto/tests/__init__.py
copy to tests/sdb/__init__.py
index 449bd16..b3fc3a0 100644
--- a/boto/tests/__init__.py
+++ b/tests/sdb/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -18,6 +18,3 @@
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-#
-
-
diff --git a/boto/tests/test_sdbconnection.py b/tests/sdb/test_connection.py
similarity index 98%
rename from boto/tests/test_sdbconnection.py
rename to tests/sdb/test_connection.py
index eac57f7..a834a9d 100644
--- a/boto/tests/test_sdbconnection.py
+++ b/tests/sdb/test_connection.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python
-
 # Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
 # All rights reserved.
diff --git a/boto/tests/__init__.py b/tests/sqs/__init__.py
similarity index 93%
copy from boto/tests/__init__.py
copy to tests/sqs/__init__.py
index 449bd16..b3fc3a0 100644
--- a/boto/tests/__init__.py
+++ b/tests/sqs/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -18,6 +18,3 @@
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
-#
-
-
diff --git a/boto/tests/test_sqsconnection.py b/tests/sqs/test_connection.py
similarity index 97%
rename from boto/tests/test_sqsconnection.py
rename to tests/sqs/test_connection.py
index dd0cfcc..6996a54 100644
--- a/boto/tests/test_sqsconnection.py
+++ b/tests/sqs/test_connection.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python
-
 # Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
 # All rights reserved.
@@ -45,7 +43,8 @@
     
         # try illegal name
         try:
-            queue = c.create_queue('bad_queue_name')
+            queue = c.create_queue('bad*queue*name')
+            self.fail('queue name should have been bad')
         except SQSError:
             pass
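The queue-name change above works because underscores are legal in SQS queue names while '*' is not, so the old name never triggered the expected error. A rough client-side sketch of the rule (the regex is an approximation of the service-side validation, not boto code):

```python
import re

# SQS queue names may contain alphanumerics, hyphens and underscores,
# up to 80 characters; anything else is rejected by the service.
_VALID_QUEUE_NAME = re.compile(r'^[A-Za-z0-9_-]{1,80}$')

def is_valid_queue_name(name):
    return bool(_VALID_QUEUE_NAME.match(name))

assert is_valid_queue_name('bad_queue_name')      # underscores are legal
assert not is_valid_queue_name('bad*queue*name')  # '*' is not
```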
         
diff --git a/tests/test.py b/tests/test.py
new file mode 100755
index 0000000..9e14cda
--- /dev/null
+++ b/tests/test.py
@@ -0,0 +1,114 @@
+#!/usr/bin/env python
+# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+"""
+Run the boto unit tests.
+"""
+
+import logging
+import sys
+import unittest
+import getopt
+
+from sqs.test_connection import SQSConnectionTest
+from s3.test_connection import S3ConnectionTest
+from s3.test_versioning import S3VersionTest
+from s3.test_encryption import S3EncryptionTest
+from s3.test_gsconnection import GSConnectionTest
+from s3.test_https_cert_validation import CertValidationTest
+from ec2.test_connection import EC2ConnectionTest
+from autoscale.test_connection import AutoscaleConnectionTest
+from sdb.test_connection import SDBConnectionTest
+from cloudfront.test_signed_urls import CloudfrontSignedUrlsTest
+
+def usage():
+    print "test.py  [-t testsuite] [-v verbosity]"
+    print "    -t   run specific testsuite (s3|ssl|s3ver|s3nover|gs|sqs|ec2|autoscale|sdb|cloudfront|all)"
+    print "    -v   verbosity (0|1|2)"
+
+def main():
+    try:
+        opts, args = getopt.getopt(sys.argv[1:], "ht:v:",
+                                   ["help", "testsuite=", "verbosity="])
+    except getopt.GetoptError:
+        usage()
+        sys.exit(2)
+    testsuite = "all"
+    verbosity = 1
+    for o, a in opts:
+        if o in ("-h", "--help"):
+            usage()
+            sys.exit()
+        if o in ("-t", "--testsuite"):
+            testsuite = a
+        if o in ("-v", "--verbosity"):
+            verbosity = int(a)
+    if len(args) != 0:
+        usage()
+        sys.exit()
+    try:
+        tests = suite(testsuite)
+    except ValueError:
+        usage()
+        sys.exit()
+    if verbosity > 1:
+        logging.basicConfig(level=logging.DEBUG)
+    unittest.TextTestRunner(verbosity=verbosity).run(tests)
+
+def suite(testsuite="all"):
+    tests = unittest.TestSuite()
+    if testsuite == "all":
+        tests.addTest(unittest.makeSuite(SQSConnectionTest))
+        tests.addTest(unittest.makeSuite(S3ConnectionTest))
+        tests.addTest(unittest.makeSuite(EC2ConnectionTest))
+        tests.addTest(unittest.makeSuite(SDBConnectionTest))
+        tests.addTest(unittest.makeSuite(AutoscaleConnectionTest))
+        tests.addTest(unittest.makeSuite(CloudfrontSignedUrlsTest))
+    elif testsuite == "s3":
+        tests.addTest(unittest.makeSuite(S3ConnectionTest))
+        tests.addTest(unittest.makeSuite(S3VersionTest))
+        tests.addTest(unittest.makeSuite(S3EncryptionTest))
+    elif testsuite == "ssl":
+        tests.addTest(unittest.makeSuite(CertValidationTest))
+    elif testsuite == "s3ver":
+        tests.addTest(unittest.makeSuite(S3VersionTest))
+    elif testsuite == "s3nover":
+        tests.addTest(unittest.makeSuite(S3ConnectionTest))
+        tests.addTest(unittest.makeSuite(S3EncryptionTest))
+    elif testsuite == "gs":
+        tests.addTest(unittest.makeSuite(GSConnectionTest))
+    elif testsuite == "sqs":
+        tests.addTest(unittest.makeSuite(SQSConnectionTest))
+    elif testsuite == "ec2":
+        tests.addTest(unittest.makeSuite(EC2ConnectionTest))
+    elif testsuite == "autoscale":
+        tests.addTest(unittest.makeSuite(AutoscaleConnectionTest))
+    elif testsuite == "sdb":
+        tests.addTest(unittest.makeSuite(SDBConnectionTest))
+    elif testsuite == "cloudfront":
+        tests.addTest(unittest.makeSuite(CloudfrontSignedUrlsTest))
+    else:
+        raise ValueError("Invalid choice.")
+    return tests
+
+if __name__ == "__main__":
+    main()
diff --git a/tests/utils/test_password.py b/tests/utils/test_password.py
new file mode 100644
index 0000000..9bfb638
--- /dev/null
+++ b/tests/utils/test_password.py
@@ -0,0 +1,101 @@
+# Copyright (c) 2010 Robert Mela
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import unittest
+
+
+import logging
+log = logging.getLogger(__file__)
+
+class TestPassword(unittest.TestCase):
+    """Test basic password functionality"""
+    
+    def clstest(self, cls):
+
+        """Ensure that password.__eq__ hashes the test value before comparing"""
+
+        password = cls('foo')
+        log.debug("Password %s" % password)
+        self.assertNotEquals(password, 'foo')
+
+        password.set('foo')
+        hashed = str(password)
+        self.assertEquals(password, 'foo')
+        self.assertEquals(password.str, hashed)
+
+        password = cls(hashed)
+        self.assertNotEquals(password.str, 'foo')
+        self.assertEquals(password, 'foo')
+        self.assertEquals(password.str, hashed)
+
+ 
+    def test_aaa_version_1_9_default_behavior(self):
+        from boto.utils import Password
+        self.clstest(Password)
+
+    def test_custom_hashclass(self):
+
+        from boto.utils import Password
+        import hashlib
+
+        class SHA224Password(Password):
+            hashfunc = hashlib.sha224
+
+        password = SHA224Password()
+        password.set('foo')
+        self.assertEquals(hashlib.sha224('foo').hexdigest(), str(password))
+ 
+    def test_hmac(self):
+        from boto.utils import Password
+        import hmac
+
+        def hmac_hashfunc(cls, msg):
+            log.debug("\n%s %s" % (cls.__class__, cls))
+            return hmac.new('mysecretkey', msg)
+
+        class HMACPassword(Password):
+            hashfunc = hmac_hashfunc
+
+        self.clstest(HMACPassword)
+        password = HMACPassword()
+        password.set('foo')
+
+        self.assertEquals(str(password), hmac.new('mysecretkey', 'foo').hexdigest())
+
+    def test_constructor(self):
+        from boto.utils import Password
+        import hmac
+
+        hmac_hashfunc = lambda msg: hmac.new('mysecretkey', msg)
+
+        password = Password(hashfunc=hmac_hashfunc)
+        password.set('foo')
+        self.assertEquals(password.str, hmac.new('mysecretkey', 'foo').hexdigest())
+
+        
+       
+if __name__ == '__main__':
+    import sys
+    sys.path = ['../../'] + sys.path
+    #logging.basicConfig()
+    #log.setLevel(logging.DEBUG)
+    suite = unittest.TestLoader().loadTestsFromTestCase(TestPassword)
+    unittest.TextTestRunner(verbosity=2).run(suite)
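
The TestPassword cases above all depend on one contract: `set()` hashes the plaintext immediately, and `__eq__` hashes the right-hand side before comparing, so equality works against plaintext without ever storing it. As a hedged, self-contained sketch of that contract (the md5 default and attribute names here are inferred from these tests, not copied from boto's actual `boto.utils.Password` implementation):

```python
import hashlib

class Password(object):
    """Hypothetical minimal sketch of the hash-before-compare behavior
    exercised by TestPassword; the md5 default is an assumption based on
    the "version 1.9 default behavior" test."""
    hashfunc = staticmethod(hashlib.md5)

    def __init__(self, str=None, hashfunc=None):
        # str holds the *hashed* value, never the plaintext
        self.str = str
        if hashfunc is not None:
            self.hashfunc = hashfunc

    def set(self, value):
        # hash the plaintext immediately; only the digest is stored
        self.str = self.hashfunc(value.encode('utf-8')).hexdigest()

    def __str__(self):
        return str(self.str)

    def __eq__(self, other):
        # hash the comparison value first, so callers can compare
        # against plaintext while the object only ever sees digests
        if other is None:
            return self.str is None
        return self.hashfunc(other.encode('utf-8')).hexdigest() == self.str
```

Under this sketch, `p = Password(); p.set('foo'); p == 'foo'` holds, and re-wrapping the stored digest (`Password(str(p))`) still compares equal to the original plaintext, which is exactly the round-trip `clstest` verifies.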