Updating mozdownload (excluding tests)

We need this to get the WebRTC Firefox interop test back
online. Our ancient mozdownload no longer works, but this
one appears to work.

BUG=545862
R=kjellander@chromium.org

Review URL: https://codereview.chromium.org/1451373002 .
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 0000000..5dc1285
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,18 @@
+## How to contribute
+So you want to write code and get it landed in the official mozdownload repository? Then first fork [our repository](https://github.com/mozilla/mozdownload) into your own Github account, and create a local clone of it as described in the [installation instructions](https://github.com/mozilla/mozdownload#installation). The latter will be used to get new features implemented or bugs fixed. Once done and you have the code locally on the disk, you can get started. We advice to not work directly on the master branch, but to create a separate branch for each issue you are working on. That way you can easily switch between different work, and you can update each one for latest changes on upstream master individually. Check also our [best practices for Git](http://ateam-bootcamp.readthedocs.org/en/latest/reference/git_github.html).
+
+### Writing Code
+When writing code, just follow our [Python style guide](http://ateam-bootcamp.readthedocs.org/en/latest/reference/python-style.html), and also test with [pylama](https://pypi.python.org/pypi/pylama). If anything about the style is unclear, looking at existing code might help you to understand it better.
+
+### Submitting Patches
+When you think the code is ready for review, create a pull request on Github. Owners of the repository watch for new PRs and review them at regular intervals. For each change in the PR we automatically run all the tests via [Travis CI](http://travis-ci.org/). If tests fail, make sure to address the failures immediately; otherwise you can wait for a review. Comments given in a review have to be integrated. For those changes, create a separate commit and push it to your remote development branch. Don't forget to add a comment in the PR afterward, so everyone gets notified by Github. Keep in mind that reviews can span multiple cycles until the owners are happy with the new code.
+
+## Managing the Repository
+
+### Merging Pull Requests
+Once a PR is in its final state it needs to be merged into the upstream master branch. Please **DO NOT** use the Github merge button for that, but merge it yourself on the command line. The reason is that we want to keep a clean history. Before pushing the changes to upstream master, make sure that all individual commits have been squashed into a single one with a commit message ending with the issue number, e.g. "Fix for broken download behavior (#45)". Also check with `git log` that you are not pushing merge commits. Only merge PRs where Travis does not report any failure!
+
+### Versioning
+We release new versions of mozdownload at irregular intervals. To plan these, we make use of milestones in the repository. For each implemented feature or fix, the issue's milestone flag should be set to the next upcoming release. That way it's easier to see what will be part of the next release.
+
+When releasing a new version of mozdownload, please also update the history.md file with all the landed features and bug fixes. You are advised to use the [following issue](https://github.com/mozilla/mozdownload/issues/303) as a template for the new release issue which needs to be filed. Please also check the associated PR for the code changes to be made.
\ No newline at end of file
diff --git a/History.md b/History.md
new file mode 100644
index 0000000..6f68f4a
--- /dev/null
+++ b/History.md
@@ -0,0 +1,204 @@
+1.19 / 2015-10-23
+=================
+
+ * Fix parser and scraper to handle new S3 based archive.mozilla.org (#329)
+
+1.18.1 / 2015-10-21
+===================
+
+ * Workaround for downloading files via ftp-origin-scl3.mozilla.org (#329)
+
+1.18 / 2015-09-14
+=================
+
+ * Improve API and documentation (#324)
+ * Create Factory class for various Scraper instances (#320)
+ * Switch from optparse to argparse (#318)
+ * Move CLI into its own module (#316)
+ * test_direct_scraper.py should make use of local http server (#214)
+ * Add instructions for contribution (#310)
+ * Enhance documentation for developers (#307)
+
+1.17 / 2015-08-03
+=================
+
+ * Remove dependency for mozlog (#304)
+ * Replace ftp.mozilla.org with archive.mozilla.org (#302)
+ * Removed all unused variables from get_build_info_for_version (#169)
+ * Adapt to mozlog 3.0 (#300)
+ * Re-add scraper.cli to __init__.py (#298)
+
+1.16 / 2015-06-30
+=================
+
+ * Remove support for Python 2.6 (#250)
+ * Enhance Travis CI job with more validity checks (#157)
+ * Add support for downloading Fennec (Firefox Mobile) daily builds (#292)
+ * Update dependencies for python packages for test environment (#293)
+ * Add waffle.io badge to README
+ * Remove support for unsigned candidate builds (#108)
+
+1.15 / 2015-06-02
+=================
+
+ * Daily Scraper: Limit Check for non-empty build dirs (#255)
+ * Use -stub suffix for release and candidate builds (#167)
+ * Upgrade dependency for requests module to 2.7.0 (#271)
+ * Ensure that --destination is always using an absolute path (#267)
+ * Test for correct choice of scraper (#257)
+ * Ensure to close the HTTP connection also in case of failures (#275)
+ * Close HTTP connections after traversing directories on ftp.mozilla.org (#272)
+
+1.14 / 2015-03-05
+=================
+
+ * Allow download of files with specified extension for Tinderbox builds on Windows (#264)
+ * Replace --directory option with --destination to allow a filename for the target file (#92)
+ * Always show correct build number for candidate builds (#232)
+ * Add test for invalid branches of daily builds (#236)
+ * `mac` platform option for tinderbox builds should default to `macosx64` (#215)
+ * Reverse check for non-empty tinderbox build directories (#253)
+
+1.13 / 2015-02-11
+=================
+
+ * Add support for Firefox try server builds (#239)
+ * If latest tinderbox folder is empty, get the next folder with a build (#143)
+ * Add official support for win64 builds (#243)
+ * Support downloading from sites with optional authentication (#195)
+ * Update all PLATFORM_FRAGMENTS values to regex (#154)
+ * Catch KeyboardInterrupt exception for user abort (#226)
+
+1.12 / 2014-09-10
+=================
+
+ * Display selected build when downloading (#149)
+ * Add support for downloading B2G desktop builds (#104)
+ * Download candidate builds from candidates/ and not nightly/ (#218)
+ * Add Travis CI build status and PyPI version badges to README (#220)
+ * Add Python 2.6 to test matrix (#210)
+ * Fix broken download of mac64 tinderbox builds (#144)
+ * Allow download even if content-length header is missing (#194)
+ * Convert run_tests script to Python (#168)
+ * Ensure that --date option is a valid date (#196)
+
+1.11.1 / 2014-02-04
+===================
+
+  * Revert "Adjust mozbase package dependencies to be more flexible (#206)"
+
+1.11 / 2014-02-03
+=================
+
+  * Adjust mozbase package dependencies to be more flexible (#201)
+  * Log the name of the output file for discovery (#199)
+  * Changed logger info level in tests to ERROR (#175)
+  * PEP8 fixes in test_daily_scraper (#188)
+
+1.10 / 2013-11-19
+=================
+
+  * Allow to download files with different extensions than exe (#119)
+  * Added stub support for TinderboxScraper (#180)
+  * Add tests for TinderboxScraper class (#161)
+  * Add tests for ReleaseCandidateScraper class (#160)
+  * Update run_tests.sh to force package version to our dependencies (#177)
+  * Add method to get the latest daily build (#163)
+  * Add tests for DailyScraper class (#159)
+  * Add target_url to ReleaseScraper tests (#171)
+  * Add tests for ReleaseScraper class (#156)
+  * Adding new tests using mozhttpd server
+  * Use mozlog as default logger (#116)
+  * Show user instructions when calling mozdownload without arguments (#150)
+  * Display found candidate builds when build number is given (#148)
+
+1.9 / 2013-08-29
+================
+
+  * Invalid branch or locale should display proper error message (#115)
+  * Fix PEP8 issues and add checking to Travis-CI (#140)
+  * Add support for stub installer on Windows (#29)
+  * On linux64 a 64-bit tinderbox build has to be downloaded (#138)
+  * Removed date_validation_regex from TinderboxScraper (#130)
+  * Add Travis-CI configuration for running the tests (#132)
+  * Added urljoin method for handling URLs (#123)
+  * Added test harness and first test (#10)
+  * Unable to download tinderbox builds by timestamp (#103)
+
+1.8 / 2013-07-25
+================
+
+  * Multiple matches are shown when specifying a unique build ID (#102)
+  * Filter potential build dirs by whether or not they contain a build (#11)
+  * Download the file specified with --url to the correct target folder (#105)
+  * Add pause between initial attempt and first retry (#106)
+  * Output details of matching builds (#17)
+  * Fallback to hostname if downloading from a URL without specifying a path (#89)
+  * Removed default timeout for downloads (#91)
+  * Fixed issues with --retry-attempts when download fails (#81)
+  * Add timeout for network requests (#86)
+  * Comply with PEP 8 (#63)
+  * Disable caching when fetching build information (#13)
+  * Add support for python requests (#83)
+
+1.7.2 / 2013-05-13
+==================
+
  * Add support for hidden release candidate builds (#77)
+
+1.7.1 / 2013-04-30
+==================
+
+  * total_seconds is not an attribute on timedelta in Python 2.6 (#73)
+
+1.7 / 2013-04-24
+==================
+
+  * Revert to no retries by default (#65)
+  * Add a percentage completion counter (#48)
+  * Remove default=None from OptionParser options (#43)
+  * Added full command line options to README (#44)
+  * Added version number to docstring and --help output (#34)
+  * Implement automatic retries for locating the binary (#58)
+  * Implemented a download timeout (#50)
+
+1.6 / 2013-02-20
+==================
+
+  * Automatically retry on failure (#39)
+  * Improve handling of exceptions when temporary file does not exist (#51)
+
+1.5 / 2012-12-04
+==================
+
+  * Don't download stub installer for tinderbox builds (#41)
+  * Support basic authentication (#36)
+  * Support downloading from an arbitrary URL (#35)
+
+1.4 / 2012-10-08
+==================
+
+  * Don't download stub installer by default (#31)
+  * Move build-id to option group (#28)
+
+1.3 / 2012-10-04
+==================
+
+  * Put --build-id option into Daily Build option group, where it appears to belong (#25)
+  * Ignore the build/ and dist/ directories created by setup.py (#24)
+  * Add support for downloading b2g builds (#23)
+
+1.2 / 2012-08-16
+==================
+
+  * Download of builds via build-id fails if more than one subfolder is present for that day (#19)
+
+1.1 / 2012-07-26
+==================
+
+  * Use last, not 1st .txt file in latest- dirs. Fixes issue #14.
+
+1.0 / 2012-05-23
+==================
+
+  * Initial version
diff --git a/PKG-INFO b/PKG-INFO
deleted file mode 100644
index 31a474c..0000000
--- a/PKG-INFO
+++ /dev/null
@@ -1,49 +0,0 @@
-Metadata-Version: 1.0
-Name: mozdownload
-Version: 1.6
-Summary: Script to download builds for Firefox and Thunderbird from the Mozilla server.
-Home-page: http://github.com/mozilla/mozdownload
-Author: Mozilla Automation and Testing Team
-Author-email: tools@lists.mozilla.com
-License: Mozilla Public License 2.0 (MPL 2.0)
-Description: # mozdownload
-        
-        [mozdownload](https://github.com/mozilla/mozdownload)
-        is a [python package](http://pypi.python.org/pypi/mozdownload)
-        which handles downloading of Mozilla applications.
-        
-        ## Command Line Usage
-        
-        The `mozdownload` command will download the application based on the provided
-        command line options.
-        
-        ### Examples
-        
-        Download the latest official Firefox release for your platform:
-        
-            mozdownload --version=latest
-        
-        Download the latest Firefox Aurora build for Windows (32bit):
-        
-            mozdownload --type=daily --branch=mozilla-aurora --platform=win32
-        
-        Download the latest official Thunderbird release for your platform: 
-        
-            mozdownload --application=thunderbird --version=latest
-        
-        Download the latest Earlybird build for Linux (64bit): 
-        
-            mozdownload --application=thunderbird --type=daily --branch=comm-aurora --platform=linux64
-        
-        Download this README file: 
-        
-            mozdownload --url=https://raw.github.com/mozilla/mozdownload/master/README.md
-        
-        Download a file from a URL protected with basic authentication: 
-        
-            mozdownload --url=http://example.com/secrets.txt --username=admin --password=password
-        
-        Run `mozdownload --help` for detailed information on the command line options.
-        
-Keywords: mozilla
-Platform: UNKNOWN
diff --git a/README.chromium b/README.chromium
index 2841177..f4a1807 100644
--- a/README.chromium
+++ b/README.chromium
@@ -1,13 +1,15 @@
-Name: Mozilla Download Library
-Short Name: mozdownload
-URL: https://github.com/mozilla/mozdownload
-Version: 1.6
-License: None
-License File: No
-Security Critical: No
-
-Description:
-Used by WebRTC Firefox interop tests (see https://code.google.com/p/chromium/codesearch#search/&q=FirefoxApprtcInteropTest&sq=package:chromium&type=cs) to download the latest Firefox nightly.
-
-Local Modifications:
-None
+Name: Mozilla Download Library
+Short Name: mozdownload
+URL: https://github.com/mozilla/mozdownload
+Version: 1.19
+License: None
+License File: No
+Security Critical: No
+
+Description:
+Used by WebRTC Firefox interop tests (see https://code.google.com/p/chromium/codesearch#search/&q=FirefoxApprtcInteropTest&sq=package:chromium&type=cs) to download the latest Firefox nightly.
+
+Local Modifications:
+Removed progressbar dependency from scraper.
+Removed cli from __init__.py
+Removed tests/ folder
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..77bdf70
--- /dev/null
+++ b/README.md
@@ -0,0 +1,99 @@
+[![PyPI version](https://badge.fury.io/py/mozdownload.svg)](http://badge.fury.io/py/mozdownload)
+[![Build Status](https://travis-ci.org/mozilla/mozdownload.svg?branch=master)](https://travis-ci.org/mozilla/mozdownload)
+[![Stories in Ready](https://badge.waffle.io/mozilla/mozdownload.png?label=ready&title=Ready)](https://waffle.io/mozilla/mozdownload)
+
+# mozdownload
+
+[mozdownload](https://github.com/mozilla/mozdownload)
+is a [python package](http://pypi.python.org/pypi/mozdownload)
+which handles downloading of Mozilla applications.
+
+## Installation
+
+If you only want to use the tool for downloading applications, we propose
+installing it via pip. The following command will install the latest release:
+
+    pip install mozdownload
+
+Otherwise follow the steps below to set up a development environment. It is
+recommended that [virtualenv](http://virtualenv.readthedocs.org/en/latest/installation.html)
+and [virtualenvwrapper](http://virtualenvwrapper.readthedocs.org/en/latest/)
+be used in conjunction with mozdownload. Start by installing these, then fork
+our repository into your own GitHub account and run:
+
+    git clone https://github.com/%your_account%/mozdownload.git
+    cd mozdownload
+    python setup.py develop
+
+More detailed developer documentation can be found in the [wiki](https://github.com/mozilla/mozdownload/wiki).
+
+## Command Line Usage
+
+The `mozdownload` command will download the application based on the provided
+command line options.
+
+### Examples
+
+Download the latest official Firefox release for your platform (as long as there is no
+64-bit build of Firefox for Windows, users on that platform have to download the 32-bit build):
+
+    mozdownload --version=latest
+
+Download the latest Firefox Aurora build for Windows (32bit):
+
+    mozdownload --type=daily --branch=mozilla-aurora --platform=win32
+
+Download the latest official Thunderbird release for your platform:
+
+    mozdownload --application=thunderbird --version=latest
+
+Download the latest Earlybird build for Linux (64bit):
+
+    mozdownload --application=thunderbird --type=daily --branch=comm-aurora --platform=linux64
+
+Download this README file:
+
+    mozdownload --url=https://raw.github.com/mozilla/mozdownload/master/README.md
+
+Download a file from a URL protected with basic authentication:
+
+    mozdownload --url=http://example.com/secrets.txt --username=admin --password=password
+
+Run `mozdownload --help` for detailed information on the command line options.
+
+### Command Line Options
+
+To see the full list of command line options, execute the command below and check the list
+of options for the build type to download:
+
+    mozdownload --help
+
+## API
+
+Besides the CLI, mozdownload also offers an API. To create specific scraper instances,
+the FactoryScraper class can be used. Here are some examples:
+
+    # Create a release scraper for the German locale of Firefox 40.0.3
+    from mozdownload import FactoryScraper
+    scraper = FactoryScraper('release', version='40.0.3', locale='de')
+
+    # Create a candidate scraper for Windows 32bit of Firefox 41.0b9
+    from mozdownload import FactoryScraper
+    scraper = FactoryScraper('candidate', version='41.0b9', platform='win32')
+
+    # Create a daily scraper for the latest Dev Edition build on the current platform
+    from mozdownload import FactoryScraper
+    scraper = FactoryScraper('daily', branch='mozilla-aurora')
+
+All these scraper instances allow you to retrieve the URL used to download the file, and the filename for the local destination:
+
+    from mozdownload import FactoryScraper
+    scraper = FactoryScraper('daily')
+    print scraper.url
+    print scraper.filename
+
+To actually download the remote file, call the download() method:
+
+    from mozdownload import FactoryScraper
+    scraper = FactoryScraper('daily')
+    filename = scraper.download()
diff --git a/mozdownload.egg-info/PKG-INFO b/mozdownload.egg-info/PKG-INFO
deleted file mode 100644
index 31a474c..0000000
--- a/mozdownload.egg-info/PKG-INFO
+++ /dev/null
@@ -1,49 +0,0 @@
-Metadata-Version: 1.0
-Name: mozdownload
-Version: 1.6
-Summary: Script to download builds for Firefox and Thunderbird from the Mozilla server.
-Home-page: http://github.com/mozilla/mozdownload
-Author: Mozilla Automation and Testing Team
-Author-email: tools@lists.mozilla.com
-License: Mozilla Public License 2.0 (MPL 2.0)
-Description: # mozdownload
-        
-        [mozdownload](https://github.com/mozilla/mozdownload)
-        is a [python package](http://pypi.python.org/pypi/mozdownload)
-        which handles downloading of Mozilla applications.
-        
-        ## Command Line Usage
-        
-        The `mozdownload` command will download the application based on the provided
-        command line options.
-        
-        ### Examples
-        
-        Download the latest official Firefox release for your platform:
-        
-            mozdownload --version=latest
-        
-        Download the latest Firefox Aurora build for Windows (32bit):
-        
-            mozdownload --type=daily --branch=mozilla-aurora --platform=win32
-        
-        Download the latest official Thunderbird release for your platform: 
-        
-            mozdownload --application=thunderbird --version=latest
-        
-        Download the latest Earlybird build for Linux (64bit): 
-        
-            mozdownload --application=thunderbird --type=daily --branch=comm-aurora --platform=linux64
-        
-        Download this README file: 
-        
-            mozdownload --url=https://raw.github.com/mozilla/mozdownload/master/README.md
-        
-        Download a file from a URL protected with basic authentication: 
-        
-            mozdownload --url=http://example.com/secrets.txt --username=admin --password=password
-        
-        Run `mozdownload --help` for detailed information on the command line options.
-        
-Keywords: mozilla
-Platform: UNKNOWN
diff --git a/mozdownload.egg-info/SOURCES.txt b/mozdownload.egg-info/SOURCES.txt
deleted file mode 100644
index fa8c91e..0000000
--- a/mozdownload.egg-info/SOURCES.txt
+++ /dev/null
@@ -1,12 +0,0 @@
-setup.py
-mozdownload/__init__.py
-mozdownload/parser.py
-mozdownload/scraper.py
-mozdownload/timezones.py
-mozdownload.egg-info/PKG-INFO
-mozdownload.egg-info/SOURCES.txt
-mozdownload.egg-info/dependency_links.txt
-mozdownload.egg-info/entry_points.txt
-mozdownload.egg-info/not-zip-safe
-mozdownload.egg-info/requires.txt
-mozdownload.egg-info/top_level.txt
\ No newline at end of file
diff --git a/mozdownload.egg-info/dependency_links.txt b/mozdownload.egg-info/dependency_links.txt
deleted file mode 100644
index 8b13789..0000000
--- a/mozdownload.egg-info/dependency_links.txt
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/mozdownload.egg-info/entry_points.txt b/mozdownload.egg-info/entry_points.txt
deleted file mode 100644
index 905888d..0000000
--- a/mozdownload.egg-info/entry_points.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-
-      # -*- Entry points: -*-
-      [console_scripts]
-      mozdownload = mozdownload:cli
-      
\ No newline at end of file
diff --git a/mozdownload.egg-info/not-zip-safe b/mozdownload.egg-info/not-zip-safe
deleted file mode 100644
index 8b13789..0000000
--- a/mozdownload.egg-info/not-zip-safe
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/mozdownload.egg-info/requires.txt b/mozdownload.egg-info/requires.txt
deleted file mode 100644
index a2b7e07..0000000
--- a/mozdownload.egg-info/requires.txt
+++ /dev/null
@@ -1 +0,0 @@
-mozinfo==0.3.3
\ No newline at end of file
diff --git a/mozdownload.egg-info/top_level.txt b/mozdownload.egg-info/top_level.txt
deleted file mode 100644
index 5c8217a..0000000
--- a/mozdownload.egg-info/top_level.txt
+++ /dev/null
@@ -1 +0,0 @@
-mozdownload
diff --git a/mozdownload/__init__.py b/mozdownload/__init__.py
index 604bfe4..9a09914 100644
--- a/mozdownload/__init__.py
+++ b/mozdownload/__init__.py
@@ -2,4 +2,23 @@
 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
 # You can obtain one at http://mozilla.org/MPL/2.0/.
 
-from scraper import *
+from .factory import FactoryScraper
+
+from .scraper import (Scraper,
+                      DailyScraper,
+                      DirectScraper,
+                      ReleaseScraper,
+                      ReleaseCandidateScraper,
+                      TinderboxScraper,
+                      TryScraper,
+                      )
+
+__all__ = [FactoryScraper,
+           Scraper,
+           DailyScraper,
+           DirectScraper,
+           ReleaseScraper,
+           ReleaseCandidateScraper,
+           TinderboxScraper,
+           TryScraper,
+           ]
diff --git a/mozdownload/cli.py b/mozdownload/cli.py
new file mode 100644
index 0000000..c22a6bb
--- /dev/null
+++ b/mozdownload/cli.py
@@ -0,0 +1,174 @@
+#!/usr/bin/env python
+
+# This Source Code Form is subject to the terms of the Mozilla Public
+# License, v. 2.0. If a copy of the MPL was not distributed with this
+# file, You can obtain one at http://mozilla.org/MPL/2.0/.
+
+import argparse
+import os
+import pkg_resources
+import sys
+
+from . import factory
+from . import scraper
+
+
+version = pkg_resources.require("mozdownload")[0].version
+
+__doc__ = """
+Module to handle downloads for different types of archive.mozilla.org hosted
+applications.
+
+mozdownload version: %(version)s
+""" % {'version': version}
+
+
+def cli():
+    """Main function for the downloader"""
+
+    parser = argparse.ArgumentParser(description=__doc__)
+    parser.add_argument('--application', '-a',
+                        dest='application',
+                        choices=scraper.APPLICATIONS,
+                        default='firefox',
+                        metavar='APPLICATION',
+                        help='The name of the application to download, default: "%(default)s"')
+    parser.add_argument('--base_url',
+                        dest='base_url',
+                        default=scraper.BASE_URL,
+                        metavar='BASE_URL',
+                        help='The base url to be used, default: "%(default)s"')
+    parser.add_argument('--build-number',
+                        dest='build_number',
+                        type=int,
+                        metavar='BUILD_NUMBER',
+                        help='Number of the build (for candidate, daily, and tinderbox builds)')
+    parser.add_argument('--destination', '-d',
+                        dest='destination',
+                        default=os.getcwd(),
+                        metavar='DESTINATION',
+                        help='Directory or file name to download the '
+                             'file to, default: current working directory')
+    parser.add_argument('--extension',
+                        dest='extension',
+                        metavar='EXTENSION',
+                        help='File extension of the build (e.g. "zip"), default: '
+                             'the standard build extension on the platform.')
+    parser.add_argument('--locale', '-l',
+                        dest='locale',
+                        metavar='LOCALE',
+                        help='Locale of the application, default: "en-US" or "multi"')
+    parser.add_argument('--log-level',
+                        action='store',
+                        dest='log_level',
+                        default='INFO',
+                        metavar='LOG_LEVEL',
+                        help='Threshold for log output (default: %(default)s)')
+    parser.add_argument('--password',
+                        dest='password',
+                        metavar='PASSWORD',
+                        help='Password for basic HTTP authentication.')
+    parser.add_argument('--platform', '-p',
+                        dest='platform',
+                        choices=scraper.PLATFORM_FRAGMENTS.keys(),
+                        metavar='PLATFORM',
+                        help='Platform of the application')
+    parser.add_argument('--retry-attempts',
+                        dest='retry_attempts',
+                        default=0,
+                        type=int,
+                        metavar='RETRY_ATTEMPTS',
+                        help='Number of times the download will be attempted in '
+                             'the event of a failure, default: %(default)s')
+    parser.add_argument('--retry-delay',
+                        dest='retry_delay',
+                        default=10.,
+                        type=float,
+                        metavar='RETRY_DELAY',
+                        help='Amount of time (in seconds) to wait between retry '
+                             'attempts, default: %(default)s')
+    parser.add_argument('--stub',
+                        dest='is_stub_installer',
+                        action='store_true',
+                        help='Stub installer (Only applicable to Windows builds).')
+    parser.add_argument('--timeout',
+                        dest='timeout',
+                        type=float,
+                        metavar='TIMEOUT',
+                        help='Amount of time (in seconds) until a download times out.')
+    parser.add_argument('--type', '-t',
+                        dest='scraper_type',
+                        choices=factory.scraper_types.keys(),
+                        default='release',
+                        metavar='SCRAPER_TYPE',
+                        help='Type of build to download, default: "%(default)s"')
+    parser.add_argument('--url',
+                        dest='url',
+                        metavar='URL',
+                        help='URL to download. Note: Reserved characters (such '
+                             'as &) must be escaped or put in quotes otherwise '
+                             'CLI output may be abnormal.')
+    parser.add_argument('--username',
+                        dest='username',
+                        metavar='USERNAME',
+                        help='Username for basic HTTP authentication.')
+    parser.add_argument('--version', '-v',
+                        dest='version',
+                        metavar='VERSION',
+                        help='Version of the application to be downloaded for release '
+                             'and candidate builds, i.e. "3.6"')
+
+    # Group for daily builds
+    group = parser.add_argument_group('Daily builds', 'Extra options for daily builds.')
+    group.add_argument('--branch',
+                       dest='branch',
+                       default='mozilla-central',
+                       metavar='BRANCH',
+                       help='Name of the branch, default: "%(default)s"')
+    group.add_argument('--build-id',
+                       dest='build_id',
+                       metavar='BUILD_ID',
+                       help='ID of the build to download.')
+    group.add_argument('--date',
+                       dest='date',
+                       metavar='DATE',
+                       help='Date of the build, default: latest build')
+
+    # Group for tinderbox builds
+    group = parser.add_argument_group('Tinderbox builds', 'Extra options for tinderbox builds.')
+    group.add_argument('--debug-build',
+                       dest='debug_build',
+                       action='store_true',
+                       help='Download a debug build.')
+
+    # Group for try builds
+    group = parser.add_argument_group('Try builds', 'Extra options for try builds.')
+    group.add_argument('--changeset',
+                       dest='changeset',
+                       help='Changeset of the try build to download')
+
+    kwargs = vars(parser.parse_args())
+
+    # Gives instructions to user when no arguments were passed
+    if len(sys.argv) == 1:
+        print(__doc__)
+        parser.format_usage()
+        print('Specify --help for more information on args. '
+              'Please see the README for examples.')
+        return
+
+    try:
+        scraper_type = kwargs.pop('scraper_type')
+
+        # If a URL has been specified use the direct scraper
+        if kwargs.get('url'):
+            scraper_type = 'direct'
+
+        build = factory.FactoryScraper(scraper_type, **kwargs)
+        build.download()
+    except KeyboardInterrupt:
+        print('\nDownload interrupted by the user')
+
+
+if __name__ == '__main__':
+    cli()
diff --git a/mozdownload/errors.py b/mozdownload/errors.py
new file mode 100644
index 0000000..4fcf394
--- /dev/null
+++ b/mozdownload/errors.py
@@ -0,0 +1,29 @@
+# This Source Code Form is subject to the terms of the Mozilla Public
+# License, v. 2.0. If a copy of the MPL was not distributed with this
+# file, You can obtain one at http://mozilla.org/MPL/2.0/.
+
+
+class NotSupportedError(Exception):
+    """Exception for a build not being supported"""
+    def __init__(self, message):
+        Exception.__init__(self, message)
+
+
+class NotFoundError(Exception):
+    """Exception for a resource not being found (e.g. no logs)"""
+    def __init__(self, message, location):
+        self.location = location
+        Exception.__init__(self, ': '.join([message, location]))
+
+
+class NotImplementedError(Exception):
+    """Exception for a feature which is not implemented yet"""
+    def __init__(self, message):
+        Exception.__init__(self, message)
+
+
+class TimeoutError(Exception):
+    """Exception for a download exceeding the allocated timeout"""
+    def __init__(self):
+        self.message = 'The download exceeded the allocated timeout'
+        Exception.__init__(self, self.message)
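Callers of this module are expected to catch these classes and, for `NotFoundError`, inspect the attached location. A runnable sketch of that usage (`NotFoundError` is restated verbatim so the snippet stands alone; `fetch_build` is a hypothetical stand-in for a scraper lookup):

```python
# NotFoundError as defined in mozdownload/errors.py above: it stores the
# probed location and folds it into the exception message.
class NotFoundError(Exception):
    """Exception for a resource not being found (e.g. no logs)"""
    def __init__(self, message, location):
        self.location = location
        Exception.__init__(self, ': '.join([message, location]))


def fetch_build(url):
    # Hypothetical stand-in for a scraper lookup that finds nothing
    raise NotFoundError('Specified build has not been found', url)
```

A caller can then report both the message and the location that failed, which is how the scraper's retry loop later in this patch uses it.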
diff --git a/mozdownload/factory.py b/mozdownload/factory.py
new file mode 100644
index 0000000..2d3ddc0
--- /dev/null
+++ b/mozdownload/factory.py
@@ -0,0 +1,118 @@
+# This Source Code Form is subject to the terms of the Mozilla Public
+# License, v. 2.0. If a copy of the MPL was not distributed with this
+# file, You can obtain one at http://mozilla.org/MPL/2.0/.
+
+from . import errors
+from . import scraper
+
+
+# List of known download scrapers
+scraper_types = {'candidate': scraper.ReleaseCandidateScraper,
+                 'daily': scraper.DailyScraper,
+                 'direct': scraper.DirectScraper,
+                 'release': scraper.ReleaseScraper,
+                 'tinderbox': scraper.TinderboxScraper,
+                 'try': scraper.TryScraper,
+                 }
+
+
+class FactoryScraper(scraper.Scraper):
+
+    def __init__(self, scraper_type, **kwargs):
+        """Creates an instance of a scraper class based on the given type.
+
+        :param scraper_type: The type of scraper to use.
+
+        Scraper:
+        :param application: The name of the application to download.
+        :param base_url: The base URL to be used.
+        :param build_number: Number of the build (for candidate, daily, and tinderbox builds).
+        :param destination: Directory or file name to download the file to.
+        :param extension: File extension of the build (e.g. ".zip").
+        :param is_stub_installer: Stub installer (Only applicable to Windows builds).
+        :param locale: Locale of the application.
+        :param log_level: Threshold for log output.
+        :param password: Password for basic HTTP authentication.
+        :param platform: Platform of the application.
+        :param retry_attempts: Number of times the download will be attempted
+            in the event of a failure.
+        :param retry_delay: Amount of time (in seconds) to wait between retry attempts.
+        :param timeout: Amount of time (in seconds) until a download times out.
+        :param url: URL to download.
+        :param username: Username for basic HTTP authentication.
+        :param version: Version of the application to be downloaded.
+
+        Daily builds:
+        :param branch: Name of the branch.
+        :param build_id: ID of the build to download.
+        :param date: Date of the build.
+
+        Tinderbox:
+        :param debug_build: Download a debug build.
+
+        Try:
+        :param changeset: Changeset of the try build to download.
+
+        """
+        # Check for valid arguments
+        if scraper_type in ('candidate', 'release') and not kwargs.get('version'):
+            raise ValueError('The version to download has to be specified.')
+
+        if kwargs.get('application') == 'b2g' and scraper_type in ('candidate', 'release'):
+            error_msg = '%s build is not yet supported for B2G' % scraper_type
+            raise errors.NotSupportedError(error_msg)
+
+        if kwargs.get('application') == 'fennec' and scraper_type not in ('daily',):
+            error_msg = '%s build is not yet supported for fennec' % scraper_type
+            raise errors.NotSupportedError(error_msg)
+
+        # Instantiate scraper and download the build
+        scraper_keywords = {'application': kwargs.get('application', 'firefox'),
+                            'base_url': kwargs.get('base_url', scraper.BASE_URL),
+                            'destination': kwargs.get('destination'),
+                            'extension': kwargs.get('extension'),
+                            'is_stub_installer': kwargs.get('is_stub_installer'),
+                            'locale': kwargs.get('locale'),
+                            'log_level': kwargs.get('log_level', 'INFO'),
+                            'password': kwargs.get('password'),
+                            'platform': kwargs.get('platform'),
+                            'retry_attempts': kwargs.get('retry_attempts', 0),
+                            'retry_delay': kwargs.get('retry_delay', 10),
+                            'timeout': kwargs.get('timeout'),
+                            'username': kwargs.get('username'),
+                            }
+
+        scraper_type_keywords = {
+            'release': {
+                'version': kwargs.get('version'),
+            },
+            'candidate': {
+                'build_number': kwargs.get('build_number'),
+                'version': kwargs.get('version'),
+            },
+            'daily': {
+                'branch': kwargs.get('branch', 'mozilla-central'),
+                'build_number': kwargs.get('build_number'),
+                'build_id': kwargs.get('build_id'),
+                'date': kwargs.get('date'),
+            },
+            'direct': {
+                'url': kwargs.get('url'),
+            },
+            'tinderbox': {
+                'branch': kwargs.get('branch', 'mozilla-central'),
+                'build_number': kwargs.get('build_number'),
+                'date': kwargs.get('date'),
+                'debug_build': kwargs.get('debug_build', False),
+            },
+            'try': {
+                'changeset': kwargs.get('changeset'),
+                'debug_build': kwargs.get('debug_build', False),
+            },
+        }
+
+        kwargs = scraper_keywords.copy()
+        kwargs.update(scraper_type_keywords.get(scraper_type, {}))
+
+        self.__class__ = scraper_types[scraper_type]
+        scraper_types[scraper_type].__init__(self, **kwargs)
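The notable trick in the last two lines of `FactoryScraper.__init__` is that the factory rebinds `self.__class__` to the concrete scraper class and re-runs that class's `__init__` on the same instance, so the object the caller gets back *is* the concrete scraper. A minimal sketch of the mechanism, with dummy stand-ins for the real scraper classes:

```python
# Dummy stand-ins for scraper.Scraper and scraper.DirectScraper,
# just detailed enough to demonstrate the __class__ rebinding trick.
class Scraper(object):
    def __init__(self, **kwargs):
        self.kwargs = kwargs


class DirectScraper(Scraper):
    def __init__(self, url=None, **kwargs):
        self.url = url
        Scraper.__init__(self, **kwargs)


scraper_types = {'direct': DirectScraper}


class FactoryScraper(Scraper):
    def __init__(self, scraper_type, **kwargs):
        # Rebind the instance to the concrete class, then run
        # that class's __init__ on the very same object.
        self.__class__ = scraper_types[scraper_type]
        scraper_types[scraper_type].__init__(self, **kwargs)
```

After construction, `isinstance(build, DirectScraper)` holds and all of the concrete class's methods are available, which is why the patch can call `build.download()` without knowing the type.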
diff --git a/mozdownload/parser.py b/mozdownload/parser.py
index 98390a5..8590286 100644
--- a/mozdownload/parser.py
+++ b/mozdownload/parser.py
@@ -6,24 +6,55 @@
 
 from HTMLParser import HTMLParser
 import re
+import requests
 import urllib
 
 
 class DirectoryParser(HTMLParser):
-    """Class to parse directory listings"""
+    """
+    Class to parse directory listings.
 
-    def __init__(self, url):
+    :param url: URL of the directory on the web server.
+    :param session: a requests Session instance used to fetch the directory
+                    content. If None, a new session will be created.
+    :param authentication: a tuple (username, password) to authenticate against
+                           the web server, or None for no authentication. Note
+                           that it will only be used if the given *session* is
+                           None.
+    :param timeout: timeout in seconds used when fetching the directory
+                    content.
+    """
+
+    def __init__(self, url, session=None, authentication=None, timeout=None):
+        if not session:
+            session = requests.Session()
+            session.auth = authentication
+        self.session = session
+        self.timeout = timeout
+
+        self.active_url = None
+        self.entries = []
+
         HTMLParser.__init__(self)
 
-        self.entries = [ ]
-        self.active_url = None
+        # Force the server to not send cached content
+        headers = {'Cache-Control': 'max-age=0'}
+        r = self.session.get(url, headers=headers, timeout=self.timeout)
 
-        req = urllib.urlopen(url)
-        self.feed(req.read())
+        try:
+            r.raise_for_status()
+            self.feed(r.text)
+        finally:
+            r.close()
 
-    def filter(self, regex):
-        pattern = re.compile(regex, re.IGNORECASE)
-        return [entry for entry in self.entries if pattern.match(entry)]
+    def filter(self, filter):
+        """Filter entries by calling function or applying regex."""
+
+        if hasattr(filter, '__call__'):
+            return [entry for entry in self.entries if filter(entry)]
+        else:
+            pattern = re.compile(filter, re.IGNORECASE)
+            return [entry for entry in self.entries if pattern.match(entry)]
 
     def handle_starttag(self, tag, attrs):
         if not tag == 'a':
@@ -31,7 +62,16 @@
 
         for attr in attrs:
             if attr[0] == 'href':
-                self.active_url = attr[1].strip('/')
+                # Links look like: /pub/firefox/nightly/2015/
+                # Trim the path down to its last component. Strip a possible
+                # trailing slash first so the split below always yields it.
+                has_final_slash = attr[1][-1] == '/'
+                self.active_url = attr[1].rstrip('/').split('/')[-1]
+
+                # Add back slash in case of sub folders
+                if has_final_slash:
+                    self.active_url = '%s/' % self.active_url
+
                 return
 
     def handle_endtag(self, tag):
@@ -43,6 +83,5 @@
         if not self.active_url:
             return
 
-        name = urllib.quote(data.strip('/'))
-        if self.active_url == name:
-            self.entries.append(self.active_url)
+        if self.active_url in (data, urllib.quote(data)):
+            self.entries.append(self.active_url.strip('/'))
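The reworked `filter()` above accepts either a regex string or a callable predicate. The core of that dual dispatch, extracted into a standalone function with hard-coded example entries for illustration:

```python
# Standalone sketch of DirectoryParser.filter's two modes: callables are
# applied as predicates, anything else is compiled as a case-insensitive
# regex and matched against each entry.
import re


def filter_entries(entries, criterion):
    if hasattr(criterion, '__call__'):
        return [entry for entry in entries if criterion(entry)]
    else:
        pattern = re.compile(criterion, re.IGNORECASE)
        return [entry for entry in entries if pattern.match(entry)]
```

The callable form is what lets `DailyScraper.get_build_info_for_date` later in this patch chain a regex filter with the `is_build_dir` predicate on the same entry list.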
diff --git a/mozdownload/scraper.py b/mozdownload/scraper.py
index 9011cab..9a68b2a 100755
--- a/mozdownload/scraper.py
+++ b/mozdownload/scraper.py
@@ -1,211 +1,344 @@
-#!/usr/bin/env python
-
 # This Source Code Form is subject to the terms of the Mozilla Public
 # License, v. 2.0. If a copy of the MPL was not distributed with this
 # file, You can obtain one at http://mozilla.org/MPL/2.0/.
 
-"""Module to handle downloads for different types of Firefox and Thunderbird builds."""
-
-
 from datetime import datetime
-from optparse import OptionParser, OptionGroup
+import logging
 import os
 import re
+import requests
 import sys
 import time
 import urllib
-import urllib2
+from urlparse import urlparse
 
 import mozinfo
 
+import errors
+
 from parser import DirectoryParser
 from timezones import PacificTimezone
+from utils import urljoin
 
 
-APPLICATIONS = ['b2g', 'firefox', 'thunderbird']
+APPLICATIONS = ('b2g', 'firefox', 'fennec', 'thunderbird')
+
+# Some applications contain all locales in a single build
+APPLICATIONS_MULTI_LOCALE = ('b2g', 'fennec')
+
+# Used if the application is named differently than the subfolder on the server
+APPLICATIONS_TO_FTP_DIRECTORY = {'fennec': 'mobile'}
 
 # Base URL for the path to all builds
-BASE_URL = 'https://ftp.mozilla.org/pub/mozilla.org'
+BASE_URL = 'https://archive.mozilla.org/pub/'
 
-PLATFORM_FRAGMENTS = {'linux': 'linux-i686',
-                      'linux64': 'linux-x86_64',
-                      'mac': 'mac',
-                      'mac64': 'mac64',
-                      'win32': 'win32',
-                      'win64': 'win64-x86_64'}
+# Chunk size when downloading a file
+CHUNK_SIZE = 16 * 1024
 
-DEFAULT_FILE_EXTENSIONS = {'linux': 'tar.bz2',
+DEFAULT_FILE_EXTENSIONS = {'android-api-9': 'apk',
+                           'android-api-11': 'apk',
+                           'android-x86': 'apk',
+                           'linux': 'tar.bz2',
                            'linux64': 'tar.bz2',
                            'mac': 'dmg',
                            'mac64': 'dmg',
                            'win32': 'exe',
                            'win64': 'exe'}
 
-class NotFoundException(Exception):
-    """Exception for a resource not being found (e.g. no logs)"""
-    def __init__(self, message, location):
-        self.location = location
-        Exception.__init__(self, ': '.join([message, location]))
+PLATFORM_FRAGMENTS = {'android-api-9': r'android-arm',
+                      'android-api-11': r'android-arm',
+                      'android-x86': r'android-i386',
+                      'linux': r'linux-i686',
+                      'linux64': r'linux-x86_64',
+                      'mac': r'mac',
+                      'mac64': r'mac(64)?',
+                      'win32': r'win32',
+                      'win64': r'win64(-x86_64)?'}
 
 
 class Scraper(object):
     """Generic class to download an application from the Mozilla server"""
 
-    def __init__(self, directory, version, platform=None,
-                 application='firefox', locale='en-US', extension=None,
-                 authentication=None, retry_attempts=3, retry_delay=10):
+    def __init__(self, destination=None, platform=None,
+                 application='firefox', locale=None, extension=None,
+                 username=None, password=None,
+                 retry_attempts=0, retry_delay=10.,
+                 is_stub_installer=False, timeout=None,
+                 log_level='INFO',
+                 base_url=BASE_URL):
 
         # Private properties for caching
-        self._target = None
+        self._filename = None
         self._binary = None
 
-        self.directory = directory
-        self.locale = locale
+        self.destination = destination or os.getcwd()
+
+        if not locale:
+            if application in APPLICATIONS_MULTI_LOCALE:
+                self.locale = 'multi'
+            else:
+                self.locale = 'en-US'
+        else:
+            self.locale = locale
+
         self.platform = platform or self.detect_platform()
-        self.version = version
-        self.extension = extension or DEFAULT_FILE_EXTENSIONS[self.platform]
-        self.authentication = authentication
+
+        self.session = requests.Session()
+        if (username, password) != (None, None):
+            self.session.auth = (username, password)
+
         self.retry_attempts = retry_attempts
         self.retry_delay = retry_delay
+        self.is_stub_installer = is_stub_installer
+        self.timeout_download = timeout
+        # This is the timeout passed to requests.get(). Unlike "auth",
+        # it cannot be set on the session, so we handle it separately.
+        self.timeout_network = 60.
+
+        logging.basicConfig(format=' %(levelname)s | %(message)s')
+        self.logger = logging.getLogger(self.__module__)
+        self.logger.setLevel(log_level)
 
         # build the base URL
         self.application = application
-        self.base_url = '/'.join([BASE_URL, self.application])
+        self.base_url = '%s/' % urljoin(
+            base_url,
+            APPLICATIONS_TO_FTP_DIRECTORY.get(self.application, self.application)
+        )
 
+        if extension:
+            self.extension = extension
+        else:
+            if self.application in APPLICATIONS_MULTI_LOCALE and \
+                    self.platform in ('win32', 'win64'):
+                # builds for APPLICATIONS_MULTI_LOCALE only exist in zip
+                self.extension = 'zip'
+            else:
+                self.extension = DEFAULT_FILE_EXTENSIONS[self.platform]
+
+        attempt = 0
+        while True:
+            attempt += 1
+            try:
+                self.get_build_info()
+                break
+            except (errors.NotFoundError, requests.exceptions.RequestException) as e:
+                if self.retry_attempts > 0:
+                    # Log only if multiple attempts are requested
+                    self.logger.warning("Build not found: '%s'" % e.message)
+                    self.logger.info('Will retry in %s seconds...' %
+                                     (self.retry_delay))
+                    time.sleep(self.retry_delay)
+                    self.logger.info("Retrying... (attempt %s)" % attempt)
+
+                if attempt >= self.retry_attempts:
+                    if getattr(e, 'response', None) is not None and \
+                            e.response.status_code == 404:
+                        message = "Specified build has not been found"
+                        raise errors.NotFoundError(message, e.response.url)
+                    else:
+                        raise
+
+    def _create_directory_parser(self, url):
+        return DirectoryParser(url,
+                               session=self.session,
+                               timeout=self.timeout_network)
 
     @property
     def binary(self):
         """Return the name of the build"""
 
-        if self._binary is None:
-            # Retrieve all entries from the remote virtual folder
-            parser = DirectoryParser(self.path)
-            if not parser.entries:
-                raise NotFoundException('No entries found', self.path)
-    
-            # Download the first matched directory entry
-            pattern = re.compile(self.binary_regex, re.IGNORECASE)
-            for entry in parser.entries:
-                try:
-                    self._binary = pattern.match(entry).group()
-                    break
-                except:
-                    # No match, continue with next entry
-                    continue
+        attempt = 0
 
-        if self._binary is None:
-            raise NotFoundException("Binary not found in folder", self.path)
-        else:
-            return self._binary
+        while self._binary is None:
+            attempt += 1
+            try:
+                # Retrieve all entries from the remote virtual folder
+                parser = self._create_directory_parser(self.path)
+                if not parser.entries:
+                    raise errors.NotFoundError('No entries found', self.path)
 
+                # Download the first matched directory entry
+                pattern = re.compile(self.binary_regex, re.IGNORECASE)
+                for entry in parser.entries:
+                    try:
+                        self._binary = pattern.match(entry).group()
+                        break
+                    except AttributeError:
+                        # No match, continue with next entry
+                        continue
+                else:
+                    raise errors.NotFoundError("Binary not found in folder",
+                                               self.path)
+            except (errors.NotFoundError, requests.exceptions.RequestException) as e:
+                if self.retry_attempts > 0:
+                    # Log only if multiple attempts are requested
+                    self.logger.warning("Build not found: '%s'" % e.message)
+                    self.logger.info('Will retry in %s seconds...' %
+                                     (self.retry_delay))
+                    time.sleep(self.retry_delay)
+                    self.logger.info("Retrying... (attempt %s)" % attempt)
+
+                if attempt >= self.retry_attempts:
+                    if getattr(e, 'response', None) is not None and \
+                            e.response.status_code == 404:
+                        message = "Specified build has not been found"
+                        raise errors.NotFoundError(message, self.path)
+                    else:
+                        raise
+
+        return self._binary
 
     @property
     def binary_regex(self):
         """Return the regex for the binary filename"""
 
-        raise NotImplementedError(sys._getframe(0).f_code.co_name)
-
+        raise errors.NotImplementedError(sys._getframe(0).f_code.co_name)
 
     @property
-    def final_url(self):
-        """Return the final URL of the build"""
+    def url(self):
+        """Return the URL of the build"""
 
-        return '/'.join([self.path, self.binary])
-
+        return urljoin(self.path, self.binary)
 
     @property
     def path(self):
-        """Return the path to the build"""
+        """Return the path to the build folder"""
 
-        return '/'.join([self.base_url, self.path_regex])
-
+        return urljoin(self.base_url, self.path_regex)
 
     @property
     def path_regex(self):
-        """Return the regex for the path to the build"""
+        """Return the regex for the path to the build folder"""
 
-        raise NotImplementedError(sys._getframe(0).f_code.co_name)
-
+        raise errors.NotImplementedError(sys._getframe(0).f_code.co_name)
 
     @property
     def platform_regex(self):
         """Return the platform fragment of the URL"""
 
-        return PLATFORM_FRAGMENTS[self.platform];
-
+        return PLATFORM_FRAGMENTS[self.platform]
 
     @property
-    def target(self):
-        """Return the target file name of the build"""
+    def filename(self):
+        """Return the local filename of the build"""
 
-        if self._target is None:
-            self._target = os.path.join(self.directory,
-                                        self.build_filename(self.binary))
-        return self._target
+        if self._filename is None:
+            if os.path.splitext(self.destination)[1]:
+                # If the filename has been given make use of it
+                target_file = self.destination
+            else:
+                # Otherwise create it from the build details
+                target_file = os.path.join(self.destination,
+                                           self.build_filename(self.binary))
 
+            self._filename = os.path.abspath(target_file)
+
+        return self._filename
+
+    def get_build_info(self):
+        """Returns additional build information in subclasses if necessary"""
+        pass
 
     def build_filename(self, binary):
         """Return the proposed filename with extension for the binary"""
 
-        raise NotImplementedError(sys._getframe(0).f_code.co_name)
-
+        raise errors.NotImplementedError(sys._getframe(0).f_code.co_name)
 
     def detect_platform(self):
         """Detect the current platform"""
 
         # For Mac and Linux 32bit we do not need the bits appended
-        if mozinfo.os == 'mac' or (mozinfo.os == 'linux' and mozinfo.bits == 32):
+        if mozinfo.os == 'mac' or \
+                (mozinfo.os == 'linux' and mozinfo.bits == 32):
             return mozinfo.os
         else:
             return "%s%d" % (mozinfo.os, mozinfo.bits)
 
-
     def download(self):
         """Download the specified file"""
 
-        attempts = 0
+        def total_seconds(td):
+            # Keep backward compatibility with Python 2.6 which doesn't have
+            # this method
+            if hasattr(td, 'total_seconds'):
+                return td.total_seconds()
+            else:
+                return (td.microseconds +
+                        (td.seconds + td.days * 24 * 3600) * 10 ** 6) / 10 ** 6
 
-        if not os.path.isdir(self.directory):
-            os.makedirs(self.directory)
+        attempt = 0
 
         # Don't re-download the file
-        if os.path.isfile(os.path.abspath(self.target)):
-            print "File has already been downloaded: %s" % (self.target)
-            return
+        if os.path.isfile(os.path.abspath(self.filename)):
+            self.logger.info("File has already been downloaded: %s" %
+                             (self.filename))
+            return self.filename
 
-        print 'Downloading from: %s' % (urllib.unquote(self.final_url))
-        tmp_file = self.target + ".part"
+        directory = os.path.dirname(self.filename)
+        if not os.path.isdir(directory):
+            os.makedirs(directory)
 
-        if self.authentication \
-           and self.authentication['username'] \
-           and self.authentication['password']:
-            password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
-            password_mgr.add_password(None,
-                                      self.final_url,
-                                      self.authentication['username'],
-                                      self.authentication['password'])
-            handler = urllib2.HTTPBasicAuthHandler(password_mgr)
-            opener = urllib2.build_opener(urllib2.HTTPHandler, handler)
-            urllib2.install_opener(opener)
+        self.logger.info('Downloading from: %s' %
+                         (urllib.unquote(self.url)))
+        self.logger.info('Saving as: %s' % self.filename)
+
+        tmp_file = self.filename + ".part"
 
         while True:
-            attempts += 1
+            attempt += 1
             try:
-                r = urllib2.urlopen(self.final_url)
-                CHUNK = 16 * 1024
+                start_time = datetime.now()
+
+                # Enable streaming mode so we can download content in chunks
+                r = self.session.get(self.url, stream=True)
+                r.raise_for_status()
+
+                content_length = r.headers.get('Content-length')
+                # ValueError: Value out of range if only total_size given
+                if content_length:
+                    total_size = int(content_length.strip())
+                    max_value = ((total_size / CHUNK_SIZE) + 1) * CHUNK_SIZE
+
+                bytes_downloaded = 0
+
                 with open(tmp_file, 'wb') as f:
-                    for chunk in iter(lambda: r.read(CHUNK), ''):
+                    for chunk in iter(lambda: r.raw.read(CHUNK_SIZE), ''):
                         f.write(chunk)
+                        bytes_downloaded += len(chunk)
+
+                        t1 = total_seconds(datetime.now() - start_time)
+                        if self.timeout_download and \
+                                t1 >= self.timeout_download:
+                            raise errors.TimeoutError
                 break
-            except (urllib2.HTTPError, urllib2.URLError):
+            except (requests.exceptions.RequestException, errors.TimeoutError) as e:
                 if tmp_file and os.path.isfile(tmp_file):
                     os.remove(tmp_file)
-                print 'Download failed! Retrying... (attempt %s)' % attempts
-                if attempts >= self.retry_attempts:
+                if self.retry_attempts > 0:
+                    # Log only if multiple attempts are requested
+                    self.logger.warning('Download failed: "%s"' % str(e))
+                    self.logger.info('Will retry in %s seconds...' %
+                                     (self.retry_delay))
+                    time.sleep(self.retry_delay)
+                    self.logger.info("Retrying... (attempt %s)" % attempt)
+                if attempt >= self.retry_attempts:
                     raise
                 time.sleep(self.retry_delay)
 
-        os.rename(tmp_file, self.target)
+        os.rename(tmp_file, self.filename)
+
+        return self.filename
+
+    def show_matching_builds(self, builds):
+        """Output the matching builds"""
+        self.logger.info('Found %s build%s: %s' % (
+            len(builds),
+            's' if len(builds) > 1 else '',
+            ' ... '.join([', '.join(builds[:5]), ', '.join(builds[-5:])])
+            if len(builds) > 10 else ', '.join(builds)))
 
 
 class DailyScraper(Scraper):
@@ -214,94 +347,160 @@
     def __init__(self, branch='mozilla-central', build_id=None, date=None,
                  build_number=None, *args, **kwargs):
 
-        Scraper.__init__(self, *args, **kwargs)
         self.branch = branch
+        self.build_id = build_id
+        self.date = date
+        self.build_number = build_number
+
+        Scraper.__init__(self, *args, **kwargs)
+
+    def get_build_info(self):
+        """Defines additional build information"""
 
         # Internally we access builds via index
-        if build_number is not None:
-            self.build_index = int(build_number) - 1
+        if self.build_number is not None:
+            self.build_index = int(self.build_number) - 1
         else:
             self.build_index = None
 
-        if build_id:
-            # A build id has been specified. Split up its components so the date
-            # and time can be extracted: '20111212042025' -> '2011-12-12 04:20:25'
-            self.date = datetime.strptime(build_id, '%Y%m%d%H%M%S')
-            self.builds, self.build_index = self.get_build_info_for_date(self.date,
-                                                                         has_time=True)
+        if self.build_id:
+            # A build id has been specified. Split up its components so the
+            # date and time can be extracted:
+            # '20111212042025' -> '2011-12-12 04:20:25'
+            self.date = datetime.strptime(self.build_id, '%Y%m%d%H%M%S')
 
-        elif date:
+        elif self.date:
             # A date (without time) has been specified. Use its value and the
             # build index to find the requested build for that day.
-            self.date = datetime.strptime(date, '%Y-%m-%d')
-            self.builds, self.build_index = self.get_build_info_for_date(self.date,
-                                                                         build_index=self.build_index)
-
+            try:
+                self.date = datetime.strptime(self.date, '%Y-%m-%d')
+            except ValueError:
+                raise ValueError('%s is not a valid date' % self.date)
         else:
-            # If no build id nor date have been specified the lastest available
+            # If neither a build id nor a date has been specified, the latest available
             # build of the given branch has to be identified. We also have to
             # retrieve the date of the build via its build id.
-            url = '%s/nightly/latest-%s/' % (self.base_url, self.branch)
+            self.date = self.get_latest_build_date()
 
-            print 'Retrieving the build status file from %s' % url
-            parser = DirectoryParser(url)
-            parser.entries = parser.filter(r'.*%s\.txt' % self.platform_regex)
-            if not parser.entries:
-                message = 'Status file for %s build cannot be found' % self.platform_regex
-                raise NotFoundException(message, url)
+        self.builds, self.build_index = self.get_build_info_for_date(
+            self.date, self.build_index)
 
-            # Read status file for the platform, retrieve build id, and convert to a date
-            status_file = url + parser.entries[-1]
-            f = urllib.urlopen(status_file)
-            self.date = datetime.strptime(f.readline().strip(), '%Y%m%d%H%M%S')
-            self.builds, self.build_index = self.get_build_info_for_date(self.date,
-                                                                         has_time=True)
+    def get_latest_build_date(self):
+        """Return the date of the latest available nightly build."""
+        if self.application not in ('fennec',):
+            url = urljoin(self.base_url, 'nightly', 'latest-%s/' % self.branch)
+        else:
+            url = urljoin(self.base_url, 'nightly', 'latest-%s-%s/' %
+                          (self.branch, self.platform))
 
-
-    def get_build_info_for_date(self, date, has_time=False, build_index=None):
-        url = '/'.join([self.base_url, self.monthly_build_list_regex])
-
-        print 'Retrieving list of builds from %s' % url
-        parser = DirectoryParser(url)
-        regex = r'%(DATE)s-(\d+-)+%(BRANCH)s%(L10N)s$' % {
-                    'DATE': date.strftime('%Y-%m-%d'),
-                    'BRANCH': self.branch,
-                    'L10N': '' if self.locale == 'en-US' else '-l10n'}
-        parser.entries = parser.filter(regex)
+        self.logger.info('Retrieving the build status file from %s' % url)
+        parser = self._create_directory_parser(url)
+        parser.entries = parser.filter(r'.*%s\.txt' % self.platform_regex)
         if not parser.entries:
-            message = 'Folder for builds on %s has not been found' % self.date.strftime('%Y-%m-%d')
-            raise NotFoundException(message, url)
+            message = 'Status file for %s build cannot be found' % \
+                self.platform_regex
+            raise errors.NotFoundError(message, url)
+
+        # Read status file for the platform, retrieve build id,
+        # and convert to a date
+        headers = {'Cache-Control': 'max-age=0'}
+
+        r = self.session.get(url + parser.entries[-1], headers=headers)
+        try:
+            r.raise_for_status()
+
+            return datetime.strptime(r.text.split('\n')[0], '%Y%m%d%H%M%S')
+        finally:
+            r.close()
+
+    def is_build_dir(self, folder_name):
+        """Return whether or not the given dir contains a build."""
+
+        # Cannot move up to base scraper due to parser.entries call in
+        # get_build_info_for_date (see below)
+
+        url = '%s/' % urljoin(self.base_url, self.monthly_build_list_regex,
+                              folder_name)
+        if self.application in APPLICATIONS_MULTI_LOCALE \
+                and self.locale != 'multi':
+            url = '%s/' % urljoin(url, self.locale)
+
+        parser = self._create_directory_parser(url)
+
+        pattern = re.compile(self.binary_regex, re.IGNORECASE)
+        for entry in parser.entries:
+            # A match means the folder contains a finished build
+            if pattern.match(entry):
+                return True
+        return False
+
+    def get_build_info_for_date(self, date, build_index=None):
+        url = urljoin(self.base_url, self.monthly_build_list_regex)
+        has_time = date and date.time()
+
+        self.logger.info('Retrieving list of builds from %s' % url)
+        parser = self._create_directory_parser(url)
+        regex = r'%(DATE)s-(\d+-)+%(BRANCH)s%(L10N)s%(PLATFORM)s$' % {
+            'DATE': date.strftime('%Y-%m-%d'),
+            'BRANCH': self.branch,
+            # ensure to select the correct subfolder for localized builds
+            'L10N': '' if self.locale in ('en-US', 'multi') else '(-l10n)?',
+            'PLATFORM': '' if self.application not in (
+                        'fennec',) else '-' + self.platform
+        }
+
+        parser.entries = parser.filter(regex)
+        parser.entries = parser.filter(self.is_build_dir)
 
         if has_time:
-            # If a time is included in the date, use it to determine the build's index
+            # If a time is included in the date, use it to determine the
+            # build's index
             regex = r'.*%s.*' % date.strftime('%H-%M-%S')
-            build_index = parser.entries.index(parser.filter(regex)[0])
-        else:
-            # If no index has been given, set it to the last build of the day.
-            if build_index is None:
-                build_index = len(parser.entries) - 1
+            parser.entries = parser.filter(regex)
+
+        if not parser.entries:
+            date_format = '%Y-%m-%d-%H-%M-%S' if has_time else '%Y-%m-%d'
+            message = 'Folder for builds on %s has not been found' % \
+                date.strftime(date_format)
+            raise errors.NotFoundError(message, url)
+
+        self.show_matching_builds(parser.entries)
+
+        # If no index has been given, set it to the last build of the day.
+        if build_index is None:
+            # Find the most recent non-empty entry.
+            build_index = len(parser.entries)
+            for build in reversed(parser.entries):
+                build_index -= 1
+                if not build_index or self.is_build_dir(build):
+                    break
+        self.logger.info('Selected build: %s' % parser.entries[build_index])
 
         return (parser.entries, build_index)
 
-
     @property
     def binary_regex(self):
         """Return the regex for the binary"""
 
         regex_base_name = r'^%(APP)s-.*\.%(LOCALE)s\.%(PLATFORM)s'
-        regex_suffix = {'linux': r'\.%(EXT)s$',
+        regex_suffix = {'android-api-9': r'\.%(EXT)s$',
+                        'android-api-11': r'\.%(EXT)s$',
+                        'android-x86': r'\.%(EXT)s$',
+                        'linux': r'\.%(EXT)s$',
                         'linux64': r'\.%(EXT)s$',
                         'mac': r'\.%(EXT)s$',
                         'mac64': r'\.%(EXT)s$',
-                        'win32': r'(\.installer)\.%(EXT)s$',
-                        'win64': r'(\.installer)\.%(EXT)s$'}
+                        'win32': r'(\.installer%(STUB)s)?\.%(EXT)s$',
+                        'win64': r'(\.installer%(STUB)s)?\.%(EXT)s$'}
         regex = regex_base_name + regex_suffix[self.platform]
 
         return regex % {'APP': self.application,
                         'LOCALE': self.locale,
                         'PLATFORM': self.platform_regex,
-                        'EXT': self.extension}
-
+                        'EXT': self.extension,
+                        'STUB': '-stub' if self.is_stub_installer else ''}
 
     def build_filename(self, binary):
         """Return the proposed filename with extension for the binary"""
@@ -315,53 +514,69 @@
             timestamp = self.date.strftime('%Y-%m-%d')
 
         return '%(TIMESTAMP)s-%(BRANCH)s-%(NAME)s' % {
-                   'TIMESTAMP': timestamp,
-                   'BRANCH': self.branch,
-                   'NAME': binary}
-
+            'TIMESTAMP': timestamp,
+            'BRANCH': self.branch,
+            'NAME': binary}
 
     @property
     def monthly_build_list_regex(self):
-        """Return the regex for the folder which contains the builds of a month."""
+        """Return the regex for the folder containing builds of a month."""
 
         # Regex for possible builds for the given date
         return r'nightly/%(YEAR)s/%(MONTH)s/' % {
-                  'YEAR': self.date.year,
-                  'MONTH': str(self.date.month).zfill(2) }
-
+            'YEAR': self.date.year,
+            'MONTH': str(self.date.month).zfill(2)}
 
     @property
     def path_regex(self):
-        """Return the regex for the path"""
+        """Return the regex for the path to the build folder"""
 
         try:
-            return self.monthly_build_list_regex + self.builds[self.build_index]
+            path = '%s/' % urljoin(self.monthly_build_list_regex,
+                                   self.builds[self.build_index])
+            if self.application in APPLICATIONS_MULTI_LOCALE \
+                    and self.locale != 'multi':
+                path = '%s/' % urljoin(path, self.locale)
+            return path
         except:
-            raise NotFoundException("Specified sub folder cannot be found",
-                                    self.base_url + self.monthly_build_list_regex)
+            folder = urljoin(self.base_url, self.monthly_build_list_regex)
+            raise errors.NotFoundError("Specified sub folder cannot be found",
+                                       folder)
 
 
 class DirectScraper(Scraper):
     """Class to download a file from a specified URL"""
 
     def __init__(self, url, *args, **kwargs):
+        self._url = url
+
         Scraper.__init__(self, *args, **kwargs)
 
-        self.url = url
+    @property
+    def filename(self):
+        if os.path.splitext(self.destination)[1]:
+            # If the filename has been given make use of it
+            target_file = self.destination
+        else:
+            # Otherwise determine it from the url.
+            parsed_url = urlparse(self.url)
+            source_filename = (parsed_url.path.rpartition('/')[-1] or
+                               parsed_url.hostname)
+            target_file = os.path.join(self.destination, source_filename)
+
+        return os.path.abspath(target_file)
 
     @property
-    def target(self):
-        return urllib.splitquery(self.final_url)[0].rpartition('/')[-1]
-
-    @property
-    def final_url(self):
-        return self.url
+    def url(self):
+        return self._url
 
 
 class ReleaseScraper(Scraper):
     """Class to download a release build from the Mozilla server"""
 
-    def __init__(self, *args, **kwargs):
+    def __init__(self, version, *args, **kwargs):
+        self.version = version
+
         Scraper.__init__(self, *args, **kwargs)
 
     @property
@@ -372,66 +587,78 @@
                  'linux64': r'^%(APP)s-.*\.%(EXT)s$',
                  'mac': r'^%(APP)s.*\.%(EXT)s$',
                  'mac64': r'^%(APP)s.*\.%(EXT)s$',
-                 'win32': r'^%(APP)s.*\.%(EXT)s$',
-                 'win64': r'^%(APP)s.*\.%(EXT)s$'}
-        return regex[self.platform] % {'APP': self.application,
-                                       'EXT': self.extension}
-
+                 'win32': r'^%(APP)s.*%(STUB)s.*\.%(EXT)s$',
+                 'win64': r'^%(APP)s.*%(STUB)s.*\.%(EXT)s$'}
+        return regex[self.platform] % {
+            'APP': self.application,
+            'EXT': self.extension,
+            'STUB': 'Stub' if self.is_stub_installer else ''}
 
     @property
     def path_regex(self):
-        """Return the regex for the path"""
+        """Return the regex for the path to the build folder"""
 
-        regex = r'releases/%(VERSION)s/%(PLATFORM)s/%(LOCALE)s'
+        regex = r'releases/%(VERSION)s/%(PLATFORM)s/%(LOCALE)s/'
         return regex % {'LOCALE': self.locale,
                         'PLATFORM': self.platform_regex,
                         'VERSION': self.version}
 
+    @property
+    def platform_regex(self):
+        """Return the platform fragment of the URL"""
+
+        if self.platform == 'win64':
+            return self.platform
+
+        return PLATFORM_FRAGMENTS[self.platform]
 
     def build_filename(self, binary):
         """Return the proposed filename with extension for the binary"""
 
-        template = '%(APP)s-%(VERSION)s.%(LOCALE)s.%(PLATFORM)s.%(EXT)s'
+        template = '%(APP)s-%(VERSION)s.%(LOCALE)s.%(PLATFORM)s%(STUB)s' \
+                   '.%(EXT)s'
         return template % {'APP': self.application,
                            'VERSION': self.version,
                            'LOCALE': self.locale,
                            'PLATFORM': self.platform,
+                           'STUB': '-stub' if self.is_stub_installer else '',
                            'EXT': self.extension}
 
 
 class ReleaseCandidateScraper(ReleaseScraper):
     """Class to download a release candidate build from the Mozilla server"""
 
-    def __init__(self, build_number=None, no_unsigned=False, *args, **kwargs):
+    def __init__(self, version, build_number=None, *args, **kwargs):
+        self.version = version
+        self.build_number = build_number
+
         Scraper.__init__(self, *args, **kwargs)
 
+    def get_build_info(self):
+        """Defines additional build information"""
+
         # Internally we access builds via index
-        if build_number is not None:
-            self.build_index = int(build_number) - 1
-        else:
-            self.build_index = None
+        url = urljoin(self.base_url, self.candidate_build_list_regex)
+        self.logger.info('Retrieving list of candidate builds from %s' % url)
 
-        self.builds, self.build_index = self.get_build_info_for_version(self.version, self.build_index)
-
-        self.no_unsigned = no_unsigned
-        self.unsigned = False
-
-
-    def get_build_info_for_version(self, version, build_index=None):
-        url = '/'.join([self.base_url, self.candidate_build_list_regex])
-
-        print 'Retrieving list of candidate builds from %s' % url
-        parser = DirectoryParser(url)
+        parser = self._create_directory_parser(url)
         if not parser.entries:
-            message = 'Folder for specific candidate builds at has not been found'
-            raise NotFoundException(message, url)
+            message = 'Folder for specific candidate builds at %s has not ' \
+                'been found' % url
+            raise errors.NotFoundError(message, url)
 
-        # If no index has been given, set it to the last build of the given version.
-        if build_index is None:
-            build_index = len(parser.entries) - 1
+        self.show_matching_builds(parser.entries)
+        self.builds = parser.entries
+        self.build_index = len(parser.entries) - 1
 
-        return (parser.entries, build_index)
-
+        if self.build_number and \
+                ('build%s' % self.build_number) in self.builds:
+            self.builds = ['build%s' % self.build_number]
+            self.build_index = 0
+            self.logger.info('Selected build: build%s' % self.build_number)
+        else:
+            self.logger.info('Selected build: build%d' %
+                             (self.build_index + 1))
 
     @property
     def candidate_build_list_regex(self):
@@ -439,51 +666,49 @@
            a candidate build."""
 
         # Regex for possible builds for the given date
-        return r'nightly/%(VERSION)s-candidates/' % {
-                 'VERSION': self.version }
-
+        return r'candidates/%(VERSION)s-candidates/' % {
+            'VERSION': self.version}
 
     @property
     def path_regex(self):
-        """Return the regex for the path"""
+        """Return the regex for the path to the build folder"""
 
-        regex = r'%(PREFIX)s%(BUILD)s/%(UNSIGNED)s%(PLATFORM)s/%(LOCALE)s'
+        regex = r'%(PREFIX)s%(BUILD)s/%(PLATFORM)s/%(LOCALE)s/'
         return regex % {'PREFIX': self.candidate_build_list_regex,
                         'BUILD': self.builds[self.build_index],
                         'LOCALE': self.locale,
-                        'PLATFORM': self.platform_regex,
-                        'UNSIGNED': "unsigned/" if self.unsigned else ""}
+                        'PLATFORM': self.platform_regex}
 
+    @property
+    def platform_regex(self):
+        """Return the platform fragment of the URL"""
+
+        if self.platform == 'win64':
+            return self.platform
+
+        return PLATFORM_FRAGMENTS[self.platform]
 
     def build_filename(self, binary):
         """Return the proposed filename with extension for the binary"""
 
-        template = '%(APP)s-%(VERSION)s-build%(BUILD)s.%(LOCALE)s.%(PLATFORM)s.%(EXT)s'
+        template = '%(APP)s-%(VERSION)s-%(BUILD)s.%(LOCALE)s.' \
+                   '%(PLATFORM)s%(STUB)s.%(EXT)s'
         return template % {'APP': self.application,
                            'VERSION': self.version,
                            'BUILD': self.builds[self.build_index],
                            'LOCALE': self.locale,
                            'PLATFORM': self.platform,
+                           'STUB': '-stub' if self.is_stub_installer else '',
                            'EXT': self.extension}
 
-
     def download(self):
         """Download the specified file"""
 
         try:
             # Try to download the signed candidate build
             Scraper.download(self)
-        except NotFoundException, e:
-            print str(e)
-
-            # If the signed build cannot be downloaded and unsigned builds are
-            # allowed, try to download the unsigned build instead
-            if self.no_unsigned:
-                raise
-            else:
-                print "Signed build has not been found. Falling back to unsigned build."
-                self.unsigned = True
-                Scraper.download(self)
+        except errors.NotFoundError, e:
+            self.logger.exception(str(e))
 
 
 class TinderboxScraper(Scraper):
@@ -497,86 +722,91 @@
 
     def __init__(self, branch='mozilla-central', build_number=None, date=None,
                  debug_build=False, *args, **kwargs):
-        Scraper.__init__(self, *args, **kwargs)
 
         self.branch = branch
+        self.build_number = build_number
         self.debug_build = debug_build
-        self.locale_build = self.locale != 'en-US'
-        self.timestamp = None
+        self.date = date
 
+        self.timestamp = None
         # Currently any time in RelEng is based on the Pacific time zone.
-        self.timezone = PacificTimezone();
+        self.timezone = PacificTimezone()
+
+        Scraper.__init__(self, *args, **kwargs)
+
+    def get_build_info(self):
+        """Define additional build information."""
 
         # Internally we access builds via index
-        if build_number is not None:
-            self.build_index = int(build_number) - 1
+        if self.build_number is not None:
+            self.build_index = int(self.build_number) - 1
         else:
             self.build_index = None
 
-        if date is not None:
+        if self.date is not None:
             try:
-                self.date = datetime.fromtimestamp(float(date), self.timezone)
-                self.timestamp = date
+                # date is provided in the format 2013-07-23
+                self.date = datetime.strptime(self.date, '%Y-%m-%d')
             except:
-                self.date = datetime.strptime(date, '%Y-%m-%d')
-        else:
-            self.date = None
+                try:
+                    # date is provided as a unix timestamp
+                    datetime.fromtimestamp(float(self.date))
+                    self.timestamp = self.date
+                except ValueError:
+                    raise ValueError('%s is not a valid date' % self.date)
 
+        self.locale_build = self.locale != 'en-US'
         # For localized builds we do not have to retrieve the list of builds
         # because only the last build is available
         if not self.locale_build:
-            self.builds, self.build_index = self.get_build_info(self.build_index)
-    
-            try:
-                self.timestamp = self.builds[self.build_index]
-            except:
-                raise NotFoundException("Specified sub folder cannot be found",
-                                        self.base_url + self.monthly_build_list_regex)
-
+            self.builds, self.build_index = self.get_build_info_for_index(
+                self.build_index)
 
     @property
     def binary_regex(self):
         """Return the regex for the binary"""
 
-        regex_base_name = r'^%(APP)s-.*\.%(LOCALE)s\.'
+        regex_base_name = r'^%(APP)s-.*\.%(LOCALE)s\.%(PLATFORM)s'
         regex_suffix = {'linux': r'.*\.%(EXT)s$',
                         'linux64': r'.*\.%(EXT)s$',
                         'mac': r'.*\.%(EXT)s$',
                         'mac64': r'.*\.%(EXT)s$',
-                        'win32': r'.*(\.installer)\.%(EXT)s$',
-                        'win64': r'.*(\.installer)\.%(EXT)s$'}
+                        'win32': r'(\.installer%(STUB)s)?\.%(EXT)s$',
+                        'win64': r'(\.installer%(STUB)s)?\.%(EXT)s$'}
 
         regex = regex_base_name + regex_suffix[self.platform]
 
         return regex % {'APP': self.application,
                         'LOCALE': self.locale,
+                        'PLATFORM': PLATFORM_FRAGMENTS[self.platform],
+                        'STUB': '-stub' if self.is_stub_installer else '',
                         'EXT': self.extension}
 
-
     def build_filename(self, binary):
         """Return the proposed filename with extension for the binary"""
 
         return '%(TIMESTAMP)s%(BRANCH)s%(DEBUG)s-%(NAME)s' % {
-                   'TIMESTAMP': self.timestamp + '-' if self.timestamp else '',
-                   'BRANCH': self.branch,
-                   'DEBUG': '-debug' if self.debug_build else '',
-                   'NAME': binary}
-
+            'TIMESTAMP': self.timestamp + '-' if self.timestamp else '',
+            'BRANCH': self.branch,
+            'DEBUG': '-debug' if self.debug_build else '',
+            'NAME': binary}
 
     @property
     def build_list_regex(self):
         """Return the regex for the folder which contains the list of builds"""
 
-        regex = 'tinderbox-builds/%(BRANCH)s-%(PLATFORM)s%(L10N)s%(DEBUG)s'
+        regex = 'tinderbox-builds/%(BRANCH)s-%(PLATFORM)s%(L10N)s%(DEBUG)s/'
 
-        return regex % {'BRANCH': self.branch,
-                        'PLATFORM': '' if self.locale_build else self.platform_regex,
-                        'L10N': 'l10n' if self.locale_build else '',
-                        'DEBUG': '-debug' if self.debug_build else ''}
-
+        return regex % {
+            'BRANCH': self.branch,
+            'PLATFORM': '' if self.locale_build else self.platform_regex,
+            'L10N': 'l10n' if self.locale_build else '',
+            'DEBUG': '-debug' if self.debug_build else ''}
 
     def date_matches(self, timestamp):
-        """Determines whether the timestamp date is equal to the argument date"""
+        """Determine whether the timestamp date equals the argument date."""
 
         if self.date is None:
             return False
@@ -584,65 +814,89 @@
         timestamp = datetime.fromtimestamp(float(timestamp), self.timezone)
         if self.date.date() == timestamp.date():
             return True
-        
+
         return False
 
-
-    @property
-    def date_validation_regex(self):
-        """Return the regex for a valid date argument value"""
-
-        return r'^\d{4}-\d{1,2}-\d{1,2}$|^\d+$'
-
-
     def detect_platform(self):
         """Detect the current platform"""
 
         platform = Scraper.detect_platform(self)
 
-        # On OS X we have to special case the platform detection code and fallback
-        # to 64 bit builds for the en-US locale
-        if mozinfo.os == 'mac' and self.locale == 'en-US' and mozinfo.bits == 64:
+        # On OS X we have to special case the platform detection code and
+        # fallback to 64 bit builds for the en-US locale
+        if mozinfo.os == 'mac' and self.locale == 'en-US' and \
+                mozinfo.bits == 64:
             platform = "%s%d" % (mozinfo.os, mozinfo.bits)
 
         return platform
 
+    def is_build_dir(self, folder_name):
+        """Return whether or not the given dir contains a build."""
 
-    def get_build_info(self, build_index=None):
-        url = '/'.join([self.base_url, self.build_list_regex])
+        # Cannot move up to base scraper due to parser.entries call in
+        # get_build_info_for_index (see below)
+        url = '%s/' % urljoin(self.base_url, self.build_list_regex,
+                              folder_name)
 
-        print 'Retrieving list of builds from %s' % url
+        if self.application in APPLICATIONS_MULTI_LOCALE \
+                and self.locale != 'multi':
+            url = '%s/' % urljoin(url, self.locale)
 
-        # If a timestamp is given, retrieve just that build
-        regex = '^' + self.timestamp + '$' if self.timestamp else r'^\d+$'
+        parser = self._create_directory_parser(url)
 
-        parser = DirectoryParser(url)
-        parser.entries = parser.filter(regex)
+        pattern = re.compile(self.binary_regex, re.IGNORECASE)
+        for entry in parser.entries:
+            # A match means the folder contains a finished build
+            if pattern.match(entry):
+                return True
+        return False
 
-        # If date is given, retrieve the subset of builds on that date
-        if self.date is not None:
+    def get_build_info_for_index(self, build_index=None):
+        url = urljoin(self.base_url, self.build_list_regex)
+
+        self.logger.info('Retrieving list of builds from %s' % url)
+        parser = self._create_directory_parser(url)
+        parser.entries = parser.filter(r'^\d+$')
+
+        if self.timestamp:
+            # If a timestamp is given, retrieve the folder with the timestamp
+            # as name
+            parser.entries = [self.timestamp] \
+                if self.timestamp in parser.entries else []
+
+        elif self.date:
+            # If date is given, retrieve the subset of builds on that date
             parser.entries = filter(self.date_matches, parser.entries)
 
         if not parser.entries:
             message = 'No builds have been found'
-            raise NotFoundException(message, url)
+            raise errors.NotFoundError(message, url)
+
+        self.show_matching_builds(parser.entries)
 
         # If no index has been given, set it to the last build of the day.
         if build_index is None:
-            build_index = len(parser.entries) - 1
+            # Find the most recent non-empty entry.
+            build_index = len(parser.entries)
+            for build in reversed(parser.entries):
+                build_index -= 1
+                if not build_index or self.is_build_dir(build):
+                    break
+
+        self.logger.info('Selected build: %s' % parser.entries[build_index])
 
         return (parser.entries, build_index)
 
-
     @property
     def path_regex(self):
-        """Return the regex for the path"""
+        """Return the regex for the path to the build folder"""
 
         if self.locale_build:
             return self.build_list_regex
 
-        return '/'.join([self.build_list_regex, self.builds[self.build_index]])
-
+        return '%s/' % urljoin(self.build_list_regex,
+                               self.builds[self.build_index])
 
     @property
     def platform_regex(self):
@@ -650,7 +904,7 @@
 
         PLATFORM_FRAGMENTS = {'linux': 'linux',
                               'linux64': 'linux64',
-                              'mac': 'macosx',
+                              'mac': 'macosx64',
                               'mac64': 'macosx64',
                               'win32': 'win32',
                               'win64': 'win64'}
@@ -658,178 +912,104 @@
         return PLATFORM_FRAGMENTS[self.platform]
 
 
-def cli():
-    """Main function for the downloader"""
+class TryScraper(Scraper):
+    """Class to download a try build from the Mozilla server."""
 
-    BUILD_TYPES = {'release': ReleaseScraper,
-                   'candidate': ReleaseCandidateScraper,
-                   'daily': DailyScraper,
-                   'tinderbox': TinderboxScraper }
+    def __init__(self, changeset=None, debug_build=False, *args, **kwargs):
 
-    usage = 'usage: %prog [options]'
-    parser = OptionParser(usage=usage, description=__doc__)
-    parser.add_option('--application', '-a',
-                      dest='application',
-                      choices=APPLICATIONS,
-                      default='firefox',
-                      metavar='APPLICATION',
-                      help='The name of the application to download, '
-                           'default: "%default"')
-    parser.add_option('--directory', '-d',
-                      dest='directory',
-                      default=os.getcwd(),
-                      metavar='DIRECTORY',
-                      help='Target directory for the download, default: '
-                           'current working directory')
-    parser.add_option('--build-number',
-                      dest='build_number',
-                      default=None,
-                      type="int",
-                      metavar='BUILD_NUMBER',
-                      help='Number of the build (for candidate, daily, '
-                           'and tinderbox builds)')
-    parser.add_option('--locale', '-l',
-                      dest='locale',
-                      default='en-US',
-                      metavar='LOCALE',
-                      help='Locale of the application, default: "%default"')
-    parser.add_option('--platform', '-p',
-                      dest='platform',
-                      choices=PLATFORM_FRAGMENTS.keys(),
-                      metavar='PLATFORM',
-                      help='Platform of the application')
-    parser.add_option('--type', '-t',
-                      dest='type',
-                      choices=BUILD_TYPES.keys(),
-                      default='release',
-                      metavar='BUILD_TYPE',
-                      help='Type of build to download, default: "%default"')
-    parser.add_option('--url',
-                      dest='url',
-                      default=None,
-                      metavar='URL',
-                      help='URL to download.')
-    parser.add_option('--version', '-v',
-                      dest='version',
-                      metavar='VERSION',
-                      help='Version of the application to be used by release and\
-                            candidate builds, i.e. "3.6"')
-    parser.add_option('--extension',
-                      dest='extension',
-                      default=None,
-                      metavar='EXTENSION',
-                      help='File extension of the build (e.g. "zip"), default:\
-                            the standard build extension on the platform.')
-    parser.add_option('--username',
-                      dest='username',
-                      default=None,
-                      metavar='USERNAME',
-                      help='Username for basic HTTP authentication.')
-    parser.add_option('--password',
-                      dest='password',
-                      default=None,
-                      metavar='PASSWORD',
-                      help='Password for basic HTTP authentication.')
-    parser.add_option('--retry-attempts',
-                      dest='retry_attempts',
-                      default=3,
-                      type=int,
-                      metavar='RETRY_ATTEMPTS',
-                      help='Number of times the download will be attempted in '
-                           'the event of a failure, default: %default')
-    parser.add_option('--retry-delay',
-                      dest='retry_delay',
-                      default=10,
-                      type=int,
-                      metavar='RETRY_DELAY',
-                      help='Amount of time (in seconds) to wait between retry '
-                           'attempts, default: %default')
+        self.debug_build = debug_build
+        self.changeset = changeset
 
-    # Option group for candidate builds
-    group = OptionGroup(parser, "Candidate builds",
-                        "Extra options for candidate builds.")
-    group.add_option('--no-unsigned',
-                     dest='no_unsigned',
-                     action="store_true",
-                     help="Don't allow to download unsigned builds if signed\
-                           builds are not available")
-    parser.add_option_group(group)
+        Scraper.__init__(self, *args, **kwargs)
 
-    # Option group for daily builds
-    group = OptionGroup(parser, "Daily builds",
-                        "Extra options for daily builds.")
-    group.add_option('--branch',
-                     dest='branch',
-                     default='mozilla-central',
-                     metavar='BRANCH',
-                     help='Name of the branch, default: "%default"')
-    group.add_option('--build-id',
-                      dest='build_id',
-                      default=None,
-                      metavar='BUILD_ID',
-                      help='ID of the build to download')
-    group.add_option('--date',
-                     dest='date',
-                     default=None,
-                     metavar='DATE',
-                     help='Date of the build, default: latest build')
-    parser.add_option_group(group)
+    def get_build_info(self):
+        """Define additional build information."""
 
-    # Option group for tinderbox builds
-    group = OptionGroup(parser, "Tinderbox builds",
-                        "Extra options for tinderbox builds.")
-    group.add_option('--debug-build',
-                     dest='debug_build',
-                     action="store_true",
-                     help="Download a debug build")
-    parser.add_option_group(group)
+        self.builds, self.build_index = self.get_build_info_for_index()
 
-    # TODO: option group for nightly builds
-    (options, args) = parser.parse_args()
+    @property
+    def binary_regex(self):
+        """Return the regex for the binary"""
 
-    # Check for required options and arguments
-    # Note: Will be optional when ini file support has been landed
-    if not options.url \
-       and not options.type in ['daily', 'tinderbox'] \
-       and not options.version:
-        parser.error('The version of the application to download has not been specified.')
+        regex_base_name = r'^%(APP)s-.*\.%(LOCALE)s\.%(PLATFORM)s'
+        regex_suffix = {'linux': r'.*\.%(EXT)s$',
+                        'linux64': r'.*\.%(EXT)s$',
+                        'mac': r'.*\.%(EXT)s$',
+                        'mac64': r'.*\.%(EXT)s$',
+                        'win32': r'.*(\.installer%(STUB)s)\.%(EXT)s$',
+                        'win64': r'.*(\.installer%(STUB)s)\.%(EXT)s$'}
 
-    # Instantiate scraper and download the build
-    scraper_keywords = {'application': options.application,
-                        'locale': options.locale,
-                        'platform': options.platform,
-                        'version': options.version,
-                        'directory': options.directory,
-                        'extension': options.extension,
-                        'authentication': {
-                            'username': options.username,
-                            'password': options.password},
-                        'retry_attempts': options.retry_attempts,
-                        'retry_delay': options.retry_delay}
-    scraper_options = {'candidate': {
-                           'build_number': options.build_number,
-                           'no_unsigned': options.no_unsigned},
-                       'daily': {
-                           'branch': options.branch,
-                           'build_number': options.build_number,
-                           'build_id': options.build_id,
-                           'date': options.date},
-                       'tinderbox': {
-                           'branch': options.branch,
-                           'build_number': options.build_number,
-                           'date': options.date,
-                           'debug_build': options.debug_build}
-                       }
+        regex = regex_base_name + regex_suffix[self.platform]
 
-    kwargs = scraper_keywords.copy()
-    kwargs.update(scraper_options.get(options.type, {}))
+        return regex % {'APP': self.application,
+                        'LOCALE': self.locale,
+                        'PLATFORM': PLATFORM_FRAGMENTS[self.platform],
+                        'STUB': '-stub' if self.is_stub_installer else '',
+                        'EXT': self.extension}
 
-    if options.url:
-        build = DirectScraper(options.url, **kwargs)
-    else:
-        build = BUILD_TYPES[options.type](**kwargs)
+    def build_filename(self, binary):
+        """Return the proposed filename with extension for the binary"""
 
-    build.download()
+        return '%(CHANGESET)s%(DEBUG)s-%(NAME)s' % {
+            'CHANGESET': self.changeset,
+            'DEBUG': '-debug' if self.debug_build else '',
+            'NAME': binary}
 
-if __name__ == "__main__":
-    cli()
+    @property
+    def build_list_regex(self):
+        """Return the regex for the folder which contains the list of builds"""
+
+        return 'try-builds/'
+
+    def detect_platform(self):
+        """Detect the current platform"""
+
+        platform = Scraper.detect_platform(self)
+
+        # On OS X we have to special-case the platform detection and fall
+        # back to 64-bit builds for the en-US locale
+        if mozinfo.os == 'mac' and self.locale == 'en-US' and \
+                mozinfo.bits == 64:
+            platform = "%s%d" % (mozinfo.os, mozinfo.bits)
+
+        return platform
+
+    def get_build_info_for_index(self, build_index=None):
+        url = urljoin(self.base_url, self.build_list_regex)
+
+        self.logger.info('Retrieving list of builds from %s' % url)
+        parser = self._create_directory_parser(url)
+        parser.entries = parser.filter('.*-%s$' % self.changeset)
+
+        if not parser.entries:
+            raise errors.NotFoundError('No builds have been found', url)
+
+        self.show_matching_builds(parser.entries)
+
+        self.logger.info('Selected build: %s' % parser.entries[0])
+
+        return (parser.entries, 0)
+
+    @property
+    def path_regex(self):
+        """Return the regex for the path to the build folder"""
+
+        build_dir = 'try-%(PLATFORM)s%(DEBUG)s/' % {
+            'PLATFORM': self.platform_regex,
+            'DEBUG': '-debug' if self.debug_build else ''}
+        return urljoin(self.build_list_regex,
+                       self.builds[self.build_index],
+                       build_dir)
+
+    @property
+    def platform_regex(self):
+        """Return the platform fragment of the URL"""
+
+        PLATFORM_FRAGMENTS = {'linux': 'linux',
+                              'linux64': 'linux64',
+                              'mac': 'macosx64',
+                              'mac64': 'macosx64',
+                              'win32': 'win32',
+                              'win64': 'win64'}
+
+        return PLATFORM_FRAGMENTS[self.platform]
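
The `binary_regex` expansion above can be illustrated standalone. The fragment and attribute values below are assumptions standing in for the scraper's instance attributes and the module-level `PLATFORM_FRAGMENTS` mapping (not shown in this hunk); the pattern itself is the `win64` case from the diff:

```python
import re

# Hypothetical attribute values; on a real scraper these come from
# self.application, self.locale, PLATFORM_FRAGMENTS, etc.
app, locale, ext = 'firefox', 'en-US', 'exe'
platform_fragment = 'win64'  # assumed PLATFORM_FRAGMENTS['win64']

# Base name plus the win32/win64 suffix from binary_regex
regex = (r'^%(APP)s-.*\.%(LOCALE)s\.%(PLATFORM)s'
         r'.*(\.installer%(STUB)s)\.%(EXT)s$') % {
    'APP': app,
    'LOCALE': locale,
    'PLATFORM': platform_fragment,
    'STUB': '',  # '-stub' when is_stub_installer is set
    'EXT': ext}

print(re.match(regex, 'firefox-45.0a1.en-US.win64.installer.exe') is not None)
```

A plain archive like `firefox-45.0a1.en-US.win64.zip` would not match, since the Windows suffix requires the `.installer` component.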
diff --git a/mozdownload/timezones.py b/mozdownload/timezones.py
index d697591..c722300 100644
--- a/mozdownload/timezones.py
+++ b/mozdownload/timezones.py
@@ -15,11 +15,9 @@
     def utcoffset(self, dt):
         return timedelta(hours=-8) + self.dst(dt)
 
-
     def tzname(self, dt):
         return "Pacific"
 
-
     def dst(self, dt):
         # Daylight saving starts on the second Sunday of March at 2AM standard
         dst_start_date = self.first_sunday(dt.year, 3) + timedelta(days=7) \
@@ -32,7 +30,6 @@
         else:
             return timedelta(0)
 
-
     def first_sunday(self, year, month):
         date = datetime(year, month, 1, 0)
         days_until_sunday = 6 - date.weekday()
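
The Pacific DST rule in this hunk can be checked with a standalone sketch of the same arithmetic (a mirror of `first_sunday` and the DST-start computation, not the class itself):

```python
from datetime import datetime, timedelta

def first_sunday(year, month):
    # Mirrors PacificTimezone.first_sunday: weekday() is 0 for Monday,
    # so adding (6 - weekday()) days lands on the first Sunday.
    date = datetime(year, month, 1)
    return date + timedelta(days=6 - date.weekday())

# Daylight saving starts on the second Sunday of March at 2AM standard time
dst_start = first_sunday(2013, 3) + timedelta(days=7, hours=2)
print(dst_start)  # 2013-03-10 02:00:00
```

March 10, 2013 was indeed the US DST start date, which matches the `first_sunday + 7 days` computation in `dst`.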
diff --git a/mozdownload/utils.py b/mozdownload/utils.py
new file mode 100644
index 0000000..ddd4491
--- /dev/null
+++ b/mozdownload/utils.py
@@ -0,0 +1,32 @@
+# This Source Code Form is subject to the terms of the Mozilla Public
+# License, v. 2.0. If a copy of the MPL was not distributed with this
+# file, You can obtain one at http://mozilla.org/MPL/2.0/.
+
+"""Module to store various helper functions used in mozdownload."""
+
+import hashlib
+
+
+def urljoin(*fragments):
+    """Concatenate multiple URL fragments into a single URL."""
+
+    # Strip trailing slashes from all fragments except the last one
+    parts = [fragment.rstrip('/') for fragment in fragments[:-1]]
+    parts.append(fragments[-1])
+
+    return '/'.join(parts)
+
+
+def create_md5(path):
+    """Create the MD5 hash of a file using the hashlib library."""
+
+    m = hashlib.md5()
+    # Binary mode ('rb') is required for correct reads on Windows.
+    with open(path, "rb") as f:
+        while True:
+            data = f.read(8192)
+            if not data:
+                break
+            m.update(data)
+
+    return m.hexdigest()
diff --git a/pylama.ini b/pylama.ini
new file mode 100644
index 0000000..c6024eb
--- /dev/null
+++ b/pylama.ini
@@ -0,0 +1,6 @@
+[pylama]
+format = mccabe,pep8,pep257,pyflakes,pylint
+ignore = C901
+
+[pylama:pep8]
+max_line_length = 100
diff --git a/run_tests.py b/run_tests.py
new file mode 100755
index 0000000..97c4044
--- /dev/null
+++ b/run_tests.py
@@ -0,0 +1,95 @@
+#!/usr/bin/env python
+# This Source Code Form is subject to the terms of the Mozilla Public
+# License, v. 2.0. If a copy of the MPL was not distributed with this file,
+# You can obtain one at http://mozilla.org/MPL/2.0/.
+
+import os
+import shutil
+from subprocess import check_call
+import sys
+import urllib2
+import zipfile
+
+# Link to the folder which contains the zip archives of virtualenv
+URL_VIRTUALENV = 'https://codeload.github.com/pypa/virtualenv/zip/'
+VERSION_VIRTUALENV = '1.9.1'
+
+DIR_BASE = os.path.abspath(os.path.dirname(__file__))
+DIR_ENV = os.path.join(DIR_BASE, 'tmp', 'venv')
+DIR_TMP = os.path.join(DIR_BASE, 'tmp')
+
+REQ_TXT = os.path.join('tests', 'requirements.txt')
+
+
+def download(url, target):
+    """Download the specified url to the given target."""
+    response = urllib2.urlopen(url)
+    with open(target, 'wb') as f:
+        f.write(response.read())
+    return target
+
+
+# see http://stackoverflow.com/questions/12332975/installing-python-module-within-code
+def install(package, version):
+    package_arg = "%s==%s" % (package, version)
+    check_call(['pip', 'install', '--upgrade', package_arg])
+
+
+def python(*args):
+    pypath = [os.path.join(DIR_ENV, 'bin', 'python')]
+    check_call(pypath + list(args))
+
+
+try:
+    # Remove potentially pre-existing tmp_dir
+    shutil.rmtree(DIR_TMP, True)
+    # Start out clean
+    os.makedirs(DIR_TMP)
+
+    print 'Downloading virtualenv %s' % VERSION_VIRTUALENV
+    virtualenv_file = download(URL_VIRTUALENV + VERSION_VIRTUALENV,
+                               os.path.join(DIR_TMP, 'virtualenv.zip'))
+    virtualenv_zip = zipfile.ZipFile(virtualenv_file)
+    virtualenv_zip.extractall(DIR_TMP)
+    virtualenv_zip.close()
+
+    print 'Creating new virtual environment'
+    virtualenv_script = os.path.join(DIR_TMP,
+                                     'virtualenv-%s' % VERSION_VIRTUALENV,
+                                     'virtualenv.py')
+    check_call(['python', virtualenv_script, DIR_ENV])
+
+    print 'Activating virtual environment'
+    # for more info see:
+    # http://www.virtualenv.org/en/latest/#using-virtualenv-without-bin-python
+    if sys.platform == 'win32':
+        activate_this_file = os.path.join(DIR_ENV, 'Scripts',
+                                          'activate_this.py')
+    else:
+        activate_this_file = os.path.join(DIR_ENV, 'bin',
+                                          'activate_this.py')
+
+    if not os.path.isfile(activate_this_file):
+        # create venv
+        check_call(['virtualenv', '--no-site-packages', DIR_ENV])
+
+    execfile(activate_this_file, dict(__file__=activate_this_file))
+    print "Virtual environment activated successfully."
+
+except Exception:
+    print "Could not activate virtual environment."
+    print "Exiting."
+    sys.exit(1)
+
+
+# Install dependent packages
+check_call(['pip', 'install', '--upgrade', '-r', REQ_TXT])
+
+# Install mozdownload
+python('setup.py', 'develop')
+
+# run the tests
+python(os.path.join('tests', 'test.py'))
+
+# Clean up
+shutil.rmtree(DIR_TMP)
diff --git a/setup.cfg b/setup.cfg
deleted file mode 100644
index 861a9f5..0000000
--- a/setup.cfg
+++ /dev/null
@@ -1,5 +0,0 @@
-[egg_info]
-tag_build = 
-tag_date = 0
-tag_svn_revision = 0
-
diff --git a/setup.py b/setup.py
index 8658126..9492536 100755
--- a/setup.py
+++ b/setup.py
@@ -6,7 +6,7 @@
 
 
 import os
-from setuptools import setup, find_packages
+from setuptools import setup
 
 try:
     here = os.path.dirname(os.path.abspath(__file__))
@@ -14,13 +14,17 @@
 except (OSError, IOError):
     description = None
 
-version = '1.6'
+version = '1.19'
 
-deps = ['mozinfo==0.3.3']
+deps = ['mozinfo >= 0.7',
+        'progressbar == 2.2',
+        'requests == 2.7.0',
+        ]
 
 setup(name='mozdownload',
       version=version,
-      description='Script to download builds for Firefox and Thunderbird from the Mozilla server.',
+      description='Script to download builds for Firefox and Thunderbird '
+                  'from the Mozilla server.',
       long_description=description,
       # Get strings from http://pypi.python.org/pypi?%3Aaction=list_classifiers
       classifiers=[],
@@ -29,12 +33,12 @@
       author_email='tools@lists.mozilla.com',
       url='http://github.com/mozilla/mozdownload',
       license='Mozilla Public License 2.0 (MPL 2.0)',
-      packages = ['mozdownload'],
+      packages=['mozdownload'],
       zip_safe=False,
       install_requires=deps,
       entry_points="""
       # -*- Entry points: -*-
       [console_scripts]
-      mozdownload = mozdownload:cli
+      mozdownload = mozdownload.cli:cli
       """,
       )