Exporting chromium.org to a static website.

So far this has only been tested on a Mac, but it should run just fine on Linux, and it probably won't take much work to get it running on Windows :).

  1. To run the conversion routines, you need Python3 (3.8 or newer) and Node (16 or newer) installed.

  2. Install a Sass processor for Node.

    npm install
    
  3. Fetch the needed Python packages (feel free to use a venv). This includes a YAML processor, a Markdown processor, and a Jinja2 template processor; a sketch of how these fit together appears at the end of this file.

    python3 -m pip install --user -r requirements.txt
    
  4. Optional: Refresh the content from Classic Sites via the public GData APIs.

    python3 scripts/export.py
    

    This downloads all of the HTML pages and converts them to Markdown, and also fetches any associated assets (images, attachments, etc.).

    export.py caches the metadata and HTML from Sites locally in the //feeds directory (but not images or other assets). This is useful when you need to iterate on the HTML->Markdown conversion or make other changes that don't require re-fetching the raw data. To force the script to re-fetch everything, use the --force flag.

    NOTE: The HTML->Markdown logic is currently just a placeholder stub function, and you'll get the same HTML out that you feed into it (see the sketch at the end of this file). The actual conversion code used to generate the files in //site is not yet open source.

  5. Optional: Build all of the static pages up-front to check for errors.

    python3 scripts/build.py
    
  6. Start a local web server to view the site. The server will (re-)generate the pages on the fly as needed when the input pages or conversion code change (see the last sketch below).

    python3 scripts/serve.py
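
The sketches below flesh out a few of the steps above. They are illustrative only; the paths, template name, and helper names are assumptions, not the actual code in scripts/.

For step 3, a page might be rendered by splitting off the YAML front matter, converting the Markdown body to HTML, and feeding both into a Jinja2 template; build.py (step 5) can then just loop this over every page and report any errors.

    # Illustrative sketch only -- not the real build code. The paths, the
    # template name, and the front-matter layout are assumptions.
    import pathlib

    import jinja2      # template processor
    import markdown    # Markdown -> HTML converter
    import yaml        # YAML front-matter parser

    def render_page(md_path, env):
        text = pathlib.Path(md_path).read_text(encoding='utf-8')

        # Split the optional '---'-delimited YAML front matter from the body.
        front_matter, body = {}, text
        if text.startswith('---\n'):
            _, header, body = text.split('---\n', 2)
            front_matter = yaml.safe_load(header) or {}

        html_body = markdown.markdown(body)
        template = env.get_template('page.html')   # assumed template name
        return template.render(page=front_matter, content=html_body)

    env = jinja2.Environment(loader=jinja2.FileSystemLoader('templates'))

    if __name__ == '__main__':
        print(render_page('site/index.md', env))   # assumed paths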
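
The NOTE in step 4 refers to a placeholder converter: it just returns the HTML it was given. A hypothetical version of it, together with the //feeds caching and --force behavior described above, might look like this (the function names and cache layout are made up for the example):

    # Hypothetical sketch of the placeholder converter and the //feeds cache
    # from step 4; names and paths are illustrative, not the real export.py.
    import pathlib

    def html_to_markdown(html: str) -> str:
        # Placeholder stub: the real converter is not yet open source, so
        # the HTML comes back out unchanged.
        return html

    def fetch_page(url: str, fetch, force: bool = False) -> str:
        # Cache the raw HTML under //feeds so repeated runs don't re-hit
        # the GData API; the --force flag maps to force=True here.
        cache = pathlib.Path('feeds') / (url.replace('/', '_') + '.html')
        if cache.exists() and not force:
            html = cache.read_text(encoding='utf-8')
        else:
            html = fetch(url)   # e.g. an HTTP GET against the API
            cache.parent.mkdir(parents=True, exist_ok=True)
            cache.write_text(html, encoding='utf-8')
        return html_to_markdown(html)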
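
For step 6, regenerating pages on the fly can be as simple as re-rendering the Markdown source for a URL on every request; the real serve.py may be structured quite differently. This sketch reuses the hypothetical render_page() and env from the first sketch:

    # Illustrative sketch of serving pages that are regenerated on demand.
    import http.server

    class PageHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Map the URL to a Markdown source file and re-render it on
            # every request, so edits show up on the next reload.
            md_path = 'site' + (self.path.rstrip('/') or '/index') + '.md'
            try:
                body = render_page(md_path, env).encode('utf-8')
            except FileNotFoundError:
                self.send_error(404)
                return
            self.send_response(200)
            self.send_header('Content-Type', 'text/html; charset=utf-8')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    http.server.HTTPServer(('localhost', 8080), PageHandler).serve_forever()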