
Exporting chromium.org to a static website.

So far this has only been tested on a Mac, but it should run on Linux machines just fine, and likely won't take much work to run on Windows :).

  1. To run the conversion routines, you need Python3 (3.8 or newer) and Node (16 or newer) installed.
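
    If you want to verify these prerequisites up front, a quick probe could look like this (a convenience sketch only, not a script shipped in this repository):

        import subprocess
        import sys

        # The conversion scripts need Python 3.8 or newer.
        assert sys.version_info >= (3, 8), 'need Python 3.8 or newer'

        # `node --version` prints something like "v16.20.0".
        out = subprocess.run(['node', '--version'],
                             capture_output=True, text=True, check=True).stdout
        assert int(out.lstrip('v').split('.')[0]) >= 16, 'need Node 16 or newer'
        print('prerequisites OK')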

  2. Install a SASS processor for Node.

    npm install
    
  3. Fetch the needed Python packages (feel free to use a venv). These include a YAML processor, a Markdown processor, and the Jinja2 template processor; a sketch of how they fit together follows the command.

    python3 -m pip install --user -r requirements.txt
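
    As a rough illustration of how those three pieces fit together (assuming the packages are PyYAML, Markdown, and Jinja2; the real pipeline lives in scripts/ and may differ):

        import jinja2
        import markdown
        import yaml

        def render_page(source: str) -> str:
            # Assumes a page is YAML front matter between '---' fences,
            # followed by a Markdown body.
            _, front_matter, body = source.split('---', 2)
            metadata = yaml.safe_load(front_matter)
            html = markdown.markdown(body)
            template = jinja2.Template('<title>{{ title }}</title>\n{{ content }}')
            return template.render(title=metadata.get('title', ''), content=html)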
    
  4. Optional: Refresh the content from Classic Sites via the public GData APIs.

    python3 scripts/export.py
    

    This downloads all of the HTML pages and converts them to Markdown, and also fetches any associated assets (images, attachments, etc.).

    export.py caches the metadata and HTML from Sites locally in the //feeds directory (but not images or other assets). This is useful when iterating on the HTML->Markdown conversion or making other changes that don't require re-fetching the raw data. To force the script to re-fetch everything, pass the --force flag; the caching pattern is sketched below.
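
    The caching pattern boils down to something like the following (a sketch only; the actual fetch logic and the layout of //feeds are defined in scripts/export.py):

        import os
        import urllib.request

        def fetch_cached(url: str, cache_path: str, force: bool = False) -> bytes:
            # Reuse the local copy unless a re-fetch was forced.
            if not force and os.path.exists(cache_path):
                with open(cache_path, 'rb') as f:
                    return f.read()
            data = urllib.request.urlopen(url).read()
            os.makedirs(os.path.dirname(cache_path) or '.', exist_ok=True)
            with open(cache_path, 'wb') as f:
                f.write(data)
            return data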

    NOTE: The HTML->Markdown logic is currently just a placeholder stub, and you'll get the same HTML out that you feed into it. The actual conversion code used to generate the files in //site is not yet open source.
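
    In other words, the conversion step currently amounts to an identity function along these lines (the name and signature here are illustrative, not the actual code):

        def html_to_markdown(html: str) -> str:
            # Placeholder: the real HTML->Markdown conversion is not yet
            # open source, so the input passes through unchanged.
            return html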

  5. Optional: Build all of the static pages up-front to check for errors.

    python3 scripts/build.py
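
    A bare-bones version of such an up-front check might look like this (the real build logic is in scripts/build.py, and the site/ layout here is an assumption):

        from pathlib import Path

        import markdown

        failures = 0
        for page in sorted(Path('site').rglob('*.md')):
            try:
                markdown.markdown(page.read_text())
            except Exception as exc:
                failures += 1
                print(f'{page}: {exc}')
        print(f'{failures} page(s) failed to convert')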
    
  6. Start a local web server to view the site. The server will (re-)generate the pages on the fly as needed if the input or conversion code changes.

    python3 scripts/serve.py
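
    The on-the-fly regeneration amounts to comparing timestamps before serving each request. A minimal sketch of the idea (the real server is scripts/serve.py; the source layout, rebuild step, and port are all assumptions):

        import http.server
        import os

        import markdown

        class OnDemandHandler(http.server.SimpleHTTPRequestHandler):
            def do_GET(self):
                out = self.translate_path(self.path)
                src = os.path.splitext(out)[0] + '.md'
                # Rebuild the page when its Markdown source is newer than
                # the generated HTML (or no HTML has been generated yet).
                if os.path.exists(src) and (
                        not os.path.exists(out)
                        or os.path.getmtime(src) > os.path.getmtime(out)):
                    with open(src) as f:
                        html = markdown.markdown(f.read())
                    with open(out, 'w') as f:
                        f.write(html)
                super().do_GET()

        http.server.ThreadingHTTPServer(('', 8080), OnDemandHandler).serve_forever()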