author    Dirk Pranke <firstname.lastname@example.org>    Thu Oct 14 00:48:27 2021
committer Dirk Pranke <email@example.com>    Thu Oct 14 00:48:27 2021
switch to vpython, add linux/win node binaries
So far these instructions have only been tested on a Mac, but they should work on Linux and Windows with only a modicum of hoop-jumping.
Install depot_tools and add it to your PATH:

$ git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
$ export PATH=/path/to/depot_tools:$PATH
Check out the repo:
$ git clone https://chromium.googlesource.com/experimental/chromium_website
cd into the checkout and download any dependencies:
$ cd chromium_website
$ gclient sync
Note that there is a //.gclient file checked in, so you don't need to run
gclient config or have a .gclient file in a directory above the website.
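gclient files use Python literal syntax. For reference, a single-solution checkout's .gclient generally looks something like the following; this is an illustrative sketch, not necessarily the exact contents of the checked-in //.gclient:

```python
# Illustrative .gclient for a single solution rooted at the checkout
# directory. gclient reads this file to decide what to sync.
solutions = [
    {
        "name": ".",
        "url": "https://chromium.googlesource.com/experimental/chromium_website",
        "deps_file": "DEPS",
        "managed": False,
    },
]
```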
Optional: Refresh the content from Classic Sites via the public GData APIs.
This downloads all of the HTML pages and converts them to Markdown, and also fetches any associated assets (images, attachments, etc.).
The export script caches the metadata and HTML from Sites locally in the //export/feeds directory (but not images or other assets). This is useful when you are iterating on the HTML->Markdown conversion or other changes where the raw data isn't likely to be needed. To force the script to explicitly re-fetch things, use the corresponding flag.
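The cache-then-fetch behavior described above can be sketched roughly as follows. Note that `fetch_feed`, its parameters, and the cache-key scheme are all hypothetical names for illustration; the real export script's internals may differ:

```python
import os

def fetch_feed(url, cache_dir, fetch_fn, force=False):
    """Return the feed contents for `url`, reusing a local file cache.

    `fetch_fn` performs the actual network fetch; `force=True` bypasses
    the cache and re-fetches. All names here are illustrative, not the
    actual export script's API.
    """
    os.makedirs(cache_dir, exist_ok=True)
    # Derive a filesystem-safe cache key from the URL.
    cache_path = os.path.join(cache_dir, url.replace("/", "_"))
    if not force and os.path.exists(cache_path):
        with open(cache_path) as f:
            return f.read()
    data = fetch_fn(url)
    with open(cache_path, "w") as f:
        f.write(data)
    return data
```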
NOTE: The HTML->Markdown logic is currently just a placeholder stub function, and you'll get the same HTML out that you feed into it. The actual conversion code used to generate the files in //site is not yet open source.
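Given that the conversion is a stub, it presumably reduces to an identity transform, something like the following (the function name is hypothetical):

```python
def html_to_markdown(html):
    # Placeholder stub: the real HTML->Markdown converter is not yet
    # open source, so the input HTML is returned unchanged.
    return html
```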
Optional: Build all of the static pages up-front to check for errors. The content will be built into //build by default.
$ ./npmw build
Start a local web server to view the site. The server will (re-)generate the pages on the fly as needed if the input or conversion code changes. The content will be built into //build.
$ ./npmw watch
Optional: If you have access to do so, you can deploy new versions to [chromium-website-experiment.web.app](https://chromium-website-experiment.web.app):
$ ./npmw deploy