# docker/docker_devices/README.md

The files here are used to build a docker image suitable for sandboxing a USB or network device into its own execution environment, in which a swarming bot is running. More information is at go/one-device-swarming

## The docker image

The build.sh script creates two local images, android_docker and cros_docker, tagged with the date and time of creation. The image itself is simply an Ubuntu flavor (xenial as of this writing) with a number of packages and utilities installed, mainly via chromium's install-build-deps.sh. When launched as a container, the image is configured to run the start_swarm_bot.sh script here, which fetches and runs the bot code of the swarming server pointed to by the $SWARM_URL env var. Note that running the image locally on a developer workstation is unsupported.
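As a rough illustration, the date-and-time tagging described above might look like the sketch below. The exact tag format build.sh uses is an assumption here, not the script's real output:

```shell
# Sketch of the image-tagging scheme; the date format is an assumption.
tag="$(date +%Y-%m-%d_%H-%M)"
android_image="android_docker:${tag}"
cros_image="cros_docker:${tag}"
echo "${android_image}"
echo "${cros_image}"
# A real build would then run something along the lines of:
#   docker build -t "${android_image}" android/
#   docker build -t "${cros_image}" cros/
```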

## Automatic image building

Every day at 8am PST, a builder on the internal.infra.cron waterfall builds a fresh version of the image. This builder essentially runs ./build.sh and uploads the resultant images to a docker container registry. The registry, hosted by gcloud, is located at chromium-container-registry.
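A dry-run sketch of what the upload step amounts to, assuming the conventional gcr.io path layout for the registry named above; the hostname, path, and tag here are illustrative, not taken from the builder's recipe:

```shell
# Dry run: record the tag/push commands instead of executing them.
registry="gcr.io/chromium-container-registry"   # assumed gcr.io layout
tag="2024-01-01_00-00"                          # illustrative tag
push_cmds=""
for img in android_docker cros_docker; do
  push_cmds="${push_cmds}docker tag ${img}:${tag} ${registry}/${img}:${tag}
docker push ${registry}/${img}:${tag}
"
done
printf '%s' "${push_cmds}"
```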

## Bumping src-rev for install-build-deps.sh

Many of the packages and dependencies needed to build and test chromium are installed via the install-build-deps.sh script in src. If the image is missing a dependency, chances are its src pin is out of date. To update it, simply change the pin located here and rebuild.

## Shutting container down from within

Because a swarming bot may trigger a reboot of the bot at any time (see docs), it should also be able to shut its container down from within. By conventional means, that's impossible; a container has no access to the docker engine running outside of it. Nor can it run many of the utilities in /sbin/, since the hardware layer of the machine is not available to the container. However, because a swarming bot uses the /sbin/shutdown executable to reboot, this file is replaced with our own bash script here (shutdown.sh), which simply sends SIGUSR1 to init (pid 1). In a container, init is whatever command the container was configured to run (as opposed to the actual /sbin/init). In our case, this is start_swarm_bot.sh, which conveniently traps SIGUSR1 at the very beginning and exits upon catching it. Consequently, running /sbin/shutdown from within a container causes the container to shut down immediately.
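The trap mechanism can be demonstrated in isolation with a few lines of shell. `fake_shutdown` below is a stand-in for the replacement /sbin/shutdown, and the trap mirrors the one start_swarm_bot.sh sets at startup; the demo signals its own pid via `$$` rather than pid 1, since it does not run as a container's init:

```shell
# Minimal demo of the shutdown-from-within mechanism described above.
shutting_down=0
trap 'shutting_down=1' USR1   # start_swarm_bot.sh traps SIGUSR1 early on

# Stand-in for the replaced /sbin/shutdown: signal "init".
# In a real container the target would be pid 1; here it is this shell.
fake_shutdown() {
  kill -USR1 "$$"
}

fake_shutdown
result="signal not received"
if [ "${shutting_down}" -eq 1 ]; then
  result="caught SIGUSR1"   # the real entrypoint would exit here
fi
echo "${result}"
```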

## Image deployment

Image deployment is done via puppet. When a new image needs to be rolled out, grab the image name in the step-text of the "Push image" step on the image-building bot. It should look like android_docker:$date. (The bot runs once a day, but you can manually trigger a build if you don't want to wait.) To deploy, update the image pins in puppet for canary bots, followed by stable bots. The canary pin affects bots on chromium-swarm-dev, which the android testers on the chromium.swarm waterfall run tests against. If the canary has been updated, the bots look fine, and the tests haven't regressed, you can proceed to update the stable pin. (Note that it may take several hours for the image pin update to propagate. It's advised to wait at least a day to update stable after updating canary.)
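Purely as an illustration of the canary-then-stable flow, a pair of pins might look something like the fragment below. The key names and layout are assumptions, not the real puppet manifests:

```shell
# Hypothetical puppet/hiera pins (names are illustrative only):
#   swarm_docker_image_canary: 'android_docker:2024-01-02_08-00'  # chromium-swarm-dev bots
#   swarm_docker_image_stable: 'android_docker:2024-01-01_08-00'  # prod bots, bumped ~a day later
```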

## Launching the containers

On the bots, launching and managing the containers is controlled by the python service here. The service has two modes of invocation:

* launch: Ensures every locally connected android device has a running container. Gracefully tears down containers that are too old, or reboots the host. Called every 5 minutes via cron.
* add_device: Gives a device's container access to its device. Called every time a device appears or reappears on the system, via udev.
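For illustration, the two invocation modes above might be wired up roughly as follows. The script path and flag are hypothetical stand-ins, not the service's real interface:

```shell
# Hypothetical wiring for the two modes (path and flag are assumptions).
#
# Cron entry running the launch mode every 5 minutes:
#   */5 * * * * /opt/swarm_docker/main.py launch
#
# udev rule invoking add_device when a USB device (re)appears:
#   ACTION=="add", SUBSYSTEM=="usb", RUN+="/opt/swarm_docker/main.py add_device --device %k"
```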

More information can be found here.