Crostini developer guide

Warning: This document is old & has moved. Please update any links:

If you just want to use Linux (Beta), you should read the Running Custom Containers Under Chrome OS doc. This doc is all about how Crostini is made, not how to use it. :)

What is all this stuff?

Note: this diagram is not exhaustive, but covers the major services and how they interact.

Crostini services diagram

Googlers: update this image at go/termina-rpc

Where does the code live?

| Name | Runs on | Source | Ebuilds |
|------|---------|--------|---------|
| chunnel | Host, Termina | platform2/vm_tools/chunnel | chromeos-base/chunnel, chromeos-base/termina_container_tools |
| container .debs, Termina build scripts | Container | platform/container-guest-tools | N/A |
| garcon | Termina, Container | platform2/vm_tools/garcon | chromeos-base/vm_guest_tools, chromeos-base/termina_container_tools |
| sommelier | Termina, Container | platform2/vm_tools/sommelier | chromeos-base/sommelier, chromeos-base/termina_container_tools |
| VM protobufs | Host, Termina, Container | platform2/vm_tools/proto | chromeos-base/vm_protos |
| vm_syslog | Host, Termina | platform2/vm_tools/syslog | chromeos-base/vm_guest_tools, chromeos-base/vm_host_tools |
| vsh | Host, Termina, Container | platform2/vm_tools/vsh | chromeos-base/vm_host_tools, chromeos-base/vm_guest_tools, chromeos-base/termina_container_tools |

How do I build/deploy/test my change?

General prerequisites

  • Follow the Chromium OS Developer Guide for setup.
  • Device with test image in developer mode.
  • TODO: how to deal with cros_debug mismatch?
  • TODO: Emerging tremplin etc. and mounting it directly into vm_rootfs.img on the DUT.

Ensure you are able to SSH to the device:

(inside) $ export DEVICE_IP= # insert your test device IP here
(inside) $ ssh ${DEVICE_IP} echo OK

For the rest of this document, it will be assumed that the BOARD environment variable in cros_sdk is set to the board name of your test device as explained in the Select a board section of the Chromium OS Developer Guide.

Crostini requires a signed-in, non-guest user account to run. You can either use a test Google account, or run /usr/local/autotest/bin/ -d on a test image to log in with a fake testuser profile.

Host service changes

To begin working on a change to one of the host services (see Where does the code live?), use the cros_workon command:

(inside) $ cros_workon --board=${BOARD} start ${PACKAGE_NAME}

Once the package(s) are cros_workon start-ed, they will be built from source instead of from binary prebuilts:

(inside) $ emerge-${BOARD} ${PACKAGE_NAME}

Then deploy the package to the device for testing:

(inside) $ cros deploy ${DEVICE_IP} ${PACKAGE_NAME}
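When iterating on several host packages at once, the three steps above can be scripted. Here is a minimal dry-run sketch (the `deploy_host_packages` helper and its arguments are hypothetical, not part of the CrOS tooling) that prints each command for review rather than executing it:

```shell
# Hypothetical dry-run helper: print the workon/build/deploy command sequence
# for each package, so it can be reviewed (or piped to sh) before running.
deploy_host_packages() {
  board="$1"; device="$2"; shift 2
  for pkg in "$@"; do
    echo "cros_workon --board=${board} start ${pkg}"
    echo "emerge-${board} ${pkg}"
    echo "cros deploy ${device} ${pkg}"
  done
}
```

For example, `deploy_host_packages "${BOARD}" "${DEVICE_IP}" chunnel vm_host_tools` prints the six commands needed for those two packages.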

Most VM services on the host have the upstart conditions start on started vm_concierge and stop on stopped vm_concierge. Since restarting only one daemon could leave the services in an inconsistent state, it's best to shut them all down, then start them again with a clean slate. Stopping the vm_concierge service on the host stops all running VMs, along with the other VM services on the host. Starting vm_concierge will trigger the other VM services to start as well.

(device) # stop vm_concierge && start vm_concierge

Guest service changes

The guest packages that run inside the termina VM are built for two special Chrome OS boards: tatl (for x86 devices) and tael (for arm devices). These VM images are distributed as part of the cros-termina component via the browser’s Component Updater.

To determine the guest board type, run uname -m on the device.

| uname -m | termina board |
|----------|---------------|
| x86_64 | (inside) $ export GUEST_BOARD=tatl |
| aarch64 | (inside) $ export GUEST_BOARD=tael |
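The mapping above can also be wrapped in a small helper, shown here as a sketch (`guest_board` is a hypothetical function name, not an existing tool):

```shell
# Hypothetical helper: map the device's `uname -m` output to the matching
# termina board name (tatl for x86_64, tael for aarch64).
guest_board() {
  case "$1" in
    x86_64)  echo tatl ;;
    aarch64) echo tael ;;
    *)       echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

# e.g. (inside) $ export GUEST_BOARD="$(guest_board "$(ssh "${DEVICE_IP}" uname -m)")"
```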

First, cros_workon --board=${GUEST_BOARD} start each guest package you are modifying (see Where does the code live?):

(inside) $ cros_workon --board=${GUEST_BOARD} start ${PACKAGE_NAME}

Then build the guest image like a normal CrOS board:

(inside) $ ./build_packages --board=${GUEST_BOARD}
(inside) $ ./build_image --board=${GUEST_BOARD} test

This image is installed into the host image by the termina-dlc package, and can be built and deployed like the host service changes above:

(inside) $ cros_workon --board=${BOARD} start termina-dlc
(inside) $ emerge-${BOARD} termina-dlc
(inside) $ cros deploy ${DEVICE_IP} termina-dlc

After cros deploy completes, newly-launched VMs will use the deployed image with the updated packages.

Container changes

Packages can end up in the container by two mechanisms:

  1. Native Debian packages (.debs) are preinstalled in the container, and upgraded out-of-band from the rest of Chrome OS by APT.

  2. Packages built from Portage in Chrome OS are copied into /opt/google/cros-containers in Termina by the termina_container_tools ebuild. These are updated with the Termina VM image.

When working on Debian packages, the .debs should be copied to the crostini container and installed with apt:

# A leading "./" or other unambiguous path is needed to install a local .deb.
(penguin) $ apt install ./foo.deb
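The leading "./" requirement can be encoded in a tiny helper, sketched below (`local_deb_path` is a hypothetical name, not part of apt or the crostini tooling):

```shell
# Hypothetical helper: make a local .deb path unambiguous, so apt installs
# the file instead of looking up an archive package named "foo.deb".
local_deb_path() {
  case "$1" in
    /*|./*|../*) echo "$1" ;;    # already an unambiguous path
    *)           echo "./$1" ;;  # prefix relative names with ./
  esac
}

# e.g. (penguin) $ apt install "$(local_deb_path foo.deb)"
```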

Portage-managed packages should be treated like other guest service changes (see Guest service changes above). However, the termina_container_tools package is not cros_workon-enabled, so it must be emerged manually to propagate changes into /opt/google/cros-containers. The following example uses sommelier:

(inside) $ emerge-${GUEST_BOARD} sommelier               # build for Termina
(inside) $ emerge-${GUEST_BOARD} termina_container_tools # copy into /opt

Once termina_container_tools is manually rebuilt, the termina-dlc flow will work as normal.

Running VM executables off-device

It's possible to run binaries built for the termina VM from the Chromium OS chroot, which can be useful for debugging or testing. Assuming you already have a Chromium OS chroot set up and have built the tatl board, you can run inside the chroot:

../platform2/common-mk/ --board tatl [--run_as_root] [command to run]

For example, to run LXD, you could run inside the chroot:

../platform2/common-mk/ --board tatl --run_as_root env LXD_DIR=/path/to/lxd/data lxd
../platform2/common-mk/ --board tatl --run_as_root env LXD_DIR=/path/to/lxd/data lxd waitready
../platform2/common-mk/ --board tatl --run_as_root env LXD_DIR=/path/to/lxd/data lxc [lxc subcommands go here]

This is not limited to x86_64 boards either: the script will automatically set up QEMU to run ARM binaries if you ask it to run binaries from an ARM board. This is likely to be very slow, however.

Note that this is not equivalent to actually running the VM, since only the command you specify will be run. Depending on the exact setup you need to test, this may not be sufficient.