If you just want to use Linux (Beta), you should read the Running Custom Containers Under Chrome OS doc. This doc is all about how Crostini is made, not how to use it. :)
Note: this diagram is not exhaustive, but covers the major services and how they interact.
Googlers: update this image at go/termina-rpc
Name | Affects | Repo | ebuild |
---|---|---|---|
9s | Host | platform2/vm_tools/9s | dev-rust/9s |
chunnel | Host, Termina | platform2/vm_tools/chunnel | chromeos-base/chunnel, chromeos-base/termina_container_tools |
cicerone | Host | platform2/vm_tools/cicerone | chromeos-base/vm_host_tools |
concierge | Host | platform2/vm_tools/concierge | chromeos-base/vm_host_tools |
container .debs, Termina build scripts | Container | platform/container-guest-tools | N/A |
crostini_client | Host | platform2/vm_tools/crostini_client | chromeos-base/crostini_client |
crosvm | Host | platform/crosvm | chromeos-base/crosvm |
garcon | Termina, Container | platform2/vm_tools/garcon | chromeos-base/vm_guest_tools, chromeos-base/termina_container_tools |
LXD | Termina | github/lxc/lxd | app-emulation/lxd |
maitred | Termina | platform2/vm_tools/maitred | chromeos-base/vm_guest_tools |
seneschal | Host | platform2/vm_tools/seneschal | chromeos-base/vm_host_tools |
sommelier | Termina, Container | platform2/vm_tools/sommelier | chromeos-base/sommelier, chromeos-base/termina_container_tools |
system_api | Host | platform2/system_api | chromeos-base/system_api |
tremplin | Termina | platform/tremplin | chromeos-base/tremplin |
VM protobufs | Host, Termina, Container | platform2/vm_tools/proto | chromeos-base/vm_protos |
vm_syslog | Host, Termina | platform2/vm_tools/syslog | chromeos-base/vm_guest_tools, chromeos-base/vm_host_tools |
vsh | Host, Termina, Container | platform2/vm_tools/vsh | chromeos-base/vm_host_tools, chromeos-base/vm_guest_tools, chromeos-base/termina_container_tools |
Ensure you are able to SSH to the device:
(inside) $ export DEVICE_IP=123.45.67.89 # insert your test device IP here
(inside) $ ssh ${DEVICE_IP} echo OK
For the rest of this document, it is assumed that the `BOARD` environment variable in `cros_sdk` is set to the board name of your test device, as explained in the Select a board section of the Chromium OS Developer Guide.
Crostini requires a signed-in, non-guest user account to run. You can either use a test Google account, or run `/usr/local/autotest/bin/autologin.py -d` on a test image to log in with a fake testuser profile.
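For example, assuming SSH access is set up as above, the fake login can be triggered remotely (this one-liner is just one convenient way to run it):

(inside) $ ssh ${DEVICE_IP} /usr/local/autotest/bin/autologin.py -d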
To begin working on a change to one of the host services (see Where does the code live?), use the `cros_workon` command:
(inside) $ cros_workon --board=${BOARD} start ${PACKAGE_NAME}
Now that the package(s) are `cros_workon start`-ed, they will be built from source instead of using binary prebuilts:
(inside) $ emerge-${BOARD} ${PACKAGE_NAME}
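If the package has unit tests wired into its ebuild, they can be run as part of the build; a quick sketch using the standard Portage test feature:

(inside) $ FEATURES=test emerge-${BOARD} ${PACKAGE_NAME}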
Then deploy the package to the device for testing:
(inside) $ cros deploy ${DEVICE_IP} ${PACKAGE_NAME}
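As a concrete end-to-end example (a sketch using `chromeos-base/vm_host_tools` from the table above, which covers concierge, cicerone, and seneschal):

(inside) $ cros_workon --board=${BOARD} start chromeos-base/vm_host_tools
(inside) $ emerge-${BOARD} chromeos-base/vm_host_tools
(inside) $ cros deploy ${DEVICE_IP} chromeos-base/vm_host_tools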
Most VM services on the host have the upstart conditions `start on started vm_concierge` and `stop on stopped vm_concierge`. Since restarting only one daemon could leave the services in an inconsistent state, it's best to shut them all down, then start them again with a clean slate. Stopping the `vm_concierge` service on the host stops all running VMs, along with the other VM services on the host. Starting `vm_concierge` will trigger the other VM services to start as well.
(device) # stop vm_concierge && start vm_concierge
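To confirm the restart took effect, you can query upstart job state (the grep pattern is just a convenient filter; exact job names for the other VM services vary by release):

(device) # status vm_concierge
(device) # initctl list | grep vm_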
The guest packages that run inside the `termina` VM are built for two special Chrome OS boards: `tatl` (for x86 devices) and `tael` (for arm devices). These VM images are distributed as part of the `cros-termina` component via the browser's Component Updater.
To determine the guest board type, run `uname -m` on the device.
uname -m | termina board |
---|---|
x86_64 | (inside) $ export GUEST_BOARD=tatl |
aarch64 | (inside) $ export GUEST_BOARD=tael |
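If you prefer to derive this automatically, a small sketch over SSH (assuming `DEVICE_IP` is set as above):

(inside) $ ssh ${DEVICE_IP} uname -m # prints x86_64 or aarch64
(inside) $ export GUEST_BOARD=tatl   # or tael, per the table above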
First, `cros_workon --board=${GUEST_BOARD} start` each guest package you are modifying (see Where does the code live?):
(inside) $ cros_workon --board=${GUEST_BOARD} start ${PACKAGE_NAME}
Then build the guest image like a normal CrOS board:
(inside) $ ./build_packages --board=${GUEST_BOARD}
(inside) $ ./build_image --board=${GUEST_BOARD} test
This image is installed into the host image by the `termina-dlc` package, and can be built and deployed like the host service changes above:
(inside) $ cros_workon --board=${BOARD} start termina-dlc
(inside) $ emerge-${BOARD} termina-dlc
(inside) $ cros deploy ${DEVICE_IP} termina-dlc
After `cros deploy` completes, newly-launched VMs will use the testing component with the updated packages.
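One way to sanity-check the deployment, assuming a test account is logged in, is to launch a fresh VM with `vmc` (the crostini_client frontend from the table above) from a crosh shell:

(crosh) > vmc start termina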
Packages can end up in the container by two mechanisms:

1. Native Debian packages (.debs) are preinstalled in the container and upgraded out-of-band from the rest of Chrome OS by APT.
2. Packages built from Portage in Chrome OS are copied into `/opt/google/cros-containers` in Termina by the `termina_container_tools` ebuild. These are updated with the Termina VM image.
When working on Debian packages, the .debs should be copied to the Crostini container and installed with apt:
# A leading "./" or other unambiguous path is needed to install a local .deb.
(penguin) $ apt install ./foo.deb
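Getting the .deb into the container is left to you; one possible workflow, assuming your Downloads folder has been shared with the container (shared paths appear under /mnt/chromeos in the guest), is:

(inside) $ scp foo.deb root@${DEVICE_IP}:/home/chronos/user/MyFiles/Downloads/
(penguin) $ apt install /mnt/chromeos/MyFiles/Downloads/foo.deb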
Portage-managed packages should be treated like other guest service changes (see above). However, the `termina_container_tools` package is not a `cros_workon` package, so it must be manually emerged to propagate changes into `/opt/google/cros-containers`. The following example uses `sommelier`:
(inside) $ emerge-${GUEST_BOARD} sommelier               # build for Termina
(inside) $ emerge-${GUEST_BOARD} termina_container_tools # copy into /opt
Once `termina_container_tools` is manually rebuilt, the `termina-dlc` flow will work as normal.
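Putting it all together, a sketch of the full propagation for a `sommelier` change, combining the guest image and `termina-dlc` steps above:

(inside) $ emerge-${GUEST_BOARD} sommelier
(inside) $ emerge-${GUEST_BOARD} termina_container_tools
(inside) $ ./build_image --board=${GUEST_BOARD} test
(inside) $ emerge-${BOARD} termina-dlc
(inside) $ cros deploy ${DEVICE_IP} termina-dlc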
It's possible to run binaries built for the termina VM from the Chromium OS chroot, which can be useful for debugging or testing. Assuming you already have a Chromium OS chroot set up and have built the `tatl` board, you can run inside the chroot:
../platform2/common-mk/platform2_test.py --board tatl [--run_as_root] [command to run]
For example, to run LXD, you could run inside the chroot:
../platform2/common-mk/platform2_test.py --board tatl --run_as_root env LXD_DIR=/path/to/lxd/data lxd
../platform2/common-mk/platform2_test.py --board tatl --run_as_root env LXD_DIR=/path/to/lxd/data lxd waitready
../platform2/common-mk/platform2_test.py --board tatl --run_as_root env LXD_DIR=/path/to/lxd/data lxc [lxc subcommands go here]
This is not limited to x86_64 boards, either; `platform2_test.py` will automatically set up QEMU to run ARM binaries if you ask it to run binaries from an ARM board. This is likely to be very slow, however.
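For example, assuming the `tael` board has been built, the following should report an ARM architecture even on an x86_64 workstation, because the guest binary runs under QEMU:

../platform2/common-mk/platform2_test.py --board tael uname -m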
Note that this is not equivalent to actually running the VM, since only the command you run will be performed. Depending on the exact setup you need to test, this may not be sufficient.