Googlers, for broad docs on the Fleet Console, see: go/fleet-console
This directory hosts the code for the backend of the Fleet Console UI, a unified UI for managing machines in the fleet.
Run the following commands from the root directory of this repo.
First, start the db using docker:
docker compose up -d
It also runs pgAdmin; if you want to connect to the db via pgAdmin, visit localhost:5050 (no login should be required).
If this is your first time, you will have to run migrations (see the Run migrations section below).
You can now run the web server:
make build
make run-local-db
If you are unable to use Docker, you can set up a local PostgreSQL database using Homebrew.
Install PostgreSQL:
brew install postgresql
Start the PostgreSQL service:
brew services start postgresql
Set the password for the default postgres user (the application expects the password to be password):
psql -d postgres -c "ALTER USER postgres WITH PASSWORD 'password';"
Run database migrations: Follow the instructions in the Run migrations section.
Run the web server:
make build
make run-local-db
Once your local PostgreSQL database is running, you can connect to it using the psql command-line tool.
psql --host=localhost --port=5432 --username=postgres --dbname=postgres
After connecting, you can run SQL queries directly against the database.
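If you prefer to query the database from Go, here is a minimal sketch, assuming the jackc/pgx/v5 driver (the server's pgx:// connection URL suggests it is already a dependency); adjust the credentials to match your setup.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	// Same credentials the local setup uses: user postgres,
	// password "password", database postgres on localhost:5432.
	conn, err := pgx.Connect(ctx, "postgres://postgres:password@localhost:5432/postgres")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close(ctx)

	// Run a trivial query to confirm the connection works.
	var version string
	if err := conn.QueryRow(ctx, "SELECT version()").Scan(&version); err != nil {
		log.Fatalf("query: %v", err)
	}
	fmt.Println(version)
}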
To run the server in development mode with hot-reloading, first install the necessary tools:
make tools
Then, run the server with:
make run-hot-local-db
This will automatically rebuild and restart the server when you make changes to the code.
Create a tunnel to the dev AlloyDB VPC:
./scripts/setup_dev_db_tunnel.sh
You can now run the web server:
make build
make run-dev-db
To make calls to your local service, you can use the prpc command-line tool.
prpc call localhost:8800 fleetconsole.FleetConsole.PingDB <<EOF
{}
EOF
This codebase includes a Fleet Console CLI tool to help test the functionality of the Fleet Console server.
To build / run the CLI, run:
go build ./cmd/consoleadmin
./consoleadmin
You can do a liveness check for the local Fleet Console backend like so:
./consoleadmin ping -local {}
And you can check the connection to the database like so:
./consoleadmin ping-db -local {}
To see more commands available in the CLI run:
./consoleadmin help
To populate your local database with test data run:
./consoleadmin repopulate-cache -local
To populate your local database with mock Android device data run:
./consoleadmin mock-update-android-devices -local
Run go test ./...
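If you are adding a test, a minimal table-driven sketch in the usual Go style looks like the following; the package name and the sumCounts helper are hypothetical, shown only to illustrate the pattern.

package fleetconsole

import "testing"

// sumCounts is a hypothetical helper, used only to illustrate the pattern.
func sumCounts(counts []int) int {
	total := 0
	for _, c := range counts {
		total += c
	}
	return total
}

func TestSumCounts(t *testing.T) {
	cases := []struct {
		name string
		in   []int
		want int
	}{
		{"empty", nil, 0},
		{"several", []int{1, 2, 3}, 6},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := sumCounts(tc.in); got != tc.want {
				t.Errorf("sumCounts(%v) = %d, want %d", tc.in, got, tc.want)
			}
		})
	}
}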
Run make debug-server to build an executable with dlv debug options.
Then start an instance of the server, e.g. with a local sqldb connection pointing to UFS dev:
dlv exec ./fleetconsoleserver -- \
  -sqldb-connection-url="pgx://postgres@localhost:5432/postgres" \
  -sqldb-password-secret="devsecret-text://password" \
  -ufs-addr ufs.api.cr.dev:443
Then add any necessary breakpoints with b file:line or b file.function, and start the server with c.
In a separate session, run your console server command, then go back to the server instance and debug. Here are a few useful debugging commands in dlv:
- c: continue until the next breakpoint
- n: go to the next line
- step: step into a function
- locals: list all the local variables and their values
- args: list all args and their values
- p [variable]: print the value of variable
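For example, a short session might look like this (the breakpoint location is illustrative):

(dlv) b main.main
(dlv) c
(dlv) n
(dlv) locals
(dlv) p err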
dlv is a forward debugger, meaning you can't step back in time once you execute a line. However, you can install extensions that add the ability to revert a step or step back in time.
In order to run migrations you need Python installed. You can either install it using your OS package manager or use the version distributed in depot_tools:
alias python=/your/infra/directory/depot_tools/vpython3
Create a virtualenv and install the Alembic database migration tool:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
This installs the necessary tools to manage the Postgres or AlloyDB database.
You can now apply the migrations:
alembic upgrade head
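To check which revision your database is at, or to roll back one revision, stock Alembic commands should work here too (a hedged aside; these are standard Alembic, not tooling specific to this repo):

alembic current
alembic downgrade -1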
Create a tunnel to the dev db VPC:
./scripts/setup_dev_db_tunnel.sh
Run migrations, specifying env=dev:
alembic -x env=dev upgrade head
If you get an error, make sure you are logged in to the gcloud CLI with your application-default credentials:
gcloud auth application-default login
Create a tunnel to the prod db VPC:
./scripts/setup_prod_db_tunnel.sh
Run migrations, specifying env=prod:
alembic -x env=prod upgrade head
If you get an error, make sure you are logged in to the gcloud CLI with your application-default credentials:
gcloud auth application-default login
Deployment configs are hosted in the infradata repo.
If you need to deploy changes to infrastructure, that is done via Terraform, from a Piper workspace:
alias terraform='/google/bin/releases/g3terraform/runner_main -tf_label=terraform_1_10_4'
cd google3/configs/cloud/gong/services/chrome_cloud/chrome_fleet/fleet_console/envs
cd dev   # To deploy to dev
cd prod  # To deploy to prod
terraform plan  # Will show what changes will be applied
# Once you are sure you want to move forward
terraform apply
Dev is deployed through push on green.
To push dev to prod, use the Fleet Console backend promoter and click the “Trigger” action.
If you need to make changes to the deployment environment:
cd /your/infra/directory/data/cloud-run/projects/fleet-console/
cd dev/   # For dev
cd prod/  # For prod
vim service.star  # Make the changes you need
make gen  # Generate needed files
git commit -am '...'
git cl upload
From a Piper workspace:
cd google3/configs/cloud/gong/org_hierarchy/google.com/teams/chrome-teams/cros-test-infra/
cd dev/project.fleet-console-dev/    # To deploy to dev
cd prod/project.fleet-console-prod/  # To deploy to prod
From here you can simply make the required edits and submit a new CL.
fleet-console-dev is deployed automatically after CLs land.
fleet-console-prod must have deployment triggered using a CL.
When running the server locally, you may encounter an authentication error like this:
error: failed to initialize auth: failed to initialize the token source: interactive login is required
To resolve this, run the following command to set up your credentials:
luci-auth login -scopes "https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/userinfo.email"
To ensure that the Fleet Console server uses the correct service accounts (service@fleet-console-dev.iam.gserviceaccount.com and service@fleet-console-prod.iam.gserviceaccount.com), it is important to use auth.AsSelf when creating authenticated clients. This will ensure that the server uses its own service account credentials to make authenticated calls to other services.
Here is an example of how to create an authenticated HTTP client using auth.AsSelf:
// Get a transport that authenticates outbound requests as the
// server's own service account.
t, err := auth.GetRPCTransport(ctx, auth.AsSelf)
if err != nil {
	return nil, errors.Annotate(err, "failed to get RPC transport").Err()
}
// Wrap the transport in a pRPC client pointed at the target host.
prpcClient := &prpc.Client{
	C:    &http.Client{Transport: t},
	Host: host,
	Options: &prpc.Options{
		Insecure:  false,
		UserAgent: "fleet-console",
	},
}
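The resulting prpcClient can then be handed to a generated pRPC stub. Here is a sketch; the stub package and request/response names below are hypothetical, so use the types generated from the service's protos:

// Hypothetical generated stub; the constructor name follows the usual
// LUCI New<Service>PRPCClient convention.
client := fleetconsolepb.NewFleetConsolePRPCClient(prpcClient)
resp, err := client.Ping(ctx, &fleetconsolepb.PingRequest{})
if err != nil {
	return nil, errors.Annotate(err, "ping failed").Err()
}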