# Fleet Console Server

Googlers, for broad docs on the Fleet Console, see: go/fleet-console

This directory hosts the code for the backend of the Fleet Console UI, a unified UI for managing machines in the fleet.

## How to run locally

Run all commands below from the root directory of this repo.

### With a local db

First start the db using Docker:

```
docker compose up -d
```
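
You can verify that the containers are up with:

```
docker compose ps
```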

This also starts pgAdmin; if you want to browse the db through it, visit localhost:5050 (no login should be required).

If this is your first time, you will first have to run migrations (see the Run migrations section below).
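
For a local db this amounts to the following (after the one-time virtualenv setup described in that section):

```
source venv/bin/activate
alembic upgrade head
```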

You can now run the web server:

```
make build
make run-local-db
```

### Without Docker (macOS with Homebrew)

If you are unable to use Docker, you can set up a local PostgreSQL database using Homebrew.

1. Install PostgreSQL:

   ```
   brew install postgresql
   ```

2. Start the PostgreSQL service:

   ```
   brew services start postgresql
   ```

3. Set the password for the default `postgres` user (the application expects it to be `password`):

   ```
   psql -d postgres -c "ALTER USER postgres WITH PASSWORD 'password';"
   ```

4. Run database migrations: follow the instructions in the Run migrations section.

5. Run the web server:

   ```
   make build
   make run-local-db
   ```

### Connecting to the Local Database

Once your local PostgreSQL database is running, you can connect to it using the `psql` command-line tool:

```
psql --host=localhost --port=5432 --username=postgres --dbname=postgres
```

After connecting, you can run SQL queries directly against the database.
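
For example, to check which migration revision is applied (this assumes you have already run migrations, so the `alembic_version` table exists):

```
SELECT version_num FROM alembic_version;
```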

### Hot reloading

To run the server in development mode with hot-reloading, first install the necessary tools:

```
make tools
```

Then, run the server with:

```
make run-hot-local-db
```

This will automatically rebuild and restart the server when you make changes to the code.
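
Under the hood this uses the `air` hot-reload tool, configured by the `.air.toml` file in this directory.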

### Connecting to the dev db

Create a tunnel to the dev AlloyDB VPC:

```
./scripts/setup_dev_db_tunnel.sh
```

You can now run the web server:

```
make build
make run-dev-db
```

### Making calls to the local service

To make calls to your local service, you can use the `prpc` command-line tool.

```
prpc call localhost:8800 fleetconsole.FleetConsole.PingDB <<EOF
{}
EOF
```
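
The `prpc` tool can also list the services and methods a server exposes via its `show` subcommand (assuming your build of `prpc` includes it):

```
prpc show localhost:8800
```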

## Run the web client

## How to manually test

This codebase includes a Fleet Console CLI tool to help test the functionality of the Fleet Console server.

To build and run the CLI:

```
go build ./cmd/consoleadmin
./consoleadmin
```

You can do a liveness check for the local Fleet Console backend like so:

```
./consoleadmin ping -local
{}
```

And you can check the connection to the database like so:

```
./consoleadmin ping-db -local
{}
```

To see more commands available in the CLI, run:

```
./consoleadmin help
```

To populate your local database with test data, run:

```
./consoleadmin repopulate-cache -local
```

To populate your local database with mock Android device data, run:

```
./consoleadmin mock-update-android-devices -local
```

## How to run tests

Run `go test ./...`.
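
To run only one package's tests, or a single test by name, the standard `go test` flags apply (the package path and `TestFoo` below are placeholders):

```
go test ./internal/... -run TestFoo -v
```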

## Debug console server

Run `make debug-server` to build an executable with `dlv` debug options.

Then start an instance of the server, e.g. with a local SQL db connection pointing to UFS dev:

```
dlv exec ./fleetconsoleserver -- \
  -sqldb-connection-url="pgx://postgres@localhost:5432/postgres" \
  -sqldb-password-secret="devsecret-text://password" \
  -ufs-addr ufs.api.cr.dev:443
```

Then add any necessary breakpoints with `b file:line` or `b file.function`, and start the server with `c`.

In a separate session, run your console server command, then go back to the server instance and debug. A few useful debugging commands in `dlv`:

- `c`: continue until the next breakpoint
- `n`: go to the next line
- `step`: step into a function
- `locals`: list all the local variables and their values
- `args`: list all args and their values
- `p [variable]`: print the value of `variable`
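
For example, a minimal session might look like this (the file, line, and variable names are placeholders):

```
(dlv) b internal/frontend/frontend.go:42
(dlv) c
(dlv) locals
(dlv) p req
```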

`dlv` is a forward debugger, meaning you can't step back in time once you execute a line. However, you can install extensions that add reverse stepping.

## Run migrations

In order to run migrations you need Python installed. You can either install it using your OS package manager or use the version distributed in depot_tools:

```
alias python=/your/infra/directory/depot_tools/vpython3
```

### Local

Create a virtualenv and install the Alembic database migration tool:

```
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

This installs the necessary tools to manage the Postgres or AlloyDB database.

You can now apply the migrations:

```
alembic upgrade head
```
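
If you need to create a new migration, the standard Alembic workflow applies; check the existing revisions under `alembic/` for this repo's conventions:

```
alembic revision -m "describe your change"
```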

### Dev

Create a tunnel to the dev db VPC:

```
./scripts/setup_dev_db_tunnel.sh
```

Run migrations, specifying `env=dev`:

```
alembic -x env=dev upgrade head
```

If you get an error, make sure you are logged in to the gcloud CLI with application-default credentials:

```
gcloud auth application-default login
```

### Prod

Create a tunnel to the prod db VPC:

```
./scripts/setup_prod_db_tunnel.sh
```

Run migrations, specifying `env=prod`:

```
alembic -x env=prod upgrade head
```

If you get an error, make sure you are logged in to the gcloud CLI with application-default credentials:

```
gcloud auth application-default login
```

## How to deploy

Deployment configs are hosted in the infradata repo.

### Terraform

Changes to infrastructure are deployed via Terraform.

From a Piper workspace:

```
alias terraform='/google/bin/releases/g3terraform/runner_main -tf_label=terraform_1_10_4'

cd google3/configs/cloud/gong/services/chrome_cloud/chrome_fleet/fleet_console/envs

cd dev  # To deploy to dev
cd prod # To deploy to prod

terraform plan # Will show what changes will be applied
# Once you are sure you want to move forward
terraform apply
```

### Backend

#### Automatic Deployment

##### Push to dev

Dev is deployed through push on green.

##### Push to prod

To push dev to prod, use the Fleet Console backend promoter and click the “Trigger” action.

#### Environment changes

If you need to make changes to the deployment environment:

```
cd /your/infra/directory/data/cloud-run/projects/fleet-console/

cd dev/  # For dev
cd prod/ # For prod

vim service.star # Make the changes you need
make gen # Generate needed files

git commit -am '...'
git cl upload
```

### Latchkey config

From a Piper workspace:

```
cd google3/configs/cloud/gong/org_hierarchy/google.com/teams/chrome-teams/cros-test-infra/

cd dev/project.fleet-console-dev/   # To deploy to dev
cd prod/project.fleet-console-prod/ # To deploy to prod
```

From here you can make the required edits and submit a new CL.

## Links

### Dev Environment

fleet-console-dev is deployed automatically after CLs land.

### Prod Environment

fleet-console-prod must have deployment triggered using a CL.

## Troubleshooting

### Authentication Errors

When running the server locally, you may encounter an authentication error like this:

```
error: failed to initialize auth: failed to initialize the token source: interactive login is required
```

To resolve this, run the following command to set up your credentials:

```
luci-auth login -scopes "https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/userinfo.email"
```
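
You can then confirm which identity is cached (assuming your `luci-auth` build includes the `info` subcommand):

```
luci-auth info
```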

## Authentication

To ensure that the Fleet Console server uses the correct service accounts (`service@fleet-console-dev.iam.gserviceaccount.com` and `service@fleet-console-prod.iam.gserviceaccount.com`), it is important to use `auth.AsSelf` when creating authenticated clients. This ensures that the server uses its own service account credentials to make authenticated calls to other services.

Here is an example of how to create an authenticated HTTP client using `auth.AsSelf`:

```go
// Uses go.chromium.org/luci/server/auth, go.chromium.org/luci/common/errors,
// go.chromium.org/luci/grpc/prpc, and net/http.
// auth.AsSelf makes outbound RPCs authenticate as this service's own account.
t, err := auth.GetRPCTransport(ctx, auth.AsSelf)
if err != nil {
	return nil, errors.Annotate(err, "failed to get RPC transport").Err()
}
prpcClient := &prpc.Client{
	C:    &http.Client{Transport: t},
	Host: host,
	Options: &prpc.Options{
		Insecure:  false, // talk to the host over TLS
		UserAgent: "fleet-console",
	},
}
```
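
With this client, RPCs to `Host` are sent under the server's own service account identity rather than any end user's. LUCI's `auth` package also offers other modes (for example, forwarding the incoming caller's credentials) for cases where acting as the service itself is not what you want.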