This repo houses all the GCP code for the lookup service.
We're using Cloud Functions as a serverless platform for the lookup service.
The entry points for the Cloud Functions are defined in `cloud_functions/main.py`. Each entry point has a `functions_framework` decorator, which determines how the function receives its arguments based on the signature type specified.
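For reference, here is a minimal sketch of what an HTTP entry point looks like; the function name and request handling are illustrative, not the actual service code:

```python
import functions_framework
from flask import Request  # functions-framework hands HTTP functions a Flask request


# Hypothetical entry point; the real entry points live in cloud_functions/main.py.
@functions_framework.http
def lookup_binhosts(request: Request):
    # The "http" signature type means the function receives a single Flask
    # request argument and returns an HTTP response.
    name = request.args.get("name", "world")
    return f"hello {name}", 200
```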
Cloud Functions creation is managed via Terraform. TODO add more info here.
All changes are deployed to staging automatically once they are uploaded to Gerrit, and to production automatically once they are merged.
We use Proctor/Cloud Build Integrations (go/gcb-ggob) to automatically deploy changes. A Cloud Build build is triggered in the `prebuilts-cloud` project for the staging environment whenever a new Gerrit patchset is uploaded. See here for details on the Cloud Build trigger setup through Proctor.
During a build, a few things happen:
A Cloud Build trigger can also be run manually by following the steps here. Once a CL is successfully merged, a separate trigger deploys the latest source code to production.
The main way to test deployed Cloud Functions is via their triggers, including HTTP requests and Pub/Sub messages.
`staging-lookup-service` is set up to accept HTTP triggers via the `main` function. A function cannot have more than one trigger associated with it, but a trigger can be associated with many functions (as long as the functions are unique).
To test `staging-lookup-service` from the Cloud console, go to the Testing tab and click Test the Function. Output is only available if the Cloud Function is deployed using the 1st gen environment.
To develop and test cloud functions locally, we need to be able to connect to the Cloud SQL instance from a local machine/cloudtop. Hence, public IP has been enabled on the `prebuilts-staging` instance to accommodate this, and the database can be accessed locally through the Cloud SQL Auth Proxy. Public IP should not be enabled in production.
Note: We are running the cloud function locally but are using GCP's staging environment, including staging Cloud SQL database and Secret Manager instances.
Steps to run cloud functions locally (all of these steps are done outside the chroot):

1. Start the Cloud SQL Auth Proxy:
   `$ cloud-sql-proxy --address 127.0.0.1 --port 5432 chromeos-prebuilts:us-central1:prebuilts-staging`
2. Env variables used by the functions have default values in the `scripts/.env.defaults` file, which are useful for local testing and development (use the cloud-sql-proxy database host and port). Add a `scripts/.env.local` file to override the values; this can be helpful for local testing without checking changes into the repo.
3. Run `./scripts/run_server_local.sh` (this script also sets up a virtual env and installs the required packages). Set `FUNCTION_SIGNATURE_TYPE` based on whether you want to run the lookup or the update service.
4. Test with `./scripts/test_cloud_function_local.sh -r lookup` (a manual alternative is sketched after these steps).

Note: The cloud function auto restarts when any of the source files are changed.
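As a manual alternative to the test script, you can hit the locally running function directly. This is only a sketch; it assumes the lookup service was started with the HTTP signature type and that the local server is listening on functions-framework's default port 8080 (the scripts may configure a different port):

```python
import requests

# Hypothetical local smoke test against the locally running lookup service.
# A real lookup needs the base64-encoded LookupBinhostsRequest query parameter
# described in the binhost lookup service section below.
resp = requests.get("http://localhost:8080")
print(resp.status_code, resp.content)
```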
We have two Cloud SQL instances:

- `prebuilts` - the production instance
- `prebuilts-staging` - the staging instance

Each instance currently contains a `lookup_service` database for the project.
During initial development, we are manually deploying changes to the staging instances of both Cloud SQL and Cloud Functions. DO NOT deploy to production instances, as the production deployment process will be automated later.

To manually deploy changes:

1. Select `lookup_service` as the database destination.
2. Click Import and wait a few minutes for the import to complete.
The following steps can be used to connect to the database (through the locally running Cloud SQL Auth Proxy) and verify deployments and/or query data:

`psql "host=127.0.0.1 port=5432 sslmode=disable dbname=lookup_service user=postgres"`
We're using Pub/Sub to receive messages and update metadata for snapshots, binhosts, etc. in the database. The `update_service` cloud functions receive messages as cloud events through Pub/Sub subscriptions via Eventarc triggers. Each use case has its own Pub/Sub topic (e.g. `update_snapshot_data`, `update_binhost_data`), and the corresponding cloud function processes the messages and performs database operations. With 2nd gen cloud functions, each function can only have one trigger, so each Pub/Sub topic has its own cloud function for processing messages.
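For reference, a minimal sketch of what one of these Pub/Sub-triggered functions can look like; the function name, generated proto module, and message name are illustrative assumptions, not the actual `update_service` code:

```python
import base64

import functions_framework

# Hypothetical import of the module generated from prebuilts_cloud.proto
# (see scripts/gen_proto.sh); the real module path and message names may differ.
# from generated import prebuilts_cloud_pb2


@functions_framework.cloud_event
def update_snapshot_data(cloud_event):
    # For an Eventarc Pub/Sub trigger, the Pub/Sub message payload arrives
    # base64-encoded under data["message"]["data"].
    payload = base64.b64decode(cloud_event.data["message"]["data"])

    # With the topic's Binary schema encoding, the payload is the serialized proto:
    # msg = prebuilts_cloud_pb2.UpdateSnapshotData()  # hypothetical message name
    # msg.ParseFromString(payload)
    # ...then perform the corresponding database operations...
    print(f"received {len(payload)} bytes")
```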
During initial development, we are manually creating Pub/Sub schemas, topics and corresponding subscriptions via Google Cloud console. DO NOT deploy to production instances as the production deployment process will be automated later.
Steps for setting up Pub/Sub topics (this needs to be done for each topic):

1. Create a Pub/Sub schema with Schema type = Protocol Buffer.
2. Create a topic for the `update_service` cloud function to consume messages from. Check the Use a schema box, select the previously created schema, and set the encoding type to Binary (see the publishing sketch after these steps). Select the Enable message retention box, and use a Google-managed encryption key.
3. When the corresponding `update_service` cloud function is deployed, its Eventarc trigger is enabled with the corresponding topic, so the subscription is automatically created with this step.
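To publish a test message to one of these topics (e.g. from a local script), the payload must match the topic's protobuf schema and use Binary encoding. A sketch, assuming the `google-cloud-pubsub` client library and an illustrative message built from the generated proto module (both assumptions):

```python
from google.cloud import pubsub_v1

# Hypothetical generated proto module / message name, produced from
# prebuilts_cloud.proto via scripts/gen_proto.sh:
# from generated import prebuilts_cloud_pb2

publisher = pubsub_v1.PublisherClient()
# Project ID from the Cloud SQL connection name above; topic from the examples above.
topic_path = publisher.topic_path("chromeos-prebuilts", "update_snapshot_data")

# msg = prebuilts_cloud_pb2.UpdateSnapshotData()  # hypothetical message name
# data = msg.SerializeToString()  # Binary encoding, matching the topic schema
data = b"<serialized proto bytes>"

future = publisher.publish(topic_path, data=data)
print(future.result())  # message ID on success
```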
We're using protocol buffers to have a consistent data format when sending and retrieving Pub/Sub messages. The `scripts/gen_proto.sh` script compiles the protos and puts the generated proto files in
The binhost lookup service is an HTTP GET endpoint running in a cloud function. The protocol buffer definitions for the request and response are defined in `prebuilts_cloud.proto`.

The query parameter to be sent with the request is a `LookupBinhostsRequest` object encoded as a URL-safe base64 byte object. The response is a `LookupBinhostsResponse` object encoded as a URL-safe base64 byte object, sent in the response body.
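As an illustration of this encoding, the sketch below builds a request, encodes it as URL-safe base64, sends it as a query parameter, and decodes the response body. The function URL, the query parameter name, and the commented-out proto calls are assumptions for illustration only:

```python
import base64

import requests

# Hypothetical import of the module generated from prebuilts_cloud.proto:
# from generated import prebuilts_cloud_pb2

FUNCTION_URL = "https://<region>-<project>.cloudfunctions.net/staging-lookup-service"  # placeholder

# request_proto = prebuilts_cloud_pb2.LookupBinhostsRequest()  # fill in real fields
# encoded = base64.urlsafe_b64encode(request_proto.SerializeToString())
encoded = base64.urlsafe_b64encode(b"<serialized LookupBinhostsRequest>")

# "request" is a hypothetical query parameter name.
resp = requests.get(FUNCTION_URL, params={"request": encoded})

# response_proto = prebuilts_cloud_pb2.LookupBinhostsResponse()
# response_proto.ParseFromString(base64.urlsafe_b64decode(resp.content))
print(resp.status_code, len(resp.content))
```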
GCP Secret Manager is used to store sensitive information that can be accessed by the cloud functions. The secrets are created and managed manually through the cloud console. Secrets being used:

NOTE: The name of each of these secrets is prefixed with the environment name, e.g. `staging-prebuilts-db`.
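For reference, a sketch of how a cloud function can read one of these secrets with the Secret Manager client library; the project and secret IDs below are illustrative (only the environment-name prefix convention comes from this doc):

```python
from google.cloud import secretmanager


def get_secret(project_id: str, secret_id: str, version: str = "latest") -> str:
    """Fetch a secret value from GCP Secret Manager."""
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version}"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("utf-8")


# Example (illustrative IDs): the staging DB secret mentioned above.
# db_secret = get_secret("chromeos-prebuilts", "staging-prebuilts-db")
```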