# Permissions Prediction Service (//chrome/browser)

This directory contains the browser-specific implementation of the permissions prediction service. Its primary goal is to predict whether a permission request is likely to be granted and, based on that prediction, to decide whether to show a standard permission prompt or a quieter UI variation (a permission prompt that is much less prominent). This helps reduce the number of permission prompts that users perceive as intrusive.
The classes in this folder use or implement the platform-agnostic components described in `//components/permissions/prediction_service/README.md`.
The permissions prediction service is architecturally split into two main layers: platform-agnostic base classes and model handlers in `//components`, and a browser-specific implementation layer here in `//chrome/browser`.
* `//components/permissions/prediction_service`: Contains the low-level logic for loading and executing TFLite models, the client for the remote prediction service, and fundamental abstractions like `PermissionUiSelector`.
* `//chrome/browser/permissions/prediction_service`: Uses browser-specific infrastructure and the classes defined in the components layer. It is responsible for:
  * Implementing the `PermissionUiSelector` interfaces.
  * Integrating with `Profile`-keyed services like `OptimizationGuideKeyedService`.

When a permission request is initiated, the `PermissionRequestManager`
consults its configured UI selectors to determine whether to show a normal prompt or a quiet UI. This directory contains two such selectors that work in parallel; they are described in the components section below.
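As a simplified illustration of that interaction, the sketch below shows a callback-based selector being consulted by a request manager. All type names, fields, and signatures are invented for readability; the real `PermissionUiSelector` interface and `PermissionRequestManager` differ in detail.

```cpp
// Illustrative sketch only; not the real Chromium interfaces.
#include <functional>
#include <memory>
#include <optional>
#include <string>
#include <utility>
#include <vector>

enum class QuietUiReason { kPredictedVeryUnlikelyGrant, kTriggeredByBlocklist };

struct UiDecision {
  // std::nullopt means "show the normal, prominent prompt".
  std::optional<QuietUiReason> quiet_ui_reason;
};

struct PermissionRequestInfo {
  std::string permission_type;    // E.g. "notifications" or "geolocation".
  std::string requesting_origin;  // Origin asking for the permission.
};

// Each selector answers asynchronously whether the quiet UI should be used.
class UiSelectorSketch {
 public:
  virtual ~UiSelectorSketch() = default;
  virtual void SelectUiToUse(const PermissionRequestInfo& request,
                             std::function<void(UiDecision)> callback) = 0;
};

// The request manager consults its configured selectors; if none asks for the
// quiet UI, the normal prompt is shown.
class RequestManagerSketch {
 public:
  void AddSelector(std::unique_ptr<UiSelectorSketch> selector) {
    selectors_.push_back(std::move(selector));
  }

  void DecideUiForRequest(const PermissionRequestInfo& request,
                          std::function<void(UiDecision)> done) {
    if (selectors_.empty()) {
      done(UiDecision{});  // No selectors configured: normal prompt.
      return;
    }
    // The real manager runs selectors in parallel and merges their verdicts;
    // for brevity this sketch only asks the first one.
    selectors_.front()->SelectUiToUse(request, std::move(done));
  }

 private:
  std::vector<std::unique_ptr<UiSelectorSketch>> selectors_;
};
```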
The following components are defined in this directory:
* `PredictionBasedPermissionUiSelector`: An ML-based selector that decides whether a quiet permission prompt should be shown for a geolocation or notifications permission request. For this decision it gathers various aggregated client-specific and contextual signals (features) and queries one or more prediction models (both on-device and server-side) to get a grant likelihood. It also contains the logic to decide which of those models to use.
* `ContextualNotificationPermissionUiSelector`: Provides a rule-based check for notification permissions. It acts as a blocklist-style mechanism, using Safe Browsing verdicts to decide whether the site is known to show abusive, disruptive, or unsolicited prompts.

The service uses a combination of a remote service and various on-device models to generate predictions. `ModelHandler` classes are used to trigger model download and initialization and to provide an interface for model execution.
* `PredictionModelHandlerProvider`: Acts as a factory that provides the `PredictionBasedPermissionUiSelector` with the correct on-device model handler for a given permission request type (see the sketch below).

The service can leverage several types of prediction models with varying inputs, operating in different workflows. All models except for CPSSv1 are only active if the user has enabled “Make searches and browsing better” (`unified_consent::prefs::kUrlKeyedAnonymizedDataCollectionEnabled`).
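To make the provider-as-factory idea concrete, here is a minimal sketch. Apart from the concepts of a request type, a handler, and a provider, everything below is invented for illustration; the real `PredictionModelHandlerProvider` and model handlers have richer lifetimes and live partly in `//components`.

```cpp
// Illustrative sketch of the "provider as factory" pattern.
#include <memory>
#include <string>

enum class RequestTypeSketch { kNotifications, kGeolocation };

// Stand-in for a ModelHandler: responsible for triggering model download and
// initialization, and for exposing an execution entry point.
class ModelHandlerSketch {
 public:
  virtual ~ModelHandlerSketch() = default;
  virtual bool IsModelAvailable() const = 0;
  virtual float ExecuteModel(const std::string& input) = 0;
};

class NotificationsModelHandlerSketch : public ModelHandlerSketch {
 public:
  bool IsModelAvailable() const override { return true; }
  float ExecuteModel(const std::string& /*input*/) override {
    return 0.5f;  // Placeholder for real TFLite execution.
  }
};

// The provider hands the selector the correct handler for a request type.
class ModelHandlerProviderSketch {
 public:
  std::unique_ptr<ModelHandlerSketch> GetHandlerForRequestType(
      RequestTypeSketch type) {
    if (type == RequestTypeSketch::kNotifications)
      return std::make_unique<NotificationsModelHandlerSketch>();
    return nullptr;  // No on-device model for this request type.
  }
};
```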
* Server-side CPSS (Chrome Permissions Suggestion Service): A remote service that provides a grant likelihood prediction. It receives a combination of aggregated client signals (e.g., aggregated counts of the client's past permission decisions), contextual signals (e.g., gesture type, relevance score from on-device models), and URL-specific signals (the requesting origin). Communication is handled by the `PredictionServiceRequest` class; the sketch after this list illustrates the kinds of signals involved.
* On-device Models: For privacy and latency benefits, several on-device models are used. The logic for gathering the inputs for each model resides in this `//chrome/browser` layer. These models fall into two categories based on their workflow:
  * Standalone On-device Model (CPSSv1): This is a TFLite model that uses statistical signals about the client's past interactions with permission prompts to predict a grant likelihood. It operates entirely on-device without needing a server-side call, providing a complete prediction workflow.
  * Stacked On-device AI Models (AIv1, AIv3, AIv4): These models are part of a multi-stage workflow. They run on-device to analyze page content and generate a `PermissionRequestRelevance` score. This score is the only output from these models that is sent to the server-side CPSS as an additional feature; no actual page content is sent.
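To make the kinds of signals concrete, here is a hypothetical sketch of the feature bundle that accompanies a server-side CPSS query. The field names are invented; the production request is a protocol buffer populated by the selector and sent via `PredictionServiceRequest`.

```cpp
// Hypothetical illustration of the feature groups sent to the server-side
// CPSS. Only aggregated statistics, contextual signals, the requesting
// origin, and (for the stacked workflows) the relevance score are sent;
// never page content.
#include <cstdint>
#include <optional>
#include <string>

struct CpssRequestSketch {
  // Aggregated client signals: counts of the client's past prompt decisions.
  int32_t prior_grants = 0;
  int32_t prior_denies = 0;
  int32_t prior_dismissals = 0;
  int32_t prior_ignores = 0;

  // Contextual signals for the current request.
  bool had_user_gesture = false;
  // Relevance score produced by an on-device AI model, if one ran.
  std::optional<float> permission_request_relevance;

  // URL-specific signal.
  std::string requesting_origin;
};
```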
* `PermissionsAiv1Handler`: Manages all interactions with the AIv1 model. It is responsible for executing the model with the page's text content and for handling execution timeouts. Unlike the other on-device model handlers, it resides in `//chrome/browser` because it directly depends on `OptimizationGuideKeyedService`, a `Profile`-keyed service. The AIv4 model instead takes text embeddings (generated via the `PassageEmbedderDelegate`) as input. The low-level handlers and executors for the CPSSv1, AIv3, and AIv4 models are implemented in `//components/permissions/prediction_service`.
* `LanguageDetectionObserver`: Determines the language of the web page, a prerequisite for the language-specific passage embedding model that embeds the rendered text as input for AIv4.
* `PassageEmbedderDelegate`: Manages calls to the passage embedding model to generate text embeddings from page content.
* `PredictionServiceRequest`: Encapsulates a request to the server-side `PredictionService`.
* `PredictionServiceFactory`: A factory for creating the `PredictionService` instance.

The prediction service combines a rule-based check, which runs in parallel, with a more sophisticated, multi-stage ML pipeline. The service's core logic, implemented in the `PredictionBasedPermissionUiSelector`, employs a “model stacking” approach: on-device AI models (AIv1, AIv3, AIv4) run first to generate a `PermissionRequestRelevance` score. This score is then used as an input feature for the main server-side model, which makes the final grant likelihood prediction.
The `PredictionBasedPermissionUiSelector::GetPredictionTypeToUse` method selects the appropriate ML workflow based on user settings (e.g., “Make searches and browsing better” (`unified_consent::prefs::kUrlKeyedAnonymizedDataCollectionEnabled`)), feature flags, and the availability of on-device models.
The rule-based workflow, handled by the `ContextualNotificationPermissionUiSelector`, provides a rule-based check for notification permission requests. It runs in parallel with the ML-based workflows and can trigger a quiet UI or a warning based on preloaded data and Safe Browsing verdicts.
The specific ML-based workflow is chosen by `GetPredictionTypeToUse` based on a priority order (AIv4 > AIv3 > AIv1 > On-Device CPSSv1 > Server-Side only). If a higher-priority model is unavailable or its preconditions are not met, the service falls back to the next one in the sequence.
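A condensed sketch of that fallback logic follows. The enum values mirror the prediction-source names used later in this README; the boolean parameters and the `kNone` value are invented stand-ins for the availability and precondition checks that the real `GetPredictionTypeToUse` performs (it also consults the request type, feature flags, and command-line overrides).

```cpp
// Illustrative only; mirrors the priority order AIv4 > AIv3 > AIv1 >
// on-device CPSSv1 > server-side only.
enum class PredictionSourceSketch {
  kOnDeviceAiv4AndServerSideModel,
  kOnDeviceAiv3AndServerSideModel,
  kOnDeviceAiv1AndServerSideModel,
  kOnDeviceCpssV1Model,
  kServerSideCpssV3Model,
  kNone,
};

PredictionSourceSketch GetPredictionTypeToUseSketch(bool msbb_enabled,
                                                    bool aiv4_ready,
                                                    bool aiv3_ready,
                                                    bool aiv1_ready,
                                                    bool cpss_v1_enabled) {
  if (msbb_enabled) {
    // Stacked AI workflows take priority, newest model first.
    if (aiv4_ready)
      return PredictionSourceSketch::kOnDeviceAiv4AndServerSideModel;
    if (aiv3_ready)
      return PredictionSourceSketch::kOnDeviceAiv3AndServerSideModel;
    if (aiv1_ready)
      return PredictionSourceSketch::kOnDeviceAiv1AndServerSideModel;
    // No on-device AI model is ready: fall back to the server-side model.
    return PredictionSourceSketch::kServerSideCpssV3Model;
  }
  // Without "Make searches and browsing better", only the standalone
  // on-device CPSSv1 model may run.
  return cpss_v1_enabled ? PredictionSourceSketch::kOnDeviceCpssV1Model
                         : PredictionSourceSketch::kNone;
}
```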
The on-device CPSSv1 workflow uses a single on-device model to make a final prediction without requiring a server-side call.

* Model: `kOnDeviceCpssV1Model`
* Used when “Make searches and browsing better” (`unified_consent::prefs::kUrlKeyedAnonymizedDataCollectionEnabled`) is not enabled for the client.
* Feature: `permissions::features::kPermissionOnDeviceNotificationPredictions`.
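As a rough sketch of what a complete on-device workflow means here, the snippet below maps a predicted grant likelihood directly to a quiet-UI decision without any network call. The input fields, the threshold, and the constant model output are all made up; the real handler executes a TFLite model and works with discretized likelihood values.

```cpp
// Hypothetical illustration of the standalone CPSSv1 workflow.
struct CpssV1InputSketch {
  // Statistical signals about the client's past prompt interactions.
  float grant_rate = 0.0f;
  float deny_rate = 0.0f;
  float dismiss_rate = 0.0f;
  int total_prompts_seen = 0;
};

// Stand-in for executing the TFLite model; returns a grant likelihood in [0, 1].
float ExecuteCpssV1Sketch(const CpssV1InputSketch& /*input*/) {
  return 0.1f;  // Placeholder; the real value comes from the learned model.
}

bool ShouldUseQuietUiSketch(const CpssV1InputSketch& input) {
  // Invented threshold: a very low predicted grant likelihood selects the
  // quiet prompt UI, entirely on-device.
  return ExecuteCpssV1Sketch(input) < 0.15f;
}
```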
The stacked AI workflows (AIv1, AIv3, AIv4) use a two-stage process and are only active if the user has enabled “Make searches and browsing better” (`unified_consent::prefs::kUrlKeyedAnonymizedDataCollectionEnabled`). First, an on-device AI model generates a `PermissionRequestRelevance` score. This score is then sent, along with other features, to the server-side CPSS model for the final grant likelihood prediction.
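A hedged end-to-end sketch of the two stages is below. All names and types are illustrative; the real flow is asynchronous, runs through the model handlers and `PassageEmbedderDelegate` (for AIv4), and issues the network call via `PredictionServiceRequest`.

```cpp
// Illustrative two-stage flow: on-device relevance scoring followed by a
// server-side grant-likelihood query. Only the relevance score and the other
// aggregated/contextual features leave the device, never page content.
#include <optional>
#include <string>

// Stage 1: an on-device AI model turns page text (or its embeddings, in the
// AIv4 case) into a PermissionRequestRelevance score.
float ComputeRelevanceOnDeviceSketch(const std::string& rendered_page_text) {
  (void)rendered_page_text;  // Placeholder for on-device model execution.
  return 0.8f;
}

// Stage 2: the server-side CPSS receives the relevance score as one extra
// feature next to the usual aggregated and contextual signals.
struct ServerRequestSketch {
  std::string requesting_origin;
  bool had_user_gesture = false;
  std::optional<float> permission_request_relevance;
};

float QueryServerSideCpssSketch(const ServerRequestSketch& request) {
  (void)request;  // Placeholder for the call made via PredictionServiceRequest.
  return 0.3f;    // Grant likelihood returned by the server.
}

float PredictGrantLikelihoodSketch(const std::string& rendered_page_text,
                                   const std::string& origin,
                                   bool had_user_gesture) {
  ServerRequestSketch request;
  request.requesting_origin = origin;
  request.had_user_gesture = had_user_gesture;
  request.permission_request_relevance =
      ComputeRelevanceOnDeviceSketch(rendered_page_text);
  return QueryServerSideCpssSketch(request);
}
```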
### Baseline: Server-Side Only

This is the baseline workflow when no on-device AI models are active.

* Model: `kServerSideCpssV3Model` (feature: `permissions::features::kPermissionPredictionsV2`).

### Stage 1: On-Device AI Models
These models execute first to generate the relevance score.
* Model: `kOnDeviceAiv1AndServerSideModel` (feature: `permissions::features::kPermissionsAIv1`).
* Model: `kOnDeviceAiv3AndServerSideModel` (feature: `permissions::features::kPermissionsAIv3`).
* Model: `kOnDeviceAiv4AndServerSideModel` (feature: `permissions::features::kPermissionsAIv4`).

### Stage 2: Server-Side Model (with AI Score)
The request to the server-side model includes the `PermissionRequestRelevance` score from the on-device AI model that ran in Stage 1.

The component has a suite of unit tests and browser tests.
Unit tests are located in `prediction_based_permission_ui_selector_unittest.cc` and `contextual_notification_permission_ui_selector_unittest.cc`. They cover the logic of the UI selectors in isolation.
To run the `PredictionBasedPermissionUiSelector` unit tests with logging:

```
tools/autotest.py --output-directory out/Default chrome/browser/permissions/prediction_service/prediction_based_permission_ui_selector.cc \
  --test-launcher-bot-mode --single-process-tests --fast-local-dev \
  --enable-logging=stderr --v=0 --vmodule="*/permissions/*"=2,"*/optimization_guide/*"=2
```
Browser tests are in `prediction_service_browsertest.cc`. These tests cover the end-to-end functionality, including interactions with the `PermissionRequestManager` and mock model handlers.
To run the browser tests locally:

```
tools/autotest.py --output-directory out/Default chrome/browser/permissions/prediction_service/prediction_service_browsertest.cc \
  --test-launcher-bot-mode --test-launcher-jobs=1 --fast-local-dev
```
To run the browser tests locally with debugging output, select a single test and use the `--single-process-tests` flag:

```
tools/autotest.py --output-directory out/Default chrome/browser/permissions/prediction_service/prediction_service_browsertest.cc \
  --test-launcher-bot-mode --fast-local-dev --enable-logging=stderr --single-process-tests --v=0 \
  --vmodule="*/passage_embeddings/*"=5,"*/permissions/*"=2,"*/optimization_guide/*"=2 --gtest_filter="OneSingleTest"
```
The `//chrome` layer implementation relies on the core logic in `//components`. See `//components/permissions/prediction_service/README.md` for more details.

The `PermissionRequestManager` invokes `SelectUiToUse` on its configured selectors. See this Code Search query for the primary invocation site.