chrome/browser/permissions/prediction_service/README.md

Permissions Prediction Service (//chrome/browser)

Overview

This directory contains the browser-specific implementation of the permissions prediction service. Its primary goal is to predict whether a permission request is likely to be granted and, based on that prediction, to decide whether to show a standard permission prompt or a quieter UI variation (a much less prominent permission prompt). This helps reduce the number of permission prompts that users perceive as intrusive.

The classes in this folder use or implement the platform-agnostic components defined in //components/permissions/prediction_service and described in its README.md.

Architecture

The permissions prediction service is architecturally split into two main layers: platform-agnostic base classes and model handlers in //components and a browser-specific implementation layer here in //chrome/browser.

  • //components/permissions/prediction_service: Contains the low-level logic for loading and executing TFLite models, the client for the remote prediction service, and fundamental abstractions like PermissionUiSelector.

  • //chrome/browser/permissions/prediction_service: Uses browser-specific infrastructure and the classes defined in the components layer. It is responsible for:

    • Implementing the concrete PermissionUiSelector interfaces.
    • Gathering browser-specific features for the models (e.g., page language, snapshots, text embeddings).
    • Interacting with Profile-keyed services like OptimizationGuideKeyedService.

When a permission request is initiated, the PermissionRequestManager consults its configured UI selectors to determine whether to show a normal prompt or a quiet UI. This directory contains two such selectors that work in parallel; they are described in the components section below.
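
The consultation step above can be sketched as follows. This is a minimal, self-contained illustration: the combination rule shown (use the quiet UI if any selector requests it) and the type names are assumptions for illustration only; the real PermissionRequestManager logic is asynchronous and more involved.

```cpp
#include <cassert>
#include <vector>

// Hypothetical decision type; the real selectors return richer decision
// objects with reasons and warning options.
enum class Decision { kUseNormalUi, kUseQuietUi };

// Illustrative combination rule: if any configured selector asks for the
// quiet UI, show it; otherwise fall back to the normal prompt.
Decision CombineSelectorDecisions(const std::vector<Decision>& decisions) {
  for (Decision d : decisions) {
    if (d == Decision::kUseQuietUi) return Decision::kUseQuietUi;
  }
  return Decision::kUseNormalUi;
}
```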

Components

The following components are defined in this directory:

UI Selectors

  • PredictionBasedPermissionUiSelector: An ML-based selector that decides whether a quiet permission prompt should be shown for a geolocation or notifications permission request. To make this decision, it gathers various aggregated client-specific and contextual signals (features) and queries one or more prediction models (both on-device and server-side) for a grant likelihood. It also contains the logic to decide which of those models to use.
  • ContextualNotificationPermissionUiSelector: Provides a rule-based check for notification permissions. It acts as a blocklist-style mechanism, using Safe Browsing verdicts to decide whether the site is known to show abusive, disruptive, or unsolicited prompts.

Prediction Models and Handlers

The service uses a combination of a remote service and various on-device models to generate predictions. ModelHandler classes trigger model download and initialization and provide an interface for model execution.

Model Access

  • PredictionModelHandlerProvider: Acts as a factory that provides the PredictionBasedPermissionUiSelector with the correct on-device model handler for a given permission request type.

Prediction Models

The service can leverage several types of prediction models with varying inputs, operating in different workflows. All models except CPSSv1 are only active if the user has enabled “Make searches and browsing better” (unified_consent::prefs::kUrlKeyedAnonymizedDataCollectionEnabled).

  • Server-side CPSS (Chrome Permissions Suggestion Service): A remote service that provides a grant likelihood prediction. It receives a combination of aggregated client signals (e.g., aggregated counts of the client's past permission decisions), contextual signals (e.g., gesture type, relevance score from on-device models), and URL-specific signals (the requesting origin). Communication is handled by the PredictionServiceRequest class.

  • On-device Models: For privacy and latency benefits, several on-device models are used. The logic for gathering the inputs for each model resides in this //chrome/browser layer. These models fall into two categories based on their workflow:

    • Standalone On-device Model (CPSSv1): This is a TFLite model that uses statistical signals about the client's past interactions with permission prompts to predict a grant likelihood. It operates entirely on-device without needing a server-side call, providing a complete prediction workflow.

    • Stacked On-device AI Models (AIv1, AIv3, AIv4): These models are part of a multi-stage workflow. They run on-device to analyze page content and generate a PermissionRequestRelevance score. This score is the only output from these models that is sent to the server-side CPSS, as an additional feature; no actual page content is sent.

      • AIv1: Analyzes the rendered text content of a page. Its handler, PermissionsAiv1Handler, manages all interactions with this model. It is responsible for executing the model with the page's text content and handling execution timeouts. Unlike the other on-device model handlers, it resides in //chrome/browser because it directly depends on OptimizationGuideKeyedService, a Profile-keyed service.
      • AIv3: Uses a snapshot of the web page as input.
      • AIv4: Uses both a page snapshot and text embeddings (generated by PassageEmbedderDelegate) as input.

The low-level handlers and executors for the CPSSv1, AIv3, and AIv4 models are implemented in //components/permissions/prediction_service.

Helper Components

  • LanguageDetectionObserver: Determines the language of the web page, a prerequisite for the language-specific passage embedding model that embeds the rendered text as input for AIv4.
  • PassageEmbedderDelegate: Manages calls to the passage embedding model to generate text embeddings from page content.

Server Communication

Communication with the remote CPSS is handled by the PredictionServiceRequest class (prediction_service_request.cc/.h), which handles the request/response cycle with the prediction service.

Prediction Workflows

The prediction service combines a rule-based check, which runs in parallel, with a more sophisticated multi-stage ML pipeline. The service's core logic, implemented in the PredictionBasedPermissionUiSelector, employs a “model stacking” approach: on-device AI models (AIv1, AIv3, AIv4) run first to generate a PermissionRequestRelevance score. This score is then used as an input feature for the main server-side model, which makes the final grant likelihood prediction.

The PredictionBasedPermissionUiSelector::GetPredictionTypeToUse method selects the appropriate ML workflow based on user settings (e.g., “Make searches and browsing better”, unified_consent::prefs::kUrlKeyedAnonymizedDataCollectionEnabled), feature flags, and the availability of on-device models.

Rule-Based Check

This workflow provides a rule-based check for notification permission requests. It runs in parallel with the ML-based workflows and can trigger a quiet UI or a warning based on preloaded data and Safe Browsing verdicts.

ML-Based Predictions

The specific ML-based workflow is chosen by GetPredictionTypeToUse based on a priority order (AIv4 > AIv3 > AIv1 > On-Device CPSSv1 > Server-Side only). If a higher-priority model is unavailable or its preconditions are not met, the service falls back to the next one in the sequence.
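
The fallback order can be sketched as a simple priority walk. The enum values and availability flags below are illustrative stand-ins, not the real Chromium identifiers; only the priority order itself follows the text.

```cpp
#include <cassert>

// Hypothetical workflow identifiers, ordered by priority.
enum class PredictionSource {
  kOnDeviceAiv4AndServerSide,
  kOnDeviceAiv3AndServerSide,
  kOnDeviceAiv1AndServerSide,
  kOnDeviceCpssV1,
  kServerSideOnly,
  kNone,
};

// Illustrative availability/precondition flags for each workflow.
struct Availability {
  bool aiv4_enabled = false;
  bool aiv3_enabled = false;
  bool aiv1_enabled = false;
  bool cpss_v1_enabled = false;
  bool server_side_enabled = false;
};

// Walks the priority order AIv4 > AIv3 > AIv1 > CPSSv1 > server-side only,
// returning the first workflow whose preconditions are met.
PredictionSource GetPredictionTypeToUse(const Availability& a) {
  if (a.aiv4_enabled) return PredictionSource::kOnDeviceAiv4AndServerSide;
  if (a.aiv3_enabled) return PredictionSource::kOnDeviceAiv3AndServerSide;
  if (a.aiv1_enabled) return PredictionSource::kOnDeviceAiv1AndServerSide;
  if (a.cpss_v1_enabled) return PredictionSource::kOnDeviceCpssV1;
  if (a.server_side_enabled) return PredictionSource::kServerSideOnly;
  return PredictionSource::kNone;
}
```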

Standalone On-Device Model

This workflow uses a single on-device model to make a final prediction without requiring a server-side call.

  • Model: kOnDeviceCpssV1Model
  • Preconditions:
    • “Make searches and browsing better” (unified_consent::prefs::kUrlKeyedAnonymizedDataCollectionEnabled) is not enabled for the client.
    • The on-device prediction feature is enabled for the permission type (e.g., permissions::features::kPermissionOnDeviceNotificationPredictions).
    • The client has encountered at least 4 previous permission prompts for the given type.
  • Input:
    • Client-specific signals: Statistical aggregates of the client's past permission actions (e.g., grant/deny/dismiss counts), both for the specific permission type and for all permission types combined.
    • Contextual signals: Request gesture type.
  • Error Handling: If the model is not available or fails to execute, the workflow falls back to the normal UI.
  • Output: The model predicts a grant likelihood. If it's “VERY_UNLIKELY”, a quiet UI is shown. Otherwise, the normal UI is used.
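
The output rule and error fallback above amount to a small decision function. This is a hedged sketch: the enum names are illustrative, not the real proto identifiers.

```cpp
#include <cassert>
#include <optional>

// Illustrative grant-likelihood buckets; names are assumptions.
enum class GrantLikelihood { kVeryUnlikely, kUnlikely, kNeutral, kLikely, kVeryLikely };
enum class UiToShow { kNormal, kQuiet };

// Only a VERY_UNLIKELY prediction triggers the quiet UI; a missing
// prediction (model unavailable or execution failed) falls back to the
// normal UI.
UiToShow DecideUi(std::optional<GrantLikelihood> prediction) {
  if (!prediction.has_value()) return UiToShow::kNormal;  // error fallback
  return *prediction == GrantLikelihood::kVeryUnlikely ? UiToShow::kQuiet
                                                       : UiToShow::kNormal;
}
```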

Stacked Models (On-Device AI + Server-Side)

These workflows use a two-stage process and are only active if the user has enabled “Make searches and browsing better” (unified_consent::prefs::kUrlKeyedAnonymizedDataCollectionEnabled). First, an on-device AI model generates a PermissionRequestRelevance score. This score is then sent, along with other features, to the server-side CPSS model for the final grant likelihood prediction.
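
The two-stage data flow can be sketched as follows. The type and function names are hypothetical; only the flow follows the text: a successful on-device stage contributes a relevance score as an extra server-side feature, and a failed stage simply omits it.

```cpp
#include <cassert>
#include <optional>

// Illustrative feature set for the server-side request.
struct ServerRequestFeatures {
  int grant_count = 0;            // aggregated client signal (illustrative)
  int deny_count = 0;             // aggregated client signal (illustrative)
  bool had_user_gesture = false;  // contextual signal
  std::optional<int> relevance;   // relevance score, if stage 1 succeeded
};

// Stage 1: run an on-device AI model. On any failure (input gathering or
// model execution), return nullopt so stage 2 proceeds without the score.
std::optional<int> RunOnDeviceAiModel(bool model_available) {
  if (!model_available) return std::nullopt;
  return 4;  // e.g. a relevance bucket produced by the model
}

// Stage 2: build the server-side request; only the score is attached,
// never the page content itself.
ServerRequestFeatures BuildServerRequest(std::optional<int> relevance) {
  ServerRequestFeatures features;
  features.relevance = relevance;
  return features;
}
```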

Baseline: Server-Side Only

This is the baseline workflow when no on-device AI models are active.

  • Model: kServerSideCpssV3Model
  • Preconditions:
    • The server-side prediction feature is enabled (permissions::features::kPermissionPredictionsV2).
    • No on-device AI models (AIv1, AIv3, AIv4) are enabled.
  • Input:
    • Aggregated client-specific signals: The client's aggregated permission action history.
    • Contextual signals: Request gesture type.
    • URL-specific signals: Requesting origin URL.
  • Error Handling: If the request to the server fails or returns an empty response, the workflow falls back to the normal UI.
  • Output: The server returns a grant likelihood. If it's “VERY_UNLIKELY”, a quiet UI is shown. Otherwise, the normal UI is used.

Stage 1: On-Device AI Models

These models execute first to generate the relevance score.

  • Model: kOnDeviceAiv1AndServerSideModel

    • Preconditions: The AIv1 feature is enabled (permissions::features::kPermissionsAIv1).
    • Input: The rendered text content of the page.
    • Error Handling: If fetching text or model execution fails, the workflow proceeds to the server-side model without the relevance score.
  • Model: kOnDeviceAiv3AndServerSideModel

    • Preconditions: The AIv3 feature is enabled (permissions::features::kPermissionsAIv3).
    • Input: A snapshot of the web page.
    • Error Handling: If snapshot generation or model execution fails, the workflow proceeds to the server-side model without the relevance score.
  • Model: kOnDeviceAiv4AndServerSideModel

    • Preconditions: The AIv4 feature is enabled (permissions::features::kPermissionsAIv4).
    • Input:
      • A snapshot of the web page.
      • Text embeddings generated from the page's content (requires page language to be English).
    • Error Handling: If language detection, text embedding, snapshot generation, or model execution fails, the workflow proceeds to the server-side model without the relevance score.

Stage 2: Server-Side Model (with AI Score)

  • Input: All signals from the baseline server-side model, plus the PermissionRequestRelevance score from the on-device AI model that ran in stage 1.
  • Output / Error Handling: Same as the baseline server-side model.

Testing

The component has a suite of unit tests and browser tests.

Unit Tests

Unit tests are located in prediction_based_permission_ui_selector_unittest.cc and contextual_notification_permission_ui_selector_unittest.cc. They cover the logic of the UI selectors in isolation.

To run the PredictionBasedUiSelector unit tests with logging:

tools/autotest.py --output-directory out/Default chrome/browser/permissions/prediction_service/prediction_based_permission_ui_selector.cc \
--test-launcher-bot-mode --single-process-tests --fast-local-dev \
--enable-logging=stderr --v=0 --vmodule="*/permissions/*"=2,"*/optimization_guide/*"=2

Browser Tests

Browser tests are in prediction_service_browsertest.cc. These tests cover the end-to-end functionality, including interactions with the PermissionRequestManager and mock model handlers.

To run the browser tests locally:

tools/autotest.py --output-directory out/Default chrome/browser/permissions/prediction_service/prediction_service_browsertest.cc \
--test-launcher-bot-mode --test-launcher-jobs=1 --fast-local-dev

To run the browser tests locally with debugging output, select a single test and pass the --single-process-tests flag:

tools/autotest.py --output-directory out/Default chrome/browser/permissions/prediction_service/prediction_service_browsertest.cc \
--test-launcher-bot-mode --fast-local-dev  --enable-logging=stderr --single-process-tests --v=0 \
--vmodule="*/passage_embeddings/*"=5,"*/permissions/*"=2,"*/optimization_guide/*"=2 --gtest_filter="OneSingleTest"

Relevant Context