This document describes Chromium's implementation of service workers.
This section briefly introduces what service workers are. For a more detailed treatment, see the MDN documentation or the Service Worker specification.
Service workers are a web platform feature that forms the basis of app-like capabilities such as offline support, push notifications, and background sync. A service worker is an event-driven JavaScript program that runs in a worker thread separate from a document.
Once registered, a service worker is installed on the browser and persists indefinitely until evicted or deleted manually (see Eviction below). The browser dispatches events to the worker thread, starting the thread whenever needed and stopping it when there are no more events to dispatch.
Service workers are bound to an origin. More specifically they have a scope URL, specified when the service worker is registered. The service worker controls pages or web workers that match its scope. There can be only one service worker registration for a given scope.
A website registers a service worker using the `register()` API:

```javascript
navigator.serviceWorker.register('sw.js', {scope: './foo'});
```

If this page is on `https://example.com`, the service worker is registered for scope `https://example.com/foo`.
The service worker may look like this:

```javascript
// sw.js:
self.addEventListener('install', event => {
  // Install static assets.
  event.waitUntil((async () => {
    const cache = await caches.open('my-cache');
    await cache.addAll(['all.css', 'page.js', 'page.html']);
  })());
});

self.addEventListener('fetch', event => {
  // Respond with a cached resource, or else fetch from network.
  event.respondWith((async () => {
    const response = await caches.match(event.request);
    return response || fetch(event.request);
  })());
});
```
Note the `fetch` event handler. A core functionality of service workers is the ability to intercept and respond to URL requests, similar to a network proxy. Whenever the browser makes a URL request that a service worker can intercept, it dispatches a `fetch` event to the worker. The service worker can then provide a response to the request, for example, by using the Fetch API, the Cache Storage API, or by generating a response using `new Response()`.
To understand which service worker intercepts a URL request, there are two rules:

1. For main resource requests, i.e., requests for a window or web worker, the browser checks whether the request URL matches the scope of a service worker registration (e.g., `https://example.com/foo/hi` matches the service worker above). If so, that service worker intercepts the request, and the service worker subsequently controls the resulting window or web worker.
2. For subresource requests, i.e., requests from a window or web worker for resources such as images and scripts, the browser checks whether the requesting client is controlled by a service worker. If so, that controller service worker intercepts the request.

The rest of this document explains how service workers are implemented in Chromium.
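The scope-matching rule for main resources can be sketched as follows. This is a hypothetical standalone helper, not Chromium code: a registration matches when its scope URL is a prefix of the request URL, and when several registrations match, the most specific (longest) scope wins.

```javascript
// Hypothetical sketch: pick the registration whose scope is the longest
// prefix of the request URL, or null if none matches.
function matchRegistration(registrations, url) {
  let best = null;
  for (const registration of registrations) {
    if (url.startsWith(registration.scope) &&
        (best === null || registration.scope.length > best.scope.length)) {
      best = registration;
    }
  }
  return best;
}

const registrations = [
  {scope: 'https://example.com/', script: 'root-sw.js'},
  {scope: 'https://example.com/foo', script: 'sw.js'},
];
// https://example.com/foo/hi matches both scopes; the longer scope wins.
console.log(matchRegistration(registrations, 'https://example.com/foo/hi').script);
// sw.js
```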
As a web platform feature, service worker is implemented in the content module and its dependency Blink. Chrome-specific hooks and additions, for example, for Chrome extensions support, are in higher-level directories like //chrome.
The service worker implementation has parts in both the browser process and renderer process:
TODO: A simple diagram of the browser/renderer architecture and the Mojo message pipes and interfaces would be helpful.
Note: The classes in this section are in the `content` namespace.
In the browser process, `ServiceWorkerContextCore` is the root class to which all other service worker classes are attached. There is one context per storage partition.

`ServiceWorkerContextCore` is owned by a thread-safe refcounted wrapper called `ServiceWorkerContextWrapper`. `StoragePartition` is the primary owner of this object on the UI thread. But `ServiceWorkerContextCore` itself, and the classes that hang off of it, are primarily single-threaded and run on the IO thread. There is ongoing work to move this “service worker core” thread to the UI thread. Once that is done, it may be possible to remove the refcounted wrapper and have `StoragePartition` uniquely own the context core on the UI thread. See the Service Worker on UI design doc.
The context owns `ServiceWorkerStorage`, which manages service worker registrations and auxiliary data attached to them. The `ServiceWorkerStorage` owns a `ServiceWorkerDatabase`, which provides access to the LevelDB instance containing the registration data. See Storage below.
`ServiceWorkerStorage` is used to register, update, and unregister service workers. Typically these operations are driven by `ServiceWorkerRegisterJob` and `ServiceWorkerUnregisterJob`, which implement the jobs defined in the specification. As per the specification, the jobs run sequentially in a job queue. The class `ServiceWorkerJobCoordinator`, owned by the context, implements this queue.
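The sequential job queue behavior can be sketched as follows. This is a hypothetical sketch, not Chromium code: jobs run strictly one at a time, in FIFO order, the way the coordinator runs register and unregister jobs.

```javascript
// Hypothetical sketch of the specification's job queue: each scheduled
// async job starts only after all earlier jobs have settled.
class JobQueue {
  constructor() {
    this.tail = Promise.resolve();
  }
  // Schedules an async job and returns a promise for its result.
  schedule(job) {
    const result = this.tail.then(job);
    this.tail = result.catch(() => {});  // Keep the chain alive on failure.
    return result;
  }
}

const queue = new JobQueue();
const order = [];
queue.schedule(async () => { order.push('register sw.js'); });
queue.schedule(async () => { order.push('unregister sw.js'); });
```

Even though both jobs are scheduled immediately, the second one observes all effects of the first before it starts.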
`ServiceWorkerStorage` represents service worker entities as `ServiceWorkerRegistration` and `ServiceWorkerVersion`. These correspond to the specification's model of service worker registration and service worker, respectively.
`ServiceWorkerVersion` provides functions for starting and stopping a service worker thread in the renderer, and for dispatching events to the thread. It uses a lower-level class, `EmbeddedWorkerInstance`, to request the renderer to start and stop the service worker thread.
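The start-on-demand behavior described earlier (start the worker whenever an event needs dispatching, stop it when idle) can be sketched as follows. This is a hypothetical simplification, not Chromium code; the real implementation dispatches over Mojo and uses an idle timeout.

```javascript
// Hypothetical sketch: ensure the worker is running before dispatching an
// event, and track in-flight events so the worker can stop when idle.
class Version {
  constructor() {
    this.running = false;
    this.inflightEvents = 0;
  }
  startWorker() {
    // In Chromium this asks EmbeddedWorkerInstance to start the
    // renderer-side service worker thread.
    this.running = true;
  }
  async dispatchEvent(handler) {
    if (!this.running) this.startWorker();
    this.inflightEvents++;
    try {
      return await handler();
    } finally {
      if (--this.inflightEvents === 0) {
        // Idle: eligible to be stopped (after an idle timeout in practice).
        this.running = false;
      }
    }
  }
}
```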
Note: The “embedded worker” terminology and abstraction is a bit of a historical accident. At one point the plan was for service workers and shared workers to use the same “embedded worker” classes. But it turned out only service workers use it.
A running service worker has a corresponding host in the browser process called `ServiceWorkerHost`.

In addition, service worker clients (windows and web workers) are represented by a `ServiceWorkerContainerHost` in the browser process. This host holds information pertinent to service workers, such as which `ServiceWorkerRegistration` is controlling the client, and it implements the Mojo interface the renderer uses for the client-side service worker API.
Note: Historically much service worker code in the renderer process was implemented in `//content/renderer`. There is ongoing work to move it to `//third_party/blink` per Onion Soup, which will remove some layers of indirection.
The renderer process naturally has classes that implement the web-exposed interfaces: `blink::ServiceWorker`, `blink::ServiceWorkerRegistration`, `blink::ServiceWorkerContainer`, etc.
Other classes in the renderer process can be divided into those that deal with a) service worker execution contexts, and b) service worker clients (windows and web workers).
For starting and stopping a service worker, `content::EmbeddedWorkerInstanceClientImpl` is used. One is created per service worker startup, on a background thread. It creates a `content::ServiceWorkerContextClient`, which owns a `blink::WebEmbeddedWorkerImpl`, which creates a `blink::ServiceWorkerThread`, which starts the physical service worker thread and JavaScript execution context with a `blink::ServiceWorkerGlobalScope` global.
`ServiceWorkerGlobalScope` implements two Mojo interfaces:

- `mojom.blink.ServiceWorker`, which the browser process uses to dispatch events to the service worker.
- `mojom.blink.ControllerServiceWorker`, which renderer processes use to dispatch fetch events to a service worker that controls a client in that process.

Service worker clients have an associated `content::ServiceWorkerProviderContext`, which contains information such as which service worker controls the client, and which manages request interception to that service worker.
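The client-side interception decision can be sketched as follows. This is a hypothetical sketch with invented names, not Chromium code: if the client has a controller, route subresource requests to it; otherwise fall through to the network.

```javascript
// Hypothetical sketch: route a subresource request through the client's
// controller service worker if it has one, else go directly to network.
async function loadSubresource(providerContext, request, network) {
  if (providerContext.controller) {
    return providerContext.controller.dispatchFetchEvent(request);
  }
  return network(request);
}
```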
Mojo is Chromium's IPC system and plays an important role in the service worker architecture. This section describes the main Mojo interfaces for service workers, and which message pipes they are on.
For windows, the browser process and renderer process talk over Mojo interfaces bound to the message pipe used to commit a navigation, which is the legacy IPC “channel” message pipe. This guarantees the ordering of IPC messages across these Mojo interfaces.
Each window in the renderer process is connected to a host in the browser process. The renderer talks to the browser process over the `mojom.blink.ServiceWorkerContainerHost` interface, which provides functionality like registering service workers. The browser talks to the renderer over the `mojom.blink.ServiceWorkerContainer` interface.
The window obtains `ServiceWorkerRegistration` and `ServiceWorker` JavaScript objects via APIs like `navigator.serviceWorker.ready`, `navigator.serviceWorker.controller`, and `navigator.serviceWorker.register()`. Each object has a connection to the browser, again on the channel-associated message pipe. `ServiceWorkerRegistration` has a remote to a `mojom.blink.ServiceWorkerRegistrationObjectHost` and `ServiceWorker` has a remote to a `mojom.blink.ServiceWorkerObjectHost`. Conversely, the browser process has remotes to `mojom.blink.ServiceWorkerRegistrationObject` and `mojom.blink.ServiceWorkerObject`.
In retrospect, this design has drawbacks. Asynchronous ownership makes destruction complicated, and the non-deterministic destruction order has sometimes caused crashes. It may have worked better to use fewer interfaces, e.g., a single `ServiceWorkerContainer` interface from which one can manipulate `ServiceWorkerRegistration` and `ServiceWorker`, or to prohibit destructions initiated from the renderer. In addition, Mojo interfaces like these are also used for in-process communication across threads, so Mojo is now slightly overused as an abstraction between layers in the service worker code.
For shared workers, the browser process and renderer process talk over Mojo interfaces bound to the dedicated message pipe established by the `mojom.blink.SharedWorkerFactory` implementation that creates the shared worker.

Similar to windows, the shared worker has a remote to a `mojom.blink.ServiceWorkerContainerHost` in the browser process, and the browser process has a remote to the `mojom.blink.ServiceWorkerContainer` in the renderer process.
However, shared workers don't yet support `navigator.serviceWorker`, so they don't use many of the methods on `ServiceWorkerContainerHost`. They also don't yet have a way to obtain a `ServiceWorkerRegistration` or `ServiceWorker` JavaScript object.
For service workers, there are two message pipes: a) a “bootstrap” message pipe for starting/stopping the service worker thread, and b) a message pipe bound to the running service worker thread.
The “bootstrap” message pipe is established by the `mojom.blink.EmbeddedWorkerInstanceClient` implementation in the renderer. The browser process uses this interface to ask the renderer process to start and stop a service worker thread. The renderer process has a remote to a corresponding `mojom.blink.EmbeddedWorkerInstanceHost` in the browser process.
In addition, like windows and shared workers above, the service worker has a remote to a `mojom.blink.ServiceWorkerContainerHost` in the browser process, and the browser process has a remote to a `mojom.blink.ServiceWorkerContainer`. These are on the bootstrap message pipe.
Note: It's unclear why service workers use the `ServiceWorkerContainerHost` interface, because they are forbidden from calling any methods on this interface. There are some plans to clean this up; see https://crbug.com/931087.
Running service worker threads have a dedicated message pipe, established by the `mojom.blink.ServiceWorker` implementation. The browser process uses this interface to ask the renderer to dispatch events to the service worker. The service worker has a remote to a corresponding `mojom.blink.ServiceWorkerHost` in the browser process.
Service workers have access to a `ServiceWorkerRegistration` JavaScript object via `self.registration`, along with its `ServiceWorker` properties. The `mojom.blink.ServiceWorker(Registration)Object(Host)` interfaces are bound to the service worker thread's message pipe.
Service worker clients in a renderer process have a direct connection to their controller service worker, which can be in the same process or a different process. The clients have a remote to a `mojom.blink.ControllerServiceWorker`, which they use to dispatch fetch events to the service worker.
This remote is given to the client by the browser process using `SetController()` on the `mojom.blink.ServiceWorkerContainer` interface. The browser is the source of truth about which service worker is controlling which client.
If the connection breaks, it likely means the service worker has stopped. The service worker client asks the browser process to restart the service worker so the client can again dispatch fetch events to it.
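The restart-and-retry behavior can be sketched as follows. This is a hypothetical sketch with invented names, not Chromium code: the client tries its existing controller remote, and on a broken connection asks the browser for a fresh one and retries.

```javascript
// Hypothetical sketch: dispatch a fetch event over the controller remote,
// restarting the controller via the browser process if the pipe is broken.
async function dispatchFetchEvent(client, request) {
  try {
    return await client.controllerRemote.fetch(request);
  } catch (e) {
    // Connection broke: the service worker likely stopped. Ask the browser
    // process to restart it and hand back a fresh remote, then retry.
    client.controllerRemote = await client.browser.restartController();
    return client.controllerRemote.fetch(request);
  }
}
```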
Service worker storage consists of the following.
Code pointers include `ServiceWorkerDatabase` and `ServiceWorkerStorage`.
The related Cache Storage API uses a disk_cache instance using the “simple” implementation, located at `${DIR_USER_DATA}/Service Worker/CacheStorage`. This location was chosen because the Cache Storage API is currently defined in the Service Worker specification, but it can be used independently of service workers.
For incognito windows, everything is in-memory.
Service worker storage lasts indefinitely, i.e., there is no periodic deletion of old but still-installed service workers. Installed service workers are only evicted by the Quota Manager (or by user action). The Quota Manager controls several web platform APIs, including the sandboxed filesystem, WebSQL, AppCache, IndexedDB, Cache Storage, service worker storage (registrations and scripts), and background fetch.
The Quota Manager starts eviction when one of the following conditions is true (as of August 2018):
When eviction starts, origins are purged on an LRU basis until the triggering condition no longer applies. Purging an origin deletes its storage completely.
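The LRU purge loop can be sketched as follows. This is a hypothetical sketch, not Chromium code, and it stands in for whatever triggering condition applies by using a simple byte threshold.

```javascript
// Hypothetical sketch: purge least-recently-used origins entirely until
// total usage drops to the target. `origins` is an array of
// {origin, bytes, lastAccess} records, lastAccess being a timestamp.
function evictOrigins(origins, targetUsage) {
  let total = origins.reduce((sum, o) => sum + o.bytes, 0);
  const byLru = [...origins].sort((a, b) => a.lastAccess - b.lastAccess);
  const purged = [];
  for (const o of byLru) {
    if (total <= targetUsage) break;
    purged.push(o.origin);  // Purging deletes the origin's storage completely.
    total -= o.bytes;
  }
  return purged;
}
```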
Note that Quota Manager eviction is independent of HTTP cache eviction. The HTTP cache is typically much smaller than the storage under the control of the Quota Manager, and it likely uses a simple non-origin-based LRU algorithm.
Blink has a UseCounter mechanism intended to measure the percentage of page loads on the web that used a given feature. Service workers complicate this measurement because a feature use in a service worker potentially affects many page loads, including ones in the future.
Therefore, service workers integrate with the UseCounter mechanism as follows:
For more details and rationale, see Design of UseCounter for workers and crbug 376039.
Code pointers include:
We monitor service worker performance with real-world metrics (UMA) and performance benchmarks.
The UMA data is internal-only. Key metrics include:
Page load metrics for service worker controlled loads:
Service worker startup time and breakdown:
Fetch event handling:
The service worker startup sequence is composed of a few steps, recorded as `ServiceWorker.StartTiming.[A]To[B]`. These are the milestones that can appear as [A] and [B]:

Here is an explanation of each section:
We run a limited number of Telemetry benchmark tests for service worker and a few microbenchmarks in blink_perf (crbug).
Telemetry tests are part of the Loading benchmarks, as the “pwa” tests inside the “loading.mobile” suite. The tests do not run on desktop machines (loading.desktop) due to resource constraints.
See a quick dashboard of these test results. You can also run the benchmarks locally:
```shell
# Run benchmark on `FlipKart`
$ tools/perf/run_benchmark --browser=android-chromium loading.mobile --story-filter='FlipKart'

# Run benchmark on `FlipKart` with cache_temperature = cold
$ tools/perf/run_benchmark --browser=android-chromium loading.mobile --story-filter='FlipKart_cold'
```
TODO(falken): Merge this with loading.md and cache_temperature.py documentation.
The PWA tests load a page multiple times. Each time has a different “cache temperature”. These temperatures have special significance for service worker controlled page loads:
Service workers are terminated between loads in order to include service worker startup as part of the performance test.
Code links and resources: