This document describes the browser-process implementation of the Cache Storage specification.
As of June 2018, Chrome components can use the Cache Storage interface via CacheStorageManager to store Request/Response key-value pairs. The concept of CacheStorageOwner was added to distinguish and isolate the different components.
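To illustrate the owner concept, here is a minimal sketch (not Chromium code; the owner values and types shown are only examples) of a manager keyed by an (origin, owner) pair, so that caches written by one component are invisible to another:

```cpp
// Minimal sketch of owner isolation; illustrative only. Keying storage by
// an (origin, owner) pair means the same origin gets separate stores for
// different components.
#include <map>
#include <memory>
#include <string>
#include <utility>

enum class CacheStorageOwner { kCacheAPI, kBackgroundFetch };  // Example values.

struct CacheStorage {};  // Stand-in for the real per-origin CacheStorage.

class CacheStorageManager {
 public:
  // Each (origin, owner) pair maps to its own CacheStorage, created lazily.
  CacheStorage* OpenCacheStorage(const std::string& origin,
                                 CacheStorageOwner owner) {
    auto& slot = storages_[{origin, owner}];
    if (!slot) slot = std::make_unique<CacheStorage>();
    return slot.get();
  }

 private:
  std::map<std::pair<std::string, CacheStorageOwner>,
           std::unique_ptr<CacheStorage>> storages_;
};
```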
In the ownership graph below, ‘=>’ represents ownership, ‘->’ is a reference, and ‘~>’ is a weak reference.
* CacheStorageContextImpl -> CacheStorageManager => CacheStorage => CacheStorageCache
  * CacheStorageManager can own multiple CacheStorage objects.
  * CacheStorage can own multiple CacheStorageCache objects.
* StoragePartitionImpl -> CacheStorageContextImpl
  * StoragePartitionImpl effectively owns the CacheStorageContextImpl in the sense that it calls CacheStorageContextImpl::Shutdown() on deletion, which resets its CacheStorageManager.
* RenderProcessHost -> CacheStorageDispatcherHost -> CacheStorageContextImpl
* CacheStorageDispatcherHost => CacheStorageCacheHandle ~> CacheStorageCache
  * CacheStorageDispatcherHost holds onto a CacheStorageCacheHandle for each Cache referenced by its renderer.
* CacheStorageDispatcherHost => CacheStorageHandle ~> CacheStorage
  * CacheStorageDispatcherHost holds onto a CacheStorageHandle for each CacheStorage referenced by its renderer. The CacheStorageHandle is used to inform the CacheStorage about whether it is externally used, so it can keep warmed cache objects alive to mitigate rapid opening/closing/opening churn.
* CacheStorageCacheDataHandle => CacheStorageCacheHandle ~> CacheStorageCache
  * CacheStorageCacheDataHandle is the blob data handle for a response body, and it holds a CacheStorageCacheHandle. It streams from the disk_cache::Entry response stream. It's necessary that the disk_cache::Backend (owned by CacheStorageCache) stays open so long as one of its disk_cache::Entrys is reachable. Otherwise, a new backend might open and clobber the entry.
* CacheStorageCache => CacheStorageCacheHandle ~> CacheStorageCache
  * CacheStorageCache will hold a self-reference while executing an operation. This self-reference is dropped between subsequent operations, so shutdown is possible when there are no external references even if there are more operations in the scheduler queue.
  * Any CacheStorageManager or CacheStorageCache operation holds a CacheStorageCacheHandle to keep the cache alive, since the operation is asynchronous.

Class descriptions:

* CacheStorageManager: owns the CacheStorage for a given origin-owner pair, loading CacheStorages on demand, and handles the QuotaManager and BrowsingData calls.
* CacheStorage: manages the caches for a single origin-owner pair; some of its operations call into every CacheStorageCache (e.g., CacheStorage::Match).
* CacheStorage::CacheLoader: creates the backend (SimpleCache or MemoryCache) on initialization.
* CacheStorageCache: owned by CacheStorage and deleted either when CacheStorage deletes or when the last CacheStorageCacheHandle for the cache is gone.
* CacheStorageCacheHandle: a weak reference to a CacheStorageCache. When the last CacheStorageCacheHandle to a CacheStorageCache is deleted, so too is the CacheStorageCache. The CacheStorageCache may be deleted before the CacheStorageCacheHandle (on CacheStorage destruction), so it must be checked for validity before use.
* CacheStorageHandle: a weak reference to a CacheStorage. When the last CacheStorageHandle to a CacheStorage is deleted, internal state is cleaned up. The CacheStorage object is not deleted, however. The CacheStorage may be deleted before the CacheStorageHandle (on browser shutdown), so it must be checked for validity before use.
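The handle-and-weak-reference pattern above can be pictured with standard library types. This is only a sketch, assuming std::weak_ptr as a stand-in for the real handle classes: a holder keeps a handle, and because the underlying cache may already have been deleted, the handle must be checked for validity before use.

```cpp
// Sketch of the handle pattern using std::weak_ptr as a stand-in for
// CacheStorageCacheHandle; the real classes are bespoke handle objects.
#include <iostream>
#include <memory>

struct CacheStorageCache {
  void Match() { std::cout << "matching...\n"; }
};

using CacheStorageCacheHandle = std::weak_ptr<CacheStorageCache>;

void UseCache(CacheStorageCacheHandle handle) {
  // The cache may have been deleted (e.g., on CacheStorage destruction),
  // so the handle must be validated before use.
  if (auto cache = handle.lock()) {
    cache->Match();
  } else {
    std::cout << "cache already gone; dropping the operation\n";
  }
}

int main() {
  auto cache = std::make_shared<CacheStorageCache>();
  CacheStorageCacheHandle handle = cache;
  UseCache(handle);   // Cache is alive: operation runs.
  cache.reset();      // Owner (CacheStorage) deletes the cache.
  UseCache(handle);   // Handle is now invalid: operation is dropped.
}
```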
On disk, each cache lives under:

$PROFILE/Service Worker/CacheStorage/origin/cache/

Where origin is a hash of the origin and cache is a GUID generated at the cache's creation time.
A random directory is used for each cache so that a cache can be doomed and still be used by old references while another cache with the same name is created.
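A small sketch of that idea, with illustrative paths and a toy GUID helper rather than the real implementation: dooming "assets" leaves its old directory in place for existing references, while a recreated "assets" cache gets a brand-new directory.

```cpp
// Illustrative sketch only: a doomed cache keeps its old GUID directory
// while a new cache with the same name is created under a new GUID.
#include <iostream>
#include <map>
#include <random>
#include <string>

std::string NewGuid() {  // Toy stand-in for a real GUID generator.
  static std::mt19937_64 rng{std::random_device{}()};
  return std::to_string(rng());
}

int main() {
  const std::string origin_dir =
      "$PROFILE/Service Worker/CacheStorage/<origin-hash>/";
  std::map<std::string, std::string> index;  // cache name -> directory

  index["assets"] = origin_dir + NewGuid() + "/";
  const std::string old_dir = index["assets"];
  std::cout << "open:   assets -> " << old_dir << "\n";

  // Doom "assets": it is removed from the index, but existing references
  // continue to stream entries from old_dir.
  index.erase("assets");

  // Recreating "assets" gets a brand-new directory, so it cannot clobber
  // the doomed cache's files.
  index["assets"] = origin_dir + NewGuid() + "/";
  std::cout << "reopen: assets -> " << index["assets"] << "\n";
}
```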
CacheStorage creates its own index file (index.txt), which contains a mapping of cache names to their paths on disk. On CacheStorage initialization, directories not in the index are deleted.
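A sketch of that cleanup rule, assuming a hypothetical in-memory set of indexed directory names (the real data comes from the index.txt described above, and the helper name is invented): anything under the origin directory that the index does not mention is removed during initialization.

```cpp
// Sketch: on initialization, delete cache directories that the index
// does not mention. Paths and helper names are illustrative only.
#include <filesystem>
#include <set>
#include <string>

namespace fs = std::filesystem;

void DeleteUnindexedCacheDirs(const fs::path& origin_dir,
                              const std::set<std::string>& indexed_dirs) {
  for (const auto& entry : fs::directory_iterator(origin_dir)) {
    if (!entry.is_directory())
      continue;  // Skip files such as the index itself.
    const std::string name = entry.path().filename().string();
    // Any directory not named in the index is presumed to be a leftover
    // (e.g., a doomed cache whose references are all gone) and is deleted.
    if (indexed_dirs.count(name) == 0)
      fs::remove_all(entry.path());
  }
}
```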
Each CacheStorageCache has its own disk_cache::Backend, which writes to the CacheStorageCache's directory.
A cache is represented by a disk_cache::Backend. The Request/Response pairs referred to in the specification are stored as disk_cache::Entrys. Each disk_cache::Entry has three streams: one for storing a protobuf with the request/response metadata (e.g., the headers, the request URL, and opacity information), another for storing the response body, and a final stream for storing any additional data (e.g., compiled JavaScript).
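As a compact summary of that layout, a stream-index enum along these lines (names here are illustrative) captures how one entry's three streams are used:

```cpp
// Sketch of how a cached Request/Response pair maps onto one
// disk_cache::Entry with three data streams. Names are illustrative.
enum CacheStorageEntryStream {
  kHeadersStream = 0,       // Serialized protobuf: request URL, headers,
                            // opacity information, etc.
  kResponseBodyStream = 1,  // The raw response body bytes.
  kSideDataStream = 2,      // Additional data, e.g. compiled JavaScript.
};
```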
The entries are keyed by full URL. This has a few ramifications:
* Operations that ignore the URL's search parameters (ignoreSearch) must scan every entry in the cache.

The above could be fixed by changes to the backend or by introducing indirect entries in the cache. The indirect entries would be for the query-stripped request URL. Each would point to the entries for each query request/response pair and for each vary request/response pair.
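A sketch of the ignoreSearch consequence, using an ordinary std::map keyed by full URL as a stand-in for the disk_cache backend: an exact match is a single lookup, while a match that ignores the query string must normalize and compare every key.

```cpp
// Sketch only: entries keyed by full URL force a full scan for
// ignoreSearch matches. The real backend is disk_cache, not std::map.
#include <map>
#include <string>
#include <vector>

std::string StripQuery(const std::string& url) {
  const auto pos = url.find('?');
  return pos == std::string::npos ? url : url.substr(0, pos);
}

std::vector<std::string> MatchAll(
    const std::map<std::string, std::string>& entries,  // full URL -> body
    const std::string& request_url,
    bool ignore_search) {
  std::vector<std::string> results;
  if (!ignore_search) {
    auto it = entries.find(request_url);  // Single keyed lookup.
    if (it != entries.end()) results.push_back(it->second);
    return results;
  }
  // ignoreSearch: every entry must be visited and its key normalized.
  const std::string target = StripQuery(request_url);
  for (const auto& [url, body] : entries) {
    if (StripQuery(url) == target) results.push_back(body);
  }
  return results;
}
```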
Threading:

* CacheStorageContextImpl is created on the UI thread but otherwise runs, and is deleted, on the IO thread.
* CacheStorageDispatcherHost is created on the UI thread but otherwise runs, and is deleted, on the IO thread.
* Blocking file operations run on a SequencedTaskRunner assigned at CacheStorageContextImpl creation.
* The disk_cache::Backend lives on the IO thread and uses its own worker pool to implement async operations.

Callbacks passed to a CacheStorage or CacheStorageCache are only run if the object (Cache or CacheStorage) is still alive. Once the object is deleted, the callbacks are dropped. We don't worry about dropped callbacks on shutdown. If deleting prior to shutdown, one should Close() a CacheStorage or CacheStorageCache to ensure that all operations have completed before deleting it.

Operations are scheduled in a sequential scheduler (CacheStorageScheduler). Each CacheStorage and CacheStorageCache has its own scheduler. If an operation freezes, then the scheduler is frozen. If a CacheStorage call winds up calling something from every CacheStorageCache (e.g., CacheStorage::Match), then one frozen CacheStorageCache can freeze the CacheStorage as well. This has happened in the past (Cache::Put called QuotaManager to determine how much room was available, which in turn called Cache::Size). Be careful to avoid situations in which one operation triggers a dependency on another operation from the same scheduler.
At the end of an operation, the scheduler needs to be kicked to start the next operation. The idiom for this in CacheStorage/ is to wrap the operation's callback with a function that will run the callback as well as advance the scheduler. So long as the operation runs its wrapped callback, the scheduler will advance.
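A toy version of this idiom (not the real CacheStorageScheduler; real operations are asynchronous and cross thread boundaries): the scheduler hands each operation a wrapped completion callback, and running that callback is what starts the next queued operation.

```cpp
// Toy sequential scheduler illustrating the wrapped-callback idiom.
// Not the real CacheStorageScheduler; operations here complete inline.
#include <functional>
#include <iostream>
#include <queue>

class Scheduler {
 public:
  using Operation = std::function<void(std::function<void()> done)>;

  void ScheduleOperation(Operation op) {
    queue_.push(std::move(op));
    if (!running_) RunNext();
  }

 private:
  void RunNext() {
    if (queue_.empty()) { running_ = false; return; }
    running_ = true;
    Operation op = std::move(queue_.front());
    queue_.pop();
    // The completion callback is wrapped so that finishing the operation
    // also advances the scheduler. If the operation never runs its
    // callback, the whole scheduler stalls.
    op([this] { RunNext(); });
  }

  std::queue<Operation> queue_;
  bool running_ = false;
};

int main() {
  Scheduler scheduler;
  scheduler.ScheduleOperation([](std::function<void()> done) {
    std::cout << "put\n";
    done();  // Running the wrapped callback starts the next operation.
  });
  scheduler.ScheduleOperation([](std::function<void()> done) {
    std::cout << "match\n";
    done();
  });
}
```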
Applications can cache cross-origin resources, as per Cross-Origin Resources and CORS. Opaque responses are also cached, but to prevent “leaking” the size of an opaque response, its size is obfuscated: random padding is added to the actual size, making it difficult for an attacker to ascertain the actual resource size via the quota APIs.
When Chromium starts, a new random padding key is generated and used for all newly created caches. This key is used by each cache to calculate padding for opaque resources. Each cache's key is persisted to disk in the cache index file.
Each cache maintains the total padding for all opaque resources within the cache. This padding is added to the actual resource size when reporting sizes to the quota manager.
The padding algorithm version is also written to each cache, allowing it to be changed at a future date. CacheStorage will use the persisted key and padding from the cache's index unless the padding algorithm has changed, one of the values is missing, or a value is deemed to be incorrect. In that situation the cache is enumerated and the padding recalculated during open.
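To make the bookkeeping concrete, here is an illustrative sketch; the hash, key handling, and padding distribution are placeholders, not Chromium's actual algorithm. The point is that padding is derived deterministically from the persisted key and the entry, so the per-cache total can be recomputed by enumerating the cache whenever the persisted values cannot be trusted.

```cpp
// Illustrative padding sketch; the real key, hash, and distribution differ.
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct OpaqueEntry {
  std::string url;
  int64_t body_size;
};

// Deterministic "random" padding: the same key and URL always produce the
// same padding, so the total can be recalculated by enumerating the cache.
int64_t ComputePadding(const std::string& padding_key,
                       const std::string& url) {
  const std::size_t h = std::hash<std::string>{}(padding_key + "\n" + url);
  return static_cast<int64_t>(h % (1 << 14));  // Illustrative bound only.
}

int64_t RecalculateCachePadding(const std::string& padding_key,
                                const std::vector<OpaqueEntry>& opaque_entries) {
  int64_t total = 0;
  for (const auto& e : opaque_entries)
    total += ComputePadding(padding_key, e.url);
  return total;
}

// Size reported to the quota manager: actual usage plus the padding total.
int64_t ReportedSize(int64_t actual_size, int64_t padding_total) {
  return actual_size + padding_total;
}
```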