This document is intended as an overview of the core layers of the network stack and the network service, their basic responsibilities, and how they fit together, without going into too much detail. This doc assumes the network service is enabled, though the network service is not yet enabled by default on any platform.
It's particularly targeted at people new to the Chrome network stack, but should also be useful for team members who may be experts at some parts of the stack, but are largely unfamiliar with other components. It starts by walking through how a basic request issued by another process works its way through the network stack, and then moves on to discuss how various components plug in.
If you notice any inaccuracies in this document, or feel that things could be better explained, please do not hesitate to submit patches.
The network stack is located in //net/ in the Chrome repo, and uses the namespace “net”. Whenever a class name in this doc has no namespace, it can generally be assumed it's in //net/ and is in the net namespace.
The top-level network stack object is the URLRequestContext. The context has non-owning pointers to everything needed to create and issue a URLRequest. The context must outlive all requests that use it. Creating a context is a rather complicated process, and it's recommended that most consumers use URLRequestContextBuilder to do this.
The primary use of the URLRequestContext is to create URLRequest objects using URLRequestContext::CreateRequest(). The URLRequest is the main interface used by direct consumers of the network stack. It is used to drive requests for http, https, ftp, and some data URLs. Each URLRequest tracks a single request across all redirects until an error occurs, it's canceled, or a final response is received, with a (possibly empty) body.
The HttpNetworkSession is another major network stack object. It owns the HttpStreamFactory, the socket pools, and the HTTP/2 and QUIC session pools. It also has non-owning pointers to the network stack objects that more directly deal with sockets.
This document does not mention either of these objects much, but at layers above the HttpStreamFactory, objects often grab their dependencies from the URLRequestContext, while the HttpStreamFactory and layers below it generally get their dependencies from the HttpNetworkSession.
A URLRequest informs the consumer of important events for a request using two main interfaces: the URLRequest::Delegate interface and the NetworkDelegate interface.
The URLRequest::Delegate interface consists of a small set of callbacks needed to let the embedder drive a request forward. The NetworkDelegate is an object pointed to by the URLRequestContext and shared by all requests, and includes callbacks corresponding to most of the URLRequest::Delegate's callbacks, as well as an assortment of other methods.
The network service, which lives in //services/network/, wraps //net/ objects and provides cross-process network APIs and their implementations for the rest of Chrome. The network service uses the namespace “network” for all its classes. The Mojo interfaces it provides are in the network::mojom namespace. Mojo is Chrome's IPC layer. Generally there's a network::mojom::FooPtr proxy object in the consumer's process which also implements the network::mojom::Foo interface. When the proxy object's methods are invoked, it passes the call and all its arguments over a Mojo IPC channel to the implementation of the network::mojom::Foo interface in the network service (typically implemented by a class named network::Foo), which may be running in another process, or possibly on another thread in the consumer's process.
The network::NetworkService object is a singleton that Chrome uses to create all other network service objects. The primary objects it creates are the network::NetworkContexts, each of which owns its own mostly independent URLRequestContext. Chrome has a number of different NetworkContexts, as there is often a need to keep cookies, caches, and socket pools separate for different types of requests, depending on what's making the request. Here are the main NetworkContexts used by Chrome:
A request for data is dispatched from some other process which results in creating a network::URLLoader in the network process. The URLLoader then creates a URLRequest to drive the request. A protocol-specific job (e.g. HTTP, data, file) is attached to the request. In the HTTP case, that job first checks the cache, and then creates a network connection object, if necessary, to actually fetch the data. That connection object interacts with network socket pools to potentially re-use sockets; the socket pools create and connect a socket if there is no appropriate existing socket. Once that socket exists, the HTTP request is dispatched, the response read and parsed, and the result returned back up the stack and sent over to the child process.
Of course, it's not quite that simple :-}.
Consider a simple request issued by some process other than the network service's process. Suppose it's an HTTP request, the response is uncompressed, there is no matching entry in the cache, and there are no idle sockets connected to the server in the socket pool.
Continuing with a “simple” URLRequest, here's a bit more detail on how things work.
Chrome has a single browser process, which handles starting and configuring other processes, tab management, and navigation, among other things, and multiple child processes, which are generally sandboxed and have no network access themselves, apart from the network service (which either runs in its own process or, to save RAM, in the browser process). There are multiple types of child processes (renderer, GPU, plugin, network, etc.). The renderer processes are the ones that lay out webpages and run JavaScript.
The browser process creates the top level network::mojom::NetworkContext objects, and uses them to create network::mojom::URLLoaderFactories, which it can set some security-related options on, before vending them to child processes. Child processes can then use them to directly talk to the network service.
A consumer that wants to make a network request first obtains a URLLoaderFactory by some means, assembles the request parameters in the large ResourceRequest object, creates a network::mojom::URLLoaderClient Mojo channel for the network::mojom::URLLoader to use to talk back to it, and then passes them all to the URLLoaderFactory, which returns a URLLoader object that the consumer can use to manage the network request.
The URLLoaderFactory, along with all NetworkContexts and most of the network stack, lives on a single thread in the network service. It gets a reconstituted ResourceRequest object from the Mojo pipe, does some checks to make sure it can service the request, and if so, creates a URLLoader, passing the request and the NetworkContext associated with the URLLoaderFactory.
The URLLoader then calls into a URLRequestContext to create the URLRequest. The URLRequestContext has pointers to all the network stack objects needed to issue the request over the network, such as the cache, cookie store, and host resolver. The URLLoader then calls into the ResourceScheduler, which may delay starting the request, based on priority and other activity. Eventually, the ResourceScheduler starts the request.
The URLRequest then calls into the URLRequestJobFactory to create a URLRequestJob and then starts it. In the case of an HTTP or HTTPS request, this will be a URLRequestHttpJob. The URLRequestHttpJob attaches cookies to the request, if needed.
The URLRequestHttpJob calls into the HttpCache to create an HttpCache::Transaction. If there's no matching entry in the cache, the HttpCache::Transaction will just call into the HttpNetworkLayer to create an HttpNetworkTransaction, and transparently wrap it. The HttpNetworkTransaction then calls into the HttpStreamFactory to request an HttpStream to the server.
The HttpStreamFactory::Job creates a ClientSocketHandle to hold a socket, once connected, and passes it into the ClientSocketPoolManager. The ClientSocketPoolManager assembles the TransportSocketParams needed to establish the connection and creates a group name (“host:port”) used to identify sockets that can be used interchangeably.
The ClientSocketPoolManager directs the request to the TransportClientSocketPool, since there's no proxy and it's an HTTP request. The request is forwarded to the pool's ClientSocketPoolBase's ClientSocketPoolBaseHelper. If there isn't already an idle connection, and there are available socket slots, the ClientSocketPoolBaseHelper will create a new TransportConnectJob using the aforementioned params object. This Job will do the actual DNS lookup by calling into the HostResolverImpl, if needed, and then finally establish a connection.
Once the socket is connected, ownership of the socket is passed to the ClientSocketHandle. The HttpStreamFactory::Job is then informed the connection attempt succeeded, and it then creates an HttpBasicStream, which takes ownership of the ClientSocketHandle. It then passes ownership of the HttpBasicStream back to the HttpNetworkTransaction.
The HttpNetworkTransaction passes the request headers to the HttpBasicStream, which uses an HttpStreamParser to (finally) format the request headers and body (if present) and send them to the server.
The HttpStreamParser waits to receive the response and then parses the HTTP/1.x response headers, and then passes them up through both the HttpNetworkTransaction and HttpCache::Transaction to the URLRequestHttpJob. The URLRequestHttpJob saves any cookies, if needed, and then passes the headers up to the URLRequest and on to the network::URLLoader, which sends the data over a Mojo pipe to the network::mojom::URLLoaderClient, passed in to the URLLoader when it was created.
Without waiting to hear back from the network::mojom::URLLoaderClient, the network::URLLoader allocates a raw Mojo data pipe, and passes the client the read end of the pipe. The URLLoader then grabs an IPC buffer from the pipe, and passes a 64KB body read request down through the URLRequest all the way down to the HttpStreamParser. Once some data is read, possibly less than 64KB, the number of bytes read makes its way back to the URLLoader, which then tells the Mojo pipe the read was complete, and then requests another buffer from the pipe, to continue writing data to. The pipe may apply back pressure, to limit the amount of unconsumed data that can be in shared memory buffers at once. This process repeats until the response body is completely read.
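The read loop above can be modeled as a producer filling a bounded pipe in 64KB chunks while a slower consumer drains it. This is a conceptual single-threaded simulation, not Mojo code; the 256KB pipe capacity is an assumption for illustration:

```python
from collections import deque

PIPE_CAPACITY = 256 * 1024   # assumed pipe size for this sketch
READ_CHUNK = 64 * 1024       # the 64KB body reads described above

def transfer(body):
    """Simulate moving `body` through a bounded pipe. The producer
    (the URLLoader) writes up to 64KB per turn; the consumer drains
    only every other turn, so the pipe's fixed capacity eventually
    pushes back on the producer. Returns (received_bytes, stalls)."""
    pipe, in_pipe = deque(), 0
    received, offset, stalls, turn = bytearray(), 0, 0, 0
    while offset < len(body) or pipe:
        turn += 1
        if offset < len(body):
            chunk = body[offset:offset + READ_CHUNK]
            if in_pipe + len(chunk) <= PIPE_CAPACITY:
                pipe.append(chunk)
                in_pipe += len(chunk)
                offset += len(chunk)
            else:
                stalls += 1          # pipe full: back pressure applies
        if pipe and turn % 2 == 0:   # slow consumer drains every other turn
            chunk = pipe.popleft()
            in_pipe -= len(chunk)
            received.extend(chunk)
    return bytes(received), stalls
```

The key property is that the amount of unconsumed data in shared memory never exceeds the pipe capacity, no matter how large the body is.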
When the URLRequest informs the network::URLLoader the request is complete, the URLLoader passes the message along to the network::mojom::URLLoaderClient, over its Mojo pipe, before telling the URLLoaderFactory to destroy the URLLoader, which results in destroying the URLRequest and closing all Mojo pipes related to the request.
When the HttpNetworkTransaction is being torn down, it figures out if the socket is reusable. If not, it tells the HttpBasicStream to close the socket. Either way, the ClientSocketHandle then returns the socket to the socket pool, either for reuse or so the socket pool knows it has another free socket slot.
A sample of the object relationships involved in the above process is diagrammed here:
There are a couple of points in the above diagram that are not visually clear:
The HttpCache::Transaction sits between the URLRequestHttpJob and the HttpNetworkTransaction, and implements the HttpTransaction interface, just like the HttpNetworkTransaction. The HttpCache::Transaction checks if a request can be served out of the cache. If a request needs to be revalidated, it handles sending a conditional revalidation request over the network (a 304 response means the cached entry can be reused). It may also break a range request into multiple cached and non-cached contiguous chunks, and may issue multiple network requests for a single range URLRequest.
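The range-splitting behavior can be illustrated with a small sketch. This is not the actual cache code; it assumes the cached byte intervals are known, sorted, and non-overlapping:

```python
def plan_range_request(start, end, cached):
    """Partition the requested byte range [start, end) into an ordered
    plan of ("cache", lo, hi) and ("network", lo, hi) chunks, given
    `cached`: sorted, non-overlapping (lo, hi) intervals already stored."""
    plan = []
    pos = start
    for lo, hi in cached:
        # Clip the cached interval to the requested range.
        lo, hi = max(lo, start), min(hi, end)
        if lo >= hi:
            continue
        if pos < lo:
            plan.append(("network", pos, lo))  # gap before cached chunk
        plan.append(("cache", lo, hi))
        pos = hi
    if pos < end:
        plan.append(("network", pos, end))     # trailing uncached tail
    return plan
```

A single range URLRequest over a partially cached resource thus turns into several alternating cache reads and network requests.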
The HttpCache::Transaction uses one of three disk_cache::Backends to actually store the cache's index and files: The in memory backend, the blockfile cache backend, and the simple cache backend. The first is used in incognito. The latter two are both stored on disk, and are used on different platforms.
One important detail is that the HttpCache has a read/write lock for each URL. The lock technically allows multiple reads at once, but since an HttpCache::Transaction always grabs the lock for both writing and reading before downgrading it to a read-only lock, all requests for the same URL are effectively done serially. The renderer process merges requests for the same URL in many cases, which mitigates this problem to some extent.
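The effect of that locking scheme can be sketched as follows. This is a conceptual model, not the HttpCache code: because every transaction initially takes its URL's lock exclusively, same-URL transactions cannot overlap:

```python
import threading
from collections import defaultdict

class PerUrlLocks:
    """One lock per cache key; every 'transaction' takes it exclusively
    first, so same-URL work is serialized even though the real lock
    nominally supports shared readers after the downgrade."""
    def __init__(self):
        self._guard = threading.Lock()
        self._locks = defaultdict(threading.Lock)

    def run(self, url, fn):
        with self._guard:        # protect the lock table itself
            lock = self._locks[url]
        with lock:               # exclusive per-URL critical section
            return fn()

def serialized_log(n=4):
    """Run n same-URL 'transactions' on threads; return the event log."""
    locks, log = PerUrlLocks(), []
    def work(i):
        def fn():
            log.append(("start", i))
            log.append(("end", i))
        locks.run("http://example.test/", fn)
    threads = [threading.Thread(target=work, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log
```

In the log, each transaction's start is immediately followed by its end: the lock prevents any interleaving for a given URL.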
It's also worth noting that each renderer process also has its own in-memory cache, which has no relation to the cache implemented in net/, which lives in the browser process.
A consumer can cancel a request at any time by deleting the network::mojom::URLLoader pipe used by the request. This will cause the network::URLLoader to destroy itself and its URLRequest.
When an HttpNetworkTransaction for a cancelled request is being torn down, it figures out if the socket the HttpStream owns can potentially be reused, based on the protocol (HTTP / HTTP/2 / QUIC) and any received headers. If the socket potentially can be reused, an HttpResponseBodyDrainer is created to try to read any remaining body bytes of the HttpStream, if any, before returning the socket to the SocketPool. If this takes too long, or there's an error, the socket is closed instead. Since this all happens at the layer below the cache, any drained bytes are not written to the cache, and as far as the cache layer is concerned, it only has a partial response.
The URLRequestHttpJob checks if headers indicate a redirect when it receives them from the next layer down (typically the HttpCache::Transaction). If they indicate a redirect, it tells the cache the response is complete, ignoring the body, so the cache only has the headers. The cache then treats it as a complete entry, even if the headers indicated there will be a body.
The URLRequestHttpJob then checks with the URLRequest if the redirect should be followed. The URLRequest then informs the network::URLLoader about the redirect, which passes information about the redirect to network::mojom::URLLoaderClient, in the consumer process. Whatever issued the original request then checks if the redirect should be followed.
If the redirect should be followed, the URLLoaderClient calls back into the URLLoader over the network::mojom::URLLoader Mojo interface, which tells the URLRequest to follow the redirect. The URLRequest then creates a new URLRequestJob to send the new request. If the URLLoaderClient chooses to cancel the request instead, it can delete the network::mojom::URLLoader pipe, just like the cancellation case discussed above. In either case, the old HttpTransaction is destroyed, and the HttpNetworkTransaction attempts to drain the socket for reuse, as discussed in the previous section.
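The basic redirect decision, i.e. checking the status code and resolving the Location header against the request URL, can be sketched like this. This is a hypothetical helper, not Chrome's actual redirect logic:

```python
from urllib.parse import urljoin

# Status codes that can carry a Location header the client should follow.
REDIRECT_CODES = {301, 302, 303, 307, 308}

def redirect_target(status, headers, request_url):
    """Return the absolute URL to follow, or None if the response is
    not a redirect. `headers` maps lowercased header names to values."""
    if status in REDIRECT_CODES and "location" in headers:
        # Location may be relative; resolve it against the request URL.
        return urljoin(request_url, headers["location"])
    return None
```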
In some cases, the consumer may choose to handle a redirect itself, like passing off the redirect to a ServiceWorker. In this case, the consumer cancels the request and then calls into some other network::mojom::URLLoaderFactory with the new URL to continue the request.
When the URLRequestHttpJob receives headers, it sends a list of all Content-Encoding values to Filter::Factory, which creates a (possibly empty) chain of filters. As body bytes are received, they're passed through the filters at the URLRequestJob layer and the decoded bytes are passed back to the URLRequest::Delegate.
Since this is done above the cache layer, the cache stores the responses prior to decompression. As a result, if files aren't compressed over the wire, they aren't compressed in the cache, either.
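The Content-Encoding handling described above can be modeled as a small decoding chain. This is a conceptual sketch, not the net/ filter code, and it only models the gzip, deflate, and identity encodings; the server applies encodings in the listed order, so decoding runs in reverse:

```python
import zlib

def build_filter_chain(content_encodings):
    """Return a function that undoes the listed Content-Encoding values."""
    def decode_one(name, data):
        if name == "gzip":
            # wbits = 16 + MAX_WBITS tells zlib to expect a gzip wrapper.
            return zlib.decompress(data, 16 + zlib.MAX_WBITS)
        if name == "deflate":
            return zlib.decompress(data)
        if name == "identity":
            return data
        raise ValueError("unsupported encoding: " + name)

    def decode(data):
        # Encodings were applied in order, so decode them in reverse.
        for name in reversed(content_encodings):
            data = decode_one(name, data)
        return data

    return decode
```

An empty encoding list yields an identity chain, matching the "possibly empty chain of filters" above.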
The ClientSocketPoolManager is responsible for assembling the parameters needed to connect a socket, and then sending the request to the right socket pool. Each socket request sent to a socket pool comes with a socket params object, a ClientSocketHandle, and a “group name”. The params object contains all the information a ConnectJob needs to create a connection of a given type, and different types of socket pools take different params types. The ClientSocketHandle will take temporary ownership of a connected socket and return it to the socket pool when done. All connections with the same group name in the same pool can be used to service the same connection requests, so a group name consists of host, port, protocol, and whether “privacy mode” is enabled for sockets in the group.
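The grouping idea can be sketched as a key-building function. The exact key format is an internal detail of the pools, so the strings below are illustrative only; the point is that sockets are interchangeable only when host, port, protocol, and privacy mode all match:

```python
def group_name(scheme, host, port, privacy_mode=False):
    """Build an illustrative socket-pool group key. Sockets sharing a
    key can service each other's requests; any difference in host,
    port, protocol, or privacy mode yields a different key."""
    name = f"{host}:{port}"
    if scheme == "https":
        name = "ssl/" + name   # keep SSL and plain sockets apart
    if privacy_mode:
        name = "pm/" + name    # privacy-mode sockets get their own group
    return name
```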
All socket pool classes derive from the ClientSocketPoolBase. The ClientSocketPoolBase handles managing sockets - which requests to create sockets for, which requests get connected sockets first, which sockets belong to which groups, connection limits per group, keeping track of and closing idle sockets, etc. Each ClientSocketPoolBase subclass has its own ConnectJob type, which establishes a connection using the socket params, before the pool hands out the connected socket.
Some socket pools are layered on top of other socket pools. This is done when a “socket” in a higher layer needs to establish a connection in a lower level pool and then take ownership of it as part of its connection process. For example, each socket in the SSLClientSocketPool is layered on top of a socket in the TransportClientSocketPool. There are a couple additional complexities here.
From the perspective of the lower layer pool, all of its sockets that a higher layer pools owns are actively in use, even when the higher layer pool considers them idle. As a result, when a lower layer pool is at its connection limit and needs to make a new connection, it will ask any higher layer pools to close an idle connection if they have one, so it can make a new connection.
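A minimal model of that interaction, assuming hypothetical two-layer pools rather than the real ClientSocketPoolBase machinery, looks like this:

```python
class TransportPool:
    """Lower-layer pool with a hard connection limit."""
    def __init__(self, limit):
        self.limit = limit
        self.handed_out = 0      # sockets owned by callers, incl. higher pools
        self.higher_pools = []

    def request_socket(self):
        if self.handed_out >= self.limit:
            # At the limit: ask higher-layer pools to close a socket they
            # consider idle (from our perspective it is actively in use).
            for pool in self.higher_pools:
                if pool.close_one_idle():
                    break
            else:
                return None      # no capacity could be reclaimed
        self.handed_out += 1
        return object()          # stand-in for a connected socket

    def close_socket(self):
        self.handed_out -= 1

class SslPool:
    """Higher-layer pool: each of its sockets owns a transport socket."""
    def __init__(self, transport):
        self.transport = transport
        transport.higher_pools.append(self)
        self.idle = []

    def request_socket(self):
        if self.idle:
            return self.idle.pop()
        s = self.transport.request_socket()
        return ("ssl", s) if s is not None else None

    def release(self, sock):
        self.idle.append(sock)   # idle up here, still counted below

    def close_one_idle(self):
        if self.idle:
            self.idle.pop()
            self.transport.close_socket()
            return True
        return False
```

When the transport pool is at its limit, a new request succeeds only because the SSL pool gives up one of its idle sockets.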
Since sockets in the higher layer pool are also in a group in the lower layer pool, they must have their own distinct group name. This is needed so that, for instance, SSL and HTTP connections won't be grouped together in the TcpClientSocketPool, which the SSLClientSocketPool sits on top of.
The relationships between the important classes in the socket pools are shown diagrammatically for the lowest layer socket pool (TransportSocketPool) below.
The ClientSocketPoolBase is a template class templatized on the class containing the parameters for the appropriate type of socket (in this case TransportSocketParams). It contains a pointer to the ClientSocketPoolBaseHelper, which contains all the type-independent machinery of the socket pool.
When socket pools are initialized, they in turn initialize their templatized ClientSocketPoolBase member with an object with which it should create connect jobs. That object must derive from ClientSocketPoolBase::ConnectJobFactory templatized by the same type as the ClientSocketPoolBase. (In the case of the diagram above, that object is a TransportConnectJobFactory, which derives from ClientSocketPoolBase::ConnectJobFactory<TransportSocketParams>.) Internally, that object is wrapped in a type-unsafe wrapper (ClientSocketPoolBase::ConnectJobFactoryAdaptor) so that it can be passed to the initialization of the ClientSocketPoolBaseHelper. This allows the helper to create connect jobs while preserving a type-safe API to the initialization of the socket pool.
When an SSL connection is needed, the ClientSocketPoolManager assembles the parameters needed both to connect the TCP socket and establish an SSL connection. It then passes them to the SSLClientSocketPool, which creates an SSLConnectJob using them. The SSLConnectJob's first step is to call into the TransportSocketPool to establish a TCP connection.
Once a connection is established by the lower layered pool, the SSLConnectJob then starts SSL negotiation. Once that's done, the SSL socket is passed back to the HttpStreamFactory::Job that initiated the request, and things proceed just as with HTTP. When complete, the socket is returned to the SSLClientSocketPool.
Each proxy has its own completely independent set of socket pools. They have their own exclusive TransportSocketPool, their own protocol-specific pool above it, and their own SSLSocketPool above that. HTTPS proxies also have a second SSLSocketPool between the HttpProxyClientSocketPool and the TransportSocketPool, since they can talk SSL to both the proxy and the destination server, layered on top of each other.
The first step the HttpStreamFactory::Job performs, just before calling into the ClientSocketPoolManager to create a socket, is to pass the URL to the Proxy service to get an ordered list of proxies (if any) that should be tried for that URL. Then when the ClientSocketPoolManager tries to get a socket for the Job, it uses that list of proxies to direct the request to the right socket pool.
HTTP/2 negotiation is performed as part of the SSL handshake, so when the HttpStreamFactory::Job gets a socket, it may have HTTP/2 negotiated over it as well. When it gets a socket with HTTP/2 negotiated, the Job creates a SpdySession using the socket and a SpdyHttpStream on top of the SpdySession. The SpdyHttpStream will be passed to the HttpNetworkTransaction, which drives the stream as usual.
The SpdySession will be shared with other Jobs connecting to the same server, and future Jobs will find the SpdySession before they try to create a connection. HttpServerProperties also tracks which servers supported HTTP/2 when we last talked to them. We only try to establish a single connection to servers we think speak HTTP/2 when multiple HttpStreamFactory::Jobs are trying to connect to them, to avoid wasting resources.
QUIC works quite a bit differently from HTTP/2. Servers advertise QUIC support with an “Alternate-Protocol” HTTP header in their responses. HttpServerProperties then tracks servers that have advertised QUIC support.
When a new request comes in to the HttpStreamFactory for a connection to a server that has advertised QUIC support in the past, it will create a second HttpStreamFactory::Job for QUIC, which returns a QuicHttpStream on success. The two Jobs (one for QUIC, one for all versions of HTTP) will be raced against each other, and whichever successfully creates an HttpStream first will be used.
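A toy model of that race, assuming each job either fails outright (None) or reports when it produced a stream, would simply pick the earliest successful job:

```python
def race_jobs(jobs):
    """jobs: list of (name, finished_at_or_None). The first job to
    successfully produce a stream wins; failed jobs never win.
    Returns the winning job's name, or None if all jobs failed."""
    winner = None
    for name, finished_at in jobs:
        if finished_at is None:
            continue  # this job failed to create a stream
        if winner is None or finished_at < winner[1]:
            winner = (name, finished_at)
    return winner[0] if winner else None
```

This captures the fallback property: if the QUIC job fails, the HTTP job's stream is still used.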
As with HTTP/2, once a QUIC connection is established, it will be shared with other Jobs connecting to the same server, and future Jobs will just reuse the existing QUIC session.
URLRequests are assigned a priority on creation. It only comes into play in a couple places:
At the socket pool layer, sockets are only assigned to socket requests once the socket is connected and SSL is negotiated, if needed. This is done so that if a higher priority request for a group reaches the socket pool before a connection is established, the first usable connection goes to the highest priority socket request.
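The late-binding of sockets to requests can be sketched as a priority queue of waiters. This is a conceptual model, not the pool code; it assumes lower numbers mean higher priority, and ties break first-in-first-out:

```python
import heapq
import itertools

class PendingRequests:
    """Socket requests wait in a priority queue; each newly connected
    socket is handed to the highest-priority waiter (FIFO within a
    priority level), rather than to whoever triggered the connect."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tiebreaker for FIFO ordering

    def add(self, name, priority):
        # Lower number = higher priority, matching heapq's min-heap.
        heapq.heappush(self._heap, (priority, next(self._seq), name))

    def on_socket_connected(self):
        """A socket finished connecting; give it to the best waiter."""
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]
```

So a high-priority request that arrives while a connection is still being established will still receive the first usable socket in its group.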
The URLRequestJobFactory has a ProtocolHandler for ftp, http, https, and data URLs, though most data URLs are handled directly in the renderer. For other schemes, and for non-network code that can intercept HTTP/HTTPS requests (like ServiceWorker, or extensions), there's typically another network::mojom::URLLoaderFactory class that is used instead of network::URLLoaderFactory. These URLLoaderFactories are not part of the network service. Some of these are web standards and handled in content/ code (like blob:// and file:// URLs), while others are Chrome-specific and implemented in chrome/ (like chrome:// and chrome-extension:// URLs).