This document outlines many common questions we've received about extension security and what we do or do not consider to be security bugs. It is primarily written for an audience of Chromium developers and security researchers.
It is analogous to the general Chrome Security FAQ.
TODO: Add canonical examples of bugs for each of these (particularly WontFix bugs, as examples of what is not considered a security bug).
In most cases, this is not a security bug. Extensions are designed to have access to user data if and only if they have the appropriate permissions. For instance, an extension may be able to access a user's data on a set of sites (or all sites) if the extension requests the appropriate permissions in its manifest file. Before an extension can use these capabilities, they must be granted by the user - as an example, the install prompt may say that the extension can “Read and change your data on google.com”.
If the extension is only able to access user data through permissions the extension has requested (for instance, it is able to access data on google.com because it specified host or content script permissions that match google.com), then this access is working as intended. If an extension is able to access data without appropriate permission, then this would be a security bug. Please report any such bugs here.
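For illustration, a minimal Manifest V3 manifest requesting host access to google.com might look like the following (the name and version are placeholders):

```json
{
  "name": "Example Extension",
  "version": "1.0",
  "manifest_version": 3,
  "host_permissions": ["https://*.google.com/*"]
}
```

Installing an extension with this manifest triggers a prompt along the lines of the “Read and change your data on google.com” warning described above.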
It may seem alarming that certain extensions can access certain types of sensitive data, such as passwords. However, this access can be critical to extensions' functionality. Consider, for instance, password managers (which store and retrieve your passwords) - these extensions, fundamentally, must access this type of sensitive data. There is very little data that we can deterministically say should never be available to extensions.
First, extensions should only be allowed to access data that they have permission to access (and these permissions must be approved by the user before they can be used, either at installation or at runtime). Users should only install extensions and grant permissions to extensions they trust.
In addition to these platform mitigations, developers distributing through the Chrome Web Store are required to adhere to a number of different policies, which describe which types of behaviors are allowed. For instance, the user data FAQ here describes the types of data and permissions that extensions are allowed to gather and use.
Google also subjects extensions to a combination of automated and manual review systems.
In most cases, this is not a security bug. Extensions can also access certain sites with content scripts (specified in the content_scripts key) or with the activeTab permission. Hosts specified in content_scripts (under the matches key) are displayed to the user in permission requests in the same way as host permissions requested under host_permissions - that is, the prompt indicates the extension can “Read and change your data on [site]”. The activeTab permission requires a user to explicitly invoke the extension on a page before the extension can access data on that page, which is a form of runtime permission grant (similar to how “sharing” a page with an app on Android grants the app access to the content of the page).
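As a sketch, a manifest combining a content script (whose matches hosts appear in the permission prompt) with the activeTab permission might look like this (file names and match patterns are placeholders):

```json
{
  "name": "Example Extension",
  "version": "1.0",
  "manifest_version": 3,
  "permissions": ["activeTab"],
  "content_scripts": [
    {
      "matches": ["https://*.example.com/*"],
      "js": ["content.js"]
    }
  ]
}
```

Here the extension can always run content.js on matching example.com pages, but can only access other pages after the user explicitly invokes it (e.g., by clicking its toolbar icon).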
If an extension is able to access site data without any API or permission that allows this access, then it may be a security bug. Please report any such bugs here.
An extension is not able to directly keep running after it has been uninstalled (or disabled) by the user or the browser. However, any changes made by an extension to a currently loaded site (e.g. script injection or data modification) will remain in place after the extension is uninstalled until the user leaves the site (e.g. by navigating away or refreshing), and potentially beyond. This has a few implications:
Other extension behavior, such as running background scripts, network handlers, URL overrides, proxy settings, and preference modifications, should be reverted upon uninstallation.
Individual extensions are generally not considered a part of Chromium, so extensions violating Chrome Web Store policies (including distribution policies) are not considered security bugs in Chromium itself.
You can report the extension with the “Report abuse” link in the extension's entry on the Chrome Web Store.
Chromium does not consider physically-local attacks to be security bugs. This includes attacks that involve loading extensions on a user's machine. As such, loading an unpacked extension through the chrome://extensions page, installing extensions via malicious software that executes outside of the Chromium browser, or updating enterprise policies to load extensions are all outside Chromium's threat model.
Adding a malicious extension to the user's profile is only a security bug if you find a way to add the extension without direct access to the user's profile, bypassing the normal extension installation flow. For instance, if an extension could be installed (without user consent) when the user visits a malicious site, this would be considered a security bug. Please report any such bugs here.
No. Annoyance extensions like this are treated similarly to Denial of Service issues rather than as security vulnerabilities.
Attacks that involve loading an unpacked extension are typically not security bugs. Two common approaches are:
- the --load-extension command-line switch, and
- loading an unpacked extension through the chrome://extensions page (with developer mode enabled).
If you identify another way to load an unpacked extension, it may be considered a security bug.
Some extension APIs are restricted to extensions with a specified ID. Developers can give an unpacked extension a given ID by setting the “key” entry in the manifest, as described here. This allows unpacked extensions to access powerful APIs.
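As a sketch, pinning an unpacked extension's ID looks like the following; the key value shown is a truncated placeholder, not a real key:

```json
{
  "name": "Example Extension",
  "version": "1.0",
  "manifest_version": 3,
  "key": "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA..."
}
```

Chromium derives the extension ID from this public key, so any unpacked extension with the same key receives the same ID.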
This is not considered a security bug (example). See above for our stance on unpacked extensions. The addition of access to private APIs does not change this stance, as it is still most similar to a physically-local attack or devtools execution in a trusted context (for instance, inspecting a component extension allows access to trusted APIs).
Developers can give an unpacked extension a given ID by setting the “key” entry in the manifest, as described here. This would allow a developer to imitate a legitimate extension and have access to its ID.
This is not considered a security bug. See above for our stance on unpacked extensions. If the extension is instead treated as being from the Chrome Web Store (as opposed to an unpacked extension), Chromium will validate the content of the extension.
If an attacker has access to one of a victim's devices, the attacker can install an extension which will be synced to the victim's other sync-enabled devices. Similarly, an attacker who phishes a victim's Google credentials can sign in to Chrome as the victim and install an extension, which will be synced to the victim's other sync-enabled devices. Sync thereby enables an attacker to elevate phished credentials or physical access to persistent access on all of a victim's sync-enabled devices.
To mitigate this issue, Chrome only syncs extensions that have been installed from the Chrome Web Store. Extensions in the Chrome Web Store are monitored for abusive behavior.
In the future, we may pursue further mitigations. However, because an attacker must already have the victim's Google credentials and/or physical access to a device, we don't consider this attack a security vulnerability.
We do consider it a vulnerability if an attacker can get an extension to sync to a victim's device without either of the above preconditions. For example, we consider it a vulnerability if an attacker could craft a request to Google's sync servers without proper credentials that causes an extension to be installed to a user's device, or if an attacker could entice a victim to visit a webpage that causes an extension to be installed on their device(s). Please report any such bugs here.
Permissions and APIs may not have an associated warning for a number of reasons. There is not a 1-to-1 mapping between permissions listed in the manifest and warnings shown to the user.
Some permissions may grant innocuous capabilities to extensions. For instance, the storage permission allows an extension to store its own data. This does not grant the extension access to any additional data; further, this type of behavior is already possible using open web APIs like localStorage and IndexedDB. Since this does not provide the extension access to any new data or dangerous capabilities, the API does not have an associated warning.
Other permissions, like webRequest, may only apply to sites the extension has access to. The webRequest API allows extensions to monitor (and potentially modify) network requests made by web pages. However, an extension can only intercept these requests for sites it has access to. Requesting access to a site already displays a permission warning (“Read and change your data on [site]”); the webRequest API does not provide any additional access. This is also the case for the scripting permission and others.
Some APIs may also have runtime permission prompts, such as dialogs or choosers, that are shown to the user in context. Since these APIs do not immediately grant any data access to extensions, no permission warning is shown.
Finally, some permissions may be subsumed by other, more powerful permissions. For instance, if an extension has access to the history API (which allows extensions to read and change a user's browsing history), we do not also show a warning for the topSites API (which allows extensions to see the top sites a user has visited). Since the history API is strictly more powerful, it subsumes the topSites API. You can read more here.
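The subsumption logic above can be sketched in plain JavaScript. The subsumption map here is illustrative only; Chromium's real warning rules live inside the browser, not in any public API.

```javascript
// Sketch: computing which permission warnings to display, given that some
// permissions subsume others. SUBSUMED_BY is a hypothetical, simplified map.
const SUBSUMED_BY = {
  topSites: "history", // the history warning covers topSites
};

function warningsToShow(requestedPermissions) {
  const requested = new Set(requestedPermissions);
  return requestedPermissions.filter((perm) => {
    const stronger = SUBSUMED_BY[perm];
    // Skip the warning if a strictly more powerful permission is also requested.
    return !(stronger && requested.has(stronger));
  });
}
```

With this sketch, requesting both history and topSites yields only the history warning, while requesting topSites alone still shows its own warning.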
Permissions listed in the optional_permissions key in the manifest are not granted to the extension at install time. Instead, they are granted through the use of the Permissions API. When the extension requests new capabilities (i.e., permissions that have not been previously granted and are not superseded by other granted permissions), a dialog is shown to the user, allowing them to grant or refuse the permissions.
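The runtime-request flow can be sketched as follows. chrome.permissions exists only inside a real extension, so a minimal stub stands in for it here; in the browser, request() shows the user the grant/refuse dialog, and the "downloads" permission is just an example.

```javascript
// Stub for illustration: a real extension would use the browser-provided
// chrome.permissions object, and request() would prompt the user.
const chrome = {
  permissions: {
    _granted: new Set(),
    request: async ({ permissions = [] }) => {
      // Real Chrome prompts the user here; this stub assumes the user accepts.
      permissions.forEach((p) => chrome.permissions._granted.add(p));
      return true;
    },
    contains: async ({ permissions = [] }) =>
      permissions.every((p) => chrome.permissions._granted.has(p)),
  },
};

async function ensureDownloadsPermission() {
  if (await chrome.permissions.contains({ permissions: ["downloads"] })) {
    return true; // already granted; no dialog is shown
  }
  return chrome.permissions.request({ permissions: ["downloads"] });
}
```

Note that checking contains() first mirrors the behavior described above: no dialog appears for capabilities the user has already granted.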
If an extension updates and includes new permissions that are not already contained within its current granted capabilities, the extension is disabled on users' machines and the user is notified (and asked if they'd like to grant the new permissions and enable the extension). You can read more here.
This does not apply to newly-requested optional permissions, which are not granted by default.
This is not considered a security bug in Chromium (example). Prior to Manifest Version 3, extensions were allowed to execute code that was not contained in the extension package (also called remotely-hosted code); this pattern is extremely common in web development. Beginning in Manifest Version 3, all logic must be contained within the extension package for developers distributing through the Chrome Web Store; however, this is a policy requirement (described here), and is not enforced by the Chromium browser.
While the platform includes some restrictions through the inclusion of a default content security policy for extensions, this is not meant to be a guaranteed deterrent, and does not prevent all types of remote code execution. For instance, it is impossible for the Chromium browser to guard against an extension that includes an interpreter that processes remotely-fetched JSON commands, even though this type of behavior is prohibited by policy in Manifest V3.
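For reference, the default Manifest V3 content security policy for extension pages restricts scripts to the extension package. A manifest can state an equivalent policy explicitly; the exact default string is managed by the browser, so treat this as a sketch:

```json
{
  "name": "Example Extension",
  "version": "1.0",
  "manifest_version": 3,
  "content_security_policy": {
    "extension_pages": "script-src 'self'; object-src 'self';"
  }
}
```

As described above, this policy deters but cannot fully prevent remote-code patterns such as interpreters that process remotely-fetched commands.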
Web pages with the chrome:-scheme (such as chrome://settings) are generally protected from extensions - extensions are not allowed to read or change data on these pages (without the use of the --extensions-on-chrome-urls command line flag). However, extensions are allowed to open and close these pages through APIs like the tabs and windows APIs. This is critical for certain types of extensions, such as tab and session managers, bookmark managers, and history managers.
The chrome-untrusted:-scheme (such as chrome-untrusted://terminal) is generally used for Chrome OS System Web Apps. Some of these apps, such as Terminal (which starts the Linux VM), can perform operations on startup or start other systems that may have security vulnerabilities. We intentionally disallow auto-starting these apps to avoid persistent attacks.
Extensions are considered more privileged than the web pages they are allowed to run on. As such, they are allowed to circumvent restrictions put in place by those web pages. This can be critical for extension functionality.
Content scripts execute in an isolated world, a separate JavaScript execution environment, so a variable foo in the content script does not reference a variable foo from the main world.
Extensions can trivially execute in the main world of a document if they so desire - for instance, by appending a <script> element. Assuming the extension has access to the site, it can already read and change the data on that site. As such, injecting in the main world does not represent any increased capability or access, and is not a security bug.
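The injection technique above can be sketched as follows. Since document is browser-only, a minimal stub is included to make the logic runnable outside a browser; in a real content script, the stub would be the actual page document.

```javascript
// Stub for illustration only: a real content script would use the page's
// own document object rather than this minimal stand-in.
const document = {
  createElement: (tag) => ({ tagName: tag.toUpperCase(), textContent: "" }),
  documentElement: {
    children: [],
    appendChild(el) {
      this.children.push(el);
      return el;
    },
  },
};

// Appending a <script> element causes its contents to run in the page's
// main world, outside the content script's isolated world.
function injectIntoMainWorld(code) {
  const script = document.createElement("script");
  script.textContent = code; // runs in the main world once appended
  return document.documentElement.appendChild(script);
}
```

This only works on pages the extension already has access to, which is why it grants no additional capability.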
Process-based attacks: Since isolated worlds are necessarily in the same process as the main world, any attack that allows the attacker to read process-level data (such as a Spectre or Meltdown attack, or other renderer compromise) could potentially access content script data or access APIs exposed to content scripts.
Shared variable attacks: Certain content is shared between the isolated world and the main world. For instance, references to an undeclared variable can result in accessing a frame with the same name from the window object. Similarly, the content of the DOM is shared between different JS worlds (though the JS objects are distinct to each). Attackers may be able to “trick” a content script into using these variables as a flavor of XSS - an example of this is this Project Zero bug. These are bugs in the particular extension, rather than in Chromium. Please report them to the extension developer.
Other attacks that cross from the main world into an extension's isolated world may be considered security bugs; please report any such bugs here.
Assuming the extension has access to the site, this is not considered a security bug. If the extension has access to a site, it can read and change all data associated with that site, including affecting how it can be shared with other parties.
However, modifying security-sensitive headers is generally discouraged, unless absolutely necessary (such as for developer tools).
Maybe. Extensions are allowed to open (and potentially communicate with) native applications in a variety of ways. One of the main ways is the nativeMessaging API, which allows for communication with installed native applications that have registered a manifest on the user's machine. Extensions are also able to open native applications with the downloads.open() method. Finally, extensions are allowed to navigate to file:-scheme URLs, which could, depending on the user's system configuration, result in execution. Using these methods to open a native application is not considered a security bug.
If an extension is able to open a native application or execute native code in another way, it may be a security bug; please report any such bugs here.
Extensions can read file contents if they are granted file access by the user. This is toggled in the chrome://extensions page for the given extension (under “Allow access to file URLs”; note that this is the default for unpacked extensions). If this setting is enabled, extensions can read all files on disk. Extensions are also allowed to read any files that were explicitly shared with them, such as through the HTML5 Filesystem API. Finally, extensions (with appropriate API permissions) can read the URLs of tabs and history entries, including file:-scheme URLs; however, this should not allow access to the contents of the file.
If an extension is able to read file contents from the local machine in another way, this may be a security bug; please report any such bugs here.
Extensions are allowed to run in an incognito profile if the “Allow in incognito” setting is enabled for the given extension in the chrome://extensions page. In this scenario, the extension has access to all the types of data it does in normal browsing - such as URLs and the contents of websites. Chrome does not limit what an extension does with this data.
If an extension is able to access incognito contexts without this setting enabled, this may be a security bug; please report any such bugs here.
The debugger permission does not allow automating parts of the Chromium browser unrelated to websites. Automating WebUI or settings, installing extensions, downloading and executing a native binary, or executing custom code outside the sandbox should not be possible for an extension with the debugger permission.
This depends on the extension.
Component Extensions: Component extensions are bundled by the Chromium browser and implement core browser functionality. If you find a security bug in a component extension, this is considered a security bug in Chromium. Please report any such bugs here.
Google Extensions: A security bug in an extension developed by Google, but not distributed directly with the Chrome browser, would likely not be considered a bug in Chromium. However, they may be covered by the Google Vulnerability Reward Program.
Other Extensions: A security bug in a third-party extension would not be considered a security bug in Chromium. This is true even if the extension has sensitive and powerful permissions, which could leak user data or allow cross-site scripting attacks (example). Some third-party extensions may have their own vulnerability reward programs; please check with the extension developer. A report may also be eligible for a reward through the Developer Data Protection Reward Program (though this typically targets abuse, rather than vulnerabilities); visit this site for more information.