Video encoding uses hardware-accelerated capabilities where possible; check Encode Accelerator Implementation Status for the current state of support.
MediaRecorder uses a MediaStream as its source of data. The stream may originate from a camera, a microphone, an <audio> tag, a remote PeerConnection, a web audio Node, or content capture (such as the screen, a window or a tab).
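As a sketch of this source variety, assuming a browser context, a stream for the recorder might be obtained like so. The `getStream` helper name is hypothetical; the APIs it calls (`getUserMedia`, `getDisplayMedia`, `captureStream`) are standard, and the PeerConnection and web audio cases are omitted for brevity:

```javascript
// Hypothetical helper (browser only): obtain a MediaStream from one of the
// source kinds listed above. Nothing runs at load time.
async function getStream(kind) {
  switch (kind) {
    case 'camera':                 // camera + microphone
      return navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    case 'screen':                 // content capture: screen, window or tab
      return navigator.mediaDevices.getDisplayMedia({ video: true });
    case 'canvas':                 // captured element content at 30 fps
      return document.querySelector('canvas').captureStream(30);
    default:
      throw new Error(`unknown source kind: ${kind}`);
  }
}
```

Whatever the origin, the resulting MediaStream is handed to the MediaRecorder constructor unchanged.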
The MediaRecorder() constructor accepts an optional MediaRecorderOptions dictionary giving hints as to how to carry out the encoding:
mimeType indicates which container and codec to use, e.g. video/x-matroska;codecs="avc1" (use the isTypeSupported() test to check whether a given combination is supported). Chrome will select the best encoding format if mimeType is left unspecified; in particular, it will select a hardware-accelerated encoder if available. (The actual encoding format in use is reflected in the recorder's mimeType attribute.)
Users can vary the target encoding bitrate to accommodate different scenes and CPU loads via the audioBitsPerSecond, videoBitsPerSecond and bitsPerSecond members.
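The option handling described above can be sketched as follows. The `pickSupportedMimeType` helper and the candidate list are illustrative, not part of the API; only `MediaRecorder.isTypeSupported` and the dictionary members are standard:

```javascript
// Hypothetical helper: return the first candidate format the recorder
// supports, or '' so that Chrome selects its own (possibly
// hardware-accelerated) format. `isSupported` is injectable for testing;
// in a browser it defaults to MediaRecorder.isTypeSupported.
function pickSupportedMimeType(candidates, isSupported) {
  const test = isSupported ??
    (typeof MediaRecorder !== 'undefined'
      ? (t) => MediaRecorder.isTypeSupported(t)
      : () => false);
  return candidates.find((t) => test(t)) ?? '';
}

const candidates = [
  'video/x-matroska;codecs="avc1"',
  'video/webm;codecs="vp9,opus"',
  'video/webm',
];

// In a browser:
// const recorder = new MediaRecorder(stream, {
//   mimeType: pickSupportedMimeType(candidates),
//   videoBitsPerSecond: 2_500_000,   // target bitrate hints
//   audioBitsPerSecond: 128_000,
// });
```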
Once a MediaRecorder is created, recording can begin with its start() method. This method accepts an optional timeslice parameter, in milliseconds: Chrome will buffer up to this much of the encoded result before delivering it. If unspecified, Chrome will buffer as much as possible; a value of 0 causes as little buffering as possible.
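A minimal recording loop, assuming a browser context and an existing stream (the `recordFor` name is illustrative), could look like this:

```javascript
// Sketch (browser only): record `stream` for `durationMs`, asking the browser
// to deliver an encoded chunk roughly every `timesliceMs` milliseconds.
async function recordFor(stream, durationMs, timesliceMs) {
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (event) => {
    if (event.data.size > 0) chunks.push(event.data);
  };
  const stopped = new Promise((resolve) => { recorder.onstop = resolve; });
  recorder.start(timesliceMs);  // omit the argument to buffer as much as possible
  setTimeout(() => recorder.stop(), durationMs);
  await stopped;
  return new Blob(chunks, { type: recorder.mimeType });
}
```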
Encoded chunks are received via the ondataavailable event, following the cadence specified by the timeslice parameter. If timeslice is unspecified, the buffer can be flushed on demand using requestData() (or by calling stop()). In either case, event.data contains the recorded Blob.
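When start() is called without a timeslice, the internal buffer can be pulled explicitly; a sketch, assuming a browser context (the `startOnDemand` name is illustrative):

```javascript
// Sketch (browser only): start without a timeslice and flush buffered data
// explicitly. Each flush fires ondataavailable with a Blob in event.data.
function startOnDemand(stream, onChunk) {
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (event) => onChunk(event.data);
  recorder.start();                        // no timeslice: buffer internally
  return {
    flush: () => recorder.requestData(),   // delivers what is buffered so far
    stop: () => recorder.stop(),           // final chunk, then onstop fires
  };
}
```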
This API is structured around the MediaRecorder class, which owns a MediaRecorderHandler, which in turn owns a number of AudioTrackRecorders and a single VideoTrackRecorder. These TrackRecorders are codec specific and encapsulate the necessary resources to get the job done. All this is illustrated in the diagram below.