advanced
keyword.
In general, [=User Agents=] will have more flexibility to optimize the
media streaming experience the fewer constraints are applied, so
application authors are strongly encouraged to use required
constraints sparingly.
MediaRecorder) [[?mediastream-recording]], image capture
(ImageCapture) [[?image-capture]], and web audio
({{MediaStreamAudioSourceNode}}) [[?WEBAUDIO]].
{{MediaStream}} consumers must be able to
handle tracks being added and removed. This behavior is specified per
consumer.
A {{MediaStream}} object is said to be active when it has at least one
{{MediaStreamTrack}} that has not [=MediaStreamTrack/ended=]. A {{MediaStream}} that does not
have any tracks or only has tracks that are [= MediaStreamTrack/ended =]
is inactive.
A {{MediaStream}} object is said to be audible when it has at least one
{{MediaStreamTrack}} whose {{MediaStreamTrack/[[Kind]]}} is "audio"
that has not [=MediaStreamTrack/ended=]. A {{MediaStream}} that does not have any
audio tracks or only has audio tracks that are [=MediaStreamTrack/ended=] is
inaudible.
The [=User Agent=] may update a {{MediaStream}}'s [=track set=] in response to, for example, an external
event. This specification does not specify any such cases, but other
specifications using the MediaStream API may. One such example is the
WebRTC 1.0 [[?WEBRTC]] specification where the [=track set=] of a {{MediaStream}}, received
from another peer, can be updated as a result of changes to the media
session.
To add a track track to a
{{MediaStream}} stream, the [=User Agent=] MUST
run the following steps:
If track is already in stream's [=track set=], then abort these steps.
Add track to stream's [=track set=].
[=Fire a track event=] named {{addtrack}} with
track at stream.
To remove a track track from a
{{MediaStream}} stream, the [=User Agent=] MUST
run the following steps:
If track is not in stream's [=track set=], then abort these steps.
[=set/Remove=] track from stream's [=track set=].
[=Fire a track event=] named {{removetrack}} with
track at stream.
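The add/remove steps above can be sketched in plain JavaScript. This is a non-normative illustration: SketchStream, addATrack, and removeATrack are invented names, the [=track set=] is modeled as a Set, and track events are reduced to plain callbacks.

```javascript
// Non-normative sketch of the "add a track" / "remove a track" algorithms.
class SketchStream {
  constructor() {
    this.trackSet = new Set();                 // the stream's [=track set=]
    this.listeners = { addtrack: [], removetrack: [] };
  }
  fireTrackEvent(name, track) {
    for (const cb of this.listeners[name]) cb({ type: name, track });
  }
  addATrack(track) {
    if (this.trackSet.has(track)) return;      // already present: abort
    this.trackSet.add(track);
    this.fireTrackEvent("addtrack", track);
  }
  removeATrack(track) {
    if (!this.trackSet.has(track)) return;     // not present: abort
    this.trackSet.delete(track);
    this.fireTrackEvent("removetrack", track);
  }
}
```

Because of the abort-on-duplicate step, a consumer observes exactly one event per actual membership change.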
[Exposed=Window]
interface MediaStream : EventTarget {
constructor();
constructor(MediaStream stream);
constructor(sequence<MediaStreamTrack> tracks);
readonly attribute DOMString id;
sequence<MediaStreamTrack> getAudioTracks();
sequence<MediaStreamTrack> getVideoTracks();
sequence<MediaStreamTrack> getTracks();
MediaStreamTrack? getTrackById(DOMString trackId);
undefined addTrack(MediaStreamTrack track);
undefined removeTrack(MediaStreamTrack track);
MediaStream clone();
readonly attribute boolean active;
attribute EventHandler onaddtrack;
attribute EventHandler onremovetrack;
};
true if this {{MediaStream}} is
[= stream/active =] and false
otherwise.
onaddtrack of type {{EventHandler}}
The event type of this event handler is {{addtrack}}.
onremovetrack of type {{EventHandler}}
The event type of this event handler is {{removetrack}}.
null, if no such track
exists.
addTrack()
Adds the given {{MediaStreamTrack}} to this
{{MediaStream}}.
When the {{addTrack}} method is
invoked, the [=User Agent=] MUST run the following steps:
Let track be the method's argument and
stream the {{MediaStream}} object
on which the method was called.
If track is already in stream's
[=track set=], then abort these
steps.
[=MediaStream/Add a track|Add=] track to stream's [=track set=].
removeTrack()
Removes the given {{MediaStreamTrack}} object
from this {{MediaStream}}.
When the {{removeTrack}}
method is invoked, the [=User Agent=] MUST run the following
steps:
Let track be the method's argument and
stream the {{MediaStream}} object
on which the method was called.
If track is not in stream's [=track set=], then abort these steps.
[=MediaStream/Remove a track|Remove=] track from stream's [=track set=].
clone()
Clones the given {{MediaStream}} and all its
tracks.
When the {{clone()}} method is invoked, the User
Agent MUST run the following steps:
Let streamClone be a newly constructed
{{MediaStream}} object.
Initialize streamClone.{{MediaStream.id}} to a newly
generated value.
Clone each track in this
{{MediaStream}} object and add the result to
streamClone's track
set.
Return streamClone.
false.
If the [=permission state=]
of the permission associated with the device's kind and
deviceId for mediaDevices's [=relevant settings object=],
is not {{PermissionState/"granted"}}, then set
mediaDevices.{{MediaDevices/[[devicesAccessibleMap]]}}[deviceId] to
false.
To create a MediaStreamTrack with an underlying
source, and a mediaDevicesToTieSourceTo, run the
following steps:
Let track be a new object of type source's [=MediaStreamTrack source type=].
Initialize track with the following internal slots:
[[\Source]],
initialized to source.
[[\Id]],
initialized to a newly generated unique identifier string. See
{{MediaStream.id}} attribute for guidelines on how to generate
such an identifier.
[[\Kind]],
initialized to "audio" if source is
an audio source, or "video" if source is a video source.
[[\Label]],
initialized to source's label, if provided by the User
Agent, or "" otherwise. [=User Agents=] MAY label audio and
video sources (e.g., "Internal microphone" or "External USB Webcam").
[[\ReadyState]],
initialized to {{MediaStreamTrackState/"live"}}.
[[\Enabled]],
initialized to true.
[[\Muted]],
initialized to true if source is
[= source/muted =], and false otherwise.
[[\Capabilities]],
[[\Constraints]], and
[[\Settings]], all initialized as
specified in the {{ConstrainablePattern}}.
[[\Restrictable]], initialized to false.
If mediaDevicesToTieSourceTo is not null,
[=tie track source to `MediaDevices`=] with source and mediaDevicesToTieSourceTo.
Run source's [=MediaStreamTrack source-specific construction steps=]
with track as parameter.
Return track.
To initialize the underlying source of track to source, run the following steps:
Initialize track.{{MediaStreamTrack/[[Source]]}} to
source.
Initialize track's [[\Capabilities]],
[[\Constraints]], and
[[\Settings]], as
specified in the {{ConstrainablePattern}}.
To tie track source to `MediaDevices`, given source and
mediaDevices, run the following steps:
Add source to mediaDevices.{{MediaDevices/[[mediaStreamTrackSources]]}}.
To stop all sources of a [=global object=], named globalObject,
the [=User Agent=] MUST run the following steps:
For each {{MediaStreamTrack}} object track whose
[=relevant global object=] is globalObject,
set track's {{MediaStreamTrack/[[ReadyState]]}} to
{{MediaStreamTrackState/"ended"}}.
If globalObject is a {{Window}}, then for each source in globalObject's
[=associated `MediaDevices`=].{{MediaDevices/[[mediaStreamTrackSources]]}},
[= source/stopped | stop =] source.
The [=User Agent=] MUST [=stop all sources=] of a globalObject in the following conditions:
If globalObject is a {{Window}} object and the [=unloading document cleanup steps=]
are executed for its [=associated document=].
If globalObject is a {{WorkerGlobalScope}} object and its
closing flag is set to true.
An implementation may use a per-source reference count to keep track
of source usage, but the specifics are out of scope for this
specification.
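For illustration only, the per-source reference count mentioned above might look like this (names are invented; the specifics are out of scope for the spec):

```javascript
// Non-normative sketch: a source stops once every track using it has ended.
class RefCountedSource {
  constructor() {
    this.refs = 0;         // number of live tracks using this source
    this.stopped = false;
  }
  addTrack() { this.refs++; }
  trackEnded() {
    if (--this.refs === 0) this.stopped = true;   // last user is gone
  }
}
```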
To clone a track, the [=User Agent=] MUST run
the following steps:
Let track be the {{MediaStreamTrack}}
object to be cloned.
Let source be track's
{{MediaStreamTrack/[[Source]]}}.
Let trackClone be the result of
[=create a MediaStreamTrack | creating a MediaStreamTrack=] with
source and null.
Set trackClone's {{MediaStreamTrack/[[ReadyState]]}} to
track's {{MediaStreamTrack/[[ReadyState]]}} value.
Set trackClone's
[[\Capabilities]] to a clone of
track's
[[\Capabilities]].
Set trackClone's
[[\Constraints]] to a clone of
track's
[[\Constraints]].
Set trackClone's
[[\Settings]] to a clone of
track's
[[\Settings]].
Run source's [=MediaStreamTrack source-specific clone steps=] with track and trackClone as parameters.
Return trackClone.
true, let
eventName be {{mute}}, otherwise
{{unmute}}.
[=Fire an event=] named eventName at track.
Enabled/disabled, on the other hand, is
available to the application to control (and observe) via the
{{MediaStreamTrack/enabled}}
attribute.
The result for the consumer is the same in the sense that whenever a
{{MediaStreamTrack}} is muted or disabled (or both) the
consumer gets zero-information-content, which means silence for audio
and black frames for video. In other words, media from the source only
flows when a {{MediaStreamTrack}} object is both
unmuted and enabled. For example, a video element sourced by a
{{MediaStream}} containing only muted or disabled {{MediaStreamTrack}}s
for audio and video, is playing but rendering black video frames in
silence.
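The gating rule can be illustrated with a small, non-normative helper (sampleFor and the sample shapes are invented for this sketch):

```javascript
// Non-normative sketch of "zero-information-content": media flows only
// when a track is both unmuted and enabled; otherwise the consumer gets
// silence (audio) or black frames (video).
function sampleFor(track, liveSample) {
  const flowing = !track.muted && track.enabled;
  if (flowing) return liveSample;
  return track.kind === "audio"
    ? { silence: true }     // silence for audio
    : { black: true };      // black frames for video
}
```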
For a newly created {{MediaStreamTrack}} object, the
following applies: the track is always enabled unless stated otherwise
(for example when cloned) and the muted state reflects the state of the
source at the time the track is created.
false,
provided the UA sets it back to true as soon as any
unstopped track connected to this device becomes un-muted or enabled
again.
When a {{MediaStreamTrackState/"live"}}, [= MediaStreamTrack/muted | unmuted =], and
[= MediaStreamTrack/enabled =] track sourced by a device exposed
by {{MediaDevices/getUserMedia()}} becomes either
[= MediaStreamTrack/muted =] or [= MediaStreamTrack/enabled | disabled =],
and this brings all tracks connected to the device (across all
[=navigables=] the user agent operates) to be either
muted, disabled, or stopped, then the UA SHOULD relinquish the device
within 3 seconds while allowing time for a reasonably-observant user to
become aware of the transition. The UA SHOULD attempt to reacquire the
device as soon as any live track sourced by the device
becomes both [= MediaStreamTrack/muted | unmuted =] and
[= MediaStreamTrack/enabled =] again, provided that track's
[=relevant global object=]'s [=associated `Document`=]
[=Document/is in view=] at that time. If the
document is not [=Document/is in view|in view=] at that time,
the UA SHOULD instead queue a task to [=MediaStreamTrack/muted|mute=] the
track, and not queue a task to [=MediaStreamTrack/muted|unmute=] it until
the document comes [=Document/is in view|into view=].
If reacquiring the device fails, the UA MUST
[= track ended by the User agent | end the track =] (The UA MAY end it earlier
should it detect a device problem, like the device being physically
removed).
The intent is to give users the assurance of privacy that having
physical camera (and microphone) hardware lights off brings, by
aligning physical and logical “privacy indicators”, at least while the
current document is the sole user of a device.
While other applications and documents using the device
simultaneously may interfere with this intent at times, they do not
interfere with the rules laid forth.
A {{MediaStreamTrack}} object is said to
end when the source of the track is disconnected or
exhausted.
If all {{MediaStreamTrack}}s that are using the same
source are [= MediaStreamTrack/ended =], the source will be
[= source/stopped =].
After the application has invoked the {{MediaStreamTrack/stop()}}
method on a {{MediaStreamTrack}} object, or once the [=source=] of a
{{MediaStreamTrack}} permanently ends production of live samples to its tracks,
whichever is sooner, a {{MediaStreamTrack}} is said to be
ended.
For camera and microphone sources, the reasons for a source to
[=MediaStreamTrack/ended|end=] besides {{MediaStreamTrack/stop()}} are
[=implementation-defined=]
(e.g., because the user rescinds the permission for the page to
use the local camera, or because the User
Agent has instructed the track to end for any reason).
When a {{MediaStreamTrack}} track
ends for any reason other than the {{MediaStreamTrack/stop()}} method being
invoked, the [=User Agent=] MUST queue a task that runs the following
steps:
Iftrack's {{MediaStreamTrack/[[ReadyState]]}}
has the value {{MediaStreamTrackState/"ended"}} already, then abort these
steps.
Set track's {{MediaStreamTrack/[[ReadyState]]}}
to {{MediaStreamTrackState/"ended"}}.
Notify track's {{MediaStreamTrack/[[Source]]}} that track is
[= MediaStreamTrack/ended =] so that the source may be [= source/stopped =], unless other
{{MediaStreamTrack}} objects depend on it.
[=Fire an event=] named ended at the object.
If the end of the track was reached due to a user request, the event
source for this event is the user interaction event source.
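A non-normative sketch of the queued steps (endTrack, fireEvent, and the source's trackEnded hook are illustrative names):

```javascript
// Non-normative sketch of the "track ended" task: bail if already ended,
// flip [[ReadyState]], notify the source, then fire "ended".
function endTrack(track, fireEvent) {
  if (track.readyState === "ended") return;  // step 1: already ended, abort
  track.readyState = "ended";                // step 2
  track.source.trackEnded(track);            // step 3: source may now stop
  fireEvent("ended", track);                 // step 4
}
```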
To invoke the device permission revocation algorithm with permissionName,
run the following steps:
Let tracks be the set of all currently
{{MediaStreamTrackState/"live"}} MediaStreamTracks
whose permission associated with this kind of track ("camera" or "microphone")
matches permissionName.
For each track in tracks,
end the track.
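Sketched non-normatively, with the permission-to-kind mapping made explicit (names are illustrative):

```javascript
// Non-normative sketch of the device permission revocation algorithm:
// end every live track whose kind matches the revoked permission.
function revokePermission(allTracks, permissionName) {
  const kindFor = { camera: "video", microphone: "audio" };
  for (const track of allTracks) {
    if (track.readyState === "live" && track.kind === kindFor[permissionName]) {
      track.readyState = "ended";   // "end the track"
    }
  }
}
```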
Whether constraints were provided at track
initialization time or are established later at runtime, the
APIs defined in the ConstrainablePattern Interface allow the
retrieval and manipulation of the constraints currently established on
a track.
Once ended, a track will continue exposing a
list of inherent constrainable track properties.
This list contains deviceId,
facingMode, and
groupId.
[Exposed=Window]
interface MediaStreamTrack : EventTarget {
readonly attribute DOMString kind;
readonly attribute DOMString id;
readonly attribute DOMString label;
attribute boolean enabled;
readonly attribute boolean muted;
attribute EventHandler onmute;
attribute EventHandler onunmute;
readonly attribute MediaStreamTrackState readyState;
attribute EventHandler onended;
MediaStreamTrack clone();
undefined stop();
MediaTrackCapabilities getCapabilities();
MediaTrackConstraints getConstraints();
MediaTrackSettings getSettings();
Promise<undefined> applyConstraints(optional MediaTrackConstraints constraints = {});
};
undefined.
Return p.
Invoke and return the result of the
applyConstraints template method where:
- In the SelectSettings algorithm:
  - object is the
    {{MediaStreamTrack}} on which this
    method was called, and
  - settings dictionary refers to a possible
    instance of the {{MediaTrackSettings}}
    dictionary. The [=User Agent=] MUST NOT include inherent
    unchangeable device properties as members unless
    they are in the list of inherent constrainable
    track properties, or otherwise include
    device properties that must not be exposed.
    Other specifications may define
    constrainable properties that at times must not
    be exposed.
For every [=settings dictionary=] with
resizeMode
set to
"none",
the [=User Agent=] MUST include another otherwise
identical [=settings dictionary=] with
resizeMode
set to
"crop-and-scale".
Constraining around non-native modes is not supported.
The net effect is to reflect that crop-and-scale is a superset of none.
- In step 3 of the ApplyConstraints algorithm, all
  changes listed are to be made to object, and
- In step 4 of the ApplyConstraints algorithm, the
  requirement on getConstraints() applies to the
  getConstraints() method of object.
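The settings-dictionary expansion for resizeMode described in the note above can be sketched as a pure function (non-normative; names are illustrative):

```javascript
// Non-normative sketch: for every settings dictionary with resizeMode
// "none", an otherwise identical dictionary with "crop-and-scale" is
// also considered, reflecting that crop-and-scale is a superset of none.
function expandResizeModes(settingsDicts) {
  const out = [];
  for (const dict of settingsDicts) {
    out.push(dict);
    if (dict.resizeMode === "none") {
      out.push({ ...dict, resizeMode: "crop-and-scale" });
    }
  }
  return out;
}
```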
enum MediaStreamTrackState {
"live",
"ended"
};
| Enum value | Description |
|---|---|
| live | The track is active (the track's underlying media source is making a best-effort attempt to provide data in real time). The output of a track in the {{MediaStreamTrackState/"live"}} state can be switched on and off with the {{MediaStreamTrack/enabled}} attribute. |
| ended | The track has [= MediaStreamTrack/ended =] (the track's underlying media source is no longer providing data, and will never provide more data for this track). Once a track enters this state, it never exits it. For example, a video track in a {{MediaStream}} ends when the user unplugs the USB web camera that acts as the track's media source. |
dictionary MediaTrackSupportedConstraints {
boolean width = true;
boolean height = true;
boolean aspectRatio = true;
boolean frameRate = true;
boolean facingMode = true;
boolean resizeMode = true;
boolean sampleRate = true;
boolean sampleSize = true;
boolean echoCancellation = true;
boolean autoGainControl = true;
boolean noiseSuppression = true;
boolean latency = true;
boolean channelCount = true;
boolean deviceId = true;
boolean groupId = true;
boolean backgroundBlur = true;
};
true
See width
for details.
height of type {{boolean}}, defaulting to
true
See height
for details.
aspectRatio of type {{boolean}}, defaulting to
true
See aspectRatio for details.
frameRate of type {{boolean}}, defaulting to
true
See frameRate for
details.
facingMode of type {{boolean}}, defaulting to
true
See facingMode for
details.
resizeMode of type {{boolean}}, defaulting to
true
See resizeMode for
details.
sampleRate of type {{boolean}}, defaulting to
true
See sampleRate for
details.
sampleSize of type {{boolean}}, defaulting to
true
See sampleSize for
details.
echoCancellation of type {{boolean}}, defaulting to
true
See echoCancellation
for details.
autoGainControl of type {{boolean}}, defaulting to
true
See autoGainControl for
details.
noiseSuppression of type {{boolean}}, defaulting to
true
See noiseSuppression
for details.
latency of type {{boolean}}, defaulting to
true
See latency for details.
channelCount of type {{boolean}}, defaulting to
true
See channelCount for
details.
deviceId of type {{boolean}}, defaulting to
true
See deviceId for details.
groupId of type {{boolean}}, defaulting to
true
See groupId for details.
backgroundBlur of type {{boolean}}, defaulting to
true
See backgroundBlur for details.
dictionary MediaTrackCapabilities {
ULongRange width;
ULongRange height;
DoubleRange aspectRatio;
DoubleRange frameRate;
sequence<DOMString> facingMode;
sequence<DOMString> resizeMode;
ULongRange sampleRate;
ULongRange sampleSize;
sequence<(boolean or DOMString)> echoCancellation;
sequence<boolean> autoGainControl;
sequence<boolean> noiseSuppression;
DoubleRange latency;
ULongRange channelCount;
DOMString deviceId;
DOMString groupId;
sequence<boolean> backgroundBlur;
};
For historical reasons, {{MediaTrackCapabilities/deviceId}} and
{{MediaTrackCapabilities/groupId}} are {{DOMString}} instead of the
`sequence<DOMString>` expected by {{Capabilities}} in the
ConstrainablePattern.
sequence<{{DOMString}}>
A camera can report multiple facing modes. For example, in a
high-end telepresence solution with several cameras facing the
user, a camera to the left of the user can report both {{VideoFacingModeEnum/"left"}}
and {{VideoFacingModeEnum/"user"}}. See facingMode for
additional details.
resizeMode of type sequence<{{DOMString}}>
The [=User Agent=] MAY use cropping and downscaling to offer
more resolution choices than this camera naturally produces.
The reported sequence MUST list all the means the UA may employ
to derive resolution choices for this camera. The value {{VideoResizeModeEnum/"none"}}
MUST be present, indicating the ability to constrain the UA
from cropping and downscaling. See resizeMode for
additional details.
sampleRate of type {{ULongRange}}
See sampleRate for
details.
sampleSize of type {{ULongRange}}
See sampleSize for
details.
echoCancellation of type sequence<{{boolean}}>
If the source cannot do echo cancellation, a single
false MUST be the only element in the list. If the
source can do echo cancellation, then true MUST be included
in the list. If the script can control the feature, the list MUST
include at least both true and false.
Additionally, if the source allows controlling which audio sources will be
cancelled, it must include any supported
values from the {{EchoCancellationModeEnum}} enum. If true or false are included in the list, they must appear
before any value from {{EchoCancellationModeEnum}}. See echoCancellation
for additional details.
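One non-normative way a [=User Agent=] might assemble this capability list, following the ordering rule above (the function and parameter names are invented):

```javascript
// Non-normative sketch of the echoCancellation capability list:
// booleans first, then any supported EchoCancellationModeEnum values.
function echoCancellationCapabilities(canCancel, scriptControllable, modes) {
  if (!canCancel) return [false];            // false must be the only element
  const list = scriptControllable ? [true, false] : [true];
  return list.concat(modes);                 // modes come after the booleans
}
```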
autoGainControl of type sequence<{{boolean}}>
If the source cannot do auto gain control, a single
false is reported. If auto gain control cannot be
turned off, a single true is reported. If the
script can control the feature, the source reports a list with
both true and false as possible
values. See autoGainControl
for additional details.
noiseSuppression of type sequence<{{boolean}}>
If the source cannot do noise suppression, a single
false is reported. If noise suppression cannot be
turned off, a single true is reported. If the
script can control the feature, the source reports a list with
both true and false as possible
values. See noiseSuppression
for additional details.
latency of type {{DoubleRange}}
See latency for details.
channelCount of type {{ULongRange}}
See channelCount for
details.
deviceId of type {{DOMString}}
See deviceId for details.
groupId of type {{DOMString}}
See groupId for details.
backgroundBlur of type sequence<{{boolean}}>
If the source does not have built-in background blurring, a single false is reported. If background blurring cannot be turned off, a single true is reported. If the script can control the feature, the source reports a list with both true and false as possible values. See backgroundBlur for details.
dictionary MediaTrackConstraints : MediaTrackConstraintSet {
sequence<MediaTrackConstraintSet> advanced;
};
sequence<{{MediaTrackConstraintSet}}>
See Constraints and ConstraintSet
for the definition of this element.
Future specifications can extend the
MediaTrackConstraintSet dictionary by defining a partial
dictionary with dictionary members of appropriate type.
dictionary MediaTrackConstraintSet {
ConstrainULong width;
ConstrainULong height;
ConstrainDouble aspectRatio;
ConstrainDouble frameRate;
ConstrainDOMString facingMode;
ConstrainDOMString resizeMode;
ConstrainULong sampleRate;
ConstrainULong sampleSize;
ConstrainBooleanOrDOMString echoCancellation;
ConstrainBoolean autoGainControl;
ConstrainBoolean noiseSuppression;
ConstrainDouble latency;
ConstrainULong channelCount;
ConstrainDOMString deviceId;
ConstrainDOMString groupId;
ConstrainBoolean backgroundBlur;
};
dictionary MediaTrackSettings {
unsigned long width;
unsigned long height;
double aspectRatio;
double frameRate;
DOMString facingMode;
DOMString resizeMode;
unsigned long sampleRate;
unsigned long sampleSize;
(boolean or DOMString) echoCancellation;
boolean autoGainControl;
boolean noiseSuppression;
double latency;
unsigned long channelCount;
DOMString deviceId;
DOMString groupId;
boolean backgroundBlur;
};
true
See backgroundBlur for details.
| Property Name | Type | Notes |
|---|---|---|
| deviceId | {{DOMString}} | The identifier of the device generating the content of the {{MediaStreamTrack}}. It conforms with the definition of {{MediaDeviceInfo.deviceId}}. Note that the setting of this property is uniquely determined by the source that is attached to the {{MediaStreamTrack}}. In particular, {{MediaStreamTrack/getCapabilities()}} will return only a single value for deviceId. This property can therefore be used for initial media selection with {{MediaDevices/getUserMedia()}}. However, it is not useful for subsequent media control with {{MediaStreamTrack/applyConstraints()}}, since any attempt to set a different value will result in an unsatisfiable ConstraintSet. If a string of length 0 is used as a deviceId value constraint with {{MediaDevices/getUserMedia()}}, it MAY be interpreted as if the constraint is not specified. |
| groupId | {{DOMString}} | The [=document=]-unique group identifier for the device generating the content of the {{MediaStreamTrack}}. It conforms with the definition of {{MediaDeviceInfo.groupId}}. Note that the setting of this property is uniquely determined by the source that is attached to the {{MediaStreamTrack}}. In particular, {{MediaStreamTrack/getCapabilities()}} will return only a single value for groupId. Since this property is not stable between browsing sessions, its usefulness for initial media selection with {{MediaDevices/getUserMedia()}} is limited. It is not useful for subsequent media control with {{MediaStreamTrack/applyConstraints()}}, since any attempt to set a different value will result in an unsatisfiable ConstraintSet. |
| Property Name | Type | Notes |
|---|---|---|
| width | {{unsigned long}} | The width, in pixels. As a capability, its valid range should span the video source's pre-set width values with min being equal to 1 and max being the largest width. The [=User Agent=] MUST support downsampling to any value between the min width range value and the native resolution width. |
| height | {{unsigned long}} | The height, in pixels. As a capability, its valid range should span the video source's pre-set height values with min being equal to 1 and max being the largest height. The [=User Agent=] MUST support downsampling to any value between the min height range value and the native resolution height. |
| frameRate | {{double}} | The frame rate (frames per second). If video source's pre-set can determine frame rates, then, as a capability, its valid range should span the video source's pre-set frame rate values with min being equal to 0 and max being the largest frame rate. The [=User Agent=] MUST support frame rates obtained from integral decimation of the native resolution frame rate. If frame rate cannot be determined (e.g. the source does not natively provide a frame rate, or the frame rate cannot be determined from the source stream), then the capability values MUST refer to the [=User Agent=]'s vsync display rate. As a setting, this value represents the configured frame rate. If decimation is used, this is that value rather than the native frame rate. For example, if the setting is 25 frames per second via decimation, the native frame rate of the camera is 30 frames per second but due to lighting conditions only 20 frames per second is achieved, {{frameRate}} reports the setting: 25 frames per second. |
| aspectRatio | {{double}} | The exact aspect ratio (width in pixels divided by height in pixels, represented as a double rounded to the tenth decimal place) or aspect ratio range. |
| facingMode | {{DOMString}} | This string is one of the members of {{VideoFacingModeEnum}}. The members describe the directions that the camera can face, as seen from the user's perspective. Note that getConstraints may not return exactly the same string for strings not in this enum. This preserves the possibility of using a future version of WebIDL enum for this property. |
| resizeMode | {{DOMString}} | This string is one of the members of {{VideoResizeModeEnum}}. The members describe the means by which the resolution can be derived by the UA. In other words, whether the UA is allowed to use cropping and downscaling on the camera output. The UA MAY disguise concurrent use of the camera, by downscaling, upscaling, and/or cropping to mimic native resolutions when "none" is used, but only when the camera is in use in another application outside the [=User Agent=]. Note that getConstraints may not return exactly the same string for strings not in this enum. This preserves the possibility of using a future version of WebIDL enum for this property. |
| backgroundBlur | {{boolean}} | Some platforms or User Agents may provide built-in support for background blurring of video frames, in particular for camera video streams. Web applications may either want to control or at least be aware that background blur is applied at the source level. This may for instance allow the web application to update its UI or to not apply background blur on its own. |
enum VideoFacingModeEnum {
"user",
"environment",
"left",
"right"
};
| Enum value | Description |
|---|---|
| user | The source is facing toward the user (a self-view camera). |
| environment | The source is facing away from the user (viewing the environment). |
| left | The source is facing to the left of the user. |
| right | The source is facing to the right of the user. |
enum VideoResizeModeEnum {
"none",
"crop-and-scale"
};
| Enum value | Description |
|---|---|
| none | This resolution and frame rate is offered by the camera, its driver, or the OS. Note: The UA MAY report this value to disguise concurrent use, but only when the camera is in use in another [=navigable=]. |
| crop-and-scale | This resolution is downscaled and/or cropped from a higher camera resolution by the [=User Agent=], or its frame rate is decimated by the [=User Agent=]. The media MUST NOT be upscaled, stretched or have fake data created that did not occur in the input source, except as noted below. Note: The UA MAY upscale to disguise concurrent use, but only when the camera is in use in another application outside the [=User Agent=]. |
| Property Name | Values | Notes |
|---|---|---|
| sampleRate | {{unsigned long}} | The sample rate in samples per second for the audio data. |
| sampleSize | {{unsigned long}} | The linear sample size in bits. As a constraint, it can only be satisfied for audio devices that produce linear samples. |
| echoCancellation | {{boolean}} or {{DOMString}} | This is either false, true, or one of the members of {{EchoCancellationModeEnum}}. When one or more audio streams are being played in the processes of various microphones, it is often desirable to attempt to remove sound being played from the input signals recorded by the microphones. This is referred to as echo cancellation. There are cases where it is not needed and it is desirable to turn it off so that no audio artifacts are introduced. This allows applications to control this behavior. |
| autoGainControl | {{boolean}} | Automatic gain control is often desirable on the input signal recorded by the microphone. There are cases where it is not needed and it is desirable to turn it off so that the audio is not altered. This allows applications to control this behavior. |
| noiseSuppression | {{boolean}} | Noise suppression is often desirable on the input signal recorded by the microphone. There are cases where it is not needed and it is desirable to turn it off so that the audio is not altered. This allows applications to control this behavior. |
| latency | {{double}} | The latency or latency range, in seconds. The latency is the time between start of processing (for instance, when sound occurs in the real world) to the data being available to the next step in the process. Low latency is critical for some applications; high latency may be acceptable for other applications because it helps with power constraints. The number is expected to be the target latency of the configuration; the actual latency may show some variation from that. |
| channelCount | {{unsigned long}} | The number of independent channels of sound that the audio data contains, i.e. the number of audio samples per sample frame. |
enum EchoCancellationModeEnum {
"all",
"remote-only"
};
| Enum value | Description |
|---|---|
| {{EchoCancellationModeEnum/"all"}} | The system MUST attempt to remove all sound being played by the system from the input signal of the microphone. This option is meant to provide maximum privacy, as it prevents the transmission of local audio such as notifications or screen readers. |
| {{EchoCancellationModeEnum/"remote-only"}} | The system MUST attempt to remove the sound from incoming audio {{MediaStreamTrack}}s sourced from WebRTC {{RTCPeerConnection}}s. This option is useful for cases where it is desirable to transmit locally played audio. One example is a remote music class, where a student plays an instrument together with some accompaniment produced by a local application. In this case, the application requires audio coming from the remote participant (i.e., the teacher) to be cancelled in order to avoid echo, but also requires that the accompaniment not be cancelled since the music teacher on the remote side needs to hear it together with the sound from the instrument. It is up to the UA to decide which {{RTCPeerConnection}}s to cancel, but the ones being played out by the browsing context capturing microphone SHOULD be among those cancelled. |
| Attribute Name | Attribute Type | Setter/Getter Behavior When Provider is a MediaStream | Additional considerations |
|---|---|---|---|
| {{HTMLMediaElement/preload}} | {{DOMString}} | On getting: none. On setting: ignored. | A {{MediaStream}} cannot be preloaded. |
| {{HTMLMediaElement/buffered}} | {{TimeRanges}} | buffered.length MUST return 0. | A {{MediaStream}} cannot be preloaded. Therefore, the amount buffered is always an empty time range. |
| {{HTMLMediaElement/currentTime}} | {{double}} | Any non-negative integer. The initial value is 0 and the value increments linearly in real time whenever the element is [=media element/potentially playing=]. | The value is the official playback position, in seconds. Any attempt to alter it MUST be ignored. |
| {{HTMLMediaElement/seeking}} | {{boolean}} | false | A {{MediaStream}} is not seekable. Therefore, this attribute MUST always return the value false. |
| {{HTMLMediaElement/defaultPlaybackRate}} | {{double}} | On getting: 1.0. On setting: ignored. | A {{MediaStream}} is not seekable. Therefore, this attribute MUST always return the value 1.0 and any attempt to alter it MUST be ignored. Note that this also means that the ratechange event will not fire. |
| {{HTMLMediaElement/playbackRate}} | {{double}} | On getting: 1.0. On setting: ignored. | A {{MediaStream}} is not seekable. Therefore, this attribute MUST always return the value 1.0 and any attempt to alter it MUST be ignored. Note that this also means that the ratechange event will not fire. |
| {{HTMLMediaElement/played}} | {{TimeRanges}} | played.length MUST return 1. played.start(0) MUST return 0. played.end(0) MUST return the last known {{HTMLMediaElement/currentTime}}. | A {{MediaStream}}'s timeline always consists of a single range, starting at 0 and extending up to the currentTime. |
| {{HTMLMediaElement/seekable}} | {{TimeRanges}} | seekable.length MUST return 0. | A {{MediaStream}} is not seekable. |
| {{HTMLMediaElement/loop}} | {{boolean}} | true, false | Setting the {{HTMLMediaElement/loop}} attribute has no effect since a {{MediaStream}} has no defined end and therefore cannot be looped. |
The above rules cease to apply when the media provider object is replaced by
null or a non-stream object,
just ahead of the media element load algorithm. As a result, the ratechange
event may fire (from step 7) if {{HTMLMediaElement/playbackRate}}
and {{HTMLMediaElement/defaultPlaybackRate}} were different from before a
{{MediaStream}} was assigned.
OverconstrainedError. This is
an extension of {{DOMException}} that carries additional
information related to constraints failure.
[Exposed=Window]
interface OverconstrainedError : DOMException {
constructor(DOMString constraint, optional DOMString message = "");
readonly attribute DOMString constraint;
};
OverconstrainedError
Run the following steps:
Let constraint be the constructor's first
argument.
Let message be the constructor's second argument.
Let e be a new
{{OverconstrainedError}} object.
Invoke the {{DOMException}} constructor of
e with the message argument set to
message and the name argument set to
"OverconstrainedError".
This name does not have a mapping to a legacy
code so e's code attribute will return
0.
Set e.constraint to constraint.
Return e.
constraint of type {{DOMString}}, readonly
The name of a constraint associated with this error, or
"" if no specific constraint name is revealed.
| Event name | Interface | Fired when... |
|---|---|---|
| addtrack | {{MediaStreamTrackEvent}} | A new {{MediaStreamTrack}} has been added to this stream. Note that this event is not fired when the script directly modifies the tracks of a {{MediaStream}}. |
| removetrack | {{MediaStreamTrackEvent}} | A {{MediaStreamTrack}} has been removed from this stream. Note that this event is not fired when the script directly modifies the tracks of a {{MediaStream}}. |
| Event name | Interface | Fired when... |
|---|---|---|
| mute | {{Event}} | The {{MediaStreamTrack}} object's source is temporarily unable to provide data. |
| unmute | {{Event}} | The {{MediaStreamTrack}} object's source is live again after having been temporarily unable to provide data. |
| ended | {{Event}} |
The {{MediaStreamTrack}} object's source will no longer provide any data, either because the user revoked the permissions, or because the source device has been ejected, or because the remote peer permanently stopped sending data. |
| Event name | Interface | Fired when... |
|---|---|---|
| devicechange | {{DeviceChangeEvent}} | The set of media devices, available to the [=User Agent=], has changed. The current list of devices is available in the {{DeviceChangeEvent/devices}} attribute. |
partial interface Navigator {
[SameObject, SecureContext] readonly attribute MediaDevices mediaDevices;
};
false.
[[\canExposeMicrophoneInfo]], initialized
to false.
[[\mediaStreamTrackSources]], initialized
to an empty [=set=].
Let settings be mediaDevices's [=relevant settings object=].
For each kind of device, kind, that
{{MediaDevices.getUserMedia()}} exposes, run the following step:
Set mediaDevices.{{MediaDevices/[[kindsAccessibleMap]]}}[kind]
to either true
if the [=permission state=]
of the permission associated with kind (e.g. "camera",
"microphone") for settings is {{PermissionState/"granted"}}, or to
false otherwise.
For each individual device that {{MediaDevices.getUserMedia()}}
exposes, using the device's
deviceId, deviceId, run the following step:
Set mediaDevices.{{MediaDevices/[[devicesLiveMap]]}}[deviceId] to false, and
set mediaDevices.{{MediaDevices/[[devicesAccessibleMap]]}}[deviceId] to either
true if the [=permission state=]
of the permission associated with the device’s kind and
deviceId for settings, is {{PermissionState/"granted"}}, or to
false otherwise.
Return mediaDevices.
For each kind of device, kind, that {{MediaDevices/getUserMedia()}} exposes,
[=permission state|whenever a transition occurs of the
permission state=] of the permission associated with kind for
mediaDevices's [=relevant settings object=],
run the following steps:
If the transition is to {{PermissionState/"granted"}} from another value, then set
mediaDevices.{{MediaDevices/[[kindsAccessibleMap]]}}[kind] to true.
If the transition is from {{PermissionState/"granted"}} to another value, then set
mediaDevices.{{MediaDevices/[[kindsAccessibleMap]]}}[kind] to false.
For each device that {{MediaDevices/getUserMedia()}} exposes, whenever a transition occurs of the
[=permission state=] of the permission associated with the device's kind
and the device's deviceId, deviceId, for
mediaDevices's [=relevant settings object=], run the following
steps:
If the transition is to {{PermissionState/"granted"}} from another value, then set
mediaDevices.{{MediaDevices/[[devicesAccessibleMap]]}}[deviceId] to true,
if it isn’t already true.
If the transition is from {{PermissionState/"granted"}} to another value, and the
device is currently [= source/stopped =], then set
mediaDevices.{{MediaDevices/[[devicesAccessibleMap]]}}[deviceId] to false.
When new media input and/or output devices are made available to the
[=User Agent=], or any available input and/or output device becomes
unavailable, or the system default for input and/or output devices of a
{{MediaDeviceKind}} changes, the [=User Agent=] MUST run the following
device change notification steps for each {{MediaDevices}}
object, mediaDevices, for which [=device enumeration can proceed=] is true,
but for no other {{MediaDevices}} object:
Let lastExposedDevices be the result of
[=creating a list of device info objects=] with mediaDevices and
mediaDevices.{{MediaDevices/[[storedDeviceList]]}}.
Let deviceList be the list of all media input and/or
output devices available to the [=User Agent=].
Let newExposedDevices be the result of
[=creating a list of device info objects=] with mediaDevices and
deviceList.
If the {{MediaDeviceInfo}} objects in newExposedDevices
match those in lastExposedDevices and have the same order,
then abort these steps.
Due to the {{MediaDevices/enumerateDevices}} algorithm, the
above step limits firing the devicechange event to documents
[=allowed to use=] {{MediaDevices/enumerateDevices}} to enumerate
devices of a particular {{MediaDeviceKind}}.
Set mediaDevices.{{MediaDevices/[[storedDeviceList]]}} to
deviceList.
Queue a task that [= fire an event | fires an event=] named {{devicechange}},
using the {{DeviceChangeEvent}} constructor with {{DeviceChangeEventInit/devices}}
initialized to newExposedDevices, at mediaDevices.
The [=User Agent=] MAY combine firing multiple events into firing one
event when several events are due or when multiple devices are added
or removed at the same time, e.g. a camera with a microphone.
Additionally, if a {{MediaDevices}} object that was traversed comes
to meet the [=device enumeration can proceed=] criteria later (e.g.
[=Document/is in view | comes into view=]), the [=User Agent=] MUST
execute the [=device change notification steps=] on the {{MediaDevices}}
object at that time.
These events are potentially triggered
simultaneously on documents of different origins. [=User Agents=] MAY add
fuzzing on the timing of events to avoid cross-origin activity
correlation.
[Exposed=Window, SecureContext]
interface MediaDevices : EventTarget {
attribute EventHandler ondevicechange;
Promise<sequence<MediaDeviceInfo>> enumerateDevices();
};
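An application listening for {{devicechange}} will typically want to know which devices appeared or disappeared. The diffing itself can be done with a small pure helper; `diffDevices` below is a hypothetical function for illustration, with the browser wiring sketched in comments:

```javascript
// Hypothetical helper: compute which devices appeared or disappeared
// between two device lists, keyed by deviceId.
function diffDevices(oldList, newList) {
  const oldIds = new Set(oldList.map((d) => d.deviceId));
  const newIds = new Set(newList.map((d) => d.deviceId));
  return {
    added: newList.filter((d) => !oldIds.has(d.deviceId)),
    removed: oldList.filter((d) => !newIds.has(d.deviceId)),
  };
}

// Browser wiring (sketch):
// let last = await navigator.mediaDevices.enumerateDevices();
// navigator.mediaDevices.ondevicechange = (event) => {
//   const { added, removed } = diffDevices(last, event.devices);
//   last = event.devices;
//   // ...update the application's device pickers...
// };
```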
If [=microphone information can be exposed=] on mediaDevices is false,
truncate microphoneList to its first item.
If [=camera information can be exposed=] on mediaDevices is false,
truncate cameraList to its first item.
Run the following sub steps for each discovered device in deviceList, device:
If device is a microphone or device is a camera,
abort these sub steps and continue with the next device (if any).
Run the [=exposure decision algorithm for devices other than camera and microphone=],
with device, microphoneList, cameraList and
mediaDevices as input.
If the result of this algorithm is false,
abort these sub steps and continue with the next device (if any).
Let deviceInfo be the result of
[=creating a device info object=] to represent device,
with mediaDevices.
Append deviceInfo to otherDeviceList.
If device is the system default audio output,
run the following sub steps:
Let defaultAudioOutputInfo be the result of
[=creating a device info object=] to represent device,
with mediaDevices.
Set defaultAudioOutputInfo's {{MediaDeviceInfo/deviceId}} to
"default".
The user agent SHOULD update defaultAudioOutputInfo's {{MediaDeviceInfo/label}}
to make it explicit that this is the system default audio output.
Prepend defaultAudioOutputInfo to otherDeviceList.
Append to resultList all devices of microphoneList in order.
Append to resultList all devices of cameraList in order.
Append to resultList all devices of otherDeviceList in order.
Return resultList.
Since this method returns persistent
information across browsing sessions and origins via the availability
of media capture devices, it adds to the
fingerprinting surface exposed by the [=User Agent=].
As long as the [=relevant global object=]'s
[=associated `Document`=] did not capture, this method will
limit exposure to two bits of information: whether there is a camera
and whether there is a microphone. A [=User Agent=] may mitigate this by
pretending the system has a camera and a microphone, for instance until the
[=relevant global object=]'s [=associated `Document`=] calls
{{MediaDevices/getUserMedia()}} with constraints deemed reasonable.
After the [=relevant global object=]'s [=associated `Document`=]
started capture, it provides additional persistent
cross-origin information via the list of all media capture devices,
including their grouping and human readable labels associated
with the capture devices, which further adds to the
fingerprinting surface.
A [=User Agent=] may limit exposure by sanitizing
device labels. This could for instance mean removing user names found
in labels, but keeping device manufacturer or model information.
It is important that the sanitized labels allow users to identify
the corresponding devices.
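The sanitization described above can be sketched as a pure function. This is an illustrative assumption about how a [=User Agent=] might do it, not specified behavior; `sanitizeLabel` is a hypothetical helper:

```javascript
// Hypothetical sketch of label sanitization: strip a known user name from a
// device label while keeping manufacturer/model information intact, so the
// label still lets users identify the device.
function sanitizeLabel(label, userName) {
  if (!userName) return label;
  return label
    .split(userName).join("")   // drop every occurrence of the user name
    .replace(/\s{2,}/g, " ")    // collapse any leftover double spaces
    .replace(/^['’]s\s+/, "")   // drop a dangling possessive ("'s ...")
    .trim();
}
```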
If deviceInfo.{{MediaDeviceInfo/kind}} is equal to "videoinput"
and [=camera information can be exposed=] on mediaDevices is false, return deviceInfo.
If deviceInfo.{{MediaDeviceInfo/kind}} is equal to "audioinput"
and [=microphone information can be exposed=] on mediaDevices is false, return deviceInfo.
Initialize deviceInfo.{{MediaDeviceInfo/label}} for device.
If a stored {{MediaDeviceInfo/deviceId}} exists for
device, initialize deviceInfo.{{MediaDeviceInfo/deviceId}} to that value.
Otherwise, let deviceInfo.{{MediaDeviceInfo/deviceId}} be a
newly generated unique identifier as described under {{MediaDeviceInfo/deviceId}}.
If device belongs to the same physical
device as a device already represented for document,
initialize deviceInfo.{{MediaDeviceInfo/groupId}} to the
{{MediaDeviceInfo/groupId}} value of the existing {{MediaDeviceInfo}} object.
Otherwise, let deviceInfo.{{MediaDeviceInfo/groupId}} be a
newly generated unique identifier as described under {{MediaDeviceInfo/groupId}}.
Return deviceInfo.
true if
[=device information can be exposed=] on mediaDevices.
Return the result of [=Document/is in view=] with mediaDevices.
To perform a device information can be exposed
check, given mediaDevices, run the following steps:
If [=camera information can be exposed=] on mediaDevices,
return true.
If [=microphone information can be exposed=] on mediaDevices,
return true.
Return false.
To perform a camera information can be exposed
check, given mediaDevices, run the following steps:
If any of the local devices of kind "videoinput" are attached to a live
{{MediaStreamTrack}} in mediaDevices's [=relevant global object=]'s
[=associated `Document`=], return true.
Return mediaDevices.{{MediaDevices/[[canExposeCameraInfo]]}}.
To perform a microphone information can be exposed
check, given mediaDevices, run the following steps:
If any of the local devices of kind "audioinput" are attached to a live
{{MediaStreamTrack}} in the [=relevant global object=]'s
[=associated `Document`=], return true.
Return mediaDevices.{{MediaDevices/[[canExposeMicrophoneInfo]]}}.
To perform an is in view check, given mediaDevices, run the following
steps:
If mediaDevices's [=relevant global object=]'s [=associated `Document`=] is
[=Document/fully active=] and its [=Document/visibility state=]
is `"visible"`, return `true`. Otherwise, return `false`.
To perform a has system focus check, given mediaDevices, run the following
steps:
If mediaDevices's [=relevant global object=]'s [=navigable=]'s
[=top-level traversable=] has
system focus, return
`true`. Otherwise, return `false`.
To perform a device exposure can be extended check, given deviceType, run the following
steps:
Let permission be the result of reading the [=permission state=] for the descriptor
whose name is deviceType.
If permission is {{PermissionState/"granted"}}, return true.
If permission is {{PermissionState/"prompt"}}, the User Agent MAY return true
if it knows that deviceType access was previously granted for that origin.
Return false.
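The steps above can be sketched as a pure function, where `state` is the [=permission state=] string for the descriptor and `previouslyGranted` stands in for the [=User Agent=]'s knowledge of a past grant for the origin (both names are assumptions for illustration):

```javascript
// Sketch of the "device exposure can be extended" check: true when the
// permission is granted, or (at the UA's discretion) when the state is
// "prompt" but access was previously granted for this origin.
function deviceExposureCanBeExtended(state, previouslyGranted) {
  if (state === "granted") return true;
  // For "prompt", the UA MAY return true based on a prior grant.
  if (state === "prompt" && previouslyGranted) return true;
  return false;
}
```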
To set the device information exposure, given mediaDevices,
requestedTypes and value, run the following steps:
If "video" is in requestedTypes, run the following sub-steps:
Set mediaDevices.{{MediaDevices/[[canExposeCameraInfo]]}} to value.
If value is true and if [=device exposure can be extended=] with "microphone",
set mediaDevices.{{MediaDevices/[[canExposeMicrophoneInfo]]}} to true.
If "audio" is in requestedTypes, run the following sub-steps:
Set mediaDevices.{{MediaDevices/[[canExposeMicrophoneInfo]]}} to value.
If value is true and if [=device exposure can be extended=] with "camera",
set mediaDevices.{{MediaDevices/[[canExposeCameraInfo]]}} to true.
A [=User Agent=] MAY at any point set the device information exposure back to false,
for instance if the [=User Agent=] decides to revoke device access on a given {{Document}}.
false.
Other specifications can define the algorithm for specific device types.
true, return true.
Return false.
This algorithm covers all capture tracks, including microphone, camera and display.
[Exposed=Window, SecureContext]
interface MediaDeviceInfo {
readonly attribute DOMString deviceId;
readonly attribute MediaDeviceKind kind;
readonly attribute DOMString label;
readonly attribute DOMString groupId;
[Default] object toJSON();
};
enum MediaDeviceKind {
"audioinput",
"audiooutput",
"videoinput"
};
| MediaDeviceKind Enumeration description | |
|---|---|
| audioinput | Represents an audio input device; for example a microphone. |
| audiooutput | Represents an audio output device; for example a pair of headphones. |
| videoinput | Represents a video input device; for example a webcam. |
[Exposed=Window, SecureContext]
interface InputDeviceInfo : MediaDeviceInfo {
MediaTrackCapabilities getCapabilities();
};
getUserMedia({deviceId: id}) where id
is the value of the {{MediaDeviceInfo/deviceId}} attribute of this
{{MediaDeviceInfo}}.
If no access has been granted to any local devices and this
{{InputDeviceInfo}} has been filtered with respect to
unique identifying information (see above description of
{{MediaDevices/enumerateDevices()}} result), then this method returns
an empty dictionary.
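For illustration, only input devices ("audioinput" and "videoinput" entries, surfaced as {{InputDeviceInfo}}) may expose capabilities. A sketch, where `pickInputs` is a hypothetical helper and the browser calls are shown in comments:

```javascript
// Hypothetical helper: keep only the entries that may be InputDeviceInfo
// objects, i.e. audio and video input devices.
function pickInputs(devices) {
  return devices.filter(
    (d) => d.kind === "audioinput" || d.kind === "videoinput");
}

// Browser usage (sketch):
// const devices = await navigator.mediaDevices.enumerateDevices();
// for (const d of pickInputs(devices)) {
//   // May be an empty dictionary before any device access is granted.
//   console.log(d.label, d.getCapabilities ? d.getCapabilities() : {});
// }
```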
[Exposed=Window]
interface DeviceChangeEvent : Event {
constructor(DOMString type, optional DeviceChangeEventInit eventInitDict = {});
[SameObject] readonly attribute FrozenArray<MediaDeviceInfo> devices;
[SameObject] readonly attribute FrozenArray<MediaDeviceInfo> userInsertedDevices;
};
dictionary DeviceChangeEventInit : EventInit {
sequence<MediaDeviceInfo> devices = [];
};
[]
The {{devices}} member is an array of {{MediaDeviceInfo}} objects
representing the available devices.
partial interface MediaDevices {
MediaTrackSupportedConstraints getSupportedConstraints();
Promise<MediaStream> getUserMedia(optional MediaStreamConstraints constraints = {});
};
true.
If requestedMediaTypes is the empty set, return
a promise rejected with a {{TypeError}}. The
word "optional" occurs in the WebIDL due to WebIDL rules, but
the argument MUST be supplied in order for the call to
succeed.
Let document be the [=relevant global object=]'s
[=associated `Document`=].
If document is not
[=Document/fully active=], return a promise rejected
with a {{DOMException}} object whose {{DOMException/name}}
attribute has the value {{"InvalidStateError"}}.
If requestedMediaTypes contains "audio" and
document is not [=allowed to use=] the
feature identified by the "microphone" permission name,
jump to the step labeled Permission Failure below.
If requestedMediaTypes contains "video" and
document is not [=allowed to use=] the
feature identified by the "camera" permission name,
jump to the step labeled Permission Failure below.
Let mediaDevices be [=this=].
Let isInView be the result of the
[= Document/is in view =] algorithm.
Let p be a new promise.
Run the following steps in parallel:
While isInView is `false`, the [=User Agent=]
MUST wait to proceed to the next step until a task queued
to set isInView to the result of the
[=Document/is in view=] algorithm, would set
isInView to `true`.
Let finalSet be an (initially) empty
set.
For each media type kind in requestedMediaTypes, run the following steps:
For each possible configuration of each possible
source device of media of type kind, conceive a
candidate as a placeholder for an eventual
{{MediaStreamTrack}} holding a source device and configured with a settings
dictionary comprised of its specific settings.
Call this set of candidates the
candidateSet.
If candidateSet is the empty set,
jump to the step labeled NotFound Failure below.
If the value of the kind entry of
constraints is true, set CS to
the empty constraint set (no constraint). Otherwise,
continue with CS set to the value of the
kind entry of constraints.
Remove any constrainable properties inside of
CS that are not defined for
{{MediaStreamTrack}} objects of type
kind. This means that audio-only constraints
inside of "video" and video-only constraints inside of
"audio" are simply ignored rather than causing
OverconstrainedError.
If CS contains a member that is a
required constraint and whose name is not in the
list of allowed required constraints for device selection,
then [= reject =] p with a {{TypeError}}, and abort
these steps.
Run the SelectSettings algorithm on each
candidate in candidateSet with CS
as the constraint set. If the algorithm returns
undefined, remove the candidate from
candidateSet. This eliminates devices
unable to satisfy the constraints, by verifying that
at least one settings dictionary exists that
satisfies the constraints.
If candidateSet is the empty set, let
failedConstraint be any
required constraint
whose fitness distance was infinity for
all settings dictionaries examined while executing
the SelectSettings algorithm, or
"" if there isn't one, and jump to the
step labeled Constraint Failure below.
This error gives information
about what the underlying device is not capable of
producing, before the user has given any
authorization to any device, and can thus be used as
a fingerprinting surface.
Read the current [=permission state=] for all
candidate devices in candidateSet that are
not attached to a live {{MediaStreamTrack}}
in the current {{Document}}. Remove from
candidateSet any candidate whose device's
permission state is {{PermissionState/"denied"}}.
If candidateSet is now empty,
indicating that all devices of this type are in state
{{PermissionState/"denied"}}, jump to the step labeled
PermissionFailure below.
Optionally, e.g., based on a previously-established
user preference, for security reasons, or due to platform
limitations, jump to the step labeled Permission
Failure below.
Add all candidates from candidateSet to finalSet.
Let stream be a new and empty
{{MediaStream}} object.
For each media type kind in requestedMediaTypes, run the following sub steps,
preferably at the same time:
[=User Agents=] are encouraged to bundle concurrent
requests for different kinds of media into a single
user-facing permission prompt.
[=Request permission to
use=] a {{PermissionDescriptor}} with its {{PermissionDescriptor/name}} member set
to the permission name associated with kind
(e.g. "camera" for "video", "microphone" for "audio"),
while considering all devices attached to a
live and same-permission
{{MediaStreamTrack}} in the current {{Document}}
to have permission status {{PermissionState/"granted"}},
resulting in a set of provided media.
Same-permission in this context means a
{{MediaStreamTrack}} that required the same level of
permission to obtain as what is being requested (e.g. not
isolated).
When asking the user’s permission, the [=User Agent=]
MUST disclose whether permission will be granted only to
the device chosen, or to all devices of that
kind.
If the user never responds, this algorithm stalls on this step.
If the result of the request is {{PermissionState/"denied"}},
jump to the step labeled Permission Failure below.
Let hasSystemFocus be `false`.
While hasSystemFocus is `false`, the
[=User Agent=] MUST wait to proceed to the next step
until a task queued to set hasSystemFocus
to the result of the [=has system focus=]
algorithm, would set hasSystemFocus to
`true`.
[=Set the device information exposure=] on mediaDevices
with requestedMediaTypes and true.
For each media type kind in requestedMediaTypes, run the following sub steps:
Let finalCandidate be the provided media, which
MUST be precisely one candidate of type kind from
finalSet. The decision of which candidate to
choose from the finalSet is completely up to
the [=User Agent=] and may be determined by asking the user.
The [=User Agent=] SHOULD use the value of the computed
fitness distance from the SelectSettings
algorithm as an input to the selection algorithm.
However, it MAY also use other internally-available
information about the devices, such as user preference.
This means that non-[=required constraints=] values are not guaranteed.
[=User Agents=] are encouraged to default to using the
user's primary or system default device for kind
(when possible). [=User Agents=]
MAY allow users to use any media source, including
pre-recorded media files.
The result of the request is {{PermissionState/"granted"}}.
If a hardware error such as an OS/program/webpage lock prevents access,
remove the corresponding candidate from finalSet.
If finalSet has no candidates of type kind,
[= reject =] p with a new
{{DOMException}} object whose
{{DOMException/name}} attribute has the value
{{"NotReadableError"}} and abort these steps.
Otherwise, restart these sub steps with the updated finalSet.
If device access fails for any reason other than those listed above,
remove the corresponding candidate from finalSet.
If finalSet has no candidates of type kind,
[= reject =] p with a new {{DOMException}}
object whose {{DOMException/name}} attribute has the
value {{"AbortError"}} and abort these steps.
Otherwise, restart these sub steps with the updated finalSet.
Let grantedDevice be finalCandidate's source device.
Using grantedDevice's deviceId, deviceId, set
mediaDevices.{{MediaDevices/[[devicesLiveMap]]}}[deviceId] to
true, if it isn’t already true,
and set
mediaDevices.{{MediaDevices/[[devicesAccessibleMap]]}}[deviceId] to
true, if it isn’t already
true.
Let track be the result of
[=create a MediaStreamTrack|creating a MediaStreamTrack=]
with grantedDevice and mediaDevices.
The source of the {{MediaStreamTrack}} MUST NOT change.
Add track to stream's track set.
Run the ApplyConstraints algorithm on all
tracks in stream with the appropriate
constraints. If any of them returns something other than
undefined, let failedConstraint be
that result and jump to the step labeled
Constraint Failure below.
For each track in stream,
[=tie track source to `MediaDevices`=] with
track.{{MediaStreamTrack/[[Source]]}} and
mediaDevices.
[= Resolve =] p with stream and
abort these steps.
NotFound Failure:
If [=getUserMedia specific failure is allowed=]
given requestedMediaTypes
returns false, jump to the step
labeled Permission Failure below.
[=Reject=] p with a new
{{DOMException}} object whose {{DOMException/name}} attribute
has the value {{"NotFoundError"}}.
Constraint Failure:
If [=getUserMedia specific failure is allowed=]
given requestedMediaTypes
returns false, jump to the step
labeled Permission Failure below.
Let message be
either undefined or an informative
human-readable message. Let constraint be failedConstraint if
[=device information can be exposed=] is
true, or "" otherwise.
[=Reject=] p with a new
OverconstrainedError created by calling
OverconstrainedError(constraint,
message).
Permission Failure: [= Reject =]
p with a new {{DOMException}}
object whose {{DOMException/name}} attribute has the
value {{"NotAllowedError"}}.
Return p.
To check whether getUserMedia specific failure is allowed,
given requestedMediaTypes, run the following steps:
If requestedMediaTypes contains "audio", read the [=permission state=]
for the descriptor whose name is "microphone". If the result of the request is
{{PermissionState/"denied"}}, return false.
If requestedMediaTypes contains "video", read the [=permission state=]
for the descriptor whose name is "camera". If the result of the request is
{{PermissionState/"denied"}}, return false.
Return true.
In the algorithm above, constraints are checked twice: once at
device selection, and once after access approval. Time may have passed
between those checks, so it is conceivable that the selected device is
no longer suitable. In that case, a NotReadableError will result.
The allowed required constraints for device selection
contains the following constraint names:
width,
height,
aspectRatio,
frameRate,
facingMode,
resizeMode,
sampleRate,
sampleSize,
echoCancellation,
autoGainControl,
noiseSuppression,
latency,
channelCount,
deviceId,
groupId.
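The restriction above can be sketched as a validation step an application might run before calling getUserMedia(); `hasDisallowedRequired` is a hypothetical helper that flags constraint sets which would make the call [= reject =] with a {{TypeError}}:

```javascript
// Constraint names allowed to carry required ('min'/'max'/'exact') values
// during device selection, per the list above.
const ALLOWED_REQUIRED = new Set([
  "width", "height", "aspectRatio", "frameRate", "facingMode", "resizeMode",
  "sampleRate", "sampleSize", "echoCancellation", "autoGainControl",
  "noiseSuppression", "latency", "channelCount", "deviceId", "groupId",
]);

// Hypothetical helper: true if any member of a constraint set is a required
// constraint (an object with min/max/exact) whose name is not in the list.
function hasDisallowedRequired(constraintSet) {
  return Object.entries(constraintSet).some(([name, value]) =>
    typeof value === "object" && value !== null &&
    ("min" in value || "max" in value || "exact" in value) &&
    !ALLOWED_REQUIRED.has(name));
}
```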
dictionary MediaStreamConstraints {
(boolean or MediaTrackConstraints) video = false;
(boolean or MediaTrackConstraints) audio = false;
};
video of type ({{boolean}} or {{MediaTrackConstraints}}),
defaulting to false
If true, it requests that the returned
MediaStream contain a video track. If a Constraints
structure is provided, it further specifies the nature and
settings of the video Track. If false, the
{{MediaStream}} MUST NOT contain a video Track.
audio of type ({{boolean}} or {{MediaTrackConstraints}}),
defaulting to false
If true, it requests that the returned
MediaStream contain an audio track. If a
Constraints structure is provided, it further specifies
the nature and settings of the audio Track. If
false, the MediaStream MUST NOT contain an
audio Track.
partial interface Navigator {
[SecureContext] undefined getUserMedia(MediaStreamConstraints constraints,
NavigatorUserMediaSuccessCallback successCallback,
NavigatorUserMediaErrorCallback errorCallback);
};
callback NavigatorUserMediaSuccessCallback = undefined (MediaStream stream);
callback NavigatorUserMediaErrorCallback = undefined (DOMException error);
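The legacy callback form above can be adapted to the promise style used elsewhere in this specification. A minimal sketch; `promisifyGetUserMedia` is a hypothetical wrapper that takes the callback-based function as an argument (which also makes it easy to exercise without a browser):

```javascript
// Hypothetical wrapper: turn the legacy success/error-callback signature
// into a promise-returning function.
function promisifyGetUserMedia(legacyGetUserMedia) {
  return (constraints) =>
    new Promise((resolve, reject) =>
      legacyGetUserMedia(constraints, resolve, reject));
}

// In a browser:
// const getUserMedia = promisifyGetUserMedia(
//   navigator.getUserMedia.bind(navigator));
// const stream = await getUserMedia({ audio: true });
```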
getCapabilities() accessor.
The application can select the (range of) values it wants for an
object's Capabilities by means of basic and/or advanced ConstraintSets and
the applyConstraints() method. A ConstraintSet consists of the
names of one or more properties of the object plus the desired value (or a
range of desired values) for each property. Each of those property/value
pairs can be considered to be an individual constraint. For example, the
application may set a ConstraintSet containing two constraints, the first
stating that the framerate of a camera be between 30 and 40 frames per
second (a range) and the second that the camera should be facing the user
(a specific value). How the individual constraints interact depends on
whether and how they are given in the basic Constraint structure, which is
a ConstraintSet with an additional 'advanced' property, or whether they are
in a ConstraintSet in the advanced list. The behavior is as follows: all
'min', 'max', and 'exact' constraints in the basic Constraint structure are
together treated as the required constraints, and if it is not possible
to satisfy simultaneously all of those individual constraints for the
indicated property names, the [=User Agent=] MUST [= reject =] the returned
promise. Otherwise, it must apply the required constraints. Next, it will
consider any ConstraintSets given in the
advanced list, in the
order in which they are specified, and will try to satisfy/apply each complete
ConstraintSet (i.e., all constraints in the ConstraintSet together), but
will skip a ConstraintSet if and only if it cannot satisfy/apply it in its
entirety. Next, the [=User Agent=] MUST attempt to apply, individually, any
'ideal' constraints or a constraint given as a bare value for the property
(referred to as optional basic constraints).
Of these properties, it MUST satisfy the largest number that it can, in any
order. Finally, the [=User Agent=] MUST [= resolve =] the returned
promise.
Any constraint provided via this API will only be considered if the given
constrainable property is supported by the [=User Agent=]. JavaScript
application code is expected to first check, via
getSupportedConstraints(), that all the named properties
that are used are supported by the [=User Agent=]. The reason for this is that
WebIDL drops any unsupported names from the dictionary holding the
constraints, so the [=User Agent=] does not see them and the unsupported names
end up being silently ignored. This will cause confusing programming
errors as the JavaScript code will be setting constraints but the [=User Agent=]
will be ignoring them. [=User Agents=] that support (recognize) the name of a
required constraint but cannot satisfy it will generate an error, while
[=User Agents=] that do not support the constrainable property will not generate
an error.
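The check described above can be sketched as follows; `keepSupported` is a hypothetical helper, with the actual browser call shown in comments:

```javascript
// Hypothetical helper: partition desired constraints into those the
// User Agent supports and those it would silently ignore, so the
// application can warn instead of being confused later.
function keepSupported(desired, supported) {
  const kept = {};
  const dropped = [];
  for (const [name, value] of Object.entries(desired)) {
    if (supported[name]) kept[name] = value;
    else dropped.push(name);
  }
  return { kept, dropped };
}

// Browser usage (sketch):
// const supported = navigator.mediaDevices.getSupportedConstraints();
// const { kept, dropped } = keepSupported(
//   { width: { ideal: 1280 }, fancyNewConstraint: true }, supported);
// if (dropped.length) console.warn("Unsupported constraints:", dropped);
```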
The following examples may help to understand how constraints work. The
first shows a basic Constraint structure. Three constraints are given, each
of which the [=User Agent=] will attempt to satisfy individually. Depending
upon the resolutions available for this camera, it is possible that not all
three constraints can be satisfied at the same time. If so, the [=User Agent=]
will satisfy two if it can, or only one if not even two constraints can be
satisfied together. Note that if not all three can be satisfied
simultaneously, it is possible that there is more than one combination of
two constraints that could be satisfied. If so, the [=User Agent=] will
choose.
const stream = await navigator.mediaDevices.getUserMedia({
video: {
width: 1280,
height: 720,
aspectRatio: 3/2
}
});
This next example adds a small bit of complexity. The ideal values are
still given for width and height, but this time with minimum requirements
on each as well as a minimum frameRate that must be satisfied. If it cannot
satisfy the frameRate, width or height minimum it will [= reject =] the
promise. Otherwise, it will try to satisfy the width, height, and
aspectRatio target values as well and then [= resolve =] the promise.
try {
const stream = await navigator.mediaDevices.getUserMedia({
video: {
width: {min: 640, ideal: 1280},
height: {min: 480, ideal: 720},
aspectRatio: 3/2,
frameRate: {min: 20}
}
});
} catch (error) {
if (error.name != "OverconstrainedError") {
throw error;
}
// Overconstrained. Try again with a different combination (no prompt was shown)
}
This example illustrates the full control possible with the Constraints
structure by adding the 'advanced' property. In this case, the [=User Agent=]
behaves the same way with respect to the required constraints, but before
attempting to satisfy the ideal values it will process the 'advanced' list.
In this example the 'advanced' list contains two ConstraintSets. The first
specifies width and height constraints, and the second specifies an
aspectRatio constraint. Note that in the advanced list, these bare values
are treated as 'exact' values. This example represents the following: "I
need my video to be at least 640 pixels wide and at least 480 pixels high.
My preference is for precisely 1920x1280, but if you can't give me that,
give me an aspectRatio of 4x3 if at all possible. If even that is not
possible, give me a resolution as close to 1280x720 as possible."
try {
const stream = await navigator.mediaDevices.getUserMedia({
video: {
width: {min: 640, ideal: 1280},
height: {min: 480, ideal: 720},
frameRate: {min: 30},
advanced: [
{width: 1920, height: 1280},
{aspectRatio: 4/3},
{frameRate: {min: 50}},
{frameRate: {min: 40}}
]
}
});
} catch (error) {
if (error.name != "OverconstrainedError") {
throw error;
}
// Overconstrained. Try again with a different combination (no prompt was shown)
}
The ordering of advanced ConstraintSets is significant. In the preceding
example it is impossible to satisfy both the 1920x1280 ConstraintSet and
the 4x3 aspect ratio ConstraintSet at the same time. Since the 1920x1280
occurs first in the list, the [=User Agent=] will attempt to satisfy it first.
Application authors can therefore implement a backoff strategy by
specifying multiple advanced ConstraintSets for the same property. The
application also specifies two more advanced ConstraintSets, the
first asking for a frame rate greater than 50, the second asking for a
frame rate greater than 40.
If the [=User Agent=] is capable of setting a frame rate greater than 50, it
will (and the subsequent ConstraintSet will be trivially satisfied).
However, if the [=User Agent=] cannot set the frame rate above 50, it will
skip that ConstraintSet and attempt to set the frame rate above 40.
In case the [=User Agent=] cannot satisfy either of the two ConstraintSets, the
'min' value in the basic ConstraintSet insists on 30 as a lower bound.
In other words, the [=User Agent=] would fail altogether if it couldn't get a
value over 30, but would choose a value over 50 if possible, then try for
a value over 40.
Note that, unlike basic constraints, the constraints within a
ConstraintSet in the advanced list must be satisfied together or skipped
together. Thus, {width: 1920, height: 1280} is a request for that specific
resolution, not a request for that width or that height. One can think of
the basic constraints as requesting an 'or' (non-exclusive) of the
individual constraints, while each advanced ConstraintSet is requesting an
'and' of the individual constraints in the ConstraintSet. An application
may inspect the full set of Constraints currently in effect via the
getConstraints() accessor.
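The all-or-nothing ('and') semantics of an advanced ConstraintSet can be sketched as a predicate over a settings dictionary. This is an illustrative simplification, not the normative algorithm: it handles only numeric min/max/exact members and bare number or string values, and the function name is invented for this sketch.

```javascript
// Sketch: does a settings dictionary satisfy one advanced ConstraintSet?
// In the advanced list, bare values are treated as 'exact', and all
// constraints in the set must hold together (an 'and').
function satisfiesAdvancedSet(settings, constraintSet) {
  return Object.entries(constraintSet).every(([name, value]) => {
    const actual = settings[name];
    if (typeof value === "object") {
      return (!("min" in value) || actual >= value.min) &&
             (!("max" in value) || actual <= value.max) &&
             (!("exact" in value) || actual === value.exact);
    }
    return actual === value; // bare value == exact in an advanced set
  });
}
```

Under these semantics, `{width: 1920, height: 1280}` matches only a settings dictionary offering both that width and that height; a 1920x1080 mode would be rejected as a whole.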
The specific value that the [=User Agent=] chooses for a constrainable
property is referred to as a Setting. For example, if the application
applies a ConstraintSet specifying that the frameRate must be at least 30
frames per second, and no greater than 40, the Setting can be any
intermediate value, e.g., 32, 35, or 37 frames per second. The application
can query the current settings of the object's constrainable properties via
the {{MediaStreamTrack/getSettings()}}
accessor.
A [[\Capabilities]] internal slot, initialized to a
Capabilities dictionary describing the aggregate
allowable values for each constrainable property exposed, as
explained under
Capabilities, or an empty dictionary
if it has none.
A [[\Constraints]] internal slot, initialized to an
empty Constraints dictionary.
A [[\Settings]] internal slot, initialized to a
Settings dictionary describing the currently active
settings values for each constrainable property exposed, as
explained under Settings, or an empty
dictionary if it has none.
Template:
[Exposed=Window]
interface ConstrainablePattern {
Capabilities getCapabilities();
Constraints getConstraints();
Settings getSettings();
Promise<undefined> applyConstraints(optional Constraints constraints = {});
};
If failedConstraint is not undefined, let message be either
undefined or an informative human-readable
message, [= reject =] p with a new
OverconstrainedError created by calling
OverconstrainedError(failedConstraint,
message), and abort these steps. The
existing constraints remain in effect in this case.
Set object's [[\Constraints]]
internal slot to newConstraints or a
Constraints dictionary that has the
identical effect in all situations as
newConstraints.
Set object's [[\Settings]]
internal slot to successfulSettings.
[= resolve =] p with
undefined.
Return p.
The [=ApplyConstraints algorithm=] for applying constraints
is stated below. Here are some preliminary definitions that are
used in the statement of the algorithm:
We use the term settings dictionary for the set of
values that might be applied as settings to the object.
For string valued constraints, we define "==" below to be true
if one of the values in the sequence is exactly the same as the
value being compared against.
We define the fitness distance between a
settings dictionary and a constraint set CS as
the sum, for each member (represented by a
constraintName and constraintValue pair)
which [= map/exist =]s in
CS, of the following values:
If constraintName is not supported by the
[=User Agent=], the fitness distance is 0.
If the constraint is required
(constraintValue
either contains one or more members named 'min', 'max', or
'exact', or is itself a bare value in an advanced
ConstraintSet), and the settings dictionary's
constraintName member's value does not satisfy the
constraint or doesn't [= map/exist =], the fitness distance is
positive infinity.
If the constraint does not apply for this type of object,
the fitness distance is 0 (that is, the constraint does not
influence the fitness distance).
If constraintValue is a boolean, but the
constrainable property is not, then the fitness distance is
based on whether the settings dictionary's
constraintName member [= map/exist | exists =] or
not, from the formula
(constraintValue == exists) ? 0 : 1
If the settings dictionary's constraintName member does [= map/exist | not exist=], the fitness distance is 1.
If no ideal value is specified (constraintValue either contains no member named 'ideal', or, if bare values are to be treated as 'ideal', isn't a bare value), the fitness distance is 0.
For all positive numeric constraints (such as height, width, frameRate, aspectRatio, sampleRate and sampleSize), the fitness distance is the result of the formula
(actual == ideal) ? 0 : |actual - ideal| / max(|actual|, |ideal|)
For all string, enum and boolean constraints (e.g. deviceId, groupId, facingMode, resizeMode, echoCancellation), the fitness distance is the result of the formula
(actual == ideal) ? 0 : 1
More definitions:
We refer to each element of a ConstraintSet (other than the special term 'advanced') as a 'constraint', since it is intended to constrain the acceptable settings for the given property from the full list or range given in the corresponding Capability of the ConstrainablePattern object to a value that is within the range or list of values it specifies.
We refer to the "effective Capability" C of an object O as the possibly proper subset of the possible values of C (as returned by getCapabilities) taking into consideration environmental limitations and/or restrictions placed by other constraints. For example, given a ConstraintSet that constrains the aspectRatio, height, and width properties, the values assigned to any two of the properties limit the effective Capability of the third. The set of effective Capabilities may be platform dependent. For example, on a resource-limited device it may not be possible to set properties P1 and P2 both to 'high', while on another, less limited device this may be possible.
A settings dictionary, which is a set of values for the constrainable properties of an object O, satisfies ConstraintSet CS if the fitness distance between the set and CS is less than infinity.
A set of ConstraintSets CS1...CSn (n >= 1) can be satisfied by an object O if it is possible to find a settings dictionary of O that satisfies CS1...CSn simultaneously.
To apply a set of ConstraintSets CS1...CSn to object O is to choose such a sequence of values that satisfies CS1...CSn and assign them as the settings for the properties of O.
We define the SelectSettings algorithm as follows:
Each constraint specifies one or more values (or a range of values) for its property. A property MAY appear more than once in the list of 'advanced' ConstraintSets.
If an empty list has been given as the value for a constraint, it MUST be interpreted as if the constraint were not specified (in other words, an empty constraint == no constraint). Note that unknown properties are discarded by WebIDL, which means that unknown/unsupported required constraints will silently disappear. To avoid this being a surprise, application authors are expected to first use the {{MediaDevices/getSupportedConstraints()}} method as shown in the Examples below.
Let object be the
ConstrainablePattern object on which this
algorithm is applied. Let copy be an unconstrained
copy of object (i.e., copy should behave
as if it were object with all ConstraintSets
removed.)
For every possible settings dictionary of copy, compute its fitness distance, treating
bare values of properties as ideal values. Let
candidates be the set of settings dictionaries for which the
fitness distance is finite.
If candidates is empty, return
undefined as the result of the
SelectSettings algorithm.
Iterate over the 'advanced' ConstraintSets in
newConstraints in the order in which they were
specified. For each ConstraintSet:
compute the fitness distance between it and
each settings dictionary in candidates,
treating bare values of properties as exact.
If the fitness distance is finite for one or more
settings dictionaries in candidates, keep
those settings dictionaries in candidates,
discarding others.
If the fitness distance is infinite for all settings
dictionaries in candidates, ignore this
ConstraintSet.
Select one settings dictionary from candidates,
and return it as the result of the SelectSettings
algorithm. The [=User Agent=] MUST use one with the smallest
fitness distance, as calculated in step 3. If more than
one settings dictionary have the smallest fitness distance,
the [=User Agent=] chooses one of them based on system default property values
and [=User Agent=] default property values.
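The core of the algorithm above, fitness distance plus candidate filtering, can be sketched in plain JavaScript. This is an illustrative simplification, not the normative algorithm: it handles only numeric constraints with min/max/exact/ideal members (treating a bare value as ideal), assumes every property is supported and present, and the function names are invented for this sketch.

```javascript
// Sketch of fitness distance for numeric constraints only: required
// parts (min/max/exact) give Infinity when unsatisfied; an unmet ideal
// adds a normalized difference, per (actual == ideal) ? 0 :
// |actual - ideal| / max(|actual|, |ideal|).
function fitnessDistance(settings, constraintSet) {
  let distance = 0;
  for (const [name, value] of Object.entries(constraintSet)) {
    const actual = settings[name];
    const c = typeof value === "object" ? value : {ideal: value};
    if (("min" in c && !(actual >= c.min)) ||
        ("max" in c && !(actual <= c.max)) ||
        ("exact" in c && actual !== c.exact)) {
      return Infinity;
    }
    if ("ideal" in c && actual !== c.ideal) {
      distance += Math.abs(actual - c.ideal) /
                  Math.max(Math.abs(actual), Math.abs(c.ideal));
    }
  }
  return distance;
}

// Keep only candidates with finite distance, then pick the closest,
// mirroring steps 3, 4 and 6 of SelectSettings (advanced sets omitted).
function selectSettings(candidates, constraintSet) {
  const viable = candidates
    .map(s => [s, fitnessDistance(s, constraintSet)])
    .filter(([, d]) => d < Infinity);
  if (!viable.length) return undefined; // overconstrained
  viable.sort((a, b) => a[1] - b[1]);
  return viable[0][0];
}
```

For example, with candidate modes `{width: 640}` and `{width: 1280}` and the constraint `{width: {min: 800, ideal: 1280}}`, only the 1280 mode has finite distance and is selected; if no candidate survives, the sketch returns `undefined`, as the algorithm does.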
For any property with a system default value for the selected device, the system default value SHOULD
be used if compatible with the above algorithm. This is usually the case for properties
like sampleRate or sampleSize.
Other properties, like echoCancellation or resizeMode, do not usually have system default values.
The [=User Agent=] defines its own default values for these properties.
Implementors need to be cautious to select good default values since they will often have
an impact on how media content is generated.
It is recommended to look at existing implementations to select meaningful default values.
Note that default values may differ based on the system, for instance desktop vs. mobile.
At time of writing, [=User Agent=] implementations tend to use the following default values,
which were chosen for their suitability for using RTCPeerConnection as a sink:
width set to 640.
height set to 480.
frameRate set to 30.
echoCancellation set to true.
To apply the ApplyConstraints algorithm to an
object, given newConstraints as an
argument, the [=User Agent=] MUST run the following steps:
Let successfulSettings be the result of running
the SelectSettings algorithm with
newConstraints as the constraint set.
If successfulSettings is undefined, let failedConstraint be
any required constraint
whose fitness distance was infinity
for all settings dictionaries examined while executing the
SelectSettings algorithm, or "" if there
isn't one, and then return
failedConstraint and abort these steps.
In a single operation, remove the existing constraints from
object, apply newConstraints, and apply
successfulSettings as the current settings.
Return undefined.
If the UA [=relinquish the device|relinquished the device=],
for instance if the track is [=MediaStreamTrack/muted|muted=],
applying the settings does not mean changing the device configuration.
Instead, the UA will configure the device to match the
track settings at the time the UA is reacquiring the
device, for instance when the track gets
[=MediaStreamTrack/muted|unmuted=].
Any implementation that has the same result as the algorithm
above is an allowed implementation. For instance, the
implementation may choose to keep track of the maximum and
minimum values for a setting that are OK under the constraints
considered, rather than keeping track of all possible values
for the setting.
When picking a settings dictionary, the UA can use any
information available to it. Examples of such information may
be whether the selection is done as part of device selection in
getUserMedia, whether the energy usage of the camera varies
between the settings dictionaries, or whether using a settings
dictionary will cause the device driver to apply
resampling.
The [=User Agent=] MAY choose new settings for the constrainable
properties of the object at any time. When it does so it MUST
attempt to satisfy all current Constraints, in the manner
described in the algorithm above, let
successfulSettings be the resulting new settings, and
queue a task to run the following steps:
Let object be the
ConstrainablePattern object on which new
settings for one or more constrainable properties have changed.
Set object's [[\Settings]] internal
slot to successfulSettings.
An example of Constraints that could be passed into
{{MediaStreamTrack/applyConstraints()}}
or returned as a value of constraints is below. It
uses the constrainable properties
defined for camera-sourced {{MediaStreamTrack}}s. In this example, all
constraints are ideal values, which means results are "best effort" based
on the user's specific camera:
await track.applyConstraints({
width: 1920,
height: 1080,
frameRate: 30,
});
const {width, height, frameRate} = track.getSettings();
console.log(`${width}x${height}x${frameRate}`); // 1920x1080x30, or it might be e.g.
// 1280x720x30 as best effort
For finer control, an application can insist on an exact match, provided
it's prepared to handle failure:
try {
await track.applyConstraints({
width: {exact: 1920},
height: {exact: 1080},
frameRate: {min: 25, ideal: 30, max: 30},
});
const {width, height, frameRate} = track.getSettings();
console.log(`${width}x${height}x${frameRate}`); // 1920x1080x25-30!
} catch (error) {
if (error.name != "OverconstrainedError") {
throw error;
}
console.log(`This camera cannot produce the requested ${error.constraint}.`);
}
Constraints can also be passed into {{MediaDevices/getUserMedia}}, not
just as an initialization convenience, but to influence device selection.
In this case,
[= list of inherent constrainable track properties | inherent constraints =]
are also available.
Here's an example of using constraints to prefer a specific
camera and microphone from a previous visit, with requirements on
dimensions and a preference for stereo, to be applied once granted, and to
help find suitable replacements in case the requested devices are no
longer available (or in some user agents, overridden by the user).
try {
const stream = await navigator.mediaDevices.getUserMedia({
video: {
deviceId: localStorage.camId,
width: {min: 800, ideal: 1024, max: 1280},
height: {min: 600}
},
audio: {
deviceId: localStorage.micId,
channelCount: 2
}
});
// Granted. Store deviceIds for next time
localStorage.camId = stream.getVideoTracks()[0].getSettings().deviceId;
localStorage.micId = stream.getAudioTracks()[0].getSettings().deviceId;
} catch (error) {
if (error.name != "OverconstrainedError") {
throw error;
}
// Overconstrained. No suitable replacements found
}
The above example avoids using {exact: deviceId},
so that browsers can use internally-available information about the
devices, such as user preference or absence of a device, over the
provided deviceId.
The example also stores the deviceIds on every grant, in
case they represent a new choice.
In contrast, here's an example of using constraints to implement an
in-content camera picker. In this case, we use exact and rely
solely on a deviceId that comes from the user picking from
a list of choices:
async function switchCameraTrack(freshlyChosenDeviceId, oldTrack) {
if (isMobile) {
oldTrack.stop(); // Some platforms can only open one camera at a time.
}
const stream = await navigator.mediaDevices.getUserMedia({
video: {
deviceId: {exact: freshlyChosenDeviceId}
}
});
const [track] = stream.getVideoTracks();
localStorage.camId = track.getSettings().deviceId;
return track;
}
Here's an example asking for a back camera on a phone, ideally in 720p,
but accepting anything close to that. Note how constraints on dimensions
are specified in landscape mode:
async function getBackCamera() {
return await navigator.mediaDevices.getUserMedia({
video: {
facingMode: {exact: 'environment'},
width: 1280,
height: 720
}
});
}
Here's an example of "I want a native 16:9 resolution near 720p, but
with an exact frame rate of 10 even if not natively available". This needs
to be done in two steps: One to discover the native mode, and a second
step to apply the custom frame rate. This also shows how to derive
constraints from current settings, which may be rotated:
async function nativeResolutionButDecimatedFrameRate() {
const stream = await navigator.mediaDevices.getUserMedia({
video: {
resizeMode: 'none', // means native resolution and frame rate
width: 1280,
height: 720,
aspectRatio: 16 / 9 // aspect ratios may not be exactly accurate
}
});
const [track] = stream.getVideoTracks();
let {width, height, aspectRatio} = track.getSettings();
// Constraints are in landscape, while settings may be rotated (portrait)
if (width < height) {
[width, height] = [height, width];
aspectRatio = 1 / aspectRatio;
}
await track.applyConstraints({
resizeMode: 'crop-and-scale',
width: {exact: width},
height: {exact: height},
frameRate: {exact: 10},
aspectRatio,
});
return stream;
}
Here's an example showing how to use {{MediaDevices/getSupportedConstraints}}, for cases where a constraint being ignored due to lack of support in a user agent is not tolerated by the application:
async function getFrontCameraRes() {
const supports = navigator.mediaDevices.getSupportedConstraints();
for (const constraint of ["facingMode", "aspectRatio", "resizeMode"]) {
if (!(constraint in supports)) {
throw new OverconstrainedError(constraint, "Not supported");
}
}
return await navigator.mediaDevices.getUserMedia({
video: {
facingMode: {exact: 'user'},
advanced: [
{aspectRatio: 16/9, height: 1080, resizeMode: "none"},
{aspectRatio: 4/3, width: 1280, resizeMode: "none"}
]
}
});
}
dictionary DoubleRange {
double max;
double min;
};
dictionary ConstrainDoubleRange : DoubleRange {
double exact;
double ideal;
};
dictionary ULongRange {
[Clamp] unsigned long max;
[Clamp] unsigned long min;
};
dictionary ConstrainULongRange : ULongRange {
[Clamp] unsigned long exact;
[Clamp] unsigned long ideal;
};
dictionary ConstrainBooleanParameters {
boolean exact;
boolean ideal;
};
dictionary ConstrainDOMStringParameters {
(DOMString or sequence<DOMString>) exact;
(DOMString or sequence<DOMString>) ideal;
};
exact of type ({{DOMString}} or
sequence<{{DOMString}}>)
The exact required value for this property.
ideal of type ({{DOMString}} or
sequence<{{DOMString}}>)
The ideal (target) value for this property.
dictionary ConstrainBooleanOrDOMStringParameters {
(boolean or DOMString) exact;
(boolean or DOMString) ideal;
};
typedef ([Clamp] unsigned long or ConstrainULongRange) ConstrainULong;
Throughout this specification, the identifier ConstrainULong is used to refer to the ([Clamp] unsigned long or ConstrainULongRange) type.
typedef (double or ConstrainDoubleRange) ConstrainDouble;
Throughout this specification, the identifier ConstrainDouble is used to refer to the (double or ConstrainDoubleRange) type.
typedef (boolean or ConstrainBooleanParameters) ConstrainBoolean;
Throughout this specification, the identifier ConstrainBoolean is used to refer to the (boolean or ConstrainBooleanParameters) type.
typedef (DOMString or
sequence<DOMString> or
ConstrainDOMStringParameters) ConstrainDOMString;
Throughout this specification, the identifier
ConstrainDOMString is used to refer to the (DOMString or sequence<DOMString> or
ConstrainDOMStringParameters) type.
typedef (boolean or DOMString or ConstrainBooleanOrDOMStringParameters) ConstrainBooleanOrDOMString;
Throughout this specification, the identifier ConstrainBooleanOrDOMString is used to refer to the (boolean or DOMString or ConstrainBooleanOrDOMStringParameters) type.
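A ConstrainDOMString value can therefore arrive in three shapes: a bare string, a sequence of strings, or a ConstrainDOMStringParameters dictionary. A hypothetical helper normalizing all three shapes into a single form might look like the following sketch; it assumes bare values and sequences are treated as ideal, as in a basic ConstraintSet (in an advanced set they would be exact), and the function name is invented here.

```javascript
// Hypothetical normalizer for the three ConstrainDOMString shapes.
// Bare strings and sequences are treated as 'ideal' (basic-set
// semantics); exact/ideal members are normalized to string arrays.
function normalizeConstrainDOMString(value) {
  if (typeof value === "string") return {ideal: [value]};
  if (Array.isArray(value)) return {ideal: value};
  const out = {};
  if ("exact" in value) {
    out.exact = Array.isArray(value.exact) ? value.exact : [value.exact];
  }
  if ("ideal" in value) {
    out.ideal = Array.isArray(value.ideal) ? value.ideal : [value.ideal];
  }
  return out;
}
```

With this shape, the "==" rule for string-valued constraints (true if any value in the sequence matches) reduces to an array membership test.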
{
frameRate: {min: 1.0, max: 60.0},
facingMode: ['user', 'left']
}
The next example below points out that capabilities for range values
provide ranges for individual constrainable properties, not combinations.
This is particularly relevant for video width and height, since the
ranges for width and height are reported separately. In the example, if
the constrainable object can only provide 640x480 and 800x600
resolutions the relevant capabilities returned would be:
{
width: {min: 640, max: 800},
height: {min: 480, max: 600},
aspectRatio: {min: 4/3, max: 4/3}
}
Note in the example above that the aspectRatio would make clear that
arbitrary combinations of widths and heights are not possible, although it
would still suggest that more than two resolutions were available.
A
specification using the Constrainable Pattern should not subclass the
below dictionary, but instead provide its own definition. See
{{MediaTrackCapabilities}} for an example.
Template:
dictionary Capabilities {};
getCapabilities() for which the property is defined on the
object type it's returned on; for instance, an audio
{{MediaStreamTrack}} has no "width" property. There MUST
be a single value for each key and the value MUST be a member of the set
defined for that property by getCapabilities(). The
Settings dictionary contains the actual values that the User
Agent has chosen for the object's constrainable properties. The exact
syntax of the value depends on the type of the property.
A conforming [=User Agent=] MUST support all the constrainable properties
defined in this specification.
An example of a Settings dictionary is shown below. This example is
not very realistic in that a [=User Agent=] would actually be required to
support more constrainable properties than just these.
{
frameRate: 30.0,
facingMode: 'user'
}
A specification using the Constrainable Pattern should not subclass
the below dictionary, but instead provide its own definition. See {{MediaTrackSettings}} for an example.
Template:
dictionary Settings {};
dictionary ConstraintSet {};
Each member of a ConstraintSet corresponds to a
constrainable property and specifies a subset of the property's valid
Capability values. Applying a ConstraintSet instructs the [=User Agent=] to
restrict the settings of the corresponding constrainable properties to
the specified values or ranges of values. A given property MAY occur both
in the basic Constraints set and in the advanced ConstraintSets list, and
MAY occur at most once in each ConstraintSet in the advanced list.
Template:
dictionary Constraints : ConstraintSet {
sequence<ConstraintSet> advanced;
};
advanced of type sequence<{{ConstraintSet}}>
This is the list of ConstraintSets that the [=User Agent=] MUST
attempt to satisfy, in order, skipping only those that cannot be
satisfied. The order of these ConstraintSets is significant. In
particular, when they are passed as an argument to
applyConstraints, the [=User Agent=] MUST try to satisfy
them in the order that is specified. Thus if advanced
ConstraintSets C1 and C2 can be satisfied individually, but not
together, then whichever of C1 and C2 is first in this list will
be satisfied, and the other will not. The [=User Agent=] MUST attempt
to satisfy all ConstraintSets in the list, even if some cannot be
satisfied. Thus, in the preceding example, if constraint C3 is
specified after C1 and C2, the [=User Agent=] will attempt to satisfy
C3 even though C2 cannot be satisfied. Note that a given property
name may occur only once in each ConstraintSet but may occur in
more than one ConstraintSet.
<button id="startBtn">Start</button>
<script>
const startBtn = document.getElementById('startBtn');
startBtn.onclick = async () => {
try {
startBtn.disabled = true;
const constraints = {
audio: true,
video: true
};
const stream = await navigator.mediaDevices.getUserMedia(constraints);
for (const track of stream.getTracks()) {
track.onended = () => {
startBtn.disabled = stream.getTracks().some((t) => t.readyState == 'live');
};
}
} catch (err) {
console.error(err);
}
};
</script>
This example allows people to take photos of themselves from the local
video camera. Note that the Image Capture specification [[?image-capture]]
provides a simpler way to accomplish this.
<script>
window.onload = async () => {
const video = document.getElementById('monitor');
const canvas = document.getElementById('photo');
const shutter = document.getElementById('shutter');
try {
video.srcObject = await navigator.mediaDevices.getUserMedia({video: true});
await new Promise(resolve => video.onloadedmetadata = resolve);
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
document.getElementById('splash').hidden = true;
document.getElementById('app').hidden = false;
shutter.onclick = () => canvas.getContext('2d').drawImage(video, 0, 0);
} catch (err) {
console.error(err);
}
};
</script>
<h1>Snapshot Kiosk</h1>
<section id="splash">
<p id="errorMessage">Loading...</p>
</section>
<section id="app" hidden>
<video id="monitor" autoplay></video>
<button id="shutter">📷</button>
<canvas id="photo"></canvas>
</section>
"camera" and "microphone".
It defines the following types and algorithms:
[=powerful feature/permission descriptor type=]
dictionary CameraDevicePermissionDescriptor : PermissionDescriptor {
boolean panTiltZoom = false;
};
A permission covers access to at least one device of a kind.
The semantics of the descriptor is that it queries for access to any device of that kind.
Thus, if a query
for the "camera" permission returns {{PermissionState/"granted"}}, the
client knows that it will get access to one camera without a permission prompt, and if
{{PermissionState/"denied"}} is returned, it knows that no getUserMedia request for a
camera will succeed.
If the User Agent considers permission given to some, but not all, devices of a kind, a query
will return
{{PermissionState/"granted"}}.
If the User Agent considers permission denied to all devices of a kind, a query
will return
{{PermissionState/"denied"}}.
`{name: "camera", panTiltZoom: true}` is [=PermissionDescriptor/stronger than=]
`{name: "camera", panTiltZoom: false}`.
A {{PermissionState/"granted"}} permission is no guarantee that getUserMedia will succeed. It
only indicates that the user will not be prompted for permission. There are many
other things (such as constraints or the camera being in use) that can cause
getUserMedia to fail.
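For example, an application might query the Permissions API to learn whether a getUserMedia call for the camera would prompt. The sketch below takes the permissions object as a parameter (pass navigator.permissions in a browser) purely to keep it self-contained; the function name is invented, and per the note above, {{PermissionState/"granted"}} still does not guarantee that getUserMedia will succeed.

```javascript
// Sketch: query the camera permission state before calling getUserMedia.
// "granted": access without a prompt (getUserMedia may still fail for
// other reasons); "denied": every camera request will fail;
// "prompt": the user will be asked.
async function cameraPermissionOutlook(permissions) {
  const status = await permissions.query({name: "camera"});
  return status.state;
}
```

In a browser this would be called as `await cameraPermissionOutlook(navigator.permissions)`, typically to decide whether to show UI preparing the user for a prompt.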
[=powerful feature/permission revocation algorithm=]
This is the result of calling the [=device permission revocation algorithm=] passing
{{PermissionDescriptor/name}} as argument.
"self".
A [=document=]'s [=Document/permissions policy=]
determines whether any content in that document is allowed to use
{{MediaDevices/getUserMedia}} to request camera or microphone respectively. If
disabled in any document, no content in the document will be [=allowed to use=]
{{MediaDevices/getUserMedia}} to request the camera or microphone
respectively. This is enforced by the [=request permission to use=]
algorithm.
Additionally, {{MediaDevices/enumerateDevices}} will only enumerate devices
the document is [=allowed to use=].
https://webrtc.example.org/?call=user that would
automatically set up calls and transmit audio/video to
user, it would be open for instance to the
following abuse:
Users who have granted stored permissions to
https://webrtc.example.org/ could be tricked to send their
audio/video streams to an attacker EvilSpy by following a
link or being redirected to
https://webrtc.example.org/?user=EvilSpy.