<!DOCTYPE html>
<html lang="en-us">
<head>
<link href="getusermedia.css" rel="stylesheet" type="text/css">
<title>Media Capture and Streams</title>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<script class="remove" src="respec-w3c.js" type="text/javascript"></script>
<script class="remove" src="getusermedia.js" type="text/javascript"></script>
</head>
<body>
<p class='copyright'>Initial Author of this Specification was Ian Hickson, Google Inc., with the following copyright statement:<br /> © Copyright 2004-2011 Apple Computer, Inc., Mozilla Foundation, and Opera Software ASA. You are granted a license to use, reproduce and create derivative works of this document.</p> <p class='copyright'>All subsequent changes since 26 July 2011 done by the W3C WebRTC Working Group (and previously the Device APIs Working Group) are under the following <a href="https://www.w3.org/policies/#copyright">Copyright</a> © 2011-2023 <a href="https://www.w3.org/">World Wide Web Consortium</a>. <abbr title="World Wide Web Consortium">W3C</abbr><sup>®</sup> <a href="https://www.w3.org/policies/#Legal_Disclaimer">liability</a>, <a href="https://www.w3.org/policies/#W3C_Trademarks">trademark</a> and <a href="https://www.w3.org/copyright/software-license/">permissive document license</a> rules apply.
<section id="abstract">
<p>This document defines a set of JavaScript APIs that allow local media,
including audio and video, to be requested from a platform.</p>
</section>
<section id="sotd">
<p>This document is not complete. The API is based on preliminary work done in the
WHATWG.</p>
<p>Before this document proceeds to Proposed Recommendation, the WebRTC Working Group intends to address <a href="https://www.w3.org/PM/horizontal/review.html?shortname=mediacapture-streams">issues that emerged from wide review</a>.</p>
</section>
<section class="informative" id="intro">
<h2>Introduction</h2>
<p>This document defines APIs for requesting access to local multimedia
devices, such as microphones or video cameras.</p>
<p>This document also defines the MediaStream API, which provides the means
to control where multimedia stream data is consumed, and provides some
control over the devices that produce the media. It also exposes
information about devices able to capture and render media.</p>
</section>
<section id="conformance">
<p>This specification defines conformance criteria that apply to a single
product: the <dfn>User Agent</dfn> that implements the interfaces that it
contains.</p>
<p>Conformance requirements phrased as algorithms or specific steps may be
implemented in any manner, so long as the end result is equivalent. (In
particular, the algorithms defined in this specification are intended to be
easy to follow, and not intended to be performant.)</p>
<p>Implementations that use ECMAScript [[ECMA-262]] to implement the APIs
defined in this specification must implement them in a manner consistent
with the ECMAScript Bindings defined in the Web IDL specification
[[WEBIDL]], as this specification uses that specification and
terminology.</p>
</section>
<section>
<h2>Terminology</h2>
<dl>
<dt><dfn data-export>source</dfn></dt>
<dd>
<p>A source is the "thing" providing the source of a media stream
track. The source is the broadcaster of the media itself. A source can
be a physical webcam, microphone, local video or audio file from the
user's hard drive, network resource, or static image. Note that this
document describes the use of microphone and camera type sources only;
the use of other source types is described in other documents.</p>
<p>An application that has no prior authorization regarding sources is
only given the number of available sources, their type and any
relationship to other devices. Additional information about sources can
become available when applications are authorized to use a source (see
<a href="#access-control-model"></a>).</p>
<p>Sources <strong>do not</strong> have constraints — tracks have
constraints. When a source is connected to a track, it must provide that
track with media that conforms to the constraints present on the track.
Multiple tracks can be attached to the same source. [=User Agent=]
processing, such as downsampling, MAY be used to ensure that all tracks
have appropriate media.</p>
<p>Sources have constrainable properties which have
[=capabilities=] and [=settings=]
exposed on tracks. While the constrainable properties are "owned" by
the source, sources MAY be able to accommodate different demands at
once. For this reason, capabilities are common to any (multiple) tracks
that happen to be using the same source, whereas settings MAY differ
per track (e.g., if two different track objects bound to the same
source query capability and settings information, they will get back
the same capabilities, but may get different settings that are tailored
to satisfy their individual constraints).</p>
</dd>
<dt><dfn data-lt=settings>Setting</dfn> (Source Setting)</dt>
<dd>
<p>A setting refers to the immediate, current
value of the source's constrainable properties. Settings are always
read-only.</p>
<p>A source's conditions may dynamically change, such as when a camera
switches to a lower frame rate due to low light conditions. In these
cases the tracks related to the affected source might not satisfy the
set constraints any longer. The platform SHOULD try to minimize such
excursions as far as possible, but will continue to deliver media even
when a temporary or permanent condition exists that prevents satisfying
the constraints.</p>
<p>Although settings are a property of the source, they are only
exposed to the application through the tracks attached to the source.
This is exposed via the <a>ConstrainablePattern</a> interface.</p>
</dd>
<dt><dfn data-lt=capabilities>Capability</dfn></dt>
<dd>
<p>For each constrainable property, there is a capability that
describes whether it is supported by the source and if so, the range of
supported values. As with settings, capabilities are exposed to the
application via the <a>ConstrainablePattern</a> interface.</p>
<p>The values of the supported capabilities must be normalized to the
ranges and enumerated types defined in this specification.</p>
<p>A {{MediaStreamTrack/getCapabilities()}} call on
a track returns the same underlying per-source capabilities for all
tracks connected to the source.</p>
<p>Source capabilities are effectively constant. Applications should be
able to depend on a specific source having the same capabilities for
any browsing session.</p>
<p>This API is intentionally simplified. Capabilities are not capable
of describing interactions between different values. For instance, it
is not possible to accurately describe the capabilities of a camera
that can produce a high resolution video stream at a low frame rate and
lower resolutions at a higher frame rate. Capabilities describe the
complete range of each value. Interactions between constraints are
exposed by attempting to apply constraints.</p>
</dd>
<dt><dfn>Constraint</dfn>s</dt>
<dd>
<p>Constraints provide a general control surface that allows
applications to both select an appropriate source for a track and, once
selected, to influence how a source operates.</p>
<p>Constraints limit the range of operating modes that a source can use
when providing media for a track. Without provided track constraints,
implementations are free to select a source's settings from the full
ranges of its supported capabilities. Implementations may also adjust
source settings at any time within the bounds imposed by all applied
constraints.</p>
<p>{{MediaDevices/getUserMedia()}} uses constraints
to help select an appropriate source for a track and configure it.
Additionally, the <a>ConstrainablePattern</a> interface on tracks
includes an API for dynamically changing the track's constraints at any
later time.</p>
<p>A track will not be connected to a source using
{{MediaDevices/getUserMedia()}} if its initial constraints cannot be
satisfied. However, the ability to meet the constraints on a track can
change over time, and constraints can be changed. If circumstances
change such that constraints cannot be met, the
<a>ConstrainablePattern</a> interface defines an appropriate error to
inform the application. [[[#the-model-sources-sinks-constraints-and-settings]]] explains how
constraints interact in more detail.</p>
<p>For each constrainable property, a constraint exists whose name
corresponds with the relevant source setting name and capability
name.</p>
<p>A constraint falls into one of three groups, depending on its place
in the constraints structure. The groups are:</p>
<ul>
<li><dfn>required constraints</dfn> are all <a data-lt=
"advanced constraints">non-advanced</a> constraints that are
<a>required</a>.</li>
<li><dfn>optional basic constraints</dfn> are the remaining
<a data-lt="advanced constraints">non-advanced</a> constraints.</li>
<li><dfn>advanced constraints</dfn> are all constraints specified
using the <code><a href="#dom-constraints-advanced">advanced</a></code>
keyword.</li>
</ul>
<p>In general, [=User Agents=] will have more flexibility to optimize the
media streaming experience the fewer constraints are applied, so
application authors are strongly encouraged to use <a>required
constraints</a> sparingly.</p>
</dd>
</dl>
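<p>The following informative example sketches how the three groups above might
appear together in a constraint set passed to {{MediaDevices/getUserMedia()}};
the specific property values are illustrative only:</p>
<pre class="example">
const constraints = {
  video: {
    width: { exact: 1280 },             // required constraint ("exact" makes it required)
    frameRate: { ideal: 30 },           // optional basic constraint (non-advanced, not required)
    advanced: [{ facingMode: "user" }]  // advanced constraint set
  }
};
// In an async context; selection fails with OverconstrainedError if the
// required constraint cannot be satisfied by any available source.
const stream = await navigator.mediaDevices.getUserMedia(constraints);
</pre>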
</section>
<section id="stream-api">
<h2>MediaStream API</h2>
<section>
<h2>Introduction</h2>
<p>The two main components in the MediaStream API are the
{{MediaStreamTrack}} and {{MediaStream}} interfaces. The
{{MediaStreamTrack}} object represents media of a single
type that originates from one media source in the [=User Agent=], e.g. video
produced by a web camera. A {{MediaStream}} is used to
group several {{MediaStreamTrack}} objects into one unit
that can be recorded or rendered in a media element.</p>
<p>Each {{MediaStream}} can contain zero or more
{{MediaStreamTrack}} objects. All tracks in a
{{MediaStream}} are intended to be synchronized when
rendered. This is not a hard requirement, since it might not be possible
to synchronize tracks from sources that have different clocks. Different
{{MediaStream}} objects do not need to be
synchronized.</p>
<p class="note">While the intent is to synchronize tracks, it could be
better in some circumstances to permit tracks to lose synchronization. In
particular, when tracks are remotely sourced and real-time [[?WEBRTC]],
it can be better to allow loss of synchronization than to accumulate
delays or risk glitches and other artifacts. Implementations are expected
to understand the implications of choices regarding synchronization of
playback and the effect that these have on user perception.</p>
<p>A single {{MediaStreamTrack}} can represent
multi-channel content, such as stereo or 5.1 audio or stereoscopic video,
where the channels have a well defined relationship to each other.
Information about channels might be exposed through other APIs, such as
[[?WEBAUDIO]], but this specification provides no direct access to
channels.</p>
<p>A {{MediaStream}} object has an input and an output
that represent the combined input and output of all the object's tracks.
The output of the {{MediaStream}} controls how the object
is rendered, e.g., what is saved if the object is recorded to a file or
what is displayed if the object is used in a [^video^] element.
A single {{MediaStream}} object can be attached to
multiple different outputs at the same time.</p>
<p>A new {{MediaStream}} object can be created from
existing media streams or tracks using the
{{MediaStream/MediaStream()}} constructor. The constructor argument
can either be an existing {{MediaStream}} object, in
which case all the tracks of the given stream are added to the new
{{MediaStream}} object, or an array of
{{MediaStreamTrack}} objects. The latter form makes it
possible to compose a stream from different source streams.</p>
<p>Both {{MediaStream}} and
{{MediaStreamTrack}} objects can be cloned. A cloned
{{MediaStream}} contains clones of all member tracks from
the original stream. A cloned {{MediaStreamTrack}} has a
<a href="#constrainable-interface">set of constraints</a> that is
independent of the instance it is cloned from, which allows media from
the same source to have different constraints applied for different
<a>consumer</a>s. The {{MediaStream}} object is also used in
contexts outside {{MediaDevices/getUserMedia}}, such as [[?WEBRTC]].</p>
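<p>The following informative example sketches composing a new stream from the
tracks of two existing streams and then cloning it. The variable names
<code>cameraStream</code> and <code>micStream</code> are placeholders for
streams obtained earlier, for example via {{MediaDevices/getUserMedia()}}:</p>
<pre class="example">
// Compose one stream out of a camera's video track and a microphone's audio track.
const composed = new MediaStream([
  ...cameraStream.getVideoTracks(),
  ...micStream.getAudioTracks()
]);

// A cloned stream contains clones of all member tracks,
// each with constraints independent of the original.
const copy = composed.clone();
</pre>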
</section>
<section>
<h2>{{MediaStream}}</h2>
<p id=
"mediastream-constructor">The MediaStream <dfn data-idl data-dfn-for=MediaStream>constructor</dfn> composes a new
stream out of existing tracks. It takes an optional argument of type
{{MediaStream}} or an array of
{{MediaStreamTrack}} objects. When the constructor is invoked, the User
Agent must run the following steps:</p>
<ol class=algorithm>
<li>
<p>Let <var>stream</var> be a newly constructed
{{MediaStream}} object.</p>
</li>
<li>
<p>Initialize <var>stream</var>.{{MediaStream/id}} attribute to a newly generated
value.</p>
</li>
<li>
<p>If the constructor's argument is present, run the following
steps:</p>
<ol>
<li>
<p>Construct a set of tracks <var>tracks</var> based on the type
of argument:</p>
<ul>
<li>
<p>A {{MediaStream}} object:</p>
<p>Let <var>tracks</var> be a set containing all the
{{MediaStreamTrack}} objects in the
{{MediaStream}} <a>track
set</a>.</p>
</li>
<li>
<p>A sequence of {{MediaStreamTrack}}
objects:</p>
<p>Let <var>tracks</var> be a set containing all the
{{MediaStreamTrack}} objects in the provided
sequence.</p>
</li>
</ul>
</li>
<li>
<p>For each {{MediaStreamTrack}},
<var>track</var>, in <var>tracks</var>, run the following
steps:</p>
<ol>
<li>
<p>If <var>track</var> is already in <var>stream</var>'s
[=track set=], skip
<var>track</var>.</p>
</li>
<li>
<p>Otherwise, add <var>track</var> to <var>stream</var>'s
[=track set=].</p>
</li>
</ol>
</li>
</ol>
</li>
<li>
<p>Return <var>stream</var>.</p>
</li>
</ol>
<p>The tracks of a {{MediaStream}} are stored in a
<dfn class="export">track set</dfn>. The track set MUST contain the
{{MediaStreamTrack}} objects that correspond to the
tracks of the stream. The relative order of the tracks in the set is User
Agent defined and the API will never put any requirements on the order.
The proper way to find a specific {{MediaStreamTrack}}
object in the set is to look it up by its {{MediaStreamTrack/id}}.</p>
<p>An object that reads data from the output of a
{{MediaStream}} is referred to as a
{{MediaStream}} <dfn data-export data-for=MediaStream>consumer</dfn>. The list of
{{MediaStream}} consumers currently includes media
elements (such as [^video^] and
[^audio^]) [[HTML]], Web Real-Time Communications
(WebRTC; {{RTCPeerConnection}}) [[?WEBRTC]], media recording
(<code class=fixme>MediaRecorder</code>) [[?mediastream-recording]], image capture
(<code class=fixme>ImageCapture</code>) [[?image-capture]], and web audio
({{MediaStreamAudioSourceNode}}) [[?WEBAUDIO]].</p>
<p class="note">{{MediaStream}} consumers must be able to
handle tracks being added and removed. This behavior is specified per
consumer.</p>
<p>A {{MediaStream}} object is said to be <dfn data-dfn-for="stream" id=
"stream-active">active</dfn> when it has at least one
{{MediaStreamTrack}} that has not [=MediaStreamTrack/ended=]. A {{MediaStream}} that does not
have any tracks or only has tracks that are [= MediaStreamTrack/ended =]
is <dfn data-dfn-for="stream" id="stream-inactive">inactive</dfn>.</p>
<p>A {{MediaStream}} object is said to be <dfn data-dfn-for=stream id=
"stream-audible" data-dfn-for=stream>audible</dfn> when it has at least one
{{MediaStreamTrack}} whose {{MediaStreamTrack/[[Kind]]}} is <a>"audio"</a>
that has not [=MediaStreamTrack/ended=]. A {{MediaStream}} that does not have any
audio tracks or only has audio tracks that are [=MediaStreamTrack/ended=] is
<dfn id="stream-inaudible" data-dfn-for=stream>inaudible</dfn>.</p>
<p>The [=User Agent=] may update a {{MediaStream}}'s [=track set=] in response to, for example, an external
event. This specification does not specify any such cases, but other
specifications using the MediaStream API may. One such example is the
WebRTC 1.0 [[?WEBRTC]] specification where the [=track set=] of a {{MediaStream}}, received
from another peer, can be updated as a result of changes to the media
session.</p>
<p>To <dfn class="abstract-op" data-dfn-for="MediaStream">add a track</dfn> <var>track</var> to a
{{MediaStream}} <var>stream</var>, the [=User Agent=] MUST
run the following steps:</p>
<ol class=algorithm>
<li>
<p>If <var>track</var> is already in <var>stream</var>'s [=track set=], then abort these steps.</p>
</li>
<li>
<p>Add <var>track</var> to <var>stream</var>'s [=track set=].</p>
</li>
<li>
<p>[= Fire a track event=] named {{addtrack}} with
<var>track</var> at <var>stream</var>.</p>
</li>
</ol>
<p>To <dfn class="abstract-op" data-dfn-for="MediaStream">remove a track</dfn> <var>track</var> from a
{{MediaStream}} <var>stream</var>, the [=User Agent=] MUST
run the following steps:</p>
<ol class=algorithm>
<li>
<p>If <var>track</var> is not in <var>stream</var>'s [=track set=], then abort these steps.</p>
</li>
<li>
<p>[=MediaStream/Remove a track|Remove=] <var>track</var> from <var>stream</var>'s [=track set=].</p>
</li>
<li>
<p>[= Fire a track event =] named {{removetrack}} with
<var>track</var> at <var>stream</var>.</p>
</li>
</ol>
<div>
<pre class="idl"
>[Exposed=Window]
interface MediaStream : EventTarget {
constructor();
constructor(MediaStream stream);
constructor(sequence<MediaStreamTrack> tracks);
readonly attribute DOMString id;
sequence<MediaStreamTrack> getAudioTracks();
sequence<MediaStreamTrack> getVideoTracks();
sequence<MediaStreamTrack> getTracks();
MediaStreamTrack? getTrackById(DOMString trackId);
undefined addTrack(MediaStreamTrack track);
undefined removeTrack(MediaStreamTrack track);
MediaStream clone();
readonly attribute boolean active;
attribute EventHandler onaddtrack;
attribute EventHandler onremovetrack;
};</pre>
<section>
<h2>Constructors</h2>
<dl data-link-for="MediaStream" data-dfn-for="MediaStream" class=
"constructors">
<dt>{{MediaStream}}</dt>
<dd>
<p>See the <a href="#mediastream-constructor">MediaStream
constructor algorithm</a></p>
<div>
<em>No parameters.</em>
</div>
</dd>
<dt>{{MediaStream}}</dt>
<dd>
<p>See the <a href="#mediastream-constructor">MediaStream
constructor algorithm</a></p>
</dd>
<dt>{{MediaStream}}</dt>
<dd>
<p>See the <a href="#mediastream-constructor">MediaStream
constructor algorithm</a></p>
</dd>
</dl>
</section>
<section>
<h2>Attributes</h2>
<dl data-link-for="MediaStream" data-dfn-for="MediaStream" class=
"attributes">
<dt>{{id}} of type {{DOMString}}, readonly</dt>
<dd>
<p>The <dfn data-idl>id</dfn> attribute MUST return the value to
which it was initialized when the object was created.</p>
<p>When a {{MediaStream}} is created, the User
Agent MUST generate an identifier string, and MUST initialize the
object's {{id}}
attribute to that string, unless the object is created as part of
a special purpose algorithm that specifies how the stream id must
be initialized. A good practice is to use a UUID [[rfc4122]],
which is 36 characters long in its canonical form. To avoid
fingerprinting, implementations SHOULD use the forms in section
4.4 or 4.5 of RFC 4122 when generating UUIDs.</p>
<p>An example of an algorithm that specifies how the stream id
must be initialized is the algorithm to associate an incoming
network component with a {{MediaStream}} object. [[?WEBRTC]]</p>
</dd>
<dt><dfn>active</dfn> of type {{boolean}}, readonly</dt>
<dd>
<p>The {{active}} attribute MUST return
<code>true</code> if this {{MediaStream}} is
[= stream/active =] and <code>false</code>
otherwise.</p>
</dd>
<dt><dfn>onaddtrack</dfn> of type {{EventHandler}}</dt>
<dd>
<p>The event type of this event handler is {{addtrack}}.</p>
</dd>
<dt><dfn>onremovetrack</dfn> of type {{EventHandler}}</dt>
<dd>
<p>The event type of this event handler is {{removetrack}}.</p>
</dd>
</dl>
</section>
<section>
<h2>Methods</h2>
<dl data-link-for="MediaStream" data-dfn-for="MediaStream" class=
"methods">
<dt><dfn>getAudioTracks()</dfn></dt>
<dd>
<p>Returns a sequence of {{MediaStreamTrack}}
objects representing the audio tracks in this stream.</p>
<p>The {{getAudioTracks}}
method MUST return a sequence that represents a snapshot of all
the {{MediaStreamTrack}} objects in this stream's
[=track set=] whose {{MediaStreamTrack/[[Kind]]}} is equal to
<a>"audio"</a>. The conversion from the [=track set=] to the sequence is [=User Agent=] defined
and the order does not have to be stable between calls.</p>
</dd>
<dt><dfn>getVideoTracks()</dfn></dt>
<dd>
<p>Returns a sequence of {{MediaStreamTrack}}
objects representing the video tracks in this stream.</p>
<p>The {{getVideoTracks}}
method MUST return a sequence that represents a snapshot of all
the {{MediaStreamTrack}} objects in this stream's
[=track set=] whose {{MediaStreamTrack/[[Kind]]}} is equal to
<a>"video"</a>. The conversion from the
[=track set=] to the sequence is [=User Agent=] defined
and the order does not have to be stable between calls.</p>
</dd>
<dt><dfn>getTracks()</dfn></dt>
<dd>
<p>Returns a sequence of {{MediaStreamTrack}}
objects representing all the tracks in this stream.</p>
<p>The {{getTracks}} method
MUST return a sequence that represents a snapshot of all the
{{MediaStreamTrack}} objects in this stream's
[=track set=], regardless of {{MediaStreamTrack/[[Kind]]}}. The
conversion from the [=track set=] to the sequence is User
Agent defined and the order does not have to be stable between
calls.</p>
</dd>
<dt><dfn>getTrackById()</dfn></dt>
<dd>
<p>The {{getTrackById}}
method MUST return either a {{MediaStreamTrack}}
object from this stream's [=track set=]
whose {{MediaStreamTrack/[[Id]]}} is
equal to <var>trackId</var>, or <code>null</code>, if no such track
exists.</p>
</dd>
<dt><dfn>addTrack()</dfn></dt>
<dd>
<p>Adds the given {{MediaStreamTrack}} to this
{{MediaStream}}.</p>
<p>When the {{addTrack}} method is
invoked, the [=User Agent=] MUST run the following steps:</p>
<ol class=algorithm>
<li>
<p>Let <var>track</var> be the method's argument and
<var>stream</var> the {{MediaStream}} object
on which the method was called.</p>
</li>
<li>
<p>If <var>track</var> is already in <var>stream</var>'s
[=track set=], then abort these
steps.</p>
</li>
<li>
<p>[=MediaStream/Add a track|Add=] <var>track</var> to <var>stream</var>'s [=track set=].</p>
</li>
</ol>
</dd>
<dt><dfn>removeTrack()</dfn></dt>
<dd>
<p>Removes the given {{MediaStreamTrack}} object
from this {{MediaStream}}.</p>
<p>When the {{removeTrack}}
method is invoked, the [=User Agent=] MUST run the following
steps:</p>
<ol class=algorithm>
<li>
<p>Let <var>track</var> be the method's argument and
<var>stream</var> the {{MediaStream}} object
on which the method was called.</p>
</li>
<li>
<p>If <var>track</var> is not in <var>stream</var>'s [=track set=], then abort these steps.</p>
</li>
<li>
<p>[=MediaStream/Remove a track|Remove=] <var>track</var> from <var>stream</var>'s [=track set=].</p>
</li>
</ol>
</dd>
<dt><dfn>clone()</dfn></dt>
<dd>
<p>Clones the given {{MediaStream}} and all its
tracks.</p>
<p>When the {{clone()}} method is invoked, the User
Agent MUST run the following steps:</p>
<ol class=algorithm>
<li>
<p>Let <var>streamClone</var> be a newly constructed
{{MediaStream}} object.</p>
</li>
<li>
<p>Initialize <var>streamClone</var>.{{MediaStream/id}} to a newly
generated value.</p>
</li>
<li>
<p><a href="#track-clone">Clone each track</a> in this
{{MediaStream}} object and add the result to
<var>streamClone</var>'s <a>track
set</a>.</p>
</li>
<li>Return <var>streamClone</var>.</li>
</ol>
</dd>
</dl>
</section>
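<p>The following informative example sketches typical use of the methods above.
The variables <code>stream</code> and <code>extraTrack</code> are placeholders
for an existing {{MediaStream}} and {{MediaStreamTrack}}:</p>
<pre class="example">
// Snapshots of the stream's track set, optionally filtered by kind.
const allTracks   = stream.getTracks();
const audioTracks = stream.getAudioTracks();
const videoTracks = stream.getVideoTracks();

// Look a track up by its id; getTrackById() returns null if no such track exists.
const found = allTracks.length ? stream.getTrackById(allTracks[0].id) : null;

// Adding and removing tracks only changes this MediaStream's track set;
// it does not affect the tracks themselves or their sources.
stream.addTrack(extraTrack);
stream.removeTrack(extraTrack);
</pre>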
</div>
</section>
<section>
<h2>{{MediaStreamTrack}}</h2>
<p>A {{MediaStreamTrack}} object represents a media
source in the [=User Agent=]. An example source is a device connected to the
[=User Agent=]. Other specifications may define sources for
{{MediaStreamTrack}} that override the behavior specified
here. Several {{MediaStreamTrack}} objects can represent
the same media source, e.g., when the user chooses the same camera in the
UI shown by two consecutive calls to {{MediaDevices/getUserMedia()}}.</p>
<p>A {{MediaStreamTrack}} source defines the following properties:</p>
<ol>
<li>A source has a <dfn data-export>MediaStreamTrack source type</dfn>.
It is set to either {{MediaStreamTrack}} or a subtype of {{MediaStreamTrack}}.
By default, it is set to {{MediaStreamTrack}}. </li>
<li>A source has <dfn data-export>MediaStreamTrack source-specific construction steps</dfn>
that are executed when creating a {{MediaStreamTrack}} from a source.
The steps take a newly created {{MediaStreamTrack}} as input. By default, the steps are empty.</li>
<li>A source has <dfn data-export>MediaStreamTrack source-specific clone steps</dfn>
that are executed when cloning a {{MediaStreamTrack}} of the given source.
The steps take the source and destination {{MediaStreamTrack}}s as input. By default, the steps are empty.</li>
</ol>
<p>The data from a {{MediaStreamTrack}} object does not
necessarily have a canonical binary form; for example, it could just be
"the video currently coming from the user's video camera". This allows
[=User Agents=] to manipulate media in whatever fashion is most suitable on
the user's platform.</p>
<p>A script can indicate that a {{MediaStreamTrack}}
object no longer needs its source with the {{MediaStreamTrack/stop()}}
method. When all tracks
using a source have been stopped or ended by some other means, the source
is <dfn id="source-stopped" data-lt="source stopped state">stopped</dfn>. If the source is a device
exposed by {{MediaDevices/getUserMedia()}}, then when
the source is stopped, the [=User Agent=] MUST run the following steps:</p>
<ol class=algorithm>
<li>
<p>Let <var>mediaDevices</var> be the {{MediaDevices}} object in question.</p>
</li>
<li>
<p>Let <var>deviceId</var> be the source device's {{MediaDeviceInfo/deviceId}}.</p>
</li>
<li>
<p>Set <var>mediaDevices</var>.{{MediaDevices/[[devicesLiveMap]]}}[<var>deviceId</var>] to
<code>false</code>.</p>
</li>
<li>
<p>If the [=permission state=]
of the permission associated with the device's kind and
<var>deviceId</var> for <var>mediaDevices</var>'s [=relevant settings object=],
is not {{PermissionState/"granted"}}, then set
<var>mediaDevices</var>.{{MediaDevices/[[devicesAccessibleMap]]}}[<var>deviceId</var>] to
<code>false</code>.</p>
</li>
</ol>
<p>To <dfn class="abstract-op">create a MediaStreamTrack</dfn> with an underlying
<var>source</var>, and a <var>mediaDevicesToTieSourceTo</var>, run the
following steps:</p>
<ol class=algorithm>
<li>
<p>Let <var>track</var> be a new object of type <var>source</var>'s [=MediaStreamTrack source type=].</p>
<p>Initialize <var>track</var> with the following internal slots:</p>
<ul data-dfn-for="MediaStreamTrack">
<li>
<p><dfn>[[\Source]]</dfn>,
initialized to <var>source</var>.</p>
</li>
<li>
<p><dfn>[[\Id]]</dfn>,
initialized to a newly generated unique identifier string. See
{{MediaStream/id}} attribute for guidelines on how to generate
such an identifier.</p>
</li>
<li>
<p><dfn>[[\Kind]]</dfn>,
initialized to <dfn><code>"audio"</code></dfn> if <var>source</var> is
an audio source, or <dfn><code>"video"</code></dfn> if
<var>source</var> is a video source.</p>
</li>
<li>
<p><dfn>[[\Label]]</dfn>,
initialized to <var>source</var>'s label, if provided by the User
Agent, or <code>""</code> otherwise. [=User Agents=] MAY label audio and
video sources (e.g., "Internal microphone" or "External USB Webcam").
</p>
</li>
<li>
<p><dfn>[[\ReadyState]]</dfn>,
initialized to {{MediaStreamTrackState/"live"}}.</p>
</li>
<li>
<p><dfn>[[\Enabled]]</dfn>,
initialized to <code>true</code>.</p>
</li>
<li>
<p><dfn>[[\Muted]]</dfn>,
initialized to <code>true</code> if <var>source</var> is
[= source/muted =], and <code>false</code> otherwise.</p>
</li>
<li>
<p><a data-link-for="constrainable object"
data-link-type="attribute">[[\Capabilities]]</a>,
<a data-link-for="constrainable object"
data-link-type="attribute">[[\Constraints]]</a>, and
<a data-link-for="constrainable object"
data-link-type="attribute">[[\Settings]]</a>, all initialized as
specified in the {{ConstrainablePattern}}.</p>
</li>
<li>
<p><dfn class="export">[[\Restrictable]]</dfn>, initialized to <code>false</code>.</p>
</li>
</ul>
</li>
<li>
<p>If <var>mediaDevicesToTieSourceTo</var> is not <code>null</code>,
[=tie track source to `MediaDevices`=] with <var>source</var> and <var>mediaDevicesToTieSourceTo</var>.</p>
</li>
<li><p>Run <var>source</var>'s [=MediaStreamTrack source-specific construction steps=]
with <var>track</var> as parameter.</p></li>
<li><p>Return <var>track</var>.</p></li>
</ol>
<p>To <dfn class="abstract-op">initialize the underlying source</dfn> of <var>track</var>
to <var>source</var>, run the following steps:</p>
<ol class="algorithm">
<li><p>Initialize <var>track</var>.{{MediaStreamTrack/[[Source]]}} to
<var>source</var>.</p></li>
<li><p>Initialize <var>track</var>'s <a data-link-for="constrainable object"
data-link-type="attribute">[[\Capabilities]]</a>,
<a data-link-for="constrainable object"
data-link-type="attribute">[[\Constraints]]</a>, and
<a data-link-for="constrainable object"
data-link-type="attribute">[[\Settings]]</a>, as
specified in the {{ConstrainablePattern}}.</p></li>
</ol>
<p>To <dfn class="abstract-op">tie track source to `MediaDevices`</dfn>, given <var>source</var> and
<var>mediaDevices</var>, run the following steps:</p>
<ol class="algorithm">
<li>
<p>Add <var>source</var> to
<var>mediaDevices</var>.{{MediaDevices/[[mediaStreamTrackSources]]}}.</p>
</li>
</ol>
<p>To <dfn class="abstract-op">stop all sources</dfn> of a [=global object=], named <var>globalObject</var>,
the [=User Agent=] MUST run the following steps:</p>
<ol class="algorithm">
<li><p>For each {{MediaStreamTrack}} object <var>track</var> whose
<a data-cite="!HTML/#concept-relevant-global">relevant global object</a> is <var>globalObject</var>,
set <var>track</var>'s {{MediaStreamTrack/[[ReadyState]]}} to
{{MediaStreamTrackState/"ended"}}.</p></li>
<li><p>If <var>globalObject</var> is a {{Window}}, then for each <var>source</var> in
<var>globalObject</var>'s
[=associated `MediaDevices`=].{{MediaDevices/[[mediaStreamTrackSources]]}},
[= source/stopped | stop =] <var>source</var>.</p></li>
</ol>
<p>The [=User Agent=] MUST [=stop all sources=] of a <var>globalObject</var> in the following conditions:</p>
<ol>
<li><p>If <var>globalObject</var> is a {{Window}} object and the [=unloading document cleanup steps=]
are executed for its [=associated document=].</p></li>
<li><p>If <var>globalObject</var> is a {{WorkerGlobalScope}} object and its
<a data-cite="!HTML/workers.html#dom-workerglobalscope-closing">closing</a> flag is set to true.</p></li>
</ol>
<p>An implementation may use a per-source reference count to keep track
of source usage, but the specifics are out of scope for this
specification.</p>
<p>To <dfn class="abstract-op" id="track-clone">clone a track</dfn> the [=User Agent=] MUST run
the following steps:</p>
<ol class="algorithm">
<li>
<p>Let <var>track</var> be the {{MediaStreamTrack}}
object to be cloned.</p>
</li>
<li>
<p>Let <var>source</var> be <var>track</var>'s
{{MediaStreamTrack/[[Source]]}}.</p>
</li>
<li>
<p>Let <var>trackClone</var> be the result of
[=create a MediaStreamTrack | creating a MediaStreamTrack=] with
<var>source</var> and <code>null</code>.
</p>
</li>
<li>
<p>Set <var>trackClone</var>'s {{MediaStreamTrack/[[ReadyState]]}} to
<var>track</var>'s {{MediaStreamTrack/[[ReadyState]]}} value.</p>
</li>
<li>
<p>Set <var>trackClone</var>'s
<a data-link-for="constrainable object"
data-link-type="attribute">[[\Capabilities]]</a> to a clone of
<var>track</var>'s
<a data-link-for="constrainable object"
data-link-type="attribute">[[\Capabilities]]</a>.</p>
</li>
<li>
<p>Set <var>trackClone</var>'s
<a data-link-for="constrainable object"
data-link-type="attribute">[[\Constraints]]</a> to a clone of
<var>track</var>'s
<a data-link-for="constrainable object"
data-link-type="attribute">[[\Constraints]]</a>.</p>
</li>
<li>
<p>Set <var>trackClone</var>'s
<a data-link-for="constrainable object"
data-link-type="attribute">[[\Settings]]</a> to a clone of
<var>track</var>'s
<a data-link-for="constrainable object"
data-link-type="attribute">[[\Settings]]</a>.</p>
</li>
<li>
<p>Run <var>source</var>'s [=MediaStreamTrack source-specific clone steps=] with <var>track</var> and <var>trackClone</var> as parameters.</p>
</li>
<li>
<p>Return <var>trackClone</var>.</p>
</li>
</ol>
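<p>The following informative example sketches how an application can use cloning
to derive a second track from the same source and constrain it independently.
The variable <code>stream</code> is a placeholder for a stream previously
obtained from {{MediaDevices/getUserMedia()}}:</p>
<pre class="example">
const [cameraTrack] = stream.getVideoTracks();

// The clone shares the camera source but has its own constraints.
const thumbnailTrack = cameraTrack.clone();

// In an async context: constrain only the clone.
await thumbnailTrack.applyConstraints({ width: 160, height: 120 });

// Stopping the clone does not stop the original track; the source is
// only stopped once all tracks using it have been stopped or have ended.
thumbnailTrack.stop();
</pre>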
<section>
<h3>Media Flow and Life-cycle</h3>
<section>
<h4>Media Flow</h4>
<p>There are two dimensions related to the media flow for a
{{MediaStreamTrackState/"live"}} {{MediaStreamTrack}} : muted / not
muted, and enabled / disabled.</p>
<p><dfn class="export" data-dfn-for="MediaStreamTrack" data-dfn-type="dfn" id=
"track-muted">Muted</dfn> refers to the input to the
{{MediaStreamTrack}}. A {{MediaStreamTrack}} is [= MediaStreamTrack/muted =]
when its source is <dfn data-dfn-for="source">muted</dfn>,
i.e. temporarily unable to provide the track with data.
Live samples MUST NOT be made available to a
{{MediaStreamTrack}} while it is [=MediaStreamTrack/muted=].</p>
<p>The [=MediaStreamTrack/muted=] state is outside the control of web applications, but can be observed by
the application by reading the {{MediaStreamTrack/muted}} attribute and listening
to the associated events {{mute}} and {{unmute}}. The reasons for a
{{MediaStreamTrack}} to be muted are defined by its <a>source</a>.</p>
<p>For camera and microphone sources, the reasons to [=source/muted|mute=] are
[=implementation-defined=]. This allows user agents to implement privacy
mitigations in situations like:
the user pushing a physical mute button on the microphone, the user
closing a laptop lid with an embedded camera, the user toggling a
control in the operating system, the user clicking a mute button in the
[=User Agent=] chrome, the [=User Agent=] (on behalf of the user) mutes, etc.</p>
<p>On some operating systems, microphone access may
get stolen from the [=User Agent=] when another application with higher audio priority gets access to it,
for instance in case of an incoming phone call on mobile OS. The [=User Agent=] SHOULD provide
this information to the web application through {{MediaStreamTrack/muted}} and
its associated events.</p>
<p>Whenever the [=User Agent=] initiates such an [= implementation-defined=]
change for camera or microphone sources, it MUST queue a
task, using the user interaction task source, to [=MediaStreamTrack/set a track's muted
state=] to the state desired by the user.</p>
<div class="note">This does not apply to [=source|sources=] defined in
other specifications. Other specifications need to define their own steps
to [=MediaStreamTrack/set a track's muted state=] if desired.</div>
<p>To <dfn class="export abstract-op" data-dfn-for="MediaStreamTrack"
id="set-track-muted">set a track's muted state</dfn> to
<var>newState</var>, the [=User Agent=] MUST run the following steps:</p>
<ol class="algorithm">
<li>
<p>Let <var>track</var> be the {{MediaStreamTrack}} in
question.</p>
</li>
<li>
<p>If <var>track</var>.{{MediaStreamTrack/[[Muted]]}} is already
<var>newState</var>, then abort these steps.</p>
</li>
<li>
<p>Set <var>track</var>.{{MediaStreamTrack/[[Muted]]}} to
<var>newState</var>.</p>
</li>
<li>
<p>If <var>newState</var> is <code>true</code> let
<var>eventName</var> be {{mute}}, otherwise
{{unmute}}.</p>
</li>
<li>
<p>[=Fire an event=] named <var>eventName</var> on
<var>track</var>.</p>
</li>
</ol>
<p><dfn data-export id="track-enabled" data-dfn-for="MediaStreamTrack" data-dfn-type="dfn" data-lt="track enabled state|enabled" data-lt-noDefault>Enabled/disabled</dfn> on the other hand is
available to the application to control (and observe) via the
{{MediaStreamTrack/enabled}}
attribute.</p>
<p>The result for the consumer is the same in the sense that whenever a
{{MediaStreamTrack}} is muted or disabled (or both), the
consumer gets zero-information-content, which means silence for audio
and black frames for video. In other words, media from the source only
flows when a {{MediaStreamTrack}} object is both
unmuted and enabled. For example, a video element sourced by a
{{MediaStream}} containing only muted or disabled {{MediaStreamTrack}}s
for audio and video is playing but rendering black video frames in
silence.</p>
<p>For a newly created {{MediaStreamTrack}} object, the
following applies: the track is always enabled unless stated otherwise
(for example when cloned) and the muted state reflects the state of the
source at the time the track is created.</p>
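<p>The following informative example sketches this division of control in
script. The variable <code>track</code> is a placeholder for a
{{MediaStreamTrackState/"live"}} {{MediaStreamTrack}}:</p>
<pre class="example">
// enabled is under application control: a disabled track delivers
// zero-information-content (silence / black frames) to consumers.
track.enabled = false;  // stop delivering real media
track.enabled = true;   // resume normal media flow

// muted is controlled by the source and User Agent and can only be observed.
console.log(track.muted);
track.addEventListener("mute", () => console.log("source temporarily unable to provide data"));
track.addEventListener("unmute", () => console.log("source is providing data again"));
</pre>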
</section>
<section>
<h4>Life-cycle</h4>
<p>A {{MediaStreamTrack}} has two states in its
life-cycle: live and ended. A newly created
{{MediaStreamTrack}} can be in either state depending
on how it was created. For example, cloning an ended track results in a
new ended track. The current state is reflected by the object's
{{MediaStreamTrack/readyState}}
attribute.</p>
<p>In the live state, the track is active and media
(or zero-information-content if the {{MediaStreamTrack}} is
[= MediaStreamTrack/muted =] or [= MediaStreamTrack/enabled | disabled =])
is available for use by consumers.</p>
<p>If the source is a device exposed by `navigator.mediaDevices.`{{MediaDevices/getUserMedia()}},
then when a track becomes either
muted or disabled, and this brings all tracks connected to the device
to be either muted, disabled, or stopped, then the UA MAY, using the
device's {{MediaDeviceInfo/deviceId}}, <var>deviceId</var>, set
`navigator.mediaDevices.`{{MediaDevices/[[devicesLiveMap]]}}[<var>deviceId</var>] to <code>false</code>,
provided the UA sets it back to <code>true</code> as soon as any
unstopped track connected to this device becomes un-muted or enabled
again.</p>
<p>When a {{MediaStreamTrackState/"live"}}, [= MediaStreamTrack/muted | unmuted =], and
[= MediaStreamTrack/enabled =] track sourced by a device exposed
by {{MediaDevices/getUserMedia()}} becomes either
[= MediaStreamTrack/muted =] or [= MediaStreamTrack/enabled | disabled =],
and this brings <em>all</em> tracks connected to the device (across all
[=navigables=] the user agent operates) to be either
muted, disabled, or stopped, then the UA SHOULD relinquish the device
within 3 seconds while allowing time for a reasonably-observant user to
become aware of the transition. The UA SHOULD attempt to reacquire the
device as soon as any live track sourced by the device
becomes both [= MediaStreamTrack/muted | unmuted =] and
[= MediaStreamTrack/enabled =] again, provided that track's
[=relevant global object=]'s [=associated `Document`=]
[=Document/is in view=] at that time. If the
document is not [=Document/is in view|in view=] at that time,
the UA SHOULD instead queue a task to [=MediaStreamTrack/muted|mute=] the
track, and not queue a task to [=MediaStreamTrack/muted|unmute=] it until
the document comes [=Document/is in view|into view=].
If reacquiring the device fails, the UA MUST
[= track ended by the User agent | end the track =] (The UA MAY end it earlier
should it detect a device problem, like the device being physically
removed).</p>
<div class="note">
<p>The intent is to give users the assurance of privacy that having
physical camera (and microphone) hardware lights off brings, by
aligning physical and logical “privacy indicators”, at least while the
current document is the sole user of a device.</p>
<p>While other applications and documents using the device
simultaneously may interfere with this intent at times, they do not
interfere with the rules laid forth.</p>
</div>
<p>A {{MediaStreamTrack}} object is said to
<em>end</em> when the source of the track is disconnected or
exhausted.</p>
<p>If all {{MediaStreamTrack}}s that are using the same
source are [= MediaStreamTrack/ended =], the source will be
[= source/stopped =].</p>
<p>After the application has invoked the {{MediaStreamTrack/stop()}}
method on a {{MediaStreamTrack}} object, or once the [=source=] of a
{{MediaStreamTrack}} permanently ends production of live samples to its tracks,
whichever is sooner, a {{MediaStreamTrack}} is said to be
<dfn id="track-ended" data-dfn-for="MediaStreamTrack" data-dfn-type="dfn" data-export>ended</dfn>.</p>
<p>For camera and microphone sources, the reasons for a source to
[=MediaStreamTrack/ended|end=] besides {{MediaStreamTrack/stop()}} are
[=implementation-defined=]
(e.g., because the user rescinds the permission for the page to
use the local camera, or because the User
Agent has instructed the track to end for any reason).</p>
<p>When a {{MediaStreamTrack}} <var>track</var>
<dfn data-lt="track ended by the User agent" data-for=MediaStreamTrack data-export id="ends-nostop">ends for any reason other than the {{MediaStreamTrack/stop()}} method being
invoked</dfn>, the [=User Agent=] MUST queue a task that runs the following
steps:</p>
<ol class="algorithm">
<li>
<p>If <var>track</var>'s {{MediaStreamTrack/[[ReadyState]]}}
has the value {{MediaStreamTrackState/"ended"}} already, then abort these
steps.</p>
</li>
<li>
<p>Set <var>track</var>'s {{MediaStreamTrack/[[ReadyState]]}}
to {{MediaStreamTrackState/"ended"}}.</p>
</li>
<li>
<p>Notify <var>track</var>'s {{MediaStreamTrack/[[Source]]}} that <var>track</var> is
[= MediaStreamTrack/ended =] so that the source may be [= source/stopped =], unless other
{{MediaStreamTrack}} objects depend on it.</p>
</li>
<li>
<p>[=Fire an event=] named <a data-link-type=event>ended</a> at the object.</p>
</li>
</ol>
<p>If the end of the track was reached due to a user request, the event
source for this event is the user interaction event source.</p>
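<p>The following informative example sketches how an application typically
releases a capture and observes out-of-band endings. The variable
<code>stream</code> is a placeholder for a stream obtained from
{{MediaDevices/getUserMedia()}}:</p>
<pre class="example">
// Observe endings caused by something other than stop(),
// e.g. the device being unplugged or permission being revoked.
for (const track of stream.getTracks()) {
  track.addEventListener("ended", () => console.log(`${track.kind} track ended`));
}

// Later, when capture is no longer needed, release the source explicitly.
// Note that the ended event is not fired in response to stop().
function hangUp() {
  for (const track of stream.getTracks()) {
    track.stop();
  }
}
</pre>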
<p>To invoke the <dfn class="abstract-op">device permission revocation algorithm</dfn> with <var>permissionName</var>,
run the following steps:</p>
<ol class="algorithm">
<li>
<p>Let <var>tracks</var> be the set of all currently
{{MediaStreamTrackState/"live"}} <code>MediaStreamTrack</code>s
whose permission associated with this kind of track (<a>"camera"</a> or <a>"microphone"</a>)
matches <var>permissionName</var>.</p>
</li>
<li>
<p>For each <var>track</var> in <var>tracks</var>,
<a href="#ends-nostop">end</a> the track.</p>
</li>
</ol>
</section>
</section>
<section>
<h3>Tracks and Constraints</h3>
<p>{{MediaStreamTrack}} is a <a>constrainable
object</a> as defined in the <a href=
"#constrainable-interface">Constrainable Pattern</a> section.
Constraints are set on tracks and may affect sources.</p>
<p>Whether <code><a>Constraints</a></code> were provided at track
initialization time or need to be established later at runtime, the
APIs defined in the <a>ConstrainablePattern</a> Interface allow the
retrieval and manipulation of the constraints currently established on
a track.</p>
<p>Once ended, a track will continue exposing a
<dfn id="list-of-inherent-constrainable-track-properties">
list of inherent constrainable track properties</dfn>.
This list contains <code><a href="#def-constraint-deviceId">deviceId</a></code>,
<code><a href="#def-constraint-facingMode">facingMode</a></code> and
<code><a href="#def-constraint-groupId">groupId</a></code>.
</p>
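<p>The following informative example sketches typical use of the
constraint-related API on a track. The variable <code>track</code> is a
placeholder for a video {{MediaStreamTrack}}:</p>
<pre class="example">
// Inspect what the source can do, what has been requested, and what is in effect.
const capabilities = track.getCapabilities();
const constraints  = track.getConstraints();
const settings     = track.getSettings();

// In an async context: request a different frame rate at runtime.
try {
  await track.applyConstraints({ frameRate: { exact: 15 } });
} catch (e) {
  // An OverconstrainedError indicates the required constraint cannot be
  // satisfied; the previously applied constraints remain in effect.
  console.log(e.constraint, e.message);
}
</pre>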
</section>
<section id="media-stream-track-interface-definition">
<h3>Interface Definition</h3>
<div>
<pre class="idl"
>[Exposed=Window]
interface MediaStreamTrack : EventTarget {
readonly attribute DOMString kind;
readonly attribute DOMString id;
readonly attribute DOMString label;
attribute boolean enabled;
readonly attribute boolean muted;
attribute EventHandler onmute;
attribute EventHandler onunmute;
readonly attribute MediaStreamTrackState readyState;
attribute EventHandler onended;
MediaStreamTrack clone();
undefined stop();
MediaTrackCapabilities getCapabilities();
MediaTrackConstraints getConstraints();
MediaTrackSettings getSettings();
Promise<undefined> applyConstraints(optional MediaTrackConstraints constraints = {});
};</pre>
<section>
<h2>Attributes</h2>
<dl data-link-for="MediaStreamTrack" data-dfn-for=