
21 March, 2023 Meeting Notes


Remote and in person attendees:

| Name                  | Abbreviation | Organization      |
| --------------------- | ------------ | ----------------- |
| Michael Saboff        | MLS          | Apple             |
| Kevin Gibbons         | KG           | F5                |
| Waldemar Horwat       | WH           | Google            |
| Daniel Ehrenberg      | DE           | Bloomberg         |
| Chris de Almeida      | CDA          | IBM               |
| Ashley Claymore       | ACE          | Bloomberg         |
| Guy Bedford           | GB           | OpenJS Foundation |
| Jordan Harband        | JHD          | Invited Expert    |
| Ben Allen             | BAN          | Igalia            |
| Nicolò Ribaudo        | NRO          | Igalia            |
| Philip Chimento       | PFC          | Igalia            |
| Jesse Alama           | JMN          | Igalia            |
| Eemeli Aro            | EAO          | Mozilla           |
| Luca Casonato         | LCA          | Deno              |
| Daniel Minor          | DLM          | Mozilla           |
| Asumu Takikawa        | ATA          | Igalia            |
| Ujjwal Sharma         | USA          | Igalia            |
| Sergey Rubanov        | SRV          | Invited Expert    |
| Peter Klecha          | PKA          | Bloomberg         |
| Richard Gibson        | RGN          | Agoric            |
| Justin Ridgewell      | JRL          | Vercel            |
| Frank Yung-Fong Tang  | FYT          | Google            |
| Shane Carr            | SFC          | Google            |
| Chip Morningstar      | CM           | Agoric            |
| Daniel Rosenwasser    | DRR          | Microsoft         |
| Istvan Sebestyen      | IS           | Ecma              |
| Willian Martins       | WMS          | Netflix           |
| Ben Newman            | BN           | Apollo            |
| Linus Groh            | LGH          | SerenityOS        |
| Ron Buckton           | RBN          | Microsoft         |
| Luis Fernando Pardo   | LFP          | Microsoft         |

Introduction

RPR: Chris de Almeida is proposed co-chair, and Justin Ridgewell is proposed facilitator. The election will be tomorrow.

RPR: Please fill out the sign-in form. The meeting will run 10 AM to 4:30, with a hard stop at 5 PM.

RPR: We don't have individual microphones because the entire room is wired in mics, so please leave the room if you want to have a side conversation. Remote attendees: Please speak up if you have trouble hearing; we have support from logistics folks.

RPR: Logistics through TCQ

DE: For today there is no transcriptionist. Hopefully we will have one tomorrow. My deepest apologies.

KG: I would like to ask for permission to take a recording, so I can tune an automated transcription system. I will keep the recording private, it will not be shared with any other human.

RPR: Any objections? Hearing none. I will be asking people to take responsibility for taking notes today in particular, and then hopefully tomorrow only fixups will be needed.

RPR: We'd like to provide better meeting summaries, in addition to the very verbose logs. If you have a look at the template and the notes now, you will see a place where the presenter can summarize the key points. So after you present, as part of the publishing process, we will be asking presenters to put in a little more effort to provide a summary.

DE: I wanted to make a quick procedure proposal. At the end of each topic, we should synchronously pause to write the conclusion. Historically we have been doing it asynchronously, but having the shared conclusion is kind of the most important part. How do people feel about this procedure?

ACE: I like it.

RPR: This would happen after the time box.

DE: It would not count against the time box, yes. You couldn't say, oh, we're out of time, so we can't write the conclusion; time for this is always set aside.

RPR: I think in this meeting we do have enough flex to do that. Thanks. All right, so that's noted.

RPR: The next upcoming meeting we have is in a couple months time. That will in fact be fully remote on the Chicago time zone.

Committee Housekeeping

RPR: So, we've got the regular housekeeping things to go through. First of all, we need to see if the previous meeting's minutes can be approved. Any objections to approving them?

(silence)

RPR: So, I think that we can consider that approved.

RPR: The next is adoption of this week's agenda. Any objections to that? [silence] That actually leaves us with just over an hour of spare time based on the current schedule, which is good. So there's a little bit of time for any overflow needed. All right.

Secretary's Report

Presenter: Istvan Sebestyen (IS)

IS: Okay. So now I will try to share my screen for a second. The bad news is that it is still very long, 22 slides. The good news is that I'm not going to present all of them, because the content is mainly an update of the information that I usually submit. So I can be very, very quick. I will try to concentrate only on the new and interesting things, and for all the rest, if you are interested, I will ask you to just read it through. Okay?

IS: So this is what has happened since the January meeting. This is the typical kind of presentation where I bring in an update of the usual information: the latest TC39 and Ecma GA documents. Then I will say something about the request for short summaries again, based on my experiences so far, which have already been positive; the status of TC39 meeting participation; standard download statistics, etc., which will take only a few seconds because not much happened in the first two months. What is important, and what has to come out of this meeting, is the status of the ES2023 approval. We have to think about exactly when to freeze the ES2023 specification, the royalty-free opt-out procedure, etc. That is rather important. Then the chair re-election: this is coming tomorrow, so I'm not going to speak on that. And a reminder of the five-year periodic review of the two fast-tracked TC39 standards; I have brought this up twice already, but it is unfortunately still relevant. Then, just a repetition without any changes of what we had so far: this is the list of documents ready on the TC39 file server. We have published there essentially duplicates of documents you have also seen on GitHub, so it is not terribly interesting for participants. There are also only two new relevant GA documents, also not terribly interesting. So I continue. Okay.

IS: Here, again, the usual explanation of why these lists are of interest: they are more interesting to Ecma members at large than to TC39 members, because TC39 members have this information on GitHub anyway. Now, about the short summary request. The good news is that from the January meeting agenda I got, for each discussed agenda item, the title, the conclusion, and the summary, and I have put everything together into the main part of the minutes. This is a sort of duplication, of course, but for those who are not reading the technical notes, only the main part, this is a good summary. And the good news is that even without a one-paragraph summary of what we have decided and what we have not, it is already good information content: the title plus the resolutions and summary tell a lot about what happened at the meeting. So I think we are already on a very good path, and once we get the additional one-paragraph summary, say for the next technical notes, we will be in good shape.

IS: The next one, this is important: the stages of the ES2023 approval. At some point in the very near future we hope to freeze the specification for ES2023, because the royalty-free patent policy requires that we put it out internally for two months. The approval of ES2023 at the General Assembly is on the 27th of June, so the latest date, theoretically if not practically, to put out the frozen version of the specification for Ecma members would be the 27th of April. I would suggest, of course, not waiting until the 27th of April, but finishing it as soon as we can. The idea would be to do it at this meeting or, if we cannot, then shortly after this meeting; let us try to do it before the first of April. A frozen version means that from the substantive point of view it has to contain everything that we want to have; from the editorial point of view, not yet, so we can still make editorial changes after that, which is not so critical. Once the freeze is out, then at the May TC39 meeting we can formally accept the ES2023 specification for approval by the Ecma GA; this would be a TC39 decision, probably a yes-or-no type of decision. So we have to be very careful not to run past the deadline, in order to have it approved at the General Assembly. In case we do run out of time, which in theory can happen but in practice never has, it is also possible to ask the General Assembly for a letter ballot on ES2023 after the 125th General Assembly meeting. So this is a central question that we have to discuss here and come to an agreement on, together with a plan for how we handle the approval process for ES2023.

IS: The next point is the chair group election. This is just a copy of what I found on GitHub. We have an excellent team of chairs and an excellent team of facilitators, etc. Everything is clear; if anything is not clear, then tomorrow she will join the meeting and will be in charge of conducting the election process. I think it will be a good one. So this is up first thing tomorrow. As I said, this is just copied.

IS: Regarding TC39 meeting participation, this is just a continuation of the old table, which I find very useful. You can see from the latest entry, at the bottom of the second page, the January 2023 remote meeting: it is still the same very nice, high level of participation. Since the meeting was fully remote, 27 companies participated, which is a typical number. So in terms of participation we are still in very good shape.

IS: Now, regarding the standard download statistics: those who are interested in the figures can look through them later. The figures so far show the usual pattern; we have only been able to collect two months of data, so it is not much. The same is true for the access statistics, one for ECMA-262 HTML access, the other for ECMA-402; these are the usual things. Then the TC39 plenary schedule, again just copied; it is unchanged. I will go immediately to the next slide, the format of the meetings. We have set this out for new readers who might be joining TC39 now; for the old hands it is not interesting, so I leave it just for reading. The same is true of the five-year periodic review of the two fast-tracked TC39 standards at JTC1 SC22. I have already presented this twice, but it is unfortunately still relevant. It is very important that JTC1 provides a positive response to this periodic review. One is the JSON standard, the other is the architecture standard. This is quite important from the Ecma point of view, because the entire fast-track philosophy is based on the architecture standard: it does not change, while we change the yearly specification, which we otherwise could not fast-track to ISO.

IS: The GA venue and dates are unchanged, so this is again just a repetition, uninteresting except for those involved in these matters. The same goes for the Execom meetings, also unchanged; the next one is on the 19th and 20th of April in Geneva.

IS: And this is the end of the presentation; I'm sure that I was within 15 minutes. Thank you very much. If somebody is interested in reading the slides, they can be downloaded from the TC39 website. I have completed this presentation.

SFC: I was wondering if you have referrer information for the spec links. I think I've asked before, because for the ECMA-402 specification in particular, many of the older editions get more traffic than the newer editions. If we had referrer information, we could maybe go figure out what the problem is and fix it.

IS: No, it is the same as before. For this reason, you know, I never count the first years; I leave them out. I have the feeling it is some kind of bots, I don't know. So I only take into account the latest four or five years, and that's it. We have no further information, and we were not able to follow what on earth was going on, so I have completely given up on that detail. I think we can still survive with that.

IS: Okay. I will stop sharing now. And then it is back to you.

RPR: Sorry, there are more questions on the queue. Next up is DE.

DE: So you mentioned this SC22 review, I guess for the JSON standard and the architecture standard. Given that we don't have any updates to those, and the ECMAScript suite is already a standard, that being the thing that refers to all of our other documents published by Ecma, what is the purpose of this review? And what is the risk if it goes badly?

IS: The risk would be - JSON is up now. And I am less worried about JSON, because de facto it is an extremely strong and extremely popular standard. So whatever ISO does, in my opinion, as far as the fate of the standard goes, they are doing more damage to themselves than to us.

DE: What is ISO even talking about, given that we're not making any changes? What are they going to -

IS: That's right, that's right. But every five years they have to say, yes, this is a good standard that we still want to keep. And if they say, no, we don't want to keep it, and kick it out, they would be very, very stupid to kick out the JSON standard. So it is not about changing it or whatever; it is about keeping it as an ISO standard or not keeping it. It would be really very, very stupid to kick it out.

DE: So, do we have this review periodically for the ECMAScript suite standard as well?

IS: Yeah, it is coming up at the end of the year, because it is a five-year review; every five years this comes up, and for that one it is coming up somewhere in the last quarter of 2023, I think. So I am warning you and everybody in TC39: if you have some kind of connections to your SC22 national bodies, then try to influence them so that they simply say yes to both of those in ISO, in order that they don't kill them. As I said, JSON is up already; the other one, ECMA-414, is not up yet, but it will be up for the five-year review at the end of the year.

DE: Are there any particular requests or concerns that you've heard from SC22 that we should be thinking about?

IS: Well, not really; it is only my concern, from having looked at how SC22 is working, that there is currently no working group associated with the Ecma standards, so they only come up in the plenary. This might be good or bad; it is just a potential concern, not necessarily a real one. It is just a warning. And we have one contact whom we know very well and who is very helpful in these things: Rex is the one who is following SC22. So we might contact him and ask what the situation is at the moment in SC22, and whether there is anything we really have to worry about, etc.

DE: Yeah, I think that would be good. I generally trust and defer to Ecma management here. I'm a little concerned about Rex, because when he was co-chair he was kind of privately reporting that we were somehow out of compliance with rules [which I disagree with] and kind of threatening that this would look bad at the ISO level, as you and I have discussed. So yeah, we should follow up on this, but mostly I trust and defer to Ecma.

DE: Okay, next question. You talked about complaints about the notes. I understand that people are concerned with the overall length of the notes, but I'm wondering who the audience for this is, and how detailed a summary people are looking for. Are they looking for things that go beyond the conclusion? And if so, what?

IS: Regarding the summary, the original idea, which we discussed with Patrick Luethi (actually Patrick came up with the idea, and I think it is a good one; we have been using it in the ITU), is just to have a one-paragraph summary for every contribution. And that's all.

DE: We've been producing a summary of the conclusions for years, and I'm wondering if there's anything beyond this summary of the resolution that you would like to see, or if the complaint is mostly that it's just too long.

IS: I found the conclusions okay, and I have also taken the conclusions into the main part of the minutes; I didn't have any problems with them or with the titles. But it would be just nice if, in addition, we could have this one-paragraph summary. As I said, my first reaction was that at this point in time, even without the one-paragraph summary, it got significantly better than what we had before. So I think we are on a very good way forward to creating minutes where people don't have to read the 250 pages of technical notes to get all the details.

DE: Yes. I'm glad you decided to start collating those conclusions. So, I take it that there aren't concrete requirements from anybody who wants to know more about what's going on, just that they want something shorter than the notes. And if there are such concrete requirements, you will come and report those to us, right?

IS: Mm-hmm.

DE: Last, IS mentioned the GA meeting. All Ecma members, not just Ordinary members, are invited and welcome to attend the GA meeting. It's just a Zoom call. We will post the link on the Reflector as a redundant strategy alongside the email that is already sent to all members. I strongly encourage anybody who can afford to attend the GA, even kind of in the background, calling in, to do so, because Ecma is really interested in serving all of its members. There are conversations going on about how to engage non-Ordinary members more and include them more in the decision-making process. Also, most decisions in Ecma are not made by a vote; they're just made by rough consensus. So you can definitely come and participate in the discussion if you're from a member organization, as its kind of delegate to the GA.

CDA: This was my question on the queue: is it only Ecma representatives who can call into the GA?

DE: It's up to the member organizations whom to send to the GA. It doesn't need to be the primary contact listed in the Ecma memento document, but if you want to attend, you should coordinate with your primary contact.

MS: You need to be part of an Ecma member organization. That's it.

DE: Yeah. But then concretely within IBM, where CDA is coming from, I think he'll coordinate with other IBM representatives like Jochen Friedrich to attend.

RPR: All right. Yes, I think we're at the end of IS's report.

TC39 Editor’s Update

Presenter: Kevin Gibbons (KG)

KG: All right, editor updates. There have not been any significant editorial changes since the previous meeting in January; there have been the usual minor tweaks and fixes, but nothing worth calling to everyone's attention. In terms of normative changes, we have landed two stage 4 proposals: symbols as WeakMap keys and change Array by copy. Those have been approved for a while and got stage 4, I believe in both cases, at the previous meeting, and after some bikeshedding about definitions and how to phrase things, especially in the symbols as WeakMap keys proposal, we have finally gotten everything to a state that we're happy with and landed those. And then this last one was technically normative, but certainly a bug fix: when we added named backreferences, we added them in such a way that they were present in the Annex B grammar and in `u` mode in the regular specification. As you'll recall, there is the actual regex grammar, which lives in Annex B, and then there's this completely fictional grammar that regexes have in the main specification, which is not used for anything anywhere as far as I am aware, but which we maintain separately for whatever reason. And we messed up the integration such that if you didn't have the `u` flag and you were looking at the non-Annex B grammar, then named backreferences were not allowed. We went ahead and fixed that without coming back to the committee because that had always been the intention.
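For illustration (not part of KG's presentation), here is a minimal example of the case the fix covers: a named backreference in a regex that has no `u` flag, which is now valid under both grammars as intended.

```javascript
// A named capture group referenced via \k<quote>, with no `u` flag.
// Per the fix described above, this pattern is grammatically valid in the
// main specification's grammar, not only in Annex B or in `u` mode.
const quoted = /^(?<quote>["']).*\k<quote>$/;

console.log(quoted.test("'hello'"));  // true: opening and closing quotes match
console.log(quoted.test("\"hello'")); // false: mismatched quotes
```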

KG: In terms of upcoming work, as we discussed in the previous meeting, we are going to work to reduce the amount of monkey patching that Annex B does, by essentially inlining a bunch of stuff from Annex B. So instead of saying, you know, this algorithm is actually different, go look in this other place, and then you have to manually reconcile the two algorithms in your head, we are going to put the Annex B behavior in the main algorithms with, you know, "if Annex B is enabled", or some other phrasing like "if the host is a web browser or supports this feature", then do the steps that Annex B says, otherwise do the non-Annex B steps. It will just be a single algorithm that you can read top to bottom without having to figure out how to reconcile the two. The remainder of the work is fairly similar. We are making progress on consistency in general. Also, we made some progress on clarifying execution contexts since the last meeting. Otherwise, a very similar list of work.

KG: Last and most important: we are cutting ES2023. We are freezing it, or rather we have frozen it, I should say. We are not expecting any more significant editorial changes; there will be at least a couple of very small editorial tweaks that will land, but nothing large. Which means that the patent opt-out period is starting now. The next meeting is in very slightly fewer than 60 days. Normally this is a 60-day period, but we would like to get the spec approved at the meeting that is in two months. So please ensure that if you feel the need to do any review of this, you do it in advance of the next meeting, rather than using the full 60-day period, so that we are able to get formal approval at the next meeting. And then, after the opt-out period has run, we will ask for official approval.

RPR: Any other questions for Kevin? Okay, all right. Thank you for that.

Summary

A number of fixes and cleanups have been applied to the specification text. No further significant changes will be made before ES2023 is cut. We will be starting the IPR opt-out period now, and ask for approval next meeting.

ECMA 402 Update

Presenter: Shane Carr (SFC)

SFC: So I think most people in this room have seen at least part of this slide before, but just in case anyone hasn't: ECMA-402 is the JavaScript internationalization library, the Intl object. As you can see here, we can do things like localize dates and date formats into your favorite locale and favorite region. So how is Intl developed? It's a separate specification, but all proposals move through the TC39 stage process. We have a monthly phone call to discuss details, and you can find more information at these links. So here's TG2: USA and RGN have been the editors for the last year, I'm the convener, and then these are the delegates; I copied the attendance lists from the last two meetings and merged them to get this list. We've been getting fairly solid attendance lately, which is great. So thank you everyone, and thanks for all the contributions from the delegates.
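As a quick illustration of the kind of localization SFC describes (my sketch, not from the slides; the exact output strings assume an engine with full locale data):

```javascript
// Formatting the same date for two locales with Intl.DateTimeFormat.
const date = new Date(Date.UTC(2023, 2, 21)); // 21 March 2023
const opts = { dateStyle: 'long', timeZone: 'UTC' };

console.log(new Intl.DateTimeFormat('en-US', opts).format(date)); // "March 21, 2023"
console.log(new Intl.DateTimeFormat('de-DE', opts).format(date)); // "21. März 2023"
```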

SFC: So, ES2023. We just got an update on the ECMA-262 side of this. I was wondering if Ujjwal or Richard has an update to share on the ECMA-402 side of ES2023.

USA: We have a few remaining work items for ES2023, most importantly stage 4 PRs, but we should be wrapping it up soon, hopefully. It will be ready before April.

SFC: Thank you for that. Is there anything else that you need from this body to prepare the ES2023 draft?

USA: There is a PR, I suppose you'll get to that later. Consensus on that would be great.

SFC: Okay, so let's look at pull requests.

SFC: There's one normative pull request. It fixes issue #402 in the ECMA-402 repository, and it's this one; I'll go ahead and switch over to these slides that USA put together. It changes a little bit about the default hourCycle computation, so that it no longer resolves to non-preferred formats; as you can see in the example here, there's some funny stuff going on with this logic. One of the issues here is that the CLDR 43 update changes a bit about how some locales get their default hour cycle. One issue is that this is too new for TG2 consensus: this PR came in after our last TG2 meeting, so we have not yet had a chance to discuss it in TG2. But as per our formal process, we are still asking this body for feedback on this PR and for tentative consensus. It's a one-line change; I can open up the actual pull request (PR 758), and it's a one-line change in the specification right here, which changes the hour cycle resolution logic. If this seems okay to people, then we'll probably achieve TG2 consensus at the next TG2 call in a few weeks, and I'll ask for consensus at the end of the presentation.
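As context (my sketch, not from the presentation), the hour cycle that this resolution logic produces is observable through `resolvedOptions()`, which is where the engine differences discussed here show up:

```javascript
// The default hourCycle a locale resolves to is reported by resolvedOptions().
const defaultCycle = new Intl.DateTimeFormat('en-US', { hour: 'numeric' })
  .resolvedOptions().hourCycle;
console.log(defaultCycle); // "h12" for en-US

// An explicit request is honored and is also visible in resolvedOptions().
const explicitCycle = new Intl.DateTimeFormat('en-US', { hour: 'numeric', hourCycle: 'h23' })
  .resolvedOptions().hourCycle;
console.log(explicitCycle); // "h23"
```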

SFC: But in the meantime, proposal status. We keep track of all of the proposals on this wiki page. We've got two stage 4 proposals that are shipping in the ES2023 edition: the Intl enumeration API, as well as NumberFormat v3; as Ujjwal mentioned, he is working on merging these into the specification. We've got two stage 3 proposals: Intl Locale Info and DurationFormat. We took the recommendation from this group to change things to be functions instead of getters, so FYT is working on integrating that change. For DurationFormat, there have been a number of mostly editorial pull requests lately to resolve various issues; there was a big update on that at the last TG1 meeting in January. So these are both moving along, and I'm hoping they both land at stage 4 later this year. We also have a couple of stage 2 proposals: era display, which I've been presenting on, as well as the eras and month codes proposal; Frank gave an update on that in January, and it's also related to the Temporal work. We also have a bunch of stage 1 proposals. These may at some point advance, but stage 1 is largely a place where we keep track of experimental work that people have been doing and exploring; only the stage 2 and above proposals have a concrete path.

SFC: So, let me go back to the slides. Okay, one thing I really wanted to highlight today is User Locale Preferences (WICG/proposals#78). It's not strictly a TG1 or TG2 proposal by itself, but it does touch other parts of the web standards space, and there's a lot of overlap in terms of personnel and interests, so this link is a good place to go. The core question that we're exploring is: how do we improve the internationalization experience in the web platform in a way that respects user privacy? Anyone who's keen on privacy concerns knows that there are a lot of concerns about things like Accept-Language, which is very fundamental to how internationalization works on the web platform. And most who work in internationalization know that your locale, your language and region, is important but is only a subset of the information that apps normally use to give you a higher-quality internationalization experience. There are also many other things, such as your hourCycle and numbering system preference, your calendar preference, measurement units, and so on, which are all part of the bag of options that we collectively call Unicode extensions or user preferences. So we've been exploring how to make this work on the web platform. We had a big discussion about this at the last TG2 meeting earlier this month, and I highly encourage anyone whose interest my comments sparked, on either the privacy or the internationalization side, to go take a look at this proposal, weigh in, and help us find the best path forward that satisfies all the requirements. So thank you for that.

SFC: If you'd like to get more involved with TG2, you can; that's our GitHub page. One thing that is always helpful is help writing MDN documentation, and especially writing polyfills in JavaScript. Format.JS is a really great polyfill, and community contributors from time to time contribute polyfills for our proposals; this is a really great way to get involved with internationalization as well as standards work in general. You can read the spec; we've gotten really good feedback from polyfill authors before on some proposals. It helps the specification, because it's an extra implementation, extra eyes on the specification actually implementing it and making sure that it works. So it's a really great way to get involved. That's my plug for writing polyfills. And if you'd like to join our call, mail this address or talk to me or anyone else on ECMA-402 and we can get you hooked up with that. So that's my update.

SFC: And I'll go back now to the pull request (758) to ask if there are any concerns, and whether we're okay giving conditional consensus on this, assuming TG2 achieves consensus on it at our next call in a few weeks.

DE: So, I would like decisions made by the plenary to be informed by discussion in TG2. I would be happy to fully delegate the capacity to take consensus decisions on items like this from TG1 to TG2 if we want to. But I don't see the benefit in having conditional consensus from the committee here based on a future discussion that hasn't taken place in TG2 yet; I'm just not comfortable saying anything at all without TG2 talking first. I would be okay if we say that, in general, TG2 can just take such decisions. So I'm uncomfortable with conditional consensus here, just like I expressed discomfort with conditional consensus for the Temporal issue we discussed last meeting.

SFC: That's fair. To be clear, I wouldn't normally ask for conditional consensus, except that USA did mention that he would like this to land in ES2023, and there's not going to be another TG1 meeting before we have to submit that. If we don't achieve consensus, this will be saved for ES2024, which is probably fine.

DE: I think we shouldn't worry too much about the annual version cuts; we should mostly worry about the current draft spec being in good shape. And I would encourage TG2, or anybody, to propose this process change, allowing TG2 to take decisions if something smaller like this comes up in the future. I just don't see how we could meaningfully give feedback on this yet, if that's what we're being asked to do.

SFC: Okay.

USA: I just wanted to say, DE, that I do understand what you mean here. So far we have followed what is mostly a two-pronged model, where we discuss things in both venues, and they could happen out of order, but it's understandable if we decide to always discuss first within TG2 before coming here. That said, as SFC mentioned, this is a particular kind of issue, because what we have right now is an incompatibility between different engines, and that is because of a bug in the spec, which means that we resolve to hour cycle h24, where we know that there is actually no current time zone that does follow that hour cycle. So, because of that, we felt that this was a pretty straightforward approval: there's no real time zone that follows what we're currently resolving to.

DE: So yeah, three things. First, I was unaware that you had previously asked for consensus in this direction before, and I apologize for objecting to it now and not having objected to it before. Second, I think you should care more about the current draft spec, which will be the path towards really fixing the browser compatibility issue, rather than the annual version cuts. And third, I would be okay with saying that the committee will just defer to TG2 on this matter. I'm more okay with that than saying we agreed to it, because I haven't done this technical review work, and I think consensus on this particular issue would kind of imply that we had done this review work as a committee.

FYT: I'm the author of the PR, and I think DE's point is a good one. I was actually surprised this was brought up here; I think we really shouldn't do it in this reverse order. I agree with that.

SYG: +1. The procedural thing Dan said, I completely agree with. I think if you're going to ask for conditional approval based on some discussion that hasn't happened, that is no different than saying you would like stage advancement power within TG2, and that is fine. I think that would smooth things over: most of these things you bring back are kind of pro forma anyway, and those of us who have expertise should already be in TG2. So it seems fine to ask for the general power, but this ad hoc, one-by-one conditional thing doesn't make any sense.

SFC: For this particular PR and issue, I would rather discuss it in TG2. I have it on the slide because the normal procedure we follow is that we take all the normative PRs and batch them up into these slides. It's a bit unusual to have a PR that's opened between TG2 and TG1, but this is normally the process that we follow when preparing these slides. I'm totally fine with discussing this in TG2 and then coming back here. In terms of a procedure change, I'm not ready to propose such a change at this point, so there's not much more to discuss here. We'll just discuss this in TG2 and then come back in two months and ask for consensus.

DLM: Yes, I think from Mozilla’s point of view, we prefer to have the actual discussion of proposal advancement in front of the larger committee, and that's so we have the opportunity to do internal review; we don't have that same review process in place for TG2. So for us to be able to speak on behalf of SpiderMonkey or Mozilla, it really needs to happen in this committee.

SFC: Wearing my hat as a Google delegate, I will +1 that as someone in that position, in the sense that the way that I run the TG2 meetings is not as formal as the TG1 meetings. We don't have an agenda advancement deadline, and I would rather not implement one because that's a lot of extra process. So it is much easier from an organizational point of view to just say there's only one body that has actual advancement authority and TG2 provides recommendations; that is a much easier operation to run than trying to formalize things. If we were to say TG2 actually has stage advancement power, that would also require changes to the processes in TG2, which is not necessarily something that I'm willing to sign up for.

RPR: All right, I think we've drifted a little bit; we're talking about a potential process change here, so we're moving away from the original item.

SFC: Everyone, please get involved with ECMA-402. Thank you.

Summary

ES2023 cut is on track. Please see the user preferences proposal, User Locale Preferences (WICG/proposals#78).

Conclusion

No consensus for PR #786 due to not having been discussed in TG2 earlier. In the future, everything that TG2 brings to plenary for consensus should be discussed in TG2 first, given hesitation around approving things "conditionally pending TG2 discussion". Annual version cuts of standards are not usually considered a reason that a change is urgent. Process changes for TG2 were discussed, but it wasn't on the agenda and leadership would like to keep things similar for now.

ECMA-404 Status Update

Presenter: Chip Morningstar (CM)

CM: So I have it on good authority this morning from IS that JSON is "an extremely strong and extremely popular standard". So there's that. As usual, nothing newsworthy. ECMA-404 remains an island of tranquility in a world gone mad.

Conclusion

  • No newsworthy changes (as usual)

Test262 funding status

Presenter: Philip Chimento (PFC)

PFC: I wanted to share an update about the funding status of test262, and the composition of the maintainers group. I apologize for having these slides late. I think I shared them with the other maintainers on Friday, and things have been really crazy with traveling.

PFC: I'm in the maintainers group of test262, so are several other people in this room and on this call JHD is here, RGN on the call, I think, there are others as well.

PFC: As an overview of test262, it is the conformance test suite for ECMA-262 and ECMA-402. This is an effort that helps all of us do the work that we do in this committee, for the good of the ecosystem. Having this test suite helps to ensure interoperability in implementations, prevent bugs in implementations that result in discrepancies between them, and it helps us find bugs in the proposals before implementation starts. As for the people spending the effort to make all this possible, there's a certain amount of maintenance necessary. There's a maintainers group that consists of some contracted maintainers, previously people from Bocoup and currently from Igalia, of which I'm one. The contract is in partnership with Google and covers 0.4 FTE of work. There are also other maintainers who have their time paid for by their employer to spend on test262, and there are volunteer maintainers who do all of their maintenance work in their free time. Other than the maintenance work there is also test writing. Some of the test writing is done by the maintainers, some of it is done by people working on the implementations, some of it is done by the authors of proposals, and some of it is done by community contributors. So those are all the sources of effort that come together to make the test suite.

PFC: Here's a little slide with numbers about the previous calendar year. About 450 commits, about 300 pull requests merged, about 3,500 new tests in the suite, and on average it takes a little over a week from the time a pull request is created until the time it is merged. That's an average; obviously, there are much shorter ones and ones that are much longer. In 2022 we also created the maintainers group with governance policies. Before that things were a bit more ad hoc, and this is good because there's a process now for people to get involved and receive the permissions and the trust that they need in order to be able to become maintainers. There's an RFC process for changes to the test suite that affect consumers of the test suite, such as implementations. We have a new policy for the staging directory, which allows proposal authors to contribute tests that are already correct but maybe still need some work to get into the right format, or that are correct but not complete, and have these available for implementations to run so that we get alerted of problems with interoperability earlier rather than later.

PFC: I mentioned the contracted maintainers. The contract is funded by Google at the moment, but unfortunately as of April 1st Google can no longer contribute this. So, what does this mean for TC39? What's on this slide is kind of my opinion or projection about what happens without this contract. Some of the TC39 proposals will sometimes have their tests written by proposal authors, with time paid by the proposal author's employer, and so it's likely that those proposals will still have test coverage. Examples of this are the ongoing test coverage for Temporal, or the test coverage for Change Array by copy, which recently landed. The part where I expect things to change would be the coverage for proposals that are not funded in some way by proposal authors' employers. Some of the proposals I or my colleagues from Igalia have been working on writing tests for are Array.fromAsync, or duplicate named capture groups. So, if no funding is available, it means that those tests need to be written by someone else in order for those proposals to advance to stage 4, and it seems likely to me that this is going to be absorbed by the proposal champions. So that means all of you. There's the other work that the maintainers group does, such as pull request reviews and other maintenance. So it's not like the maintainers group is going away completely; people are still participating, but we can generally expect less availability, and the suite becomes more reliant on maintainers with limited and/or unpaid time. So it seems likely that that kind of thing would move a lot slower.

PFC: The objective right now is not to get consensus on one of these paths, but just to put up ideas so that we can have some short discussion about them. I'll go through these a little and then SYG would like to discuss one of them. Ideas that have come up in discussions are: just continue the best we can with reduced involvement from maintainers. That means more work for the proposal authors, and proposals might go slower. We might consider process changes, although we're not proposing any process changes at this time; that would require a lot of preparation. Another idea is to look at WPT (web platform tests). The policies they have for accepting tests lean more towards "best effort", which is not what we do in test262, but we could consider being more like that. Another option is that we get more paid contributors, so if you like that, please take this message back to your company and advocate for it. Other ideas are possible. SYG, over to you about this slide.

SYG: Sure, yeah, thanks PFC. Due to "different economic realities", or whatever the phrase du jour is, we can no longer fund the test262 contract. The staging stuff put into place last year is helping test262 to keep the velocity that it has had with funding, with implementers' help, by having implementers be able to directly commit less structured tests into the staging directory. That is all in service of making test262 more like WPT. Now, you might think I am wholeheartedly pushing for that direction, but I think that test262 quality as a whole, and historically, is higher than WPT's. There are way more web APIs, and interop by and large has been a much bigger problem for web APIs than for ECMA-262. And test262 is a huge part of why that is, because the quality of test262 has been historically very high and very thorough, and it tests spec corners that implementers, one, don't necessarily want to write tests for, and two, probably aren't in the right mindset to think of: they aren't spec authors, they aren't thinking of spec coverage in the way that some test262 contributors in the past have been thinking about spec coverage, coming up with tests that cover all the corners. Implementers test different corners of the implementation, but not every corner. So I think what we have today, with no extra funding going forward, would naturally lean towards making test262 behave more like WPT. But I want to make a pitch here that I think it would be a good idea to keep the test262 quality as high as it has been historically. And I don't think that level of quality is easily reached without extra funding. That is all I would like to say.

PFC: Thanks. So that was the end of my slides. I don't know if people would like to have a discussion right now in the meeting or you'd like to discuss things more informally, during lunch or something. I will be available for that, but is there anything on queue right now?

DE: I want to give a +1 to SYG's comment. I'm happy about the addition of the staging directory; I think we'll make more use of it in the future. But I'm also very happy with the work that Igalia has done over the past year in terms of improving test262 maintenance: the reviews and tests are a good thing, and they ensure that coverage is increased. And it's great that you have this fast turnaround time; there were previously issues with that. So, I hope we can find some way to collectively fund this. Bloomberg already funds the writing of many test262 tests; some of the examples given of funded proposals with tests are tests that we funded. That's not to say that all of them are, but I think this logically makes sense as a shared burden. One thing that some standards bodies do, like Khronos, the standards body behind WebGL, is that the standards body itself contracts with a provider to write the conformance test suite, and that's one possibility. I think if we went to ECMA and said that, the first response would be, well, why don't you have the committee members pool resources separately? So I think that's another thing to consider. I don't know how congruent that is with the current economic environment. Probably something more to discuss at lunch.

SFC: Just to +1 that comment. I also wanted to raise ECMA-402 funding; it's another thing where I've been able, in the last several years, to successfully pitch to my leadership that it is very important that we continue funding Igalia's work on this subject. But it's also the type of thing where that's not a very good long-term solution, because it really should be a collective effort. I believe that we would all say that test262 is very important, and that ECMA-402 editorial work and such are very important, and they should be collective things and not carried as a burden by any particular organization. So, just +1 to that.

Speaker's Summary of Key Points

  • The contracted maintenance for test262 is ending (previously sponsored by Google). We discussed various possible ways to continue but we don't have a way forward yet.
  • There was broad recognition of the value of professionally maintained test262 by the committee.

Test262 Updates

Presenter: Jordan Harband (JHD)

JHD: We've updated a bunch of tests, so if you are championing a proposal please keep up to date about the test status of your proposal. That also means if there's something in the proposals table that isn't accurate about describing the test status of your proposal, please send a PR to update it. We documented our test262 RFC process and our maintenance practice rationales; feel free to read that, I'm sure you will find it thrilling. We merged the async helpers implementation, so there are test helpers for some asynchronous behaviors. There's a number of proposals with asynchronous behavior that will have easier test authoring as a result of those helpers.

JHD: It would be nice if - this is a mild proposal that when we approve normative changes to proposals, that we put something in the notes indicating who is taking responsibility for filing a test262 issue to track those changes, or PR or whatever, but it would be nice to have somebody for each proposal, for each set of normative changes to kind of drive that forward and make sure that it's tracked. Not as a strict requirement, but just as like, it'd be nice if we tried to do this. That's the end of my list.

DE: I thought we already had a strict requirement that normative PRs need to have test262.

JHD: Normative PRs to the spec but these are like when we're talking about normative changes to a proposal, like when Temporal does its normative updates, things like that. That's a case where it's sort of a gray area in our process where it makes sense that there should already be test262 tests but we've never tried to enforce that.

DE: I guess in our current process the line is you need tests at stage 4. So, when something is stage 4, we would have tests via our current process, right? We could consider, as other people alluded to, having tests earlier, but that's a different topic.

JHD: Yeah. So given that I think anyone who wants to approach that topic of requiring test sooner separately, please go for it, but my request right now is not about getting the tests done, it's just making sure there's a tracking issue somewhere in test262, so somebody can take a look at it.

DE: Okay. So this is a reasonable request for this subset of stage 3 proposals that have tests that kind of purport to be complete. It's not a requirement for landing tests with them. But we do need to track it if those tests need completeness.

JHD: That sounds like a great way to phrase a conclusion.

Summary

Test262 has updated tests, and landed async test helpers. Please maintain your test status in the proposals repo table.

Conclusion

When a Stage 3 proposal is trying to maintain complete tests [not a requirement until Stage 4], if a normative PR gets consensus in committee, then please file a tracking issue/assign a person to restore test coverage.

Reminder to enable GitHub 2FA

Presenter: Jordan Harband (JHD)

  • No slides

JHD: My motivation here is that I want to require two-factor authentication in the TC39 org, but if I check that box it immediately evicts anyone from the org who doesn't have 2FA turned on which would ruin all of the organization I've done of the member lists and everything. So please if you have not yet enabled two-factor on your GitHub account, enable it, it's actually super convenient at this point. You can hook it up to touch ID, to face ID, to a physical key, you can hook it up to Google Authenticator or 1Password, or something like with the seed for a random code. I think you can even set it up to just shoot you an email or something and you click on that every time, and you can have any or all of these methods enabled. So please go make sure you have two-factor on.

Speaker's Summary of Key Points

  • Enable two-factor authentication on Github

Iterator helpers

Presenter: Kevin Gibbons (KG)

  • proposal
  • Note: Topic is split into three sections

Validate arguments

KG: So iterator helpers, as you may recall, reached stage 3 a couple of meetings ago. That means that implementations are starting to go through it, and in some cases have noticed things there that are a little bit weird. We will be talking about three different things over the course of this meeting, and I've split them out into separate items, mostly because I didn't realize when I had the first one that I was going to have so many different things. Anyway, this first one is basically an observation that the way the spec is currently written, it consumes the receiver, in the sense of looking up the .next method on the receiver, which it will call later, before it validates arguments. And this is odd for two reasons. One is just that it's different from the code that you would naturally write, where you would do all of the argument validation and then do the iteration. And second, it is generally inconsistent with the pattern that we have more or less followed of doing all argument validation before we start actually consuming anything.

KG: So I have a pull request. All it does is validate arguments, for example validating that the argument is callable in the case of .map, before it looks up the .next method on the receiver. I think this is a small and generally positive change, but it is a normative change to a stage 3 proposal, so I need committee consensus for it. If there's nothing on the queue… which there is not, I would like to formally ask for consensus. And I guess it's part of our new process that we're supposed to have at least one explicit second.
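The proposed ordering can be sketched in userland roughly as follows. This uses a hypothetical `mapHelper` function standing in for the spec's `Iterator.prototype.map`; the names are illustrative, not spec text:

```javascript
// Hypothetical userland sketch of the proposed ordering for a helper
// like Iterator.prototype.map. Names here are illustrative.
function mapHelper(iterator, mapper) {
  // 1. Validate arguments first...
  if (typeof mapper !== "function") {
    throw new TypeError("mapper must be callable");
  }
  // 2. ...and only then look up .next on the receiver.
  const next = iterator.next.bind(iterator);
  return {
    [Symbol.iterator]() { return this; },
    next() {
      const result = next();
      if (result.done) return result;
      return { done: false, value: mapper(result.value) };
    },
  };
}
```

With the old ordering, passing a non-callable mapper would still have triggered the `.next` lookup (observable via a getter on the receiver); with this ordering, the TypeError is thrown before the receiver is touched at all.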

RPR: You have messages of explicit support from CDA, LCA, and JHD.

KG: Thank you very much. Also, sidebar, I do hope that we can spend some time in the future more explicitly documenting these sorts of conventions, because there are a number of them that are not written down anywhere. I know we've mentioned that before. This is just another example of, it's important that we actually get around to that at some point.

Summary

An oddity in the iterator helper specification meant that the next method on the receiver was looked up before the arguments were validated, which is different from how you'd normally write similar code and also is inconsistent with most of the specification, which does argument validation before consuming anything. A PR is proposed to correct this.

Conclusion

Consensus on the PR.

Closing iterators which have not been iterated

KG: Alright, so moving on to the next item, which is iterator helpers: closing iterators which have not been iterated. This is another one of those things that was noticed during implementation. The way iterator helpers are currently specified is that they are basically generators, sort of spec-internal generators. This makes it, I think, clearer to readers what they're supposed to do, because this is closer to what a natural implementation in userland would be. It's also just much easier to write these down as generators than as iterators that track all of their state explicitly instead of just closing over stuff. Unfortunately, one of the ways that iterator helpers are supposed to be different from generators is that if you construct a helper, so for example you call .map to get a helper, and you don't iterate over it at all, if you just immediately close the helper by calling the return method on it, it should close the underlying iterator. As currently specified, that does not close the underlying iterator. This is just a consequence of being specified as generators, because for generators, if you close the generator there's nothing to do; it just moves the generator into its closed state. You couldn't possibly be within a try/finally, or, soon, a block with a using statement, where there would be resources that would need to get cleaned up on close. So if you close the generator, no code runs. But if you close an iterator helper, that is supposed to close the underlying iterator as well. We didn't notice this because we were really thinking of them as generators, but this is a place where they are supposed to be different. So I have a PR that fixes it, which adds some special logic to return: if you close an iterator helper that has not yet started, it will explicitly close the underlying iterator. There's a bit more bookkeeping, in that you need access to the underlying thing at this point, which previously was only closed over, but the only important normative component is that if you try to close an iterator helper by calling return on it before you have actually started anything, then it will explicitly close the underlying thing. Again, I think this is something that we should always have done and just failed to do because of how the spec was written.
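A minimal sketch of the fixed behavior (illustrative code, not the spec text): a take-like helper whose return method forwards the close to the underlying iterator even before any next() call. A plain generator cannot express this, since closing an unstarted generator runs no user code:

```javascript
// Illustrative sketch, not the spec text: a take-like helper whose
// `return` closes the underlying iterator even if the helper was
// never advanced.
function takeHelper(underlying, n) {
  let closed = false;
  return {
    [Symbol.iterator]() { return this; },
    next() {
      if (closed || n <= 0) return this.return();
      n--;
      const result = underlying.next();
      if (result.done) closed = true;
      return result;
    },
    return(value) {
      if (!closed) {
        closed = true;
        // The important part: forward the close to the underlying
        // iterator, even before any next() call was made.
        underlying.return?.();
      }
      return { done: true, value };
    },
  };
}
```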

MM: I like this, but since it's different from the consequence of just writing it straightforwardly as a generator, I'm wondering what the implications are for emulating it accurately in user space right now.

KG: I don't think it is possible to emulate these accurately as generators either way.

MM: Do you have a faithful userspace emulation of this? JHD?

JHD: I'm typing it as we speak, so I can probably let you know in an hour or less.

MM: Yeah, okay. I would like to wait on that, especially since it's an hour away and we're about to have lunch. I would like to wait for that emulation before simply going ahead with this.

KG: I can guarantee you that it's possible to do such an emulation. Not as a generator, but you can write user code that is a faithful implementation of this.

MM: Yeah, not as a generator is fine, as long as there is nothing terribly surprising about the user code. I'm okay right now, and we can revisit if there's a surprise once JHD writes that.

KG: That sounds good. Okay, so I'd like to ask for consensus for this change assuming that it does not prove to be unexpectedly complicated. Looks like there's a reply from SYG.

SYG: I was going to say what Kevin said. I can also guarantee that it is possible to do in user code, it's just like, it's not going to be just typing a for-of thing in a generator and expect that to work though.

MM: Okay. Okay, so I'm pretty happy. So yeah, I approve modulo possible surprises once JHD writes his implementation.

DE: JHD, is this part of the complete es-shim implementation of iterator helpers?

JHD: Yes

DE: Great, so there's a complete es-shim implementation of iterator helpers that you can bring up to date. All right, would you like to repeat the request?

KG: Yes, I would like to ask for consensus, and in particular to get at least one message of explicit support for this change. Would anyone like to explicitly support?

MM: Explicit support.

KG: Thank you, MM. Okay. I hear explicit support and no objection, so I will take that as consensus for this change.

Summary

A bug in the iterator helpers spec would lead to an underlying iterator accidentally not being closed in the case that the helper was closed before iterating it at all. A PR is proposed and accepted which will fix this.

Conclusion

Consensus on the PR

Iterator helpers: renaming .take / .drop

Presenter: Michael Ficarra (MF)

MF: Okay, so we have had a request from the community to re-evaluate the naming. If you want to follow along there the issue is 270. As background, we have two methods called take and drop. Take takes an iterator and a number of elements and produces a new iterator that is exhausted after that number of nexts. Drop takes an iterator and a number of elements and nexts the underlying iterator that many times and then yields all of the remaining elements from the underlying iterator.

MF: So these are usually called take and drop. They're sometimes called by some other names, but will get into that detail later on. Also necessary background: in all the iterator helpers methods that consume the underlying iterator, when the consumption is done, they close the underlying iterator, they don't just stop iterating.

MF: That is true of take here as well: if take completes, the underlying iterator is closed, so it can't be used as a way to just advance the underlying iterator a certain number of times and then reuse the underlying iterator. There are some community members that were trying to use take in that way: they were looking to reuse the underlying iterator after exhausting the produced helper iterator. They claim that if the name was something else, like limit, they might not have made this mistake. So this comes from an actual user request, and there are a number of supporters of that rename on the thread.
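The confusion can be illustrated with a sketch. This uses a hypothetical standalone `take` function mirroring the helper's close-on-completion behavior; it is not the actual `Iterator.prototype.take` API:

```javascript
// Hypothetical standalone sketch of take's close-on-completion
// behavior (not the actual Iterator.prototype.take API).
function take(source, n) {
  const values = [];
  while (n-- > 0) {
    const result = source.next();
    if (result.done) break;
    values.push(result.value);
  }
  // Like the spec'd helper, close the source once we're finished:
  source.return?.();
  return values;
}

function* numbers() {
  let i = 1;
  while (true) yield i++;
}

const src = numbers();
take(src, 2);   // [1, 2]
// The generator was closed, not merely paused, so it can't be reused:
src.next();     // { value: undefined, done: true }
```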

MF: So, here's some data that I collected on use of the names for these two operations in other languages and in JavaScript libraries. You can see that by far the most common names for both of these are take and drop. And in particular, the contentious one of "take" is almost universally used. The only other alternative that is used more than once is "limit", which was the one that was being suggested. So on that point, I think if we renamed to "limit", we should also rename "drop". I think everywhere where drop was used the name of take was "take". So there's kind of an implication there. Even if we do make this rename, this doesn't allow us to use take and drop for other operations. We should consider those names dead. We should no longer use "take" or "drop" for anything.

MF: Something else worth considering while we're considering these renames is that we have plans for future proposals for takeWhile and dropWhile which are typically called that: takeWhile and dropWhile. If we renamed to limit and skip as possible alternatives, you would have limitWhile and skipWhile, which are names which basically don't exist anywhere else. That might be a reason not to make the rename. And that's it for the presentation. So what we are considering is renaming take to limit and drop to skip, or possibly, some other choice. I can seed the conversation by saying that my opinion is that given the data that I collected, it looks like take and drop really are far too common to ignore that naming precedent. And yes, some people may draw incorrect inferences from it, but renaming it would probably have more downsides from having a less familiar name. So, do we have feedback?

PDL: Yes, so my first question is: do other languages that use take and drop also close the underlying iterator? And what's the motivation for doing that? Because it seems like an odd thing, and I wonder why.

MF: The idea of the underlying iterator being closed is kind of a unique thing to JavaScript iterators. The other question was: if somebody did want an operation that advanced an iterator by a number of elements, could we add something like that? I'd be fine with doing that; that's just not included in the initial iterator helpers MVP.

PDL: I would support the rename, and possibly think of using take and drop, in a separate proposal at some point, for something that does not close the underlying iterator.

MF: I would not be okay with using take or drop to do anything but what they're currently specified to do. If we do the rename, we should not use take to mean this new thing because it's very, very common in other languages for take to mean the thing that it currently means.

PDL: Well, except that other languages don't close the underlying iterator, so not closing would be the one thing that is in line with everything else. Okay, you're saying it would still be the same, other than not closing the underlying iterator. That's possibly fine.

KG: Yeah, you asked about why this is coming up. So I don't think MF mentioned in the presentation, the reason we're bringing this up is because Node.js has a streams implementation and they have just copied iterator helpers onto their streams, and they have copied that as close to the spec as possible, which is great, and have shipped that as experimental code. Not as like "this is done"; they're open to making changes to it - it's firmly marked as experimental, but shipped so they could start getting feedback. And one of the pieces of feedback that they got is that someone expected this to work in the "keeps the underlying thing open" way, they wanted to take some items from the iterator and then take some more items from the iterator, and that is a thing you can do in other languages sometimes, depending on… like in Rust, there's this ownership model that prevents you from taking before exhausting the thing that you've taken and so on. Anyway. So this came up in real life, we got actual feedback from an actual implementation. That's why this is coming up.

LCA: Yeah, KG did mention this, but Rust does close the iterator after a take, because the take method takes ownership of the iterator. So once the take is complete, the iterator is closed, or dropped rather, and you cannot reuse it for further take operations.

KG: I thought if you dropped the take'd iterator, you could like restore ownership.

LCA: Oh no. It kind of depends on whether the iterator was owned or borrowed; it's kind of complicated. You can have the original iterator and reference-copy it, and then you can take things on that, but if you then take on your original iterator, it would resume with its own counter, starting at 0 again, rather than starting from where it last left off.

KG: But in any case Rust prevents you from being confused.
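The semantics under discussion can be sketched with a minimal, hypothetical `take` helper (not the actual spec algorithm), which closes the underlying iterator by calling its `return()` method once the limit is exhausted:

```javascript
// Minimal sketch of the proposed take() semantics (hypothetical helper,
// not the spec steps): once the limit is reached, the underlying
// iterator is closed by calling its return() method.
function take(iterator, limit) {
  let remaining = limit;
  return {
    [Symbol.iterator]() { return this; },
    next() {
      if (remaining <= 0) {
        iterator.return?.(); // close the underlying iterator
        return { done: true, value: undefined };
      }
      remaining--;
      return iterator.next();
    },
  };
}

function* naturals() { let n = 0; while (true) yield n++; }

const it = naturals();
const first = [...take(it, 3)];  // [0, 1, 2]
// The generator was closed when take(3) was exhausted, so a second
// take sees no values -- the behavior the Node.js feedback was about:
const second = [...take(it, 3)]; // []
```

Under the alternative "keeps the underlying thing open" expectation, the second spread would have produced `[3, 4, 5]` instead.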

break for lunch

LCA: Rust has take and skip, and they have take_while and skip_while.

DE: I'm happy to follow the champions' opinion that we should stick with take/drop. Given the relative (but not absolute) uniqueness to JavaScript of closing the iterator, I still don't think that means we should disregard the rest of the cross-language consensus on this. I understand that closing iterators is something that people have to learn, but it seems like the kind of thing you only have to learn once, so maybe it's not so bad. This is a subjective opinion about a subjective circumstance. But overall, I'm happy to defer to the champions on making a call, since arguments were given on both sides.

JRL: I don't understand why renaming to limit changes the expectation that the underlying iterator is going to be closed or not. If we rename it to limit, then the expectation could still be that it doesn't close the underlying iterator and you could limit multiple times. I don't understand why renaming eliminates the confusion. I think the confusion still exists; it's just that we're going to call it a different method now.

MF: On the thread where this was proposed, multiple people gave that opinion. It's subjective and I take them at their word for it.

SYG: Okay, so three things. One, my opinion is I still like take and drop. Two, this is stage 3, so I would like our bar for renaming to be high here; I would like us to default to not doing things like renames during stage 3. And three, I have similar concerns to what JRL said about the confusion. I would like to take the folks who offered that opinion on the issue at their word, but I can see a path to confusion where they had a certain behavior in mind, found out that this iterator helper does not have that behavior, and any initial name would have been similarly confusing. It's not clear to me how much of the root-cause confusion is attributable to the name here. So given that, I don't think I'm compelled to change from take/drop.

DE: +1 to the preference to not rename during stage 3.

DLM: I feel the same way.

WH: I'm not quite sure what to do here. The important thing in commonality with the other languages is the usage patterns. And if it looks like the same usage pattern should work in JavaScript, but it subtly does something different, that'd be a problem and I would be reluctant to ship something which repeatedly causes such a problem. What are the options here? Same name, rename, not close the iterators, anything else?

KG: It was suggested, and I don't particularly want to pursue this option but it is at least a thing that we could do, to have an additional argument to the method which specifies whether or not to close the underlying thing when the take is exhausted. But I don't really like that option.

WH: This would be a landmine that we're putting into the language for anybody familiar with these things coming from other languages.

KG: Yeah. so I did want to speak to that a little bit. I agree that there is the potential for that, but it is at least not particularly subtle, because you exhaust the first iterator and then the second one is just empty. That will be confusing, but that will not be subtle. You will just not have things. So you will get a bug, if you have the wrong expectation, but it is not a bug that's like your program is a little bit wrong. Your program is a lot wrong. So I am hopeful that it will not hurt too much. Like there's definitely - you could contrive a situation in which your program is just a little bit wrong and you don't notice but I expect even people who have the wrong expectation with the current semantics will generally notice.

WH: Yeah, it depends on how hidden this is inside the program and how familiar folks are with the details of how this stuff works. I would expect casual users to be confused by this.

RBN: So one of the things WH just said was that we could be doing something wrong, something different from everyone else. But my impression, from having written a library that does these types of operations, from having worked with C# and LINQ for many years, and from surveying the ecosystem, is that the expectation for most of these patterns in most existing runtimes and libraries is that whenever you do take, the iterator is essentially exhausted or closed. Now, most of these operate on the iterable rather than the iterator, which is a distinction between JavaScript and the packages in the ecosystem and many other languages that are also prior art. However, the consistency that we have with the take name is that when it's used, it fairly consistently means you're essentially exhausting the thing, that you're done with it; you're not continuing to use it after the fact. You're going to be operating further on the results of take, rather than taking something and then trying to take something else. That's not usually the common case with these types of iteration methods as they're used in the ecosystem, and I think there's value in using a name that is consistent with the ecosystem, has a consistent approach with the ecosystem, and is commonly used in other languages, because not everybody is only a JavaScript developer - people come with background and knowledge from other languages, or will take this knowledge to other languages they use. The other thing I wanted to point out is that closing the iterator is probably the best failure state. I pointed this out in the reflector: if you are using, say, an async iterator backed by a database, with an async version of take, you want to be sure that the default state is that the database connection is dropped. Otherwise you'll starve yourself of resources.
And the same is true for file I/O and everything else. So the reality is that the only real valuable failure state we could actually leverage is closing the iterator, and there are simple ways of not forwarding return if you want to reuse this, which we could even consider modeling into the API, be that through an options bag, which Kevin has mentioned he'd rather not consider, or maybe even another method that allows you to create a non-closing iterator, or something of that sort.
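RBN's failure-state argument can be sketched (synchronously for brevity, with an invented `conn` object standing in for a database connection) as an iterator whose `finally` block releases its resource whenever the iterator is closed:

```javascript
// Hypothetical sketch: a generator backed by a resource releases it in
// a finally block, which runs when the iterator is closed early via
// return() -- which is exactly what take() does on exhaustion.
function* queryRows(conn) {
  try {
    let row;
    while ((row = conn.nextRow()) !== null) {
      yield row;
    }
  } finally {
    conn.close(); // default failure state: the connection is dropped
  }
}

// A stand-in connection object (invented for illustration):
const events = [];
const conn = {
  rows: [1, 2, 3],
  nextRow() { return this.rows.shift() ?? null; },
  close() { events.push('closed'); },
};

const rowsIt = queryRows(conn);
rowsIt.next();   // { value: 1, done: false }
rowsIt.return(); // closing the iterator runs the finally block
// events is now ['closed']
```

If take() did not forward `return()`, the `finally` block would never run for an abandoned iterator, and the connection would leak.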

USA: We are past time. So, MF, what do you want to do? Do you want to bring this back, or…

MF: I'm comfortable finishing this without further discussion and seeing if we have consensus on no rename?

KG: We don't need consensus on not renaming.

MF: It sounds like nobody opposed keeping these names.

KG: I mean I'm happy to ask if anyone opposes us moving forward in that way, it's just not something we need consensus for. I personally am fine with keeping the existing names and just saying that people will be confused no matter what we do, and this as Ron says is the least bad confusion possible.

DE: +1 to that, and we can always add either an options bag or another method, so it seems like continuing with the current proposal seems like a good way forward.

SYG: +1

MM: Yeah, I support not renaming.

DE: Anybody want to express concerns?

silence

MF: Great.

Speaker's Summary of Key Points

  • Some people expressed support for the current names, and some would prefer a switch to some other name.
  • Several delegates shared the view that there should be a high bar/strong motivation for changing names during stage 3
  • JS's usage of iterators rather than iterables, and the existence of iterator return in the first place, is rather unique, complicating the comparison with other languages (which tend to use take and drop).

Conclusion

  • We will be sticking with the existing names and semantics for take and drop. Not making a change does not require consensus; that said, there was explicit support from several delegates for sticking with the current specification.

Temporal update and normative changes

Presenter: Philip Chimento (PFC)

PFC: (Slide 1) Hi everyone. It's me again. You heard me this morning about test262. My name is Philip Chimento. I'm going to be presenting about the Temporal proposal. Unlike the presentation this morning, this work here is a partnership with Bloomberg. In particular, I'd like to thank JWS from Bloomberg for helping prepare the slides. He did a lot of work on that.

PFC: (Slide 2) The purpose of today is to give a progress update. Last time, in January, we talked about making a final push to resolve the issues raised during stage 3, and we are now closer to that. I listed a few things last time that we still needed to address, and at the end of the presentation I'll have a batch of changes addressing those things that I'd like to ask for consensus on. Then I'll talk about one new issue that was raised during the last plenary. So there will be a short discussion of what remains on the proposal. The other thing is that implementation has continued in several engines, and as always this has produced great feedback. We've also received feedback from people trying out the proposal using polyfills, and in particular there is a bugfix that a community member noticed and brought to our attention.

PFC: (Slide 3) Another thing I should mention is the progress of standardizing the string format in the IETF. This is something that I try to give an update on every time I present. The status for the last several meetings has been that the document is under review by the IETF's Internet Engineering Steering Group. As luck would have it, this morning we received editorial comments from the IETF area director for this area about the draft. So this is as fresh news as you're going to get on it, and the timeline that I've heard mentioned is 7 to 12 weeks from the area director's evaluation before the last call. I don't have updated information on where that timeline came from, but that's what I heard as of this morning. As a reminder, we've agreed not to ship Temporal without a flag in any implementations until the standardization process has been completed for the string format.

PFC: (Slide 4) What else is left? There is the issue with the proposal allowing nanosecond precision. I have a bit more to say about that in the following slides. Another thing that is new since last time is that we were requested by implementations to add a host hook for HTML. To the best of my knowledge, this is a layering fix that doesn't require consensus from TC39, but I put links here to the PR for the proposal, and the PR for HTML, in case you're interested. Aside from these things we don't expect anything else other than editorial changes unless implementors bring up any showstoppers.

PFC: (Slide 5) About nanoseconds, this is very closely related to the topic of not doing unbounded integer calculations in the spec. As we discussed last time, the problem motivating the request to go to microsecond precision is so that implementations don't have to do expensive arithmetic operations essentially on bigints. After investigating this and getting feedback from other implementers as well, it seems like nanosecond precision is not the only place where we'd have to do unbounded integer arithmetic. It also occurs when you balance different units in a Temporal.Duration with each other. In the spec this balancing arithmetic takes place in the mathematical value domain, ℝ. I originally thought that this didn't matter, but ABL, who's been working on the proposal for Firefox, helpfully provided a bunch of test cases for test262 showing places where it does matter - where if you implement the calculation using floats, you'll get a different result than with integer arithmetic. None of the implementers that we've talked to liked this. And, you know, it doesn't seem like a good situation to have to do BigInt arithmetic where it wouldn't be necessary. So what I mentioned earlier is that if we eliminate nanosecond precision and have the whole proposal be in microsecond precision, that won't eliminate all the places where we would have to perform bigint arithmetic in implementations, or even arithmetic in 64 plus 32 bits, which is another thing we talked about. So, we did a bunch of investigation and discussion about this. We have a framework for a solution. I am not presenting it for consensus during this meeting because the details are not worked out enough to propose a spec change at this point. But it involves putting an upper bound on some of the units of durations. And the goal that we're trying to achieve is that all calculations have to be able to be performed with at most 64 plus 32 bit integers.

PFC: (Slide 6) So this slide is a bit of an illustration of what the current situation is and what changes. We perform the calculations in the mathematical value domain, like I said, and then we store them in the internal slots of Temporal.Duration as a "float64-representable integer", which we get by taking ℝ(𝔽(..)) of the value. That means that implementers can implement the storage as 64-bit floats. But unfortunately, it doesn't prevent the unbounded integer arithmetic from happening in the interim between when a value is retrieved from and stored to a Duration object. In the framework that we're proposing for the solution, we don't want to change the storage. Duration units will still be stored as float64-representable integers. The date units are going to continue to be stored separately. They need to be calculated separately, because calculating with date units requires calendar operations. The result of having to calculate date units using calendars is that you can't freely convert date units between each other and into time units. So for example, if you have one month you may not convert that to 30 days, because not all months are 30 days; you need a reference point and a calendar calculation and such. Time units, which are hours, minutes, seconds, and whatever units we choose for subseconds, are always freely convertible with each other. There's no calendar that says a second is actually two seconds long. There are leap seconds, but POSIX ignores those, JavaScript Date ignores those, and we're not taking those into account. There's leap second smearing, there's a movement to abolish leap seconds - those are all things that we are leaving out of scope. So what we're going to do with time units is convert them to a normalized form of an integer number of seconds and an integer number of subseconds. So if we were to keep the proposal at nanosecond precision, the absolute value of the subseconds would be between zero and a billion minus 1.
If we would go to microsecond precision, the absolute value of the subseconds would be between zero and a million minus 1. And then we're going to place an upper bound on the absolute value of the number of seconds at Number.MAX_SAFE_INTEGER. So it's no longer possible to have a Temporal.Duration with any time units that, totalled together, would be equal to or longer than (Number.MAX_SAFE_INTEGER + 1) seconds. So, when we do calculations with durations, we're going to convert it to this normalized form, do the calculation, and won't have to deal with integer overflow, or precision loss, and then convert it back to the float64-representable integers for storage.
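The normalized form PFC describes might be sketched as follows (invented names; a polyfill-level sketch using BigInt for clarity and assuming nanosecond precision, where an engine would likely use a 64-bit seconds integer plus a 32-bit subseconds integer):

```javascript
// Hypothetical sketch of normalizing a duration's time units into an
// integer number of seconds plus an integer number of subseconds
// (nanoseconds here), with seconds bounded by Number.MAX_SAFE_INTEGER
// as described above.
const NS_PER_SECOND = 1_000_000_000n;
const MAX_SECONDS = BigInt(Number.MAX_SAFE_INTEGER);

function normalizeTimeDuration({ hours = 0, minutes = 0, seconds = 0, nanoseconds = 0 }) {
  const totalNs =
    (BigInt(hours) * 3600n + BigInt(minutes) * 60n + BigInt(seconds)) * NS_PER_SECOND +
    BigInt(nanoseconds);
  const sec = totalNs / NS_PER_SECOND; // BigInt division truncates toward zero
  const sub = totalNs % NS_PER_SECOND; // same sign as totalNs; |sub| < 1e9
  if (sec > MAX_SECONDS || sec < -MAX_SECONDS) {
    throw new RangeError('duration time units out of range');
  }
  return { seconds: sec, subseconds: sub };
}

normalizeTimeDuration({ hours: 1, nanoseconds: 5 });
// → { seconds: 3600n, subseconds: 5n }
```

Calculations done on this form stay within 64 + 32 bits of integer range, then convert back to float64-representable integers for storage, per the framework above.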

WH: I'm curious — if seconds and subseconds are both integers, and I assume that they must have the same sign, what is the reason for having separate seconds and subseconds rather than just having an integral number of subseconds in the spec?

PFC: Just convenience for implementers, because if we had one number then the maximum value would be larger than 64 bits.

WH: It would fit into a 96-bit integer, or actually, you don't even need 96, you need… something like 83 bits.

PFC: That's right, something like that. It would fit generously into a 96 bit integer. We kind of expect implementers would choose to implement it this way anyway. The seconds would be a 64 bit integer and the subseconds would fit in 32 bits.

WH: It's just simplifying the spec, implementers can do any implementation which is mathematically equivalent.
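WH's point that the two forms are mathematically equivalent amounts to a lossless round-trip between a single integer subsecond count and a (seconds, subseconds) pair, for example:

```javascript
// Sketch of the equivalence: a single integer nanosecond count maps
// losslessly to and from a (seconds, subseconds) pair, with both parts
// carrying the same sign under truncating division.
const NS = 1_000_000_000n;
const split = (totalNs) => ({ sec: totalNs / NS, sub: totalNs % NS });
const join = ({ sec, sub }) => sec * NS + sub;

split(-5_000_000_123n);       // { sec: -5n, sub: -123n }
join(split(-5_000_000_123n)); // -5000000123n round-trips exactly
```

An implementation may therefore store either form; only sign handling and carries between the two parts need care, which is the spec-complexity point WH raises below.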

DE: It sounds like the clarifying question has been answered, and this is an editorial decision made by the authors, and I think that makes it non-blocking.

PFC: I'd also be happy to take a look at how dramatic the difference in the specs would actually be, if we have time for that.

PFC: (Slide 7) The plan is to produce a detailed spec change soon. We'll check back in with the implementers and see whether it addresses the concerns. And at that point, we will talk about whether to keep the precision to nanoseconds or reduced precision to microseconds. We'll have a clearer idea of what advantages microseconds might bring, and we aim to have a decision on this by the time of the May plenary. And so we'll have a spec change ready that we will present for consensus at that time. If you have questions or feedback about the idea that I just presented, I'm here, and several other of the champions are here as well. If you have opinions on this, I would like to hear it. And obviously, if you're not here and you have questions, feel free to reach out online as well.

PFC: (Slide 8) All this is to say, we're nearly there. The to-do list is finite and decreasing.

SYG: First of all, thank you PFC and the Temporal champions for taking feedback very seriously and working on a way forward here. I want to understand one thing. For the benefit of the rest of the committee: V8's position here is that ideally we would still want microseconds and bounded integer precision, hopefully just using int64s, but that seems like it's actually not on the table. Which I think I'm okay with, but I want to confirm with the champions that it's not on the table because we can't have 64 bits everywhere, right?

PFC: Unless you want to reduce the range of allowable values for Temporal duration under a limit that we think is not realistic for what we'll see in usage, then duration calculations have to be done in 96 bits and not 64.

SYG: Thanks. And then, the second part is the 64 plus 32 thing. The background there is that we were looking around at how other duration libraries represent nanosecond-precision time. In particular, abseil, the C++ library, uses an int64 to represent seconds and an additional 32 bits to represent subseconds; I think they represent quarter nanoseconds or something like that, which is what can fit. So, I understand the detailed spec is not worked out here; I'm wondering what the implementation strategy is with the current plan from the champions. Does it basically lock all implementations into an int64-plus-32-bit representation? Or, if you don't want an optimized implementation off the bat, is the choice still there to just do everything with bigints, because obviously that's big enough; but if you do want an optimized implementation, there's a very clearly understood way to do it, and that's 64 plus 32?

PFC: Right. I don't think it obligates you to use 64 plus 32. In particular, I believe, although I'll need to confirm this when we actually go to work out the details, I believe it should be possible to create a polyfill for this with two JS numbers instead of 64 and 32 bit integers. But I'm not 100% sure about that.

SYG: Alright. I think that clarifies my questions. Thank you.

DE: So I think I'm very happy with this framework. I want to say that at Bloomberg we have a moderate preference for the expressiveness of nanoseconds. There are a number of publicly available financial data feeds that are expressed in nanoseconds, and it would be great if we didn't have to worry about whether this would be representable in Temporal units. That said, microseconds are already pretty small; most of the things that you want to display to users are microseconds or coarser, so it might not be the end of the world. Anyway, I think the biggest problem was the bigint overflow, and I get the feeling that the 64 plus 32 approach will be suitable, so I don't see a reason to coarsen precision to microseconds. So I'm very happy with what was proposed here, once the details are worked out. But it also wouldn't be the end of the world - it's not like Temporal would be completely unusable - if we went to microseconds.

WH: For this I would like us to have the simplest spec and just let implementations pick whatever implementation suits them, be it a single integer or separate integers for seconds and subseconds. If we specify it as a single integer, it's fairly obvious how an implementation can split it into seconds and sub seconds — there are time libraries that do just that. If we specify it as a separate seconds and subseconds, it's not obvious how an implementation could do this using a single integer, and it's very easy to get the spec wrong, particularly in the areas where you need to transfer carries and signs between arithmetic on one of the numbers to the other. The overflow boundary conditions can get quite tricky where you might get cases where something overflows when it shouldn't. So, I definitely want implementations to be able to use separate integers for seconds and subseconds, but I would like to specify it as the simpler variant of just having an integral number of whatever your subsecond units are. It just makes the spec much simpler.

PFC: I think that's a fair concern. I will say it might not be as obvious as you think how to split it up, because we have actually had comments from implementers on something similar, where they didn't realize that the spec allowed them to split a large integer into smaller integers, but I think maybe we can solve that with a note, or something like that.

WH: Yeah, just add a note. It's challenging to mathematically separate integers and get all the boundary cases right.

DE: The rest of the Temporal spec is also challenging mathematically. I have a reply here. You're making an interesting editorial suggestion which I'm not necessarily opposed to, but also do you have an opinion on the semantics or the framework for the semantics proposed about the limits, maybe sticking with nanoseconds?

WH: I'm not quite sure what you're asking me. I would not be in favor of having this work with times with up to a googol seconds. So we definitely need to have some limits. And 53 plus 30 bits, or anything like that, is reasonable.

DE: Great.

WH: The thing I care more about are identities such as addition is commutative and associative.

SYG: I get where you're coming from, WH, but speaking from the experience of trying to review a lot of implementations here, and just given the sheer size of Temporal, I would lean the other way for editorial direction. I would be more comfortable if everyone agreed on the bounds and the number of bits to represent these, and what a good representation ought to be, and the spec got that right once, spec'd correctly, rather than leaning on implementations to figure out what the optimized representation could be. Just the sheer size of this proposal I think means that in practice implementations are going to be implementing the spec literally step by step. It is just impossible to review otherwise. Like, if somebody came to V8 with an optimized representation of Duration and all the math operations, I would just reject it if it was not mappable to spec steps in a way that I can actually review. As a software engineer, I don't know what we're going to do if it's spec'd in such a way that it's not obvious to review.

WH: I strongly disagree with that position in general. What we need is a limit on how high the number of seconds and subseconds can be. I don't want to have to deal with edge cases in which you have a number of sub seconds greater than a second, dealing with opposite signs and stuff like this. It just creates unnecessary spec complexity.

SYG: There might be a way to thread the needle, but there's nothing which says that you must implement seconds and subseconds using two separate integers. You could just use 96-bit arithmetic, and it's often faster and simpler.

DE: So I want to propose that for next steps on this interesting topic, the editors have this regular open call opened to all delegates, you continue the discussion there? I think we can trust the editors to ultimately make it a good call on this editorial decision based on inputs like this.

WH: It's not just an editorial call. It's also a correctness issue.

DE: This spec has to be correct.

WH: Yeah, the spec has to be correct. It is my main concern.

DE: I think the point has been registered and we could move on.

JGT: So first, just a note - this is not necessarily related to the size constraints. One thing that wasn't immediately obvious while designing the Duration type, but is fairly obvious now, is that a very large percentage of durations are going to have just one unit in them, right? So there may be some significant storage optimization opportunities for durations, especially if lots of them are being created. And so, as implementers give feedback on the bounds, they might also want to think about what use cases are likely for durations, and perhaps about an optimized path for single-unit durations.

CDA: Okay. we have about 12-13 minutes left. So PFC, do you want to continue?

PFC: (Slide 9) OK, I'll run through the normative changes quickly. There are only five this time. Like I said, the todo list is finite and decreasing.

PFC: (Slide 10) This is one [fix time zone formatting in ZonedDateTime] that I presented last time and didn't achieve consensus because more time was needed for review. So during our review, we had discussion with TG2, and FYT raised some concerns about the proposed solution, and ultimately we were not able to get to a position where the PR proposed last time would be able to achieve consensus in plenary. So we came up with an alternate design, which we believe addresses the concerns we heard, while still allowing the toLocaleString method of Temporal to be used, although passing one to the format method or the other methods of a DateTimeFormat, like in this code sample, will throw. This is a temporary solution to allow toLocaleString to be used, and we hope that in a follow-up proposal we will be able to find a solution that everybody's happy with, that will allow a ZonedDateTime to be used with the other methods of DateTimeFormat. You'll notice at the bottom of the slide, it says "late PR", as we had part of this discussion after the agenda deadline. So you should note that unlike the other PRs I'm presenting, this one was added quite recently. If you need more time to review it, I'm here available to talk it through with anybody who has questions. And in case people are not ready to lend it consensus today, perhaps we could have a short item on that at the end of the meeting.

PFC: (Slide 11) Right. We have a pull request auditing the callers of MakeDay and MakeDate and TimeFromYear for possibly out-of-range values. This actually stems from an existing bug report in the tc39/ecma262 repository about how the operation MakeDay is not precise about when it returns NaN. This affected some of the operations in Temporal, which asserted that NaN was not returned from that operation, but that may not be correct. So, rather than complicate the spec by having a bunch of code paths handle NaN separately, we would like to add mathematical-value versions of at least MakeDay and other similar operations, which we could hopefully recombine with the ecma262 ones in the future, once we get more clarity on those operations. But this at least makes the semantics clear and removes ambiguity without complicating the spec too much.
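The kind of imprecision at issue can be seen with ordinary Date math (using Date.UTC below as a rough stand-in, not the spec's actual MakeDay steps): day-number arithmetic silently goes to NaN once the inputs leave the representable range.

```javascript
// Illustration only: Date.UTC as a stand-in for MakeDay/MakeDate.
// ECMAScript Dates cover roughly +/-100,000,000 days around the epoch;
// beyond that, the computation yields NaN rather than throwing.
const dayNumber = (year, monthIndex, day) => Date.UTC(year, monthIndex, day) / 86_400_000;

dayNumber(1970, 0, 1);      // 0 (the epoch)
dayNumber(1970, 0, 2);      // 1
dayNumber(1_000_000, 0, 1); // NaN: outside the representable Date range
```

Spec text that asserts the NaN case cannot occur is therefore fragile, which motivates the mathematical-value variants described above.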

JGT: Hey, PFC, could you back up two slides? (Slide 9) Just to clarify: the current behavior in the spec that we're trying to fix is what's shown in this code sample, right? What this code sample illustrates is that it's really bad to return different results from toLocaleString than from DateTimeFormat.format. That's the problem to be solved. The solution we're planning is essentially to have that second line of code throw, right? So we're not proposing what's here; the PR behavior is just to throw in the second case and then come back at some point in the future with a better solution for DateTimeFormat.

PFC: Looking at this slide, I see it wasn't entirely clear; you're right about that.

PFC: (Slide 12) All right, back to RGN's PR (#2500). This is a follow-up to an issue that we discussed in plenary last time about when exactly property bags are validated, and how, when you pass a property bag to a calendar operation. This switches the order of things around a little bit to make sure that all the validation of calendar-specific properties, like era and eraYear, is handled in calendar code, so if you don't support calendars other than ISO 8601, then era and eraYear do not appear in the spec that you have to implement. So this has a couple of very subtle changes to the order in which certain properties are accessed, but the main visible consequence is what you see in this code sample below. The first two calls to PlainMonthDay.from are unchanged. If you put a month, day, and year that don't form an existing date, that will throw; if you put a monthCode and a day that exist in any year, that works. So that doesn't change. If you have monthCode, day, and year, and that forms a date that doesn't exist, previously we considered the third call here as equivalent to the second call. What has changed is that it's now considered more like the first call, and it's not accepted.
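The distinction can be sketched with a toy ISO-calendar validator (the helper names below are invented for illustration, not the spec algorithms): a bag with only monthCode and day is checked against any year, while a bag that also carries a year is now checked against that specific year.

```javascript
// Toy sketch of the new validation behavior (ISO 8601 calendar only;
// helper names invented for illustration).
function isLeapYear(y) {
  return (y % 4 === 0 && y % 100 !== 0) || y % 400 === 0;
}
function daysInMonth(y, m) {
  return [31, isLeapYear(y) ? 29 : 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][m - 1];
}
function validateMonthDayBag({ year, month, monthCode, day }) {
  const m = month ?? Number(monthCode.slice(1)); // 'M02' -> 2
  if (year === undefined) {
    // monthCode + day: valid if the date exists in *some* year,
    // so check against a leap year.
    return day >= 1 && day <= daysInMonth(2020, m);
  }
  // A year is present: the date must exist in that exact year
  // (the behavior change described above).
  return day >= 1 && day <= daysInMonth(year, m);
}

validateMonthDayBag({ monthCode: 'M02', day: 29 });             // true: Feb 29 exists in some year
validateMonthDayBag({ year: 2023, monthCode: 'M02', day: 29 }); // false: 2023 is not a leap year
```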

PFC: (Slide 13) All right, next we have an audit of user-visible operations (PR #2519). Feedback that we've often gotten is that calendar and time zone operations, which are potentially calls into user-visible code in the case of custom calendars and time zones, were called redundantly. We've had a number of PRs from either the champions or implementers trying to fix specific cases of this. At this point we decided, since we keep seeing these, we need to audit the whole proposal and just fix any calls that might be redundant instead of fixing them case by case. This audit is done now. It doesn't affect any of the functionality, but it is all observable: as you can see in this very tiny section of the diff that I wrote for the test262 tests, a lot of Get and Call operations are eliminated. We've tried to write it according to the principle that you should get the methods once and call them multiple times only when necessary. So, a lot of lookups have been eliminated, and also some calls.

PFC: (Slide 14) Last one (PR #2517). This is a bug discovered by somebody who was interested in using Temporal and using one of the polyfills. In certain situations, in the calculation of rounding a duration, the largest unit wasn't respected correctly. So if you want to balance 2400 hours into a duration where the largest unit is a month, then what you want, at least relative to this particular date, is three months and 11 days, but with the current spec text you would get 100 days. This is the right length of days, but not the right unit, and we are fixing this.
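The balancing in this fix depends on a reference date. As a rough sketch (not the spec's rounding algorithm, and with 2023-02-01 as an invented relativeTo that happens to reproduce the 3-months-11-days result), balancing 2400 hours up to months looks like:

```javascript
// Rough sketch of balancing days into months relative to a start date.
// Not the spec algorithm; the reference date here is invented.
function balanceDaysToMonths(startDate, totalDays) {
  const d = new Date(startDate);
  let months = 0;
  for (;;) {
    const next = new Date(d);
    next.setUTCMonth(next.getUTCMonth() + 1);
    const daysInThisMonth = (next - d) / 86_400_000;
    if (totalDays < daysInThisMonth) break;
    totalDays -= daysInThisMonth;
    d.setUTCMonth(d.getUTCMonth() + 1);
    months++;
  }
  return { months, days: totalDays };
}

// 2400 hours = 100 days; relative to 2023-02-01 that balances to
// 3 months (Feb 28 + Mar 31 + Apr 30 = 89 days) plus 11 days:
balanceDaysToMonths('2023-02-01T00:00:00Z', 2400 / 24); // { months: 3, days: 11 }
```

The bug was that the "largest unit" request was not respected, so the spec produced the plain 100-day form even when months were requested.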

CDA: We have one question on the queue from KG.

KG: Not a question. I just wanted to say the audit of user-visible code I'm strongly in favor of. That's excellent work; it's something that had been clear would need to be done holistically for a long time, and I'm glad it actually got done holistically. It's good.

PFC: That's good to hear, thanks. I'd like to ask if we have consensus on these five normative changes.

CDA: DE supports on the queue. We do have a clarifying question though, from JGT.

JGT: One more note on the first normative PR that the bullet mentioned: in addition to throwing in the format() method, it also affects all formatting methods in the Intl.DateTimeFormat namespace. So formatToParts(), formatRange(), etc. — all of them would throw if presented with a ZonedDateTime.

PFC: That's right. It's a bit unfortunate that you cannot directly format a range of ZonedDateTimes, but as I said, that's something that we hope to make possible with some future adjustments to the proposal, but not in scope right now.

JGT: Also, to clarify, it is possible — you can transform it into an Instant, right? So you can do it, there's a workaround, it's just not as ergonomic: you can transform the ZonedDateTime into a different Temporal type and then you can format that range. So there are certainly possibilities there in the meantime.

CDA: You have two explicit supports in the queue from DE and DLM.

DE: Yeah. I definitely support each of these. For this one we're talking about with ZonedDateTime, it seems like an okay starting point and we've been successfully incrementally adding to DateTimeFormat. So, I think that can continue.

DLM: I'll just say really quickly that we appreciate the amount of hard work.

DE: So we're out of time for now, but if we have more time to discuss later in the meeting, I think it would be great to flesh out the microsecond versus nanosecond thing a little more, as well as how much we want to wait for the IETF before considering this to no longer require coordination. But yeah, overflow.

Summary

Barring any showstoppers raised by implementors, we expect to present one more normative change to avoid unbounded integer arithmetic in the next plenary. We discussed the framework for the solution to avoiding unbounded arithmetic on integer conversions; there was general support for the idea that the number of seconds in a duration is bounded by MAX_SAFE_INTEGER, which is anticipated to remove the need for BigInt calculations. Topic to be continued on day 3 in an overflow session, discussing nanosecond vs microsecond precision, and the way forward with the IETF review of our string formats proposal.

Conclusion

All 5 PRs got consensus and will be merged:

- tc39/proposal-temporal#2522 - Change allowing Temporal.ZonedDateTime.prototype.toLocaleString to work while disallowing Temporal.ZonedDateTime objects passed to Intl.DateTimeFormat methods
- tc39/proposal-temporal#2518 - Change to eliminate ambiguous situations where abstract operations such as MakeDay might return NaN
- tc39/proposal-temporal#2500 - Change in the validation of property bags passed to calendar methods
- tc39/proposal-temporal#2519 - Audit of user-observable lookups and calls of calendar methods, and elimination of redundant ones
- tc39/proposal-temporal#2517 - Bug fix for Duration rounding calculation with largestUnit

Set methods: What to do about intersection order?

Presenter: Kevin Gibbons (KG)

KG: Okay, so Set.prototype.intersection. As a reminder the Set methods proposal is in stage three. It's beginning to be implemented and we are running into the issues of what is possible to implement. So as a reminder, for context, Sets are ordered, which means that for every method, including intersection, there is a particular order that we chose for the result, although the particular order doesn't matter very much. Mostly the order just falls out of how the algorithms are specified. In fact, in all but one case, it falls out of how the algorithms are specified. However, in the particular case of intersection, where you are intersecting a large thing with a smaller thing, the order as currently specified has to be accomplished explicitly by sorting. And the idea is that you don't want the sort to take time proportional to the larger thing, because intersection shouldn't require time proportional to the larger of the two sets; intersection should in principle only require time proportional to the smaller of the two sets. So it was hoped (by me) that you could do this sort that you can see on the screen (slide shows spec note) efficiently in terms of being proportional only to the size of the result, not the size of the receiver. And this is true if you are using the Firefox implementation, which is documented nicely online, and the V8 implementation, which is essentially the same data structure. But it's not true for every possible implementation and not even true for every current implementation. So JavaScriptCore uses an entirely distinct data structure, which has very little in common with the one that V8 and SpiderMonkey use, and JavaScriptCore’s implementation uses linked lists to maintain order rather than a contiguous array, which does not allow for efficient sorting. With V8 and SM you could do an efficient sort by looking up positions in an array and comparing them, but in JavaScriptCore you would have to iterate the actual list to get an order.

KG: So, JavaScriptCore does not allow efficient sorting. So we’ve got to do something. Well, in fact, we don't have to do something — one of these options is "it doesn't matter" — though at the very least I need to remove this note, since it is not true for the JavaScriptCore implementation. So, options for things that we could do. We could say that the order of the result will change suddenly as soon as the argument gets larger than the receiver. So where previously it was always going to be sorted so that things will be ordered the way they were in the receiver regardless of the sizes, with this it is possible the order of the result would suddenly switch based on which of the two was larger. So, you could add some unrelated elements to the argument that aren't even in the resulting intersection and have the effect that the order of the result is suddenly different. Alternatively, there's this sort of “zip” order where you pull something from the first one, and then something from the second one, and then something from the first one, and then something from the second one, etc. This would have twice as many user calls. And it also means that it's basically impossible to implement by cloning in an actual implementation: you end up having to follow this thing where you iterate both sets, even when they're both built-ins and there's no user code involved, so it has the right order for the result. It's not something that follows naturally from cloning either set.

KG: Another option is to just say we don't worry about it — JavaScriptCore would incur some overhead in this very specific case, but it probably doesn't come up that much, and it's probably not that much overhead anyway — maybe it just doesn't matter: I just remove the note and say “you've got to produce this order, but it's up to you whether you do that efficiently”. Or maybe there's another option I am not thinking of. But these are the only ones I've got. They all have pretty significant downsides. I would like to hear from the committee what people's opinions on this matter are.

SYG: We’ve chatted a little bit about this offline already, but I'll recap some of that discussion here for the committee. So I did discuss this with the V8 folks already. The V8 engineers' first opinion is that this intersection algorithm is kind of wack anyway, specifically the switching on relative sizes. In the case of a user-defined set-like (so not a built-in Set and not a built-in Map), you're calling methods on it — either iteration via keys, or has — and which code gets called on it is already size-dependent. So I think our opinion is that the size dependence itself is the weird thing from our point of view, and given that, it is what it is: why work so hard to get rid of the sharp edge of the order also depending on size? I don't think we should leave this completely implementation-defined — it's important to exactly define the order. It's a weirdness, and I don't have the intuition that this order really matters, but I'm happy to be corrected here. Given that intuition, and given that which visible user code gets called is already size-dependent, having the order be size-dependent seems fine from a semantics point of view. And it makes the fast path simpler in an implementation, in that the fast path is exactly what you talked about with cloning the smaller thing, and if you clone the smaller thing that naturally preserves the order of the smaller thing. So yes, my preference is that one.

KG: Yeah. So I have the code for intersection on the screen just to demonstrate what SYG was talking about. There's this switch based on the relative sizes, and in one case you end up calling the has method of the argument and in the other case you end up calling the keys method of the argument. So that's the switch that SYG was talking about. The user code is displayed here. Now, personally I think that the order being size-dependent is a sharper edge than which piece of user code gets called, because in the case that you have a correct set-like, where the keys method is consistent with the has method and there's no side effects in either of them, you don't care at all which of the two things is getting called. But even in that case, you could plausibly care about the order. So my feeling is that the order being size dependent is, in fact, a sharper edge than the which user code gets called being size dependent. But that is just my opinion. And if the rest of the committee is okay with that sharp edge, then we can do that.
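To make the size-dependent switch concrete, here is a rough plain-JavaScript sketch of the shape of the algorithm (an illustration only, not the spec text; it elides receiver brand checks and set-like method lookups):

```javascript
// Rough sketch of the size-dependent switch KG describes (not the spec text).
function intersectionSketch(receiver, other) {
  const result = [];
  if (receiver.size <= other.size) {
    // Receiver is smaller or equal: iterate it and call other.has —
    // user code when the argument is a set-like.
    for (const x of receiver) {
      if (other.has(x)) result.push(x);
    }
  } else {
    // Argument is smaller: iterate other.keys() — different user code —
    // so without a sort step the result comes out in the *argument's* order.
    for (const x of other.keys()) {
      if (receiver.has(x)) result.push(x);
    }
  }
  return new Set(result);
}
```

With the sort step removed, intersecting `new Set([1,2,3,4,5])` with `new Set([5,3,1])` yields order [5,3,1], while intersecting it with a larger argument containing the same three values yields [1,3,5] — the size-dependent ordering under discussion.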

USA: Yeah, there's no response. Next up, we have DLM.

DLM: I discussed this with André Bargull, who's been handling our implementation, and as was mentioned, we don't have a problem right now. But we do have a small concern about specifying performance, in case that prevents us from doing other optimizations in the future, such as changing to a different data structure that might not give us that performance.

KG: I think we can't specify the performance unless we force JavaScriptCore to change their entire implementation. So they are today in this situation that you are worried about being in the future.

WH: This situation is somewhat similar to sort, in which it is intentionally not specified which sort algorithm we use in the spec because there could be a variety of ones with different performance characteristics. Now it sounds like that in this case there might be an algorithm which just iterates through the smaller object and looks things up in the larger one. Is there any concern that there might be an algorithm which is even better than this?

KG: I do not think it is possible to do better than that, modulo perhaps a constant overhead. So the algorithm…let me pull it up. The algorithm is fully specified. Yeah. The algorithm is fully specified in terms of which things you call; that we did not leave up to implementations. No one is really enthusiastic about the situation with array sort where things are implementation-defined, but we are largely okay with it because people are unlikely to be relying on the order in which calls occur, but we do know that people end up relying on the ultimate order of the data structure. In the case of sort, we know that people relied on it being a stable sort, even though they didn't rely on the order in which the calls happened. So I think we have to specify the order of the resulting thing, and I think we want to specify the order in which the calls occur because we don't want to make more things be implementation dependent. And I don't think we can do better than this in terms of big-O performance.

WH: For the third answer, if we specify it this way, will it cause a problem for any implementation?

KG: When you say we spec it this way, are you talking about everything apart from the sort, or are you including the sort?

WH: Yeah, we just iterate through the smaller set looking things up in the larger set, collect the elements, and not sort them. This is the same thing that SYG was advocating for. Would doing that cause any issues for any implementation?

KG: I don't believe so. And in fact, I think that's the best case for implementations because it allows them to be as efficient as possible.

WH: Okay, yes, then I’d be in favor of the first bullet point [order depends on relative size of argument vs receiver]. Let's just do that and not try to sort these things afterwards.

USA: Okay. Now, we have MM. Yeah.

MM: So I agree with what was just said. And because the reasons were stated fairly exhaustively I don't need to go into them. I think we should not sort. I think we should just do the deterministic thing that's friendly to all implementations, and I am not very concerned about that. I recognize the sharp edge KG is concerned about, I recognize that it might be something to be concerned about but altogether, I'm not worried about it.

DE: Sorry, for the notes, to clarify: MM and WH, were you expressing support for the first bullet point?

WH: I think we are. I think MM, I, and SYG all agree on the first bullet point. Yes.

DE: Thank you.

KG: Okay, I see that everything in the queue is about the sort order. So having heard several people speak in favor of the first one and no one except me really oppose it, I will just do the first one. The order will be weird. Okay, with that done, we have more general topics.

SYG: Sorry, was there more to your item, Kevin — were there more questions you wanted to ask? Mine is more of a question I just want to bring to committee, so I want to leave it as deprioritized as possible.

KG: Well, SFC has a topic on this, but the sort order is the only thing I was bringing, and I consider that settled.

SYG: I'll say my piece, which I think might dovetail into what SFC is going to talk about anyway. Part of the feedback when I brought this back to the team is basically that the V8 implementers feel we shouldn't care very much about the performance of set-likes. We should basically only care about the performance of built-in Sets and built-in Maps. Intersection should of course support set-likes, but the feeling was basically that if you're given a set-like, just have intersection do a Set.from or something as one of its very first steps to convert it to an actual Set, and then the rest of the algorithm is basically only about built-in Sets and built-in Maps. This is not blocking, and I'm not requesting a normative change — this has been litigated and relitigated in many past meetings. But our feeling is basically that it's not clear how much we actually care about the performance of set-likes from user programs. Why not just force them to iterate the entire set-like by converting it to an actual built-in Set first? I would like some discussion around the topic if we have time, because when we designed this thing — especially the choice of being non-generic on the receiver but generic on the argument — it was with the explicit understanding of it being precedent-setting, because this is a thing that we have grappled with for many years and we want a good precedent for when we design new built-in methods in the future. So: now that there's some implementation experience under our belts, do people care about the performance of non-built-in sets and maps?

KG: I care. I don't care about performance in the small, but I care about performance in the large, and in particular I think it probably does matter a lot that if you take an empty set and you intersect it with a very large set, that should be fast. I would be sad if intersecting the empty set or a singleton set or any other extremely small set was slow, even when the argument is a set-like. That just should not take a bunch of time; it should not require you to iterate the entire argument. I think it is a reasonable expectation that if you intersect the empty set with something, that finishes very quickly.

SFC: I'm next in the queue — the queue is not advancing — but I think SYG asked my question and it's been answered already, so I don't have anything more to say.

USA: That’s the rest of the queue.

Summary

Although the sort order based on argument vs receiver size is a weird sharp edge, it is simple, has the best performance, and the alternatives are too complicated. We discussed various options for how to deal with this ordering question and decided that the least bad option was to just remove the sort step. So the order of the resulting set will depend on which of the argument or receiver is larger, and we will live with that being kind of weird; at least it's deterministic.

Conclusion

Use the sort order which depends on the relative size of argument vs receiver. Explicit support from SYG, MM, WH, DE

Async Explicit Resource Management

Presenter: Ron Buckton (RBN)

RBN: In January, I presented the resource management proposal and requested stage 3. At the time we had conditional advancement to stage 3, pending an investigation into whether we should be using the await keyword or the async keyword as a modifier to the using declaration. The consensus and conclusion was that this condition was to be resolved no later than the March 2023 plenary; the investigation would be conducted via some informal polling, and if we had no clear winner, we would advance to stage 3 with the current syntax, which is using await. So, some informal polling occurred. SYG performed an internal informal poll at Google. I also performed one at Microsoft. There was also a broadly distributed poll via Twitter and Mastodon, provided by RPR. And just on Monday, I was notified by Hax that there was also a poll done within the JS China interest group.

RBN: So what did the poll we provided look like? Essentially, it looks like this. We asked the question: “Which declaration form most clearly expresses the following semantics: that the value of the binding x below would be disposed when control flow exits the containing block scope, and that this disposal would happen asynchronously and would be awaited before continuing?” In the Twitter poll and the Microsoft poll we provided a series of options: the current form, which is using await x = y; the C# syntax, which is await using x = y; and the alternative syntax we've been discussing, which is async using x = y. I'll go into each of these here as to what the differences are and why we're looking at them.

RBN: So using await x = y is the current proposal syntax. It uses the await keyword to indicate that an implicit await occurs at the end of the block. It places the await modifier following the statement head, which is very similar to how for await places the modifier after the for keyword. This has low likelihood of collision with await as an identifier: await is a reserved word in strict mode code, it's also reserved inside of async functions even in non-strict mode, and we recently added a lookahead restriction for the synchronous using to support this case. We had some concern with this syntax about the await being a deferred operation rather than an immediate operation, and the syntax as written may seem to indicate that we are awaiting x somehow.

RBN: And we've also discussed the await using syntax, which is the C# syntax. This again uses await to indicate that an implicit await occurs at the end of the block. There's the possibility that this might better indicate that we are awaiting the using operation, because it precedes using rather than appearing to await x. The reason this wasn't originally considered was that it requires a cover grammar for disambiguation, in that await using is already legal JavaScript, so we have to disambiguate the declaration form from an expression via a cover grammar.

RBN: Finally, we considered the async using keyword order instead. This would use async to indicate that an implicit await occurs at the end of the block. async is a contextual keyword prefix in JavaScript, much like in async function, so there's less potential for misinterpretation of whether the await is immediate. But the concern raised, by myself especially, is that async may be the wrong term: in each existing and proposed use of async in the JavaScript language — async functions, async arrows, async methods, even async do — async does not imply await, it only permits it. Declaring something async does not actually give any explicit or implicit indication that an await actually occurs. So it feels to the champion, at least, that this is possibly the wrong term to be using. It also requires a cover grammar, like the await using syntax, in that async using is also currently valid as the beginning of an async arrow.

RBN: There's a fourth option that we didn't consider, which was using async. I had a number of concerns about this, which is why it wasn't included. There's a much higher likelihood of collision with async as an identifier when refactoring than there is with something like await: async is not reserved in strict code, it's not reserved in module bodies or class blocks, and it's not reserved even inside of async functions. I also had concerns that it doesn't really align with the keyword order in ECMAScript or in any other language with similar prior art. So there was potential for confusion and refactoring issues that using await really doesn't have, because await has been reserved for quite a number of years and folks generally aren't using it as an identifier, whereas I have seen async used as an identifier in many places.
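For reference, the candidate forms under discussion look like this side by side (proposal syntax, not yet valid JavaScript; `getResource` is a hypothetical function returning an async-disposable, and only one of these forms would actually be adopted):

```javascript
// Proposal syntax, not yet valid JavaScript. `getResource` is hypothetical.
{
  using await x = getResource(); // current proposal: modifier after `using`
  await using y = getResource(); // C# ordering: modifier before `using`
  async using z = getResource(); // alternative: `async` as the modifier
  // In every variant, [Symbol.asyncDispose]() is called and awaited
  // when control flow exits the enclosing block.
}
```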

RBN: So, the results of the polls. I've gathered this information from SYG and others. Internally, SYG posted this poll; his did not include await using. There was a little bit of a miscommunication there: when I first posted the poll internally at Microsoft, I had indicated that I was concerned that having both using await and await using as separate items might be a bit confusing. And unfortunately, the polling tools we used did not offer anything in the form of ranked-choice voting. The last snapshot I got from SYG, I think from Friday right before end of business day, currently shows that at Google, async using had higher interest than using await. So at least on that side, async using seemed to be more interesting to folks at Google who work on Chrome or V8, or who heavily use JavaScript or TypeScript.

RBN: Within Microsoft, we had the exact opposite perspective. using await had about 41% of the vote from the respondents, and await using had a much higher incidence. There's again a potential bias at Microsoft towards the C# syntax, which many people there are familiar with.

RBN: We had a Twitter poll, as I mentioned, that RPR posted publicly. This had 434 respondents; some of the respondents indicated familiarity with C# as the reason for their choice. In this case, using await polled much lower than async using, but await using was ahead quite handily. RPR also performed a poll on Mastodon. Unfortunately the polling was a bit awkward, as the Elk Mastodon client at the time did not support polls, so this was done using likes. These are harder to see via the link advertised in the slides, but the Elk web application does show the likes, which makes it feasible to reference the information for the Mastodon poll. There were about 16 respondents, and they generally favored await using, but it was fairly neck and neck: I think a difference of seven respondents for async using, eight folks for await using, and one for using await.

RBN: And there were the poll results that JHX gathered from the JS China interest group, with eight respondents. A number of these respondents only expressed a preference that the keyword, whether it's await or async, come before the using declaration. Other than that, about two people were primarily interested in await using, versus one apiece for using await and async using, with the remaining respondents again only having a preference for the keyword coming first.

RBN: So, in summary: the Google poll was about two-to-one in favor of async versus await, and again did not include await using as an option. The Microsoft internal poll was 11 to 1 in favor of a syntax that included await; within the syntaxes that include await, it was about six to five, six favoring await using and five favoring using await. The Twitter poll showed about two to one in favor of await in some form versus async; within the await camp, it was again about two to one favoring await using versus using await. The Mastodon poll had a smaller number of respondents: about 9 to 7 in favor of await versus async, but 8 to 1 in favor of await coming before the using declaration in the await case.

RBN: The champion's preference: in this case, I'm starting to lean towards await using, for a number of reasons. One, it still uses the await keyword, which clearly indicates an await and preserves the sentiment that has been expressed by MM and others, specifically that await and yield should really indicate interleaving points within JavaScript. I'm again wary of introducing an inconsistent meaning of the async keyword compared to anything else in the language, and the await using keyword order does feel to me to better indicate that what we are awaiting here is something related to the using declaration rather than the x or y identifiers. It also seemed to be more strongly favored in public polls, and it matches the prior art in C#, which is also one of the inspirations behind async functions in JavaScript as it stands today. But given that, I still want to get some feedback from the committee on any specific preference, as we haven't really had a strong preference expressed by others in the committee. So I'll put this up on the screen. I don't know if we want to take some time to have folks put answers on the queue, or if we want to find a way to use a temperature check to do this.

CDA: We (IBM) support the await using form, we agree that async was a bit awkward for the reasons that you mentioned and, unscientifically, we just sort of expect the await to come first. So we kind of have a preference on that one.

MM: Just confirming what RBN said: all the concerns that I had that led me to favor using await are completely satisfied by await using. I'm very happy with that result.

KG: I still strongly prefer async using, but I don't think there's any objective way to resolve this, given that there's not overwhelming consensus among the community — which is the only thing that a poll could possibly have shown us that would have actually determined the outcome. I don't think we have much of a way to resolve this other than just picking one, and as much as I would like us to pick the thing that I like, we have to pick something. I'm okay with deferring to the champion's preference here.

JFI: RBN, you mentioned that async using would be an inconsistent usage of the async keyword. But doesn't this apply to await too? My major concern with await using is that I expect await to denote a yield point right there, and this is actually saying that it's going to potentially await later.

RBN: Well, I have two responses to that. One is that await using, I think, is still more consistent than async using would be, because async as a keyword has no similar meaning in the language: its purpose, both in things currently in the language and things that are proposed, is to indicate that a specific syntax — await — is permitted within the body of a function, or in the case of async do expressions, within the body of that block. The second thing I would say is that I would actually argue it isn't inconsistent. If we were going to make that consistency argument, I think it would have applied to for await, in that for await is interesting because it seems both immediate and deferred. When I say it seems immediate: for await (const x of y) does not await x, it awaits the result of calling the Symbol.asyncIterator method on y — much like the using declaration will, although deferred, await the result of calling Symbol.asyncDispose on exit at some point. So await in that case seems very immediate, but it is also deferred: if I write for await (const x of y), then a bunch of lines of code, and at some point an if that says break, there is also an implicit await that occurs behind the scenes, far divorced from where the declaration and the await actually occur, because we still have to await the iterator's return. So for await provides both an immediate await and an implicit deferred await. A using declaration in this case is just an extension of the implicit deferred await. So I think it is still consistent with what we have in the language today.
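RBN's point that for await has both an immediate and a deferred await can be demonstrated in today's JavaScript (a minimal sketch; the event strings are my own, chosen for illustration):

```javascript
// `for await` awaits each value immediately, and `break` additionally
// triggers an implicit, awaited call to the iterator's return() — the
// deferred await RBN describes.
async function demo() {
  const events = [];
  const y = {
    async *[Symbol.asyncIterator]() {
      try {
        yield 1;
        yield 2;
      } finally {
        // Runs (and is awaited) when the loop breaks out early.
        events.push("cleanup awaited");
      }
    },
  };
  for await (const x of y) {
    events.push(`got ${x}`);
    break; // the implicit await of return() happens here, far from the `await` keyword
  }
  return events;
}
```

Calling `demo()` resolves to `["got 1", "cleanup awaited"]`: the cleanup await happens at the break, far from where for await appears syntactically.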

JFI: Okay, thanks for the response. I have a different topic I'll throw in the queue.

SYG: I, like KG, also still prefer async using. But for await using: in your slide, you said one of the reasons you didn't originally go with it was the requirement of a cover grammar. So is that going to be hard to parse?

RBN: Let me go back to that slide. I don't believe it will be. I've had conversations with others on the committee — KG specifically, who said that we really shouldn't use the complexity of cover grammars as an excuse not to try to pick the right fit. I originally chose the `using await` ordering mostly to avoid the cover grammar. In an LR(1) grammar, when we are parsing `await using` we're trying to differentiate between `await using` and an `await` of some other expression, and in many cases we are performing that lookahead one character or one token at a time. So in LR(1) it becomes more complex, because we now need a cover grammar to differentiate: when we are deep within parsing an expression like an `await` expression, it's a bit awkward to back out and parse it again as a statement — but not impossible. Now, the interesting differentiating factor here is that there is a no-line-terminator restriction between `await` and `using`, and `using` does not allow binding patterns; therefore `await using [` can only mean you're awaiting an element of an array named `using`. So in a conventional parser — or in the TypeScript parser, for example, which is a little bit more forgiving when it comes to lookahead and doesn't enforce LR(1) — we can use single-token lookahead, because the next token after `await using` must be an identifier for it to be an `await using` declaration. In a parser that doesn't necessarily need to comply with LR(1), it's not that complex to look ahead one more token; in a parser that requires LR(1), within the grammar we specify today, we would need to implement a cover grammar to wrap this. We would also need to do the same thing for `async using`, because `async using` can be conflated with an async arrow. At the top level, `async using => {}` is valid JavaScript — I have copied and pasted it into Node and it runs fine. Therefore we would need to disambiguate that as well: once we hit `async using`, the next token would have to be an identifier and not an arrow. So I think that complexity exists with both `async using` and `await using`, and I don't think it is insurmountable.
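The async-arrow collision mentioned above can be checked in today's JavaScript: `using` is an ordinary identifier, so `async using => {}` already parses as an async arrow function, and `await using[0]` inside an async function awaits an element of an array named `using`. A minimal, runnable sketch (variable names are illustrative):

```javascript
// An async arrow function with a single parameter named `using` --
// today this is NOT a declaration, just an ordinary async arrow.
const f = async using => using + 1;

// Inside an async function, `await using[0]` awaits an element of an
// array named `using`, which is why the proposed `await using`
// declaration requires an identifier (not `[`) after `using`.
async function g() {
  const using = [Promise.resolve(42)];
  return await using[0];
}

console.log(typeof f);           // "function"
console.log(f.constructor.name); // "AsyncFunction"
```

This is why a parser needs more than one token of lookahead after `await` or `async` to tell the declaration form apart from the existing expression forms.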

USA: All right, you have nine minutes to go. Next up we have WH.

WH: There is no spec for this that I've seen so I can't tell if the solution works or not. I'm concerned about cases like await using of or await using as and so on for which even if you do have an identifier, what were you using? It's not always clear what to do. There are also possible issues with using followed by a slash.

RBN: Yeah, so we don't have a specific syntax proposal for this at the moment. I plan to look at that, and my intention would be that if we have the editors' review and show that the syntax change would be normatively equivalent to the `using await` syntax we currently support, that might be acceptable enough to still reach stage 3 pending that change. We have looked at things like `using` declarations in `for`–`of` already; we have to ban `of` as an identifier there so we don't have `for (using of of …)`, etc. So we have looked at that for `using` as well.

WH: I think this is stretching the process too far. The process requires a complete spec 10 days before trying to approve something for stage 3. And asking for conditional approval without even having seen the spec would be putting me in a very awkward situation.

DE: I think it's reasonable to ask that, if we decide here that we want to go with the `await using` approach — which I think we should resolve on as a committee to try — then we bring this back to committee to confirm the grammar.

WH: Yeah, that's my preferred approach. What I heard just now was that we might have conditional approval and I'm reluctant to do that without even having seen the spec.

DE: I'm saying I think that's reasonable, but also I don't see reason for concern about this technically I think we should be able to work out all the cases. But still it's a reasonable request.

DE: I'm next on the queue. RBN previously said maybe a temperature check, and I think it'd be great if we could resolve today whether we want to go in the direction RBN is proposing — the `await using` syntax. Would that work as a temperature check, with the prompt being: are you happy with this syntax?

RBN: Do we need a temperature check, or would it be better to just request consensus for `await using`? My biggest concern is that, for the most part, most members of the committee that have discussed this in the past really have not expressed a strong preference, and I think we did a temperature check in the last meeting and that really wasn't very fruitful. So I think it might be better to just ask for consensus on `await using` as the syntax to use going forward, and that we would make the changes needed to make this work in the future.

DE: Okay. if we're not doing a temperature check, I just want to register, I would be strongly positive. If we had a temperature check, I think it's great that you did so much detailed look into this. and I'm happy with your proposal.

RBN: At this point, then, I would like to request at least consensus on `await using`?

MM: I support.

CDA: IBM supports.

DE: yeah. well KG are you okay with this procedurally?

KG: We're just saying that we like the syntax but presumably the specification for the syntax will have to come back and get approval.

DE: Yeah. I'm just confirming that we're all on the same page about that because previously there were concerns raised about that. Does that address your concerns WH?

WH: The currently active question is which syntax people would like — let's stick to that question. I have no objection to `await using` aesthetically, but I haven't seen a spec.

DE: Okay. I'm asking because we need the conclusion to explain clearly what we decided. So, good.

SFC: All right. I think RBN did good research. I was also in the `async using` camp, but now that I've seen this research I've warmed up to the idea of `await using`, and I think we should try to do things like this more often when trying to solve these bikeshedding-type problems. So, unless there are any other objections, I believe we have consensus.

SYG: I want to talk about the parsing difficulty. We will wait for the full detailed spec, which will give us an understanding of the grammar difficulty — I think that's possibly a separate question from the parsing difficulty. What are your thoughts on the ranked choice if there is parsing difficulty? Is your opinion that we should stick with `await using` even if there is parsing difficulty, or are you on the fence enough that we could go with `using await` if it's simpler, even if it's less ideal from your current view?

RBN: If I were to rank our choices, to be honest, I would probably stick with `using await` — that's already specified, and `await using` doesn't reduce complexity in the grammar. I've already been working on TypeScript's implementation of synchronous `using` declarations and also its implementation of async `using` declarations, although since this won't have stage 3 advancement it's not likely something we will be shipping when we reach (?) TypeScript. But I have looked at the complexity of updating the TypeScript parser to support all three of these cases: `using await`, which would currently be supported; `await using` as a prefix modifier; and even `async using`. What I found was that `using await` is the only one that's really simple, because it doesn't require the cover grammar. `await using` and `async using` both require more than one token of lookahead. In the case of `await using`, it's easy to differentiate at the statement level in TypeScript, because we can look ahead to `using`, and if the next token is an identifier and there's no line terminator, then this can only be a `using` declaration. So the parsing for us isn't terribly complicated. I imagine the grammar to support LR(1) would be more complicated for both the `await using` and `async using` cases, because there is existing syntax that matches, which we would have to disambiguate.

SYG: Okay. I understand we're out of time, but I do think it's important to resolve this. When you asked for consensus — "do we have consensus for `await using` versus `using await`?" — I don't know whether people took that to mean "do you prefer `await using`" or "can you live with `await using`". Because if, all things being equal, it comes back to the fact that we can live with both number one and number two, and simplicity might favor number one, do we also still have consensus for number one? I'm not sure what the outcome is if we can live with number two. If we can all live with number one as well, I would prefer that we just do the simpler thing.

RBN: I think that's a fair question to ask. Can we potentially extend the time box by five minutes to talk about this? [yes] Okay. So we know that we have consensus on `await using` as an option. Instead, I would like to ask the committee if there is anyone that would object to leaving the syntax as currently proposed — `using await` — as it avoids the parser complexity that there is concern about.

DE: Sorry — do you mean this as a fallback option if we discover significant parsing complexity, or do you mean that we've already identified the parsing complexity and want to make this opposite resolution now?

SYG: I think that, from what I understand of the question, it is to determine whether we are all okay with `await using`, since not all people are happy — and whether we are also maybe okay with keeping `using await`, since that is the simpler parsing option. If so, then that syntax is already well defined and we could potentially advance to stage 3.

USA: I think it would be much simpler if you put forward your preferred outcome and then asked for consensus.

RBN: I'll be honest, my preferred outcome is achieving stage 3.

SYG: I ask for consensus on sticking with number one (`using await`), the rationale being that there is a spec and the simplicity of parsing.

SFC: I think RBN made a compelling case for number two (`await using`) in this presentation; he presented us with evidence in support of option two. Arguments in favor of option one seem a bit theoretical at this point — we don't know for sure the impact on parsing complexity. I would also point out that there's not a single poll in which option one beat option two, and option three, I think, was fairly consistently the second choice in most of the polls we were shown. And in terms of the order of constituencies, this is already a potentially very confusing thing for developers, so we should probably weigh what's best for developers here, and I think there's a fairly strong signal that number two is fairly good for developers. I would not be comfortable with us saying option one is a fallback, because that's not what the evidence I've seen coming into this call indicates. But I speak for myself, not for Google.

RBN: I'd like to respond, if I can. I agree that that's probably the preferred outcome — `await using` is, I think, much clearer — and I am not opposed to spending the intervening time between this and the next plenary session investigating the syntax and grammar that's necessary to make this work. As I've said, I've already investigated the parsing complexity, at least in TypeScript. I'd be happy to hear feedback from other implementers if they think there would be concern for this within their engines.

DE: Briefly, I agree, and I would be very uncomfortable if we made a decision based on a potential delay of just one meeting. So I'd prefer we stick to the conclusion as articulated.

USA: We are on time, so unfortunately we cannot go any further. I hope it's okay to defer this — I think we need a conclusion.

DE: I previously noted the conclusion that we would try for `await using` and bring the grammar back next meeting. Should that be the conclusion, or should we make it an overflow item to come back to, if we want to resolve in the direction SYG proposed?

SYG: I think there is no consensus on the thing I called for. I can certainly live with number two, so I think your summary stands: we come back next meeting, and everyone thinks that, pending the full grammar and possible parsing complexity, number two is the preferred outcome.

DE: I would also note that number one is a roughly agreeable fallback in case we discover technical issues with number two. Is that accurate to write as part of the conclusion?

SFC: I don't think that's been discussed.

DE: Okay, so I won't record that. Thank you.

RBN: I would note, though — again, this will require investigation — that if we find that `await using`'s parsing complexity is too involved for it to be the direction we go, it is very likely that `async using` will suffer the same fate, as it has the same complexity: in both cases we're looking at something at a statement level versus something at an expression level, and they require the same amount of lookahead. I'll investigate both, but it's very likely that if `await using` is not viable, then `async using` will also not be viable.

DE: Sorry, did I say it wrong? I meant to say that `using await` would be the fallback.

RBN: I agree that's what you said — but can we conclude that? I think the statement being made was that it's not that if `await using` isn't viable we just fall back to `using await`; rather, we would need to come to this decision again. And I wanted to make the point that if `await using` isn't viable, it's likely that number three also isn't viable, which means we might still be falling back to number one as the only alternative.

MM: I think, since we're going to investigate something that we don't currently know, we should discuss it again. If that investigation says that we can't do `await using`, then we should discuss it again, rather than trying to predict the nature of the surprise so that we can make a decision before actually having the surprise.

Summary

Various grammars for async resource disposal were considered, including results from polls. The champion's preference became await using, and several delegates were swayed to prefer this option based on the data and arguments presented. There are concerns about the parse-ability of await using, both based on practical implementations and the fit into the ES spec's cover grammars; it's unclear if certain edge cases will be easy to manage. If await using isn't viable, it's likely that async using isn't viable, and the committee may come back to the conclusion of using await, but this will need to come back to plenary for future discussion.

Conclusion

The committee resolves to attempt the syntax await using. The grammar will need to be worked out in a PR, which will need to be presented in a future plenary for review and consensus.

Decorator: Normative update

Presenter: Kristen Hewell Garrett (KHG)

KHG: So the first item here is a few normative updates to the spec for decorators. These are all relatively small changes. so I kind of bundled them all together. There's six in total and well we can just go through each one individually and talk about it.

KHG: (PR, Issue) So the first one is removing the dynamic assignment of the home object for decorated methods. You can see what this actually looks like here. Currently we call MakeMethod on the decorated function — the second time MakeMethod would be called on this function — and that sets the home object dynamically for that method. That's a very new thing that's never been done before in the spec, and I believe the reason I did that originally was just that I didn't understand the full implications of MakeMethod; I just kind of cargo-culted the thing along. We've had a few people point out that this can result in very confusing behavior: rebinding `super` in general seems really weird, seems like a bad idea, and doesn't have any really good use cases. So we want to just remove this and no longer rebind `super`. Do we have consensus for that change?

RPR: There's no one on the queue.

DE: Yes, I support that change, although the committee has in the past thought about making an API for MakeMethod — this feature had all the cost of that without the expressiveness. That's it for this change.

KHG: Cool. awesome. Okay, So I'm going to assume that that one is okay and we will move forward.

SYG: +1 yes please we don't know how to implement it otherwise

JHD: +1

KHG: (PR, Issue) The next one is calling decorators with their natural `this` value — whatever that would be; that would be the receiver — instead of `undefined`. Currently in the spec we always call the decorator function with `undefined`. I think that's an oversight, because I never really intended for that to be the case. So, for instance, if we call `foo.bar`, this is an invocation of a function, so the receiver — the `this` value — should be `foo` here; and if you bound `this` for `bar` at some point, it should have the `this` value it was bound to. Decorators should basically work just like a normal function — that is the whole mental model of decorators. So, yeah, not sure why this happened in the first place, but that would be the change.
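A plain-JavaScript sketch of the receiver semantics described here (the `logger` object and its `logged` method are hypothetical, and the decorator is invoked by hand rather than with `@` syntax): under the fix, a decorator referenced as `logger.logged` is called with `logger` as its `this` value, like any other method call.

```javascript
// A hypothetical decorator kept as a method on an object, so it can
// read sibling state through `this` -- possible only if decorators are
// invoked with their natural receiver instead of `undefined`.
const logger = {
  prefix: '[log] ',
  logged(value, context) {
    const prefix = this.prefix; // requires `this` to be `logger`
    return function (...args) {
      console.log(prefix + context.name);
      return value.apply(this, args);
    };
  },
};

// Simulating how the engine would invoke `@logger.logged` after the
// fix: as a method call on `logger`.
const add = (a, b) => a + b;
const decorated = logger.logged(add, { name: 'add' });

console.log(decorated(2, 3)); // logs "[log] add", then 5
```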

DE: +1 good bug fix

DLM: +1 I agree with this.

KHG: Any other comments on that one?

RPR: Nothing. in the room. Nothing on the Queue. So I think, you have consensus on that item.

KHG: Perfect. Okay. (PR) Number three is just a new validation that we would do. `addInitializer` is a function on the context object for adding an initializer function, and currently it can receive any value; it doesn't assert that it's a function. So we just want to add a step that causes it to throw an error if the value is not callable. That's basically all this one is: the behavior is really undefined otherwise, because the spec basically assumes it's a function after that point, so it definitely should throw — or I don't know what will happen. Okay, any comments on that one? Dan, did you want to speak?
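A sketch of the validation being added (this models the spec step in plain JavaScript; `makeAddInitializer` is a hypothetical stand-in for the engine-created `context.addInitializer`):

```javascript
// Hypothetical stand-in for the context.addInitializer machinery.
function makeAddInitializer(initializers) {
  return function addInitializer(initializer) {
    // The new validation step: throw early if the value is not callable.
    if (typeof initializer !== 'function') {
      throw new TypeError('addInitializer expects a callable value');
    }
    initializers.push(initializer);
  };
}

const initializers = [];
const addInitializer = makeAddInitializer(initializers);

addInitializer(() => console.log('ran')); // accepted
try {
  addInitializer(42);                     // rejected up front
} catch (e) {
  console.log(e instanceof TypeError);    // true
}
console.log(initializers.length);         // 1
```

Throwing at registration time surfaces the bug where it happens, instead of failing later when the initializers run.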

DLM: Yeah, looks good to me.

RPR: Right. Any corrections or the comments on number three? silence. You have consensus on Number three, perfect.

KHG: (PR) Number four is setting the name of the `addInitializer` function itself; currently it just does not have a name set — it would be `undefined` or an empty string. Again, I think this was an oversight: the spec is very large and it was the first time I was writing it, so I made a bunch of little mistakes. It seems like the thing we should do, because that's generally what you do with functions in JavaScript. So, does everyone agree?

DE: +1

NRO: +1

KHG: OK, I'll take that as consensus. So now we get to this one. I actually originally had a PR up to make this change in the opposite direction, which was to allow function names to be dynamically reassigned for decorated methods. The logic was that if you have a method on a class and you decorate it, it's always going to have that new decorated function's name — so you could decorate three things and they would all be named `x`, if the function returned from the decorator was `x`. Originally my logic was: as a user, I'm going to want to be able to see what the original function names were. But then a bunch of folks pointed out that this would actually really mess up stack traces, because then you would have the decorated function calling the original function and they would both have the same name, which would be kind of confusing. We would have to kind of rethink how SetFunctionName works and everything for that as well. And somebody mentioned we could actually have this be something that is non-normative — I don't know if error text is non-normative — something that stack traces insert, in order to make it easier for people to read which method is actually being called. Any clarifications needed for that one?

ACE: Sorry, I’m not on the queue, but didn't follow what's being proposed exactly.

KHG: Okay. Let me see if I can make an example real quick — actually, I think RBN has an example here. Currently, when a, b, and c get decorated, we return this anonymous function right here, so a, b, and c — those names would be the name of whatever this anonymous function is. And what that leads to is really odd stack traces, because you have two a's or two b's — you know, this function is going to call the original function — and it's just kind of confusing behavior. So the solution would be: if you want to redefine the name to match the original name, you can do that yourself manually, and otherwise it'll just be this new decorated method's name. So every function here would have the name `x` by default. Okay, thanks.
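A plain-JavaScript sketch of the behavior being settled on (the `log` wrapper is hypothetical, and decorator application is simulated with a direct call): the wrapper keeps its own name unless the decorator author restores the original name manually.

```javascript
// A hypothetical method decorator written as a plain function.
function log(value, context) {
  const wrapped = function (...args) {
    console.log(`calling ${context.name}`);
    return value.apply(this, args);
  };
  // Without this line every decorated method would expose the wrapper's
  // own name ("wrapped"); restoring the original name is now opt-in.
  Object.defineProperty(wrapped, 'name', { value: value.name });
  return wrapped;
}

function a() { return 1; }
const decoratedA = log(a, { name: 'a' });

console.log(decoratedA.name); // "a" -- restored manually
console.log(decoratedA());    // logs "calling a", then 1
```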

ACE: So the proposal is we don't do anything special with the name — the name of the function is just the name of the function, as it would be anywhere else, right?

KHG: Yes. Any other? questions?

ACE: I'll just add: that is what I've been doing with decorators already, setting the name myself manually. So it seems good.

KHG: Does anyone object?

MAH: I was wondering whether it would make sense to conditionally set the name to the original method name if the original function is anonymous. So if it has an empty name, the decorator machinery would automatically set the name to the name of the method that was decorated.

LCA: What I'm saying is: what if you create an anonymous function outside of the decorator function itself, and return that same function from decorators over multiple methods? Then you'd be renaming that function to whatever the last decoration is, right?

MAH: Yeah. but that would only happen the first time I guess but yeah, that would be problematic.

KHG: It sounds like we agree that would be a bad idea.

RBN: Yeah, I was just going to say the same thing, and that was illustrated — although not with an unnamed function — in the example that KHG had up a moment ago with `noop`. If that were a function that did not have an assigned name and I were using it the same way, then the difference is that instead of the calls resulting in logging `a c c`, it would log `a b b`, and we'd still end up in that same situation. So I still don't think that's viable.

SYG: (from queue): Prefer no conditional setting. "Things that look declarative ought to be declarative"

KHG: All right. so it sounds like consensus to remove set function name and dynamically setting the name. Great. Yes. Cool.

KHG: Okay. (PR pending, issue) The last issue here is a bit more involved. Basically, we've added this new `accessor` keyword, which, as everybody remembers, is basically a way to define a getter and a setter on the class in a single statement, backed by private storage. The idea is that it works like a kind of field, but via decoration — and potentially in the future via other property syntax — you would be able to intercept that get and that set and replace them. On class instances this works just fine: it basically desugars to something like this (imagine it without the `static`) — a private field plus a getter and a setter that access that private field. Cool. But once we add `static`, we run into a problem. Instance class fields, including instance private fields, get defined on every instance, so they're inherited; but static private fields get defined just on the class itself, like normal fields, and statics are inherited on subclasses via shadowing or prototypical inheritance, whichever it is. What this results in is that if somebody tries to access the accessor on a child class here, it will throw an error, because the private field can't be accessed on the child class — it only exists on the superclass. This just makes inheritance not work at all for static accessors. The proposed change is that instead of having it desugar, essentially, to `this.#x`, we would have it desugar to a direct reference to the class itself — which is basically how you would solve this problem if you were using static private fields today and accessing them with a getter and setter: you would just replace `this` with the class name, and things would work. So that's basically the proposal. Any questions about that?
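The static-inheritance hazard can be reproduced today with plain static private fields, no decorators involved; this sketch contrasts the `this.#x` desugaring with the proposed direct class reference:

```javascript
class Broken {
  static #x = 1;
  // When invoked as `BrokenSub.value`, `this` is the subclass, and the
  // subclass does not have the #x private field installed on it.
  static get value() { return this.#x; }
}
class BrokenSub extends Broken {}

class Fixed {
  static #x = 1;
  // Referencing the class directly always finds the private field.
  static get value() { return Fixed.#x; }
}
class FixedSub extends Fixed {}

console.log(Broken.value);             // 1
try {
  BrokenSub.value;                     // throws
} catch (e) {
  console.log(e instanceof TypeError); // true
}
console.log(FixedSub.value);           // 1
```

The proposed desugaring for static `accessor` follows the `Fixed` pattern, so subclasses can use the inherited getter and setter.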

DE: Okay, so I'm getting a plus one. This is a good fix. We spent two years discussing this case, about what the static private semantics should be. I don't know if we need improved documentation or something? Anyway, the fix seems correct.

RBN: I just want to make sure — and I regret that there is no PR yet that addresses fixing this — that when we're talking about this, we're clear what `A` means in `return A.#x`. If `A` itself is decorated with something that replaces the constructor, is this the value of `A` at the end of decoration, or the value of `A` before any decorators are run? I want to make sure we're not making that confusion, because that would be just as bad as `this`.

KHG: I believe it will be the same `A` that private field accesses use in general. I believe all instances of `A` get rebound to the return value of the decorators that are applied to the class itself. I'm pretty sure what you would need is to make sure that it is whatever value the private field is installed on.

RBN: Yeah, my comment is correct — it should be the decorated class. We had this discussion before about the fact that all private static fields end up installed on the decorated version; otherwise static methods don't work. So yes, that's correct. I'm sorry — it should be whatever the final version of that class binding is.

KHG: Yes. And that is the other thing we tried to preserve with decorators: you wouldn't ever end up in a split world where you'd have one reference to `A` meaning one thing and another reference meaning the undecorated thing. So I'm pretty sure all references mean the decorated thing — I'd have to look again to be 100% sure, but I'm pretty sure. Okay — I remember the reason why people didn't want that to happen. The reason was they wanted decorators to run after fields were assigned, which would have forced fields to run in the un-decorated world. But we solved that with class initializers that can run after fields have been assigned. Okay, any other topics on the queue?

NRO: Yes — I assume the PR for this will make it so the private field is only on the decorated class, since as discussed that's the only option that works. I would be happy to review the spec text for this PR.

KHG: Yes — you can look at the spec text in general for that; the binding has been rebound and everything properly in the spec currently, so we would just continue through with that.

KHG: Do we have a consensus for this change? Pending the actual spec update?

RPR: Any voices of support for this?

DE: Yes, I support this

JHD: +1

RPR: All right, any objections? Doesn't sound like it. So yeah, congratulations — you have consensus on number 6 as well.

KHG: Awesome.

Conclusion

Consensus of all 6 changes:

1. Remove the dynamic assignment of `[[HomeObject]]` from decorator method application/evaluation ([PR](https://github.com/pzuraq/ecma262/pull/5), [Issue](https://github.com/tc39/proposal-decorators/issues/497))
2. Call decorators with their natural `this` value instead of `undefined` ([PR](https://github.com/pzuraq/ecma262/pull/6), [Issue](https://github.com/tc39/proposal-decorators/issues/487))
3. Throw an error if the value passed to `addInitializer` is not callable ([PR](https://github.com/pzuraq/ecma262/pull/7))
4. Set the name of the `addInitializer` function ([PR](https://github.com/pzuraq/ecma262/pull/8))
5. Remove `SetFunctionName` from decoration (PR pending)
6. "Bind" static accessors directly to the class itself. (PR pending, [issue](https://github.com/tc39/proposal-decorators/issues/468)). Pending updated spec text.

Decorator Metadata Update

Presenter: Kristen Hewell Garrett (KHG)

KHG: So, where we left off last time with decorator metadata: we broke it out from the decorators proposal, which was at stage 2, so decorator metadata started at stage 2. We ended up having an incubator call where we discussed it, and we all came to the conclusion on that call that metadata is definitely needed and there are some valid use cases for it, and we came up with a basic strategy for how we wanted to pursue metadata. However, there has been some debate over exactly how it should be designed and implemented. So today I'm going to talk about a few different options: the current proposal, which is my preferred option; a more minimal version that would be a little bit more restrictive in some ways; and then what I see as kind of a compromise solution.

KHG: Quick refresher: why is metadata useful? Well, it's used for a lot of things: dependency injection, ORMs, runtime type information for various type-checking frameworks and frameworks that use that type information, serialization, unit testing, routing, debugging, and membranes. And to top it off, `reflect-metadata` is the single most used decorator library, which definitely suggests that this is a useful pattern overall.

KHG: How metadata used to work: in legacy TypeScript or Babel decorators, decorators would receive the class itself directly, and because they received the class itself, we could do things like place a value directly on the class — you could define `__types`, or a types symbol, and put your types on the class directly — or you could use a WeakMap to associate metadata with the class in a way that was private. So this gave people a lot of flexibility when defining metadata. However, this is no longer possible, because in order to make sure that decoration was as static as possible and didn't allow people to change the shape of a class dynamically, we no longer give people the reference to the class itself. In fact, we don't give them a reference to any value that they can tie back to that class. So there really is no way, with the stage 3 decorators proposal, that you can currently side-channel any metadata specific to that class in an ergonomic way — in a way that would be one-to-one with what was previously possible. There have been some proposals where you would maybe create an entangled decorator — have that decorator itself hold the metadata — but that has been called out as having way too much boilerplate basically every time it's been brought up. So that's why we're adding metadata as a first-class supported thing.
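A sketch of the two legacy patterns in plain JavaScript (`recordTypes` and `__types` are illustrative names; the legacy decorator is simulated by calling it directly with the class, as `@recordTypes class User {}` would have done):

```javascript
const typeInfo = new WeakMap();

// A hypothetical legacy-style class decorator: it receives the class
// itself, so it can attach metadata directly or via a WeakMap.
function recordTypes(klass) {
  klass.__types = { name: 'string' };      // public: directly on the class
  typeInfo.set(klass, { name: 'string' }); // private: WeakMap-keyed
  return klass;
}

class User {}
recordTypes(User); // legacy equivalent of decorating the class

console.log(User.__types.name);       // "string"
console.log(typeInfo.get(User).name); // "string"
```

Both patterns depend on having the class value in hand, which stage 3 decorators no longer provide.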

KHG: So let's talk about the current proposal. Basically, what we would do is pass in a plain JavaScript object called `metadata` on the decorator context object. Every decorator applied to the class would receive the same object, and they would be able to define whatever they wanted on that object. Then, at the end of class definition, that object would be assigned to `Symbol.metadata` on the class itself. In addition, this metadata object would inherit from any parent class's metadata object: prior to decoration, we would look up `Symbol.metadata` on the parent class and `Object.create` the new metadata object from the parent class's `Symbol.metadata` object if it existed, and the child class would then be able to look up parent class metadata using standard prototypical inheritance.
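A hand-rolled simulation of the mechanics just described (assumptions: `defineWithMetadata` stands in for the engine's class-definition steps, and a registered symbol substitutes for `Symbol.metadata` where the runtime lacks it):

```javascript
// Use the well-known symbol if present, otherwise a registered stand-in.
const METADATA = Symbol.metadata ?? Symbol.for('Symbol.metadata');

// Hand-rolled version of what the engine would do at class definition.
function defineWithMetadata(klass, parent, decorators) {
  // The metadata object inherits from the parent class's metadata.
  const parentMeta = parent ? parent[METADATA] : null;
  const metadata = Object.create(parentMeta);
  for (const dec of decorators) dec(klass, { metadata });
  Object.defineProperty(klass, METADATA, { value: metadata });
  return klass;
}

class Base {}
defineWithMetadata(Base, null, [(k, ctx) => { ctx.metadata.role = 'base'; }]);

class Child extends Base {}
defineWithMetadata(Child, Base, [(k, ctx) => { ctx.metadata.extra = true; }]);

console.log(Child[METADATA].extra); // true
console.log(Child[METADATA].role);  // "base" -- via the prototype chain
```

Because child metadata is created with `Object.create`, child keys shadow parent keys while parent metadata stays reachable through the prototype chain.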

KHG: So the pros of this approach: it's very simple and straightforward. We can do public metadata — if a decorator author wants to share, for instance, type information or validation information, they can share it in a public way: they can create a well-known key, either a symbol or a string key, and other decorators can then read that information and use it. This is something we see a lot of in the ecosystem. At the same time, you can create truly private metadata by using a WeakMap: you would use the metadata object itself as the key in the WeakMap, and then it would work similarly to how it worked previously. And then we have inheritance that works just like prototypical inheritance: by default things shadow, and it's pretty easy to override metadata on a child class, but just like with prototypical inheritance, you can manually crawl the prototype chain to find out what the parent class's metadata was. The major con with this approach that has been pointed out is that it creates a shared namespace — effectively a global namespace — where anybody could and will add string names to this object, and it could become a space where people are fighting for particular names or decorators are stepping on each other's toes, causing weird undefined behavior and things breaking in other places.
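The truly-private pattern from the pros above, sketched in plain JavaScript (the `metadata` object stands in for `context.metadata`, and `secretDecorator` is hypothetical):

```javascript
// Private, per-decorator metadata: the shared metadata object is used
// only as a WeakMap key, so nothing appears in the shared namespace.
const privateStore = new WeakMap();

function secretDecorator(value, context) {
  privateStore.set(context.metadata, { secret: 42 });
  return value;
}

// Simulating a class's metadata object and one decorator application.
const metadata = {};
secretDecorator(class {}, { metadata });

// Only code holding `privateStore` can read the metadata back.
console.log(privateStore.get(metadata).secret); // 42
console.log(Object.keys(metadata).length);      // 0
```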

KHG: So, with that in mind, we come to option 2. The option 2 idea is similar: we pass in that metadata object on context. However, it is a frozen object, and as a frozen object it is only meant to be used as a key in a WeakMap. It would also have a parent property, so you could look up parent metadata as well. This would basically force people into using private metadata all the time, and there would be no way to have a shared namespace. The pros are that it is private by default, it generally encourages what is probably the best practice by default, and there is no shared global namespace for people to accidentally collide in. The cons are that there is really no way to share public metadata with this setup. We've discussed this: supporters of this particular solution have pointed out that if you wanted to make public metadata, you could export a WeakMap, or an API to look up the metadata for a decorator. My personal worry there is that that just exposes the intricacies and details of the build systems that are exposing those modules. We already live in a world where duplication is very common, and it is possible to get multiple copies of the same library in a single build of the app. If they haven't all been fully deduplicated, you might end up with a split world where metadata that is logically part of the same set can only be accessed partially from one part of the app or the other. And if that's the case, you might say, well, you could just put the metadata on the window, and make sure that every instance of the library that is exposing it shares all of its state. But that brings us right back to where we were before: we have a shared global namespace. What's worse is that that doesn't work in secure ECMAScript contexts, because you're not allowed to put things on window.
So I'm not sure that it would be better to push people in that direction. In addition, inheritance is a little bit trickier to use, but that's not a major driver for me personally; I'm more worried about the complexity that this solution would introduce for sharing public metadata.
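A minimal sketch of the Option 2 shape, assuming the frozen key object carries a `parent` link as described; the `lookup` helper illustrates the export-an-API sharing pattern mentioned above and is not spec text:

```javascript
// Under Option 2, the metadata value is a frozen object whose only roles
// are (a) serving as a WeakMap key and (b) linking to the parent class's
// metadata via `parent`. These objects are illustrative stand-ins.
const parentMetadata = Object.freeze({ parent: null });
const childMetadata = Object.freeze({ parent: parentMetadata });

// A decorator library keeps its data in a module-scoped WeakMap.
const store = new WeakMap();
store.set(parentMetadata, { injectable: true });

// "Public" sharing requires exporting a lookup function instead of
// reading well-known keys; here it also walks the parent chain.
function lookup(metadata) {
  for (let m = metadata; m !== null; m = m.parent) {
    if (store.has(m)) return store.get(m);
  }
  return undefined;
}

lookup(childMetadata); // found via the parent chain: { injectable: true }
```

Note that any consumer who gets a different copy of `store` (a duplicated module instance) would see nothing, which is the split-world hazard KHG raises.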

KHG: Okay, option three. The idea behind option three is basically to have it be the same as option one in terms of what actually gets exposed: it's an object, it has inheritance, and it's just a plain JavaScript object, not frozen. But we would guard access to that object with getter and setter functions on the context that the decorator receives, and those functions would force users to only use symbols as keys. This would help prevent collisions, because by default, when users make a new symbol (unless they're using Symbol.for), they will be making a unique symbol. And unless they export that symbol, the only way for anybody else to get it would be to use Object.getOwnPropertySymbols, and that would have to be used on a class after being decorated. It's multiple steps, and it makes it just a bit more inconvenient to use a well-known name and accidentally have collisions. Other than that, it's basically exactly the same as option one. The idea is that it addresses some of the concerns: it encourages people to avoid the collision issues, but still allows people to intentionally share public metadata when they want to. That's pretty much all three options.
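How Option 3's guarded access might look, as a sketch; `makeMetadataContext` is a hypothetical helper, not the proposal's actual machinery:

```javascript
// Hypothetical illustration of Option 3: the context exposes
// getMetadata/setMetadata, which accept only symbol keys, and the backing
// object is never handed out directly to decorators.
function makeMetadataContext(backing) {
  return {
    setMetadata(key, value) {
      if (typeof key !== "symbol") throw new TypeError("key must be a symbol");
      backing[key] = value;
    },
    getMetadata(key) {
      if (typeof key !== "symbol") throw new TypeError("key must be a symbol");
      return backing[key];
    },
  };
}

const backing = {};
const context = makeMetadataContext(backing);

const kind = Symbol("kind"); // unique unless deliberately exported/shared
context.setMetadata(kind, "service");
context.getMetadata(kind); // "service"

// String keys are rejected, which prevents accidental name collisions:
// context.setMetadata("kind", "oops"); // would throw a TypeError
```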

MM: Okay, so I have a question that's mostly about option number one. In normal use, for embedded systems where you're concerned about putting things in ROM, and for security where you're concerned about hardening things, many objects get frozen. For options one and three, you've got an object that's mutable during initialization. If I understand correctly, this proposal does not, and given the nature of the proposal should not, specify that after the class is initialized the metadata object gets transitively frozen. My question is: in an environment in which something else would transitively freeze the metadata object after the class is initialized, do you expect that the patterns of usage you've seen, and that you expect people to use the metadata object for, would or would not get broken? Basically, transitively freezing the metadata object at the same time that one is transitively freezing the class itself, the class prototype, and the methods of the class. Transitively freezing the class means, of course, freezing the things that are reachable from the class, which would include the metadata object, right?

KHG: I would not expect that to cause any breakage. Metadata is typically used in a very static way; the values that are provided on there are not mutated after the fact, they really just define declaratively what's expected of the class (or whatever) by a particular decorator. The only usage I've seen that is mildly problematic in these cases is where two decorators choose the same key on metadata and collide, but I've never seen the metadata object used directly as a mutable store of information.

MM: And then with regard to option three: you said that the context object would enforce, with accessor functions, that you could only set symbol-named properties on the metadata object. I initially misread the proposal as implying that for option three the metadata object would need to be exotic in order to allow the creation of symbol-named properties but not string-named properties, and you're not saying that. So how would the context object enforce this without the context object itself needing to be something exotic, or a proxy?

KHG: So, the context object would just have two functions, setMetadata and getMetadata. Decorators would not have direct access to the object that gets defined on Symbol.metadata; they would use context.setMetadata to set a key on the object backing it. If they wanted to use a WeakMap, they could either set the key to a value that is a symbol and then use that as the WeakMap key (now that we can), or just set it to an object and use that object as the WeakMap key. So it would be one more step for people who want to use WeakMaps for privacy, but it's not that much overhead.

MM: Okay, thank you.

MM: I'll refrain from stating an opinion at this time, but you've answered everything that would have been an objection to any of these three proposals. Yes.

KG: So Chris knows this, but for everyone else: I am very strongly in favor of option 2 over option 1, and in particular the shared namespace aspect of option 1 seems like it ought to be fatal to me. I understand not everyone shares that intuition, but for me, the con is: we are introducing a new global shared namespace. I would prefer to kill most proposals over having a new global shared namespace. For option 2, there are two listed cons. One was that it makes inheritance trickier, which I kind of agree with. The other is that it doesn't allow you to share public metadata, but of course it does allow you to share public metadata, as Chris pointed out: you share the way that you always share data between libraries, which is that you export a function that allows people to look up the data. The downside of this, as again Chris pointed out, is that you might have multiple versions of a library built into your application, and they would have different versions of the metadata. But to my mind, this is an actively good thing. This is a property that we want. The whole reason that the module system in, for example, npm works the way it does is that the old requirement of a globally unique version of every library, where things break as soon as you include multiple versions of a library, was a very bad situation. The situation that we like is where two different things can import from different versions of the same library, both copies of the library can be in the same application, and that works. I don't have to worry that upgrading version 1.0 to version 2.0, which changes how the type property on the metadata works, breaks everything, because one library that expects version 1 of the type metadata and another that expects version 2 can coexist.
With a shared namespace, those fundamentally cannot interoperate, because they are both working in the same shared namespace. That is a bad situation, which we should avoid, and we avoid it the way we always avoid it: you use imports and exports and the build system just wires things up for you. So, I really don't like having a shared global namespace. I think that is a very bad thing, and importing and exporting works great for everyone except TypeScript, which has a genuinely unique constraint. But you do get shared metadata with option two: you just export a thing that lets you share, the same way you always share.
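The export-based sharing pattern KG describes could look something like this sketch (module boundaries are collapsed into one file for illustration, and all names are hypothetical):

```javascript
// --- inside the library module ---
// The library keeps its metadata in a module-scoped WeakMap and exports
// read/write functions, so consumers share via imports rather than via a
// global namespace. (In a real module: export { setTypeMetadata, ... }.)
const typeMetadata = new WeakMap();

function setTypeMetadata(metadataKey, info) {
  typeMetadata.set(metadataKey, info);
}

function getTypeMetadata(metadataKey) {
  return typeMetadata.get(metadataKey);
}

// --- in a consumer ---
// Under Option 2, the key would be the frozen metadata object a decorator
// receives; this stand-in object simulates that.
const someClassMetadata = Object.freeze({ parent: null });
setTypeMetadata(someClassMetadata, { fields: ["id", "name"] });
getTypeMetadata(someClassMetadata).fields; // ["id", "name"]
```

Two copies of this module would each have their own `typeMetadata` WeakMap, which is precisely the duplication behavior being debated: KG argues it is the desirable, familiar semantics; KHG argues it splits logically unified metadata.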

KHG: I do want to respond to that. It's not as simple as you make it seem. A great example of where this could absolutely fall down very easily is dependency injection. If you have a single container for your app, which is the entire point of dependency injection, and that container gets a version of the metadata that does not fully describe everything that is supposed to be injected, your entire app will fall apart. You can easily get into that situation if you have a minor version difference between two libraries (not a major version, even just a minor version) and the build tool bundles two separate minor versions. Build tools make these types of decisions all the time. This is not spec'd; we do not have a spec for when you deduplicate modules and when you don't. So you are basically pushing this problem to modules, and saying, okay, it's up to the build system to figure out, and now everybody has to learn all the details of the build system and deduplicate everything just so they can have their dependency injection.

KG: So you have exactly the opposite problem with option one, which is that now, if I have a version bump that should not have been breaking, because I only needed the new version in some separate part of the application, the format of the public namespace has changed, and now I can't have two versions of the library at all. The thing that previously worked no longer does. Isn't that exactly the same problem?

KHG: This is one of the things that, when you're working on a dependency injection framework or something like that, you're very careful about, because you're aware of those changes.

KG: Okay, so if the solution to this problem is that people should be very careful, we shouldn't produce a new thing they have to be very careful about. They should just live in the world that they're already in, where deduplication and duplication of libraries is a thing that comes up. That's a thing that is familiar to people already.

KHG: And that is a thing that doesn't necessarily even have a solution; some build tools don't give you the option to deduplicate things properly, and then the names will just collide.

KG: That is not a solution to the problem. "You only get deduplication" is no more of a solution to this problem than "deduplication works for metadata the way that it already works in the rest of your application".

DE: There are a lot of people in the queue. You made a good point.

RBN: So my concern (and KG has articulated it as being a TypeScript-specific concern, but it really isn't) is not about library deduplication, although I do think that is a valid concern. My concern is that Script is a valid parsing goal. I don't have numbers, but my intuition is that the large majority of existing JavaScript code run on the web is still scripts, not modules. A system that depends purely on the concept of public metadata being only available with modules is not actually public. The TypeScript-specific example is that TypeScript would like to be able, at compile time, as a compiler option, to inject a decorator that adds metadata about the types of the things that you are decorating. This is the TypeScript emitDecoratorMetadata flag that we've had since 2015, and we'd like to be able to bring some of that capability forward; that is one of the main motivating reasons that we even introduced metadata as part of this proposal. For TypeScript to be able to work correctly and inject a decorator that can attach metadata in a script environment, I cannot depend on a module. Depending on a module would require awaiting an import, because I can't necessarily statically import, and I can't break the user's expected semantics of code that does not currently await by randomly injecting an await; the class declaration could also be in the middle of a synchronous function. So that means we cannot depend on modules. If the only solution is to say that public metadata is only valid if you're using modules,
that completely cuts off a very large percentage of people who are currently using JavaScript with scripts today, in bundlers, etc. It says this problem can only be solved very far down the line, when everybody has moved to modules, and I don't think that is a very strong position. While I completely understand the concern about having a namespace that allows for possible collision, having a mutable object does not prevent you from using WeakMaps to resolve those potential collisions. But option two does prevent well-crafted, well-written code that can use a namespace like this successfully from supporting a very important feature in a script environment, where you don't have access to modules. The other concern that we have (we've kind of rehashed this over and over again on GitHub) is that every alternative solution that's been presented requires significant amounts of rewriting to support it, and we really don't think that's a viable option. So our strong position is that option one does not prevent you from having the type of isolation you'd like out of option two, but option two completely prevents the types of behaviors we'd like to be able to use in a script environment. So we believe option two is completely a non-starter.

RPR: Okay. On the time box: we only have four minutes left, but given that we got through the first item, I'm happy to let this run till 4:30, so that's 13 minutes. There's still a lot on the queue, so please keep that in mind. Next is Justin; Justin's point is about a practical example of library duplication.

JFI: Yeah, I want to point out that the library duplication problem, and solving it, has dangers on both sides. Yes, if your library chooses to use the mutable metadata object and chooses to use an unversioned, generic key, you can run into collision problems. On the other hand, if a library naively uses the metadata object as a key into a WeakMap, things can break, and in my experience break more frequently than the danger of collisions, because it's just so easy to get into situations where you have duplication. Maybe you're importing your decorator from a library that for some reason got its own copy of the base class of the thing you're decorating installed, so that chain in the dependency tree is using one module that has the metadata WeakMap in it, while the base class the user is decorating imports a different version and so is using a different WeakMap. And now the class, when it boots up, cannot see the metadata that the decorator applied, because they're in two different WeakMaps. So I think that libraries are going to have to be very careful no matter which way this goes. Either they're going to have to have some kind of versioning scheme for their metadata keys, or they're going to have to hang these WeakMaps off of the global object with a versioned property name, to ensure that they share when they need to. They have to do one of these two things: libraries need to be aware of versioning and of when they can reuse the same WeakMap or the same key. So I see these as very, very equivalent, and I hope library authors are going to be careful here, because if you're writing metadata, you can't use a naive generic key, and you also can't use a naive WeakMap stored in the module; both of them are going to break far too often.

DE: I think we seem to have a shared understanding that library duplication happens, and the question is what behavior we want to occur when it does. KG and many other people have articulated different views about what behavior should occur. I think we should be approaching this kind of problem pragmatically, rather than from first principles, and I think we have a bunch of practical cases here where we do want duplicates to be referring to the same thing. I don't think it's useful to have a strong first-principles argument that you shouldn't have any shared namespaces, because we have a ton of these all over the place; like, how do you get to anything? So, I'm in favor of the first option.

MAH: Yeah. I'm not sure that module identity discontinuity is a problem specific to decorator metadata. This can happen in a lot of other cases; it's a more general ecosystem problem that I'm not sure should weigh that heavily on the decision here.

KHG: I think the reason I bring that up is not because it is a problem specific to decorators. The way it was being framed was that global namespaces have all the problems, and modules fix those problems and thus have no problems. My point was just that modules have problems too. We have a hard problem here, and no matter which solution we choose, there's going to be a lot of trickiness in dealing with it, because the goal is to share public metadata in a global way; that is the actual goal, by the way.

MAH: I have a little nit on that: it's not a global namespace, it's a namespace scoped to the class being decorated.

KHG: Yeah, I think we're using that as a shorthand, but yeah, let's move forward.

DE: So, just overall, I think it's important that we keep this system simple, and I'm happy that option 1 is a very simple proposal. It's great that the decorator proposal just keeps getting simpler and simpler.

SYG: I like the simplicity, but I'll stop there. Judging by the Matrix chat, I don't think I'm actually qualified to give much of an ecosystem opinion here, but from an implementation perspective, I like simplicity.

JFI: Yeah, I wanted to comment on number three. It's interesting, because it supposes that it is getting rid of the shared namespace here, but then you see the example uses Symbol.for all throughout it, and you realize it's just using a different global namespace. And then it's just very cumbersome to use, so I don't know that the supposed benefits actually materialize or are worth the cost of the extra syntax for option three. But I think that generalizes to the other options here. I think that libraries like the ones I maintain are going to have to specifically seek out a global namespace, and there's a lot of equivalence here, right? Symbol.for, or a property like window._whatever for your WeakMap, are basically the same as a key in the metadata object. So I think we run the risk of making the APIs different but having very equivalent hazards, no matter which one we choose here.

CDA: I don't have anything to add beyond what's already been pointed out, just expressing support for option 1. The downsides of two and three I think are a little harder to swallow than those of option 1; I find them less ergonomic and more limiting. Of course, we don't want people stepping on each other in option one, but I still support it.

JHD: Yeah. So I'm asking, basically: what's wrong with option 2, but taking away the frozen object and just providing a symbol instead?

KHG: So the reason we're using a frozen object is so that you can access parent metadata during decoration. Otherwise we could use a symbol, but it has all of the downsides of option two; basically all the same problems.

JHD: Well, I guess I was confused. In the frozen object case, the object is meant to be a key in a weak collection, right? Ah, and the frozen object always has a parent property, so you climb up via the frozen objects. Right, thank you.

JFI: Yeah, just real quick, I want to reiterate, because we get asked a lot when we're going to use the new decorators implementation in TypeScript, that we simply cannot use it until we have metadata. So with that kind of constraint here, my number one concern is that metadata move forward at all. I don't know how many other libraries are in this position, where even though the new decorators are starting to roll out, you can't use them until metadata lands; it's a bit of a curse. But I want to reiterate that this is an important topic. That's all.

MAH: Yeah. I want to say option 3 for me is a non-starter, because it actually enables a malicious decorator to interfere with another decorator. In options 1 and 2 the identity of the metadata object is preserved, but with option 3 a decorator can set metadata that conflicts with another decorator's. The fact that the key is a symbol doesn't mean there is no possibility of overwriting another decorator's metadata; it just means that non-malicious, by-chance overriding would not happen. A malicious decorator can go and set the metadata of another decorator and override it. And option 3 forces you, if you want private data, to use a single entry associated with a WeakMap key, so now, all of a sudden, you could get that overridden, and there is no way to protect against it.

KHG: That's a good point. Yeah, I do think that is something we'd have to design around. Maybe we could make it so that when you initially define metadata you could specify that the property is non-writable or something, but again, that seems like extra complexity that personally I would not prefer.

MAH: right.

KHG: So, I don't know where we're at now with this. Basically, my plan was to try to get consensus here on one of these options, so that we could propose it for stage 3 at the next plenary. It sounds like we do not have consensus for any of the options at the moment, though.

KG: I'm not going to say that this proposal can't advance unless it has my preferred semantics. I have said my piece. There are people who still prefer option one; we are not going to reconcile those positions, so if the proposal is to advance, we need to pick one. I acknowledge I am in the minority, and I am OK saying we can live with option one.

MM: So, I'm going to register that option 3, given the answer to MAH's question, is for me clearly disqualified, so I'm okay with either one or two.

RPR: I don't think anyone is pitching option 3 at this point. So, as DE says: KHG, would you like to ask for consensus on your preferred option?

KHG: Yeah. Do we have consensus for option one?

RPR: Is there any support for option one? I think there was; so DE, PDL, CDA. You have three supporters for option one. Are there any objections to option one? Or any non-blocking concerns?

RPR: I think JHD has a non-blocking concern.

JHD: I prefer option 2 but can live with option 1. I agree with everything Kevin has said. So all the same reasons.

MF: I also agree with KG.

RPR: Okay. So we have MF, KG, and JHD who prefer option 2, and we had another voice supporting option 1 from Justin. So at the moment it looks like we have consensus for option one.

KHG: Okay. The nice thing is also that spec text has already been written for this, so if y'all want to review it, please do; I will be presenting for stage 3 at the next plenary. So yeah, cool. Thanks everyone.

Summary of Key Points

  • Metadata is necessary for several core use cases of class decorators. It was omitted from the Stage 3 decorators proposal due to its complexity, and is now a separate Stage 2 proposal, with hopes to progress to Stage 3 at the next meeting.
  • There are three alternative semantics proposed for how decorator metadata could be supported, all based on an object shared throughout all the decorators on the class.
  • There were blocking concerns from some delegates about malicious interference under Option 3 (allowing Symbol keys only), as it didn't do a good job of solving the problems that the mutable and immutable alternatives were attempting to solve.
  • Some committee members expressed a preference for the shared metadata object being immutable, to avoid the risk that different decorators from different libraries (or versions of the same library) could use the same property name for different things. Avoiding that would require all decorator authors in every library to coordinate with each other in advance to ensure names are not re-used.
  • On the other hand, in many cases libraries are accidentally duplicated, and it is a requirement that multiple duplicates see the same piece of metadata (and that authors are willing to evolve this metadata in a backwards-compatible way to account for that). Core issues with using module/library state to store the metadata are compilers emitting code for applications using scripts (rather than modules), and the risk of duplicate library instances.

Conclusion

  • Consensus for option 1: metadata being a mutable object.
    • Non-blocking concerns around modularity/compositionality expressed by JHD, KG, MF.
  • Spec text for this already written; will be proposed in the next meeting