
Inconsistent promotion behaviour between assignments inside and outside of patterns #2857

Closed
stereotype441 opened this issue Feb 21, 2023 · 14 comments
Labels: flow-analysis (Discussions about possible future improvements to flow analysis), patterns (Issues related to pattern matching)


@stereotype441
Member

In my work on flow analysis for patterns, I've made the decision that any pattern match with a required type should promote the type of the scrutinee. That means that, for instance, the following pattern match promotes x:

num x = ...;
switch (x) {
  case int():
    x.isEven; // OK: `x` has been promoted to `int`, and `int` supports `.isEven`
}

And so does this:

num x = ...;
switch (x) {
  case int y:
    x.isEven; // also OK
}

To be consistent, I've made pattern variable declarations and pattern assignments behave the same way. It's only observable when the scrutinee is dynamic, because otherwise the assignment would be invalid. So, for example:

dynamic x = ...;
var (int y) = x; // OK; `x` is downcast to `int`
x.foo(); // ERROR: `x` has been promoted to `int`, and there is no such method `int.foo`

and:

dynamic x = ...;
int y;
(y) = x; // OK; `x` is downcast to `int`
x.foo(); // ERROR: `x` has been promoted to `int`, and there is no such method `int.foo`

However, this feels a little weird, because we don't promote the RHS of ordinary assignments and variable declarations, e.g.:

dynamic x = ...;
int y = x; // OK; `x` is downcast to `int`
x.foo(); // OK; the call to `foo` is dynamically dispatched

and:

dynamic x = ...;
int y;
y = x; // OK; `x` is downcast to `int`
x.foo(); // OK; the call to `foo` is dynamically dispatched

A related problem is that with ordinary assignment expressions, a coercion such as an implicit tearoff of .call affects the value of the assignment expression, e.g.:

class C {
  void call() {}
  void m() {}
}
f(C c) {
  void Function() g;
  var x = g = c; // `g = c` is interpreted as `g = c.call`, so `x` has inferred type `void Function()`
  x.m(); // ERROR: type `void Function()` has no such method `m`
}

But for pattern matches it doesn't really make sense to apply coercions to the right hand side of the assignment, e.g.:

class C {
  void call() {}
  void m() {}
}
f((C, int) x) {
  void Function() g;
  var y = (g, _) = x;
  // tearoff should probably happen as part of the subpattern match of `g`, so `y` should get inferred type
  // `(C, int)`, and thus this should be valid:
  y.$1.m();
}

In which case how do we want this code to behave?

class C {
  void call() {}
  void m() {}
}
f(C c) {
  void Function() g;
  var x = (g) = c;
  x.m(); // ???
}

For consistency with non-pattern assignments, it would make sense to have an error (because the value of (g) = c is the coerced value); for consistency with pattern assignments, it would make sense to have no error (because the coercion happened as part of the subpattern match of g).

Personally, my preference is to say that all pattern assignments and pattern variable declarations should follow the new rules for patterns, and we shouldn't worry about these subtle inconsistencies with old non-pattern behaviour. (In fact, in the long term, I would love to change the behaviour of old-style assignments and variable declarations to match the new pattern behaviour, but that's another story). In which case, in (g) = c, the coercion would happen as part of the subpattern match of g, and therefore x would have type C, so x.m() would be valid.

But I would love to hear other people's thoughts.
CC @dart-lang/language-team

@stereotype441 added the patterns and flow-analysis labels Feb 21, 2023
@lrhn
Member

lrhn commented Feb 21, 2023

My thoughts on dynamic promotion: Irrefutable pattern contexts should not promote, refutable patterns should.

In a refutable/check context, you perform a type check, which makes the checked type a type of interest.

In a declaration/assignment context, the pattern does not imply any checks, no more than dynamic d= ...; int x = d; does. The implicit downcast from dynamic does not count as a check that introduces a type of interest, so no promotion. (And even if the type is already a type of interest for the variable, I still wouldn't promote the RHS of the assignment.)
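
A minimal sketch of the split being proposed here (assuming the refutable/irrefutable distinction described above; `foo` is just a placeholder member):

void f(dynamic d) {
  // Refutable context: the case clause performs a real type check, so `d`
  // would be promoted to `int` inside the body.
  if (d case int _) {
    d.isEven; // OK under this proposal
  }

  // Irrefutable context: the implicit downcast from dynamic is not a check,
  // so `d` would not be promoted afterwards.
  int x = d;
  d.foo(); // still a dynamic call; no promotion of `d`
  print(x);
}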

About unifying coercions, I still think we should do that, and I'd make the downcast-from-dynamic at the assignment point, not on the RHS value, so int x; num y; dynamic d = 1; x = y = d; works and does two downcasts. #2845.

@eernstg
Member

eernstg commented Feb 28, 2023

tl;dr: Let's consider coercions to be syntactic transformations, and cast from dynamic to be applied rather early.

We have always treated assignments such that the value and static type are based on the assigned value (RHS), not the target (LHS). This is consistent with the treatment of the setter return type and the semantics of setters.

At first glance, this seems to be consistent with @lrhn's example (where d has static type dynamic):

x = y = d; works and does two downcasts.

However, this apparent consistency relies on a couple of assumptions that we might want to consider explicitly.

Let's say that an assignment chain is any construct whereby the value of an assignment is used (such as x = y = d or foo(y = d), etc).

We might consider the assignment chain x = y = d as a shorthand for let dynamic tmp = d in (y = tmp; x = tmp), which is in turn subject to coercions: let dynamic tmp = d in (y = tmp as int; x = tmp as num). From this perspective, cast from dynamic is a mechanism based on syntactic sugar.

We could then say that assignment chaining is a syntactic sugar which is resolved very early, and cast from dynamic is a syntactic sugar which is resolved "later than very early". You could say that this makes sense because assignment chaining is recognized syntactically whereas cast from dynamic depends on the static types (so assignment chaining is "more primitive", hence earlier).
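
A non-normative spelling-out of the two placements being contrasted here, using the `x`, `y`, `d` from the example above (the expansions are a sketch, not the specified lowering):

void main() {
  num x;
  int y;
  dynamic d = 1;

  // "Later" placement: the cast is applied at each assignment point,
  // so the chain does two independent downcasts of the same value.
  final tmp = d;
  y = tmp as int;
  x = tmp as num;

  // "Earlier" placement: the cast becomes part of the right-hand side once,
  // so the outer assignment sees the already-coerced value.
  x = (y = (d as int));

  print('$x $y');
}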

However, in order to take setter invocation into account, we might not even want to consider assignments as syntactic sugar (where we relied on some lower-level assignment statement above, eliminating the question about "what's the value of the assignment expression?"). It certainly isn't correct to consider x = y = d to be any of the let expressions shown above when x= or y= resolves to a setter (e.g., when x or y is a non-local variable).

If we consider the type check to be part of the storage manipulation then we can consider cast from dynamic to be even later than that which is illustrated above: Just before we store the new value in the storage location for the local variable, we test the value-to-be-stored against the declared type of the variable (and throw if there is no subtype relation). It doesn't get later than that, but there may be several intermediate degrees of lateness.

We have different strengths and weaknesses:

  • If cast from dynamic is syntactic sugar: We have the freedom to apply this transformation in various different locations (e.g., it's easy to say that we apply it everywhere based on the context type, or just in some locations), and it is easy to see what it means that the mechanism is applied early or late. Also, the treatment as syntactic sugar allows us to bring several coercions on an equal footing. On the other hand, we can't easily express the ultimately latest semantics (where the check is an integrated part of the value-storing action).

  • If cast from dynamic is part of storing a value: This is the ultimately latest semantics we can get, which may be desirable because it is a very permissive and flexible semantics (we can do different things "in each leaf" based on the local needs, but anything we do earlier may have to suffice for multiple "leaves"). On the other hand, we can't choose to make this happen earlier (or at least: anything which happens earlier is much, much simpler to explain in terms of syntactic sugar). In this case we also need to consider all the possible contexts: How do we understand assignments to non-local variables? provision of actual arguments? invocation of extension methods in a cascade? etc.

It is my impression that other coercions are best explained as syntactic sugar (a sketch follows the list):

  • Type inference transforms an expression where actual type arguments and/or formal parameter types and return types (of function literals) are expected, but not provided, and yields an expression where all actual type arguments and parameter/return types are specified (function literal return types are always inferred, and we just pretend that there's a way to specify them in source).
  • Generic function instantiation transforms e into e<T1, .. Tk> based on context types (and, as implemented, based on occurring in specific syntactic positions, e.g., the right hand side of an assignment or as an actual argument).
  • Implicit call method invocation transforms e<...>(...) into e.call<...>(...) based on the static type of e.
  • Implicit call method tear-off transforms e into e.call based on context types (plus positions), at least this week. ;-)
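
Sketches of the coercions listed above, written next to the explicit forms they correspond to (the names `Callable` and `id` are placeholders, not from the discussion):

class Callable {
  void call() {}
}

T id<T>(T x) => x;

void main() {
  dynamic d = 1;
  int i = d;                       // cast from dynamic: treated as `d as int`
  int Function(int) f = id;        // generic instantiation: treated as `id<int>`
  void Function() g = Callable();  // call tear-off: treated as `Callable().call`
  Callable()();                    // call invocation: treated as `Callable().call()`
  print('$i $f $g');
}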

We may add support for additional coercions such as implicit constructors in the future. Presumably, they would amount to the ability to write e of type T in a context where an S is expected, and have that transformed into C<...>(e) or C<...>.named(e) where C is a class that declares an implicit constructor named C resp. C.named, and C<...> is a subtype of S.

Int-to-double as a coercion is controversial, but we could at least consider that to be a transformation (which is allowed and maybe even required to take place at compile-time) of an integer literal l into the corresponding double value. If we consider this to be the transformation l -> l.toDouble() then there will be a small amount of breaking change (because some sequences of decimal digits are allowed today, but will then be an error), but on the other hand it will immediately allow for a generalization where any expression of type int is allowed in a context where a double is expected.

I tend to think that all these things conspire to support a consistent view of coercions as syntactic transformations: Interactions like ordering are then quite flexible, and they are rather easy to understand.

If we do adopt this perspective, and we consider cast from dynamic to be a local transformation, then x = y = d may or may not be an error (that depends on how early we do cast from dynamic). This can be a breaking change, but it would then occur because some expressions are typed more strictly (which is arguably a good change).

The question raised in this issue could then be resolved as follows: Cast from dynamic is a syntactic sugar that occurs early (in particular: before assignment, if we even want to consider that as a syntactic sugar). It gives rise to promotion just like explicit as expressions. It is then consistent with the treatment of patterns as described here, but it implies a ("small and good") breaking change.

@lrhn
Member

lrhn commented Feb 28, 2023

I really want us to get away from talking about "syntactic sugar". It suggests that you can write the same thing directly, and that's not always the case. (It might be a goal to make it always possible, but we aren't there). It also usually suggests that it's a small local rewrite, which is even less the case.
We keep moving around, copying and rewriting source syntax, which is not guaranteed to be meaningful, and it's always unclear whether it happens before or after type inference. (It must be after if we use static types to guide the rewrite, but then what are the static types of the new, never-type-inferred expressions?)
Also, we can then start arguing about where in the desugared code that something implicit will happen, which is opaque to users, who never saw that desugared code.
If we can explain things in terms of the original code, then it's much easier to explain it to users too.

That out of the way, let's assume we all know that let v = e1 in e2 is shorthand for: evaluate e1 to a value, then evaluate e2 to a value v2 such that the locally visible occurrences of v evaluate to the value of e1 with no side effects. (Aka, pretend v is a metavariable denoting a freshly named variable, bound to the value of e1 in e2.) The result is then v2.
(With all the usual rules about any non-normally-completing evaluation terminating everything.)
It's basically a CPS transform.

It is my impression that other coercions are best explained as syntactic sugar.

If we take this to mean that the behavior of implicit coercions can be explained as equivalent to what you'd get by inserting certain small local operations, then yes. It's an implicit operation, equivalent to a potential explicit operation that you can insert yourself. That's good for explainability.

However, in order to take setter invocation into account, we might not even want to consider assignments as syntactic sugar (where we relied on some lower-level assignment statement above, eliminating the question about "what's the value of the assignment expression?"). It certainly isn't correct to consider x = y = d to be any of the let expressions shown above when x= or y= resolves to a setter (e.g., when x or y is a non-local variable).

The "desugaring" description of an assignment depends on the LHS. We must evaluate enough of it to get to an assignable expression with no further unevaluated subexpressions. The current specification does case-explosion on the possible LHSs.
So, this is not a new thing.

Using let to describe the semantics, the current rules are essentially:

  • localVar = e1 → let v1 = e1 in let _ = localVar=(v1) in v1
  • staticScope.id = e1 → let v1 = e1 in let _ = staticScope.id=(v1) in v1 (includes id resolving to a static setter).
  • e1.id = e2 → let v1 = e1 in let v2 = e2 in let _ = v1.id=(v2) in v2 (includes id resolving to an instance setter this.id)
  • e1[e2] = e3 → let v1 = e1 in let v2 = e2 in let v3 = e3 in let _ = v1[]=(v2, v3) in v3

The null-aware LHSs use this prior evaluation to guard:

  • e1?.id = e2 → let v1 = e1 in v1 == null ? null : let v2 = e2 in let _ = v1.id=(v2) in v2
  • e1?[e2] = e3 → let v1 = e1 in v1 == null ? null : let v2 = e2 in let v3 = e3 in let _ = v1[]=(v2, v3) in v3

That means that e1.id = e2.id = e3 would be evaluated as:

let v1 = e1 in let t1 = (let v2 = e2 in let v3 = e3 in let _ = v2.id=(v3) in v3) in let _ = v1.id=(t1) in t1

We can optimize that (t1 and v3 can be aliased), but it's consistent.

And if doing so, the obvious place for the inserted late downcasts would be at the actual assignments, v2.id=(v3 as Downcast) and v1.id=(t1 as OtherDowncast).
But looking at the desugared code doesn't tell us what the original expression was, so it won't tell us where an early coercion should happen. That's why post-desugared code should not be guiding us about where things should happen.

Wanting to explain downcasts as a possible source transform means that late coercions are not good, as Erik says, because the source transform needs to bind d to a variable so that the value can be downcast more than once, like:

switch (d) { var d2 => (n = d2 as num, i = d2 as int, d2).$3}

(We have let and sequencing in the language now, just not pretty!)

If we want var z = n = d; to be equivalent to inserting a simple explicit downcast, it'd be var z = n = d as num;, and that would change the type of z to num, and would make i = n = d invalid.
Probably not a big loss in practice. We can do that, if that is our goal. (I'm split on whether it should be, because it does make things much more explainable.)

So, semantics of assignment x = e (other LHS cases need pre-evaluating as usual), with T the static type of e and S the declared type of x.
Static:

  • If T is not a subtype of S:
    • If T is assignable to S, let R be the "coerced type" of T to S (perform coercion at the type level).
      • If R is not a subtype of S, a compile-time error occurs.
    • Otherwise a compile-time error occurs.
  • Otherwise let R be T.
  • The static type of x = e is R.

Runtime:

  • Evaluate e to a value v.
  • If T is not a subtype of S (but T is assignable to S):
    • Coerce v from T to S, and let r be the result.
  • Otherwise let r be v.
  • [Perform the assignment of r to x.]
  • The result of evaluating x = e is r.

Here "assignable" covers all the assignment-compatible coercions, not just downcast. (And look, no syntactic sugar!)
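
A worked instance of these rules as I read them (not normative; the static type of `result` is exactly what the rules above would pin down):

void f(dynamic d) {
  int x;
  // Static: T = dynamic is not a subtype of S = int, but it is assignable,
  // so the coerced type R is int, and `x = d` would have static type int.
  // Runtime: evaluate `d` to a value v, coerce v from dynamic to int
  // (throwing if it is not an int), store the result r in x, and the
  // expression `x = d` evaluates to r.
  var result = x = d;
  print('$x $result');
}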

I want implicit coercions (downcast, call tear-off, generic instantiation) to be unified and to happen at the same place. Implicit call-insertion happens in slightly different places, but is mostly compatible.

I'm open to discussing where that place is. Making it possible to get the same effect by a small local insertion of an inferred operation can be a goal. I'm sympathetic to that, but not strongly convinced.

@eernstg
Member

eernstg commented Feb 28, 2023

@lrhn wrote:

I really want us to get away from talking about "syntactic sugar".

Agreed, in principle. Tiny, local transformations are the exception where syntactic sugar may well be the most understandable approach.

The "desugaring" description of an assignment

... is pretty complex. I think this confirms that we do not want to consider the value of an assignment as syntactic sugar. This also implies that cast from dynamic is rather straightforward to understand:

If we want var z = n = d; to be equivalent to inserting a simple explicit downcast, it'd be var z = n = d as num;, and that would change the type of z to num, and would make i = n = d invalid.
... (I'm split on whether it should be, because it does make things much more explainable.)

I tend to say "let's do that" because it does make things much more explainable. ;-)

I want implicit coercions (downcast, call-tear-off, generic instantiation), to be unified and happen at the same place.

I also want them to be unified, but we still need to consider the ordering. If we apply two coercions "at the same place" then they are not at the same place any more if we consider them to be local source code transformations, and that basically forces us to answer the questions about ordering.

@lrhn
Member

lrhn commented Feb 28, 2023

If we apply two coercions "at the same place" then they are not at the same place any more if we consider them to be local source code transformations, and that basically forces us to answer the questions about ordering.

Then we have to define them to be mutually exclusive. If we want coercion from a callable class with a generic call method to a non-generic function type, we'll just have to define that as a single coercion, instead of combining .call tear-off and generic instantiation. Or we can define "assignable" recursively.
(We should still do it, we just have to be careful about how we specify it.)
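
For illustration, a sketch of the combined case (assuming a class `C` with a generic `call` method; the direct form is what the single combined coercion would enable, and is not valid today):

class C {
  T call<T>(T x) => x;
}

void main() {
  // The hypothetical combined coercion would let this be written directly:
  //   int Function(int) f = C(); // tear off `call` and instantiate with `int`
  // Today it needs to be spelled out, e.g. with an explicit wrapper:
  int Function(int) f = (int x) => C()(x);
  print(f(1)); // 1
}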

@munificent
Member

the following pattern match promotes x:

num x = ...;
switch (x) {
  case int():
    x.isEven; // OK: `x` has been promoted to `int`, and `int` supports `.isEven`
}

And so does this:

num x = ...;
switch (x) {
  case int y:
    x.isEven; // also OK
}

These seem reasonable.

To be consistent, I've made pattern variable declarations and pattern assignments behave the same way. It's only observable when the scrutinee is dynamic, because otherwise the assignment would be invalid. So, for example:

dynamic x = ...;
var (int y) = x; // OK; `x` is downcast to `int`
x.foo(); // ERROR: `x` has been promoted to `int`, and there is no such method `int.foo`

and:

dynamic x = ...;
int y;
(y) = x; // OK; `x` is downcast to `int`
x.foo(); // ERROR: `x` has been promoted to `int`, and there is no such method `int.foo`

I can see the logic, but intuitively, these are both pretty surprising to me. It feels sort of like action at a distance. I don't think of assigning from a variable as having any effect on that variable, only on the destination one.

I think of assignment as similar to parameter passing. We don't promote, and I wouldn't expect us to promote, here:

takeInt(int i) {}

main() {
  dynamic i = 123;
  takeInt(i);
  i.foo();
}

Given that, I think it's fine if pattern and non-pattern assignments don't promote either.

Smart promotion is really nice, but there comes a point where it starts to feel like the language is reading too much into the code or something. Simplicity has its own value too.

@stereotype441
Member Author

Ok, catching up on this thread now that I'm starting to come back from my illness.

It sounds like there's a general consensus around the idea that pattern matches in irrefutable contexts shouldn't cause promotions to happen. I hadn't previously considered this idea, and I like it. It's simple, intuitive, and it cleanly avoids any inconsistencies between how pattern assignments and regular assignments address promotion. I'll go ahead and make this change.

Regarding coercions and dynamic downcasts, it sounds like @lrhn and @eernstg are in agreement that it's valuable to make coercions behave more consistently with each other, and that downcast, call-tear-off, and generic instantiation are all examples of coercions. I'm good with that.

There's less agreement about whether int-to-double should be thought of as a coercion, and whether more coercions should be added to the language in the future; personally I'm happy to stick with the current behaviour for now (in which int-to-double is not a coercion, but an independent mechanism that does nothing but infer a missing .0 on number literals).
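
For reference, a small sketch of that current behaviour (only an integer literal in a double context is affected; general int-typed expressions are not coerced):

void main() {
  double a = 1;          // OK: the literal `1` is read as the double `1.0`
  int n = 1;
  // double b = n;       // compile-time error: no general int-to-double coercion
  double b = n.toDouble();
  print('$a $b');
}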

That leaves just one open question in my mind: how do we want this code to behave?

class C {
  void call() {}
  void m() {}
}
f(C x1, dynamic x2) {
  void Function() g;
  int i;
  var y1 = (g) = x1;
  var y2 = (i) = x2;
  y1.m(); // ok or not?
  y2 = 'foo'; // ok or not?
}

One interpretation is that (g) = x1 and (i) = x2 are pattern assignments, so they should behave consistently with more complex pattern assignments. I believe we've argued in the past that in more complex pattern assignments like var y = (g, i) = (x1, x2), the coercion shouldn't be reflected back into the matched value, so y should have type (C, dynamic). In which case, for consistency, it seems like in var y1 = (g) = x1, the coercion should happen as part of the subpattern match of g, so y1 should receive an unadulterated copy of x1 (and hence have type C), so y1.m() should be ok. Similarly, var y2 = (i) = x2 should receive an unadulterated copy of x2 (and hence have type dynamic), so y2 = 'foo' should be ok. This is the behaviour I've implemented so far for pattern assignments.

But IMHO an equally compelling interpretation is that since var y1 = (g) = x1 and var y2 = (i) = x2 aren't actually doing any destructuring or other pattern matching magic, they really ought to behave equivalently to var y1 = g = x1 and var y2 = i = x2. And the way those assignments behave today is... inconsistent with each other:

  • var y1 = g = x1 behaves like var y1 = g = x1.call, so y1 gets type void Function(), and y1.m() is a compile-time error.
  • var y2 = i = x2 behaves like i = x2; var y2 = x2, so y2 gets type dynamic, and y2 = 'foo' is ok.

In the discussion above, when considering what to do with i = n = d (where i, n, and d are local variables with types int, num, and dynamic), @lrhn and @eernstg both seem to be leaning toward saying that var z = n = d; should be equivalent to var z = n = d as num;, which would make i = n = d invalid, on the grounds that it's more explainable.

I get that. But I don't love it. Because if we make that change at some time in the future, then unless we make other changes, it will make the inconsistency between pattern assignments and non-pattern assignments worse, because it will mean that var y2 = (i) = x2 will stop behaving like var y2 = i = x2.

Personally I would rather aim for a future in which we say that the value of A = B is always equal to the value of B, and doesn't see any coercions. Which means that i = n = d would be valid, and equivalent to i = d as int; n = d as num;. It's not quite as explainable, because the equivalent code can't be expressed as quite so simple a syntactic transformation. But IMHO it's still pretty intuitive. In fact maybe it's a little better, because it means that A = B = C; is always equivalent to B = C; A = C;, regardless of the types of any of the variables.
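
A rough expansion of that preferred semantics (my sketch; the inner assignment's cast is shown first, following evaluation order):

void f(dynamic d) {
  int i;
  num n;
  // Under this proposal, `i = n = d;` would be accepted and behave roughly like:
  n = d as num; // coercion for `n`
  i = d as int; // coercion for `i`, applied to the original value of `d`
  print('$i $n');
}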

We don't have to make a decision today about how we want to fix coercions in general, but I would love to hear opinions about the behaviour I've implemented for the code example above (no compile-time errors).

@eernstg
Member

eernstg commented Mar 7, 2023

sounds like @lrhn and @eernstg are in agreement that it's valuable to make coercions behave more consistently with each other, and that downcast, call-tear-off, and generic instantiation are all examples of coercions.

Indeed! However, I believe we still disagree on the approach: I'd like to have a simple model (as I see it ;-) where a coercion is a source code transformation on an expression e based on, at least, the static type of e, the context type of e, and the location of e. I'm not so happy about an approach where the coercion is considered to be a part of the semantics of the action whereby an object reference is stored in a variable.

In particular, I'm not convinced that it is as easy to understand a mechanism which can't be expressed as source code as it is when you can just say "it works like this: <some code>", especially when that snippet of code is tiny. For example:

f(C x1, dynamic x2) {
  void Function() g;
  int i;
  var y1 = (g) = x1; // `x1` becomes `x1.call`.
  var y2 = (i) = x2; // `x2` becomes `x2 as int`.
  y1.m(); // Error.
  y2 = 'foo'; // Error.
}

var z = n = d; should be equivalent to var z = n = d as num;, which would make i = n = d invalid, on the grounds that it's more explainable.

Exactly.

With complex patterns it gets more tricky, but we did agree at a recent language team meeting that a good starting point for determining the semantics of a pattern declaration or a pattern assignment is that they work like a sequence of simple declarations/assignments (including temporary variables which are introduced in order to avoid multiple evaluations of various expressions). From that point of view the pattern declarations/assignments are reduced to regular declarations/assignments, and the above approach can be applied directly.
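
A sketch of that "sequence of simple assignments" reading for a small pattern assignment (reusing the `C` with a `call` method from earlier examples; the expansion is illustrative, not the specified lowering):

class C {
  void call() {}
}

void f((C, int) x) {
  void Function() g;
  int i;
  // `(g, i) = x;` would then behave roughly like:
  var tmp = x;  // evaluate the right-hand side once
  g = tmp.$1;   // a plain assignment, so the `.call` tear-off coercion applies here
  i = tmp.$2;   // a plain assignment
  g();
  print(i);
}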

... if we make that change at some time in the future ... inconsistency ... worse

I don't think that's unavoidable. But I'd like to reintroduce a more basic concern about the approach to coercions in pattern assignments and pattern declarations:

You could say that whenever we assign a composite entity to a pattern, or initialize a pattern with a composite entity, and we have support for "hidden coercions" (that is, coercions that are applied after the composite entity has been decomposed in one way or another), we have introduced a semantics of the pattern which is dependent on the internal details of that composite entity. In other words, this turns the pattern into an entity which is tied more intimately to the initializing expression and less of a stand-alone entity.

This seems to be a bad fit for a future generalization where we allow formal parameters to be specified as patterns:

class C {
  void call() {}
}

X id<X>(X x) => x;

void foo(var (void Function() f1, int Function(int) f2)) {} // A pattern parameter.

void main() {
  var r1 = (C(), id);

  var (void Function() f1, int Function(int) f2) = r1; // OK!
  foo(r1); // OK?

  var r2 = (C().call, id<int>);
  foo(r2); // OK, of course. Certainly very different...
}

The point is that we might be able to justify the pattern declaration in main that introduces f1 and f2. However, I'm worried by the fact that the invocation foo(r1) would have to perform an elaborate deconstruction and transformation of r1, whereas foo(r2) would proceed without any coercions.

Isn't that too much magic to have in a mechanism which is just supposed to let us pass an actual argument? Also, how would dynamic invocations of foo work?

I'm really tempted to suggest that we go one step backwards and remove the support for "hidden coercions".

void main() {
  ...
  var (void Function() f1, int Function(int) f2) = r1; // Compile-time error.
}

I think that's a better foundation for any generalization that turns a pattern into more of a first-class entity.

@stereotype441
Member Author

I'm thinking about the discussion we had in yesterday's language meeting (in which there seemed to be a lot of support for @eernstg's idea of just removing support for hidden coercions). I'm also looking at the calendar and realizing we have only 20 days left until the branch cut. The CFE still doesn't support hidden coercions in patterns, so a big advantage of removing this feature is that it would require zero CFE work. (There would be some analyzer work, though).

That means that code like this would simply be a compile error:

class C {
  void call() {}
}
f((C, int) x) {
  var (void Function() f, int i) = x;
}

Unfortunately, because of the way patterns currently work, removing support for "hidden coercions" will also mean that an assignment like (x) = y, or a variable declaration like var (void Function() x) = y won't do a coercion either. Which means that we'll have some inconsistency between these forms and the corresponding old fashioned Dart 2.19 code (x = y and void Function() x = y). I think I could live with that.
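
A sketch of the resulting inconsistency as I read this comment (whether the pattern forms are rejected depends on the final rules, so they are shown commented out):

class C {
  void call() {}
}

void f(C c, dynamic y) {
  void Function() g;
  int x;
  g = c;                          // Dart 2.19 form: implicit `.call` tear-off
  x = y;                          // Dart 2.19 form: implicit downcast from dynamic
  // (g) = c;                     // pattern form: no hidden coercion, so an error
  // var (void Function() h) = c; // likewise an error under this change
  print('$g $x');
}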

@eernstg
Member

eernstg commented Mar 10, 2023

Unfortunately .. (x) = y [and] var (void Function() x) = y won't do a coercion either

It does indeed look like an inconsistency, based on the idea that "it should not matter whether we're doing var y = e; or var (y) = e;". However, if we report a compile-time error in the situations where a coercion would otherwise kick in, we are free to add support for those coercions later. So it's really a delay, not a decision never to do it.

Conversely, assume that we do support the coercions now, and then we're somehow not happy about the semantics later on (say, because the chosen semantics makes it more difficult to introduce a unified treatment of coercions in Dart). In that situation we may actually have painted ourselves into a corner which is not optimal.

@eernstg
Member

eernstg commented Mar 21, 2023

@dart-lang/language-team, it is my impression that we do not wish to promote a variable of type dynamic when it is subject to an implicit dynamic type check based on assignability. This is in line with the majority opinion in #1362.

Do you agree? In that case I think we should settle the matter, at least for a non-trivial period of time.

copybara-service bot pushed a commit to dart-lang/sdk that referenced this issue Mar 21, 2023
As discussed in
dart-lang/language#2857 (comment),
we only want a pattern match to promote the scrutinee in refutable
contexts (if-case and switch).

Bug: dart-lang/language#2857, #50419
Change-Id: I187ef632e5da95b931e5cda34db06491a5228c98
Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/288000
Reviewed-by: Johnni Winther <[email protected]>
Commit-Queue: Paul Berry <[email protected]>
@stereotype441
Member Author

As of dart-lang/sdk@adbf363, pattern matches now only promote the scrutinee in refutable contexts (as discussed in #2857 (comment)).

@munificent
Member

Is this done now?

@stereotype441
Member Author

Is this done now?

I believe so. We now consistently promote in refutable (matching) contexts but not in irrefutable (assignment or declaration) contexts; this addresses the inconsistency because classic assignments don't promote.
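
A small sketch of the resolved behaviour, based on the issue's original examples (my reading of the summary above; `foo` is a placeholder member):

void f(dynamic d) {
  if (d case int()) {
    d.isEven;      // OK: refutable context, so `d` is promoted to `int` here
  }
  var (int y) = d; // irrefutable context: initializes `y`, but no longer promotes `d`
  d.foo();         // OK: `d` is still `dynamic`, so this is a dynamic call
  print(y);
}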

There's a bunch of discussion in this thread about coercions, but that should all probably be discussed separately.
