
Proposal: improvements to nullable reference types (VS 16.8, .NET 5) #3297

Open
jcouv opened this issue Mar 23, 2020 · 41 comments
Labels
Implemented Needs ECMA Spec This feature has been implemented in C#, but still needs to be merged into the ECMA specification Proposal champion Proposal

@jcouv
Member

jcouv commented Mar 23, 2020

C# 8:

  • [MaybeNull]T state tracking (proposal, done, C# 8)
  • enforcing attributes in method bodies (done, C# 8)
  • nullability attributes in OHI (done, C# 8)

Done for C# 9:

Considered for C# 9 (moved to follow-up issue for C# 10):

  • Task<T> (LDM tentatively approved, needs design proposal for task-like types)
  • LINQ (LDM expressed interest in handling Where, needs design proposal)
  • uninitialized fields and constructors (no plan, aside from MemberNotNull)
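As a rough sketch of how the shipped pieces above fit together (C# 9 with nullable enabled; the `Cache` type and its members are hypothetical, invented purely for illustration):

```csharp
using System.Diagnostics.CodeAnalysis;

#nullable enable

class Cache<T>
{
    private string? _name;

    // C# 8: [MaybeNull] marks an unconstrained T output as possibly
    // default/null; callers get a warning if they dereference it blindly.
    [return: MaybeNull]
    public T Find(string key) => default!;

    // .NET 5 / C# 9: [MemberNotNull] tells flow analysis that _name is
    // non-null after this method returns.
    [MemberNotNull(nameof(_name))]
    public void InitName() => _name = "cache";

    public int NameLength()
    {
        InitName();
        return _name.Length; // no CS8602 here: _name is known non-null
    }
}
```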

LDM notes:

@jcouv jcouv added this to the 9.0 candidate milestone Mar 23, 2020
@jcouv jcouv self-assigned this Mar 23, 2020
@erik-kallen

Glad to hear that you are thinking about improving this feature, which has the chance to be one of the best things about C#.

However, I cannot see anything about member initialization (which is probably what is meant with the last point), without which I can't really see any use for nullable analysis in web applications, as pointed out in #2830

@qrli

qrli commented Jul 2, 2020

Didn't the June 17 LDM (#3589) favor an explicit unconstrained constraint over the T?? syntax?

@jcouv
Member Author

jcouv commented Jul 2, 2020

@qrli Thanks, updated to reflect recent decision

@mikernet

mikernet commented Jul 11, 2020

Having T? mean MaybeNull is going to be incredibly confusing, with or without the unconstrained constraint. What's wrong with allowing this syntax?

public async Task<[MaybeNull] T> M<T>()

That is infinitely clearer than what is proposed in the latest LDMs and aligns perfectly with how we annotate everything else as being maybe null.

@mikernet

See here for details:

https://github.com/dotnet/csharplang/blob/master/meetings/2020/LDM-2020-06-17.md#t

I can't believe anyone is looking at that syntax and thinking it is better than allowing the attribute to be applied to the type. Nullable annotations with generics are confusing enough, now I'm never going to be able to figure out at a glance if T? is nullable or maybe null.

@333fred
Member

333fred commented Jul 11, 2020

The issue is that there's nowhere in metadata to put that. You can't add attributes to a particular usage of a type parameter. Additionally, just because you've gotten used to using attributes doesn't make them an actually good general solution. People ask about T? all the time because it's the natural first way to try to annotate something. It's literally the first question I asked when I started working on nullable.

@mikernet

Put it in exactly the same place you would put it with public async Task<T?> M<T>() where T : unconstrained

@333fred
Member

333fred commented Jul 11, 2020

Put it in exactly the same place you would put it with public async Task<T?> M<T>() where T : unconstrained

That's encoded as part of the NullableAttribute on a method, just like every other ? today. The presence of an attribute in syntax has specific meaning: it should be available in metadata. Why would you be able to put one attribute there, but not something like MaybeNullWhen or NotNull?

@mikernet

mikernet commented Jul 11, 2020

A minor inconsistency in how that works, affecting the relatively few people working directly with nullable metadata, is worth it if it eliminates the massive inconsistency for everyone working with generics. The people working with metadata are going to be doing stuff that requires deep knowledge of how it all works. The people working with generics are everyone, including average-joe programmers who have no interest in being confused all the time.

@mikernet

mikernet commented Jul 11, 2020

What do you think happens in this case:

[MaybeNull]
public T Value { get; set; }

The compiler already does "magical" metadata transformations on MaybeNull (and likewise AllowNull when applied to a property).

@mikernet

mikernet commented Jul 11, 2020

Note, I make my statement above as one of those people who work with nullable metadata...currently working on a weaver that analyzes nullable metadata. I don't see the issue.

@jnm2
Contributor

jnm2 commented Jul 11, 2020

What about using modopt to encode Task<[MaybeNull] T>? It would only work with the parameterless attributes, but we only need [MaybeNull] and [NotNull], right?

Or, extending NullableAttribute to pack in known nullability attributes in addition to simple ? uses? (Sounds very expensive)

@333fred
Member

333fred commented Jul 11, 2020

What do you think happens in this case:

[MaybeNull]
public T Value { get; set; }

The compiler already does "magical" metadata transformations on MaybeNull (and likewise AllowNull when applied to a property).

I think that the IDE offers a quickfix to change it to T?. And the magic is significantly more in your face with your proposal: today, if you put an attribute in source, it appears in metadata. That's it, full stop. The proposed form cannot appear in metadata, is restricted to a single specific attribute, and would conflict with any future attempt to put attributes on generic type parameter uses.

Finally, I really don't agree that it's inconsistent. It makes declaring a defaulted generic exactly the same as declaring a defaulted reference type.

@mikernet

mikernet commented Jul 11, 2020

I'll agree to disagree on your last point.

Regarding the quick fix - no, because T? is invalid syntax for unconstrained T. Also MaybeNull is not the same as T? - [MaybeNull] means the getter might return null but the setter value still has to be non-null.
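For what it's worth, a small sketch of the asymmetry being described, under my reading of the attribute semantics (C# 9, nullable enabled; `Widget` and its members are made up for illustration):

```csharp
using System.Diagnostics.CodeAnalysis;

#nullable enable

class Widget<T>
{
    // [MaybeNull]: reading Value may yield default/null, but assigning
    // null to the setter still produces a nullability warning.
    [MaybeNull]
    public T Value { get; set; } = default!;

    // C# 9 T? on unconstrained T: null/default is allowed in both the
    // get and the set direction, so the two are not interchangeable.
    public T? Symmetric { get; set; }
}
```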

@333fred
Member

333fred commented Jul 11, 2020

I'll agree to disagree on your last point.

Regarding the quick fix - no, because T? is invalid syntax for unconstrained T. Also MaybeNull is not the same as T? - [MaybeNull] means the getter might return null but input still has to be non-null.

T? for unconstrained generics is literally what I'm discussing here? That's the thing we're adding in C# 9.

@mikernet

mikernet commented Jul 11, 2020

You said it offers a quick fix. I thought you meant that's what it does now (i.e. C# 8). Regardless, it can't do that because the meaning isn't the same.

Also, on a separate note: if we are going to go with T? for maybe null unconstrained generics then I really don't see the value in having to add where T : unconstrained to it.

@mikernet

mikernet commented Jul 11, 2020

The proposed form cannot appear in metadata

The proposed form certainly can appear in metadata - you can always add an overload to the MaybeNull constructor (potentially hidden/inaccessible directly to the dev from code) that encodes the information in a suitable way and gets applied at the type/method/return level.

would conflict with any future attempt to put attributes on generic type parameter uses

How so? There are very easy solutions to any problem I can manage to come up with here - what is your particular concern?

@333fred
Member

333fred commented Jul 11, 2020

Also, on a separate note: if we are going to go with T? for maybe null unconstrained generics then I really don't see the value in having to add where T : unconstrained to it.

The original notes on choosing T?? as the syntax were pretty clear on it: there's ambiguities when overriding. https://github.com/dotnet/csharplang/blob/master/meetings/2020/LDM-2020-01-08.md#unconstrained-type-parameter-annotation-t. You only have to add where T : default to these particular cases.

How so? There are very easy solutions to any problem I can manage to come up with here - what is your particular concern?

My particular concern is that we would likely not want to do this in the same way that we do it for the compiler: if we were going to actually allow putting these things on references in metadata, we'd want a spot to actually put them. There would be very weird interactions if you tried to apply an attribute to two separate type parameters with different arguments, for example (particularly if that attribute didn't allow multiple applications!). We would run into that issue immediately, in fact, as MaybeNull disallows multiple applications. You could very easily want to apply it like this: ([MaybeNullWhen(true)] T, [MaybeNullWhen(false)] T). It would not be very good to try and do a half-implemented version that we would regret later.

@333fred
Member

333fred commented Jul 11, 2020

Also, to once again bring back up your original point:

That is infinitely clearer than what is proposed in the latest LDMs and aligns perfectly with how we annotate everything else as being maybe null.

That's not how we annotate things as being possibly null except for this case of unconstrained generics.

@mikernet

mikernet commented Jul 12, 2020

@333fred

The original notes on choosing T?? as the syntax were pretty clear on it: there's ambiguities when overriding.

That doesn't show any ambiguities that the compiler has to struggle with, those are design choices that need to be decided on what should be allowed in those various circumstances.

This is why they are moving away from the T?? syntax. @CyrusNajmabadi just explained to me that those examples show that it is ambiguous with the null coalescing operator in some situations. Quote from those LDM notes:

The original reason we rejected this was that there could be confusion that T? means "maybe default," so that the type is nullable if it's a reference type, but not nullable if it's a value type.

That's still an issue if we go back to T? and that's my issue with it.

That's not how we annotate things as being possibly null except for this case of unconstrained generics.

We annotate things as being possibly null with [MaybeNull]. We annotate things that ARE nullable types T?. Now that won't be clear - a return type of T? now means that it could actually be returning a non-nullable value type.

The biggest problem this entire feature is trying to solve is that we don't have a way to indicate that an embedded generic type parameter may be null, e.g. with Task<T> values. In parameters and direct return values we get around that by using [MaybeNull], so I think an approach that lets us do that with type parameters would be better. It conveys that the type MAY be nullable, not that it IS nullable.

@mikernet

mikernet commented Jul 12, 2020

For example, I'll look at intellisense or a method signature and see:

Task<T?> GetValueAsync<T>();

Great, so now I do this:

var x = await GetValueAsync<int>();

And think...I'm getting back an int? so that's what x is. NOPE, oops, I missed that it has where T : unconstrained at the END of the method signature. That's annoying and weird, especially for long method signatures.

Furthermore, it now means that I have to scan the entire method signature for every generic method that returns T? so I can figure out if it is returning something actually nullable vs maybe nullable depending on the generic type argument, so that I can decide if I should be passing in int or int? as a generic parameter. Imagine doing this with a method that has multiple generic parameters, scanning back and forth for each one.

If instead I see this signature:

Task<[MaybeNull] T> GetValueAsync<T>();

Then it's all clear as day right off the get go.

Most of my work involves writing libraries that are very generics heavy. This is going to be incredibly annoying to deal with if that syntax is used.

@333fred
Member

333fred commented Jul 12, 2020

@333fred

The original notes on choosing T?? as the syntax were pretty clear on it: there's ambiguities when overriding.

That doesn't show any ambiguities that the compiler has to struggle with, those are design choices that need to be decided on what should be allowed in those various circumstances.

See this section from the notes I linked:

  1. Unresolved design problems. In overriding T? means where T : struct, going back to the beginning of Nullable. This already caused problems with T? where T : class in C# 8, which is why we introduced a feature where you could specify T? where T : class in an override, contrary to past C# design where constraints are not allowed to be re-specified in overrides. To accommodate overloads for unconstrained T? we would have to introduce a new type of explicit constraint meaning unconstrained. We don't have a syntax for that, and don't particularly want one.

There is a semantic ambiguity with T? in inheritance, which is what where T : unconstrained solves.

This is why they are moving away from the T?? syntax. Cyrus just explained to me that those examples show that it is ambiguous with the null coalescing operator in some situations.

I know, I was in that meeting :). We favored introducing a new syntax to solve the inheritance semantic ambiguity (particularly since almost no one will ever see it) in favor of getting rid of a syntactic ambiguity which would affect quite a few possible scenarios (nearly every ?? parse).

The original reason we rejected this was that there could be confusion that T? means "maybe default," so that the type is nullable if it's a reference type, but not nullable if it's a value type.

That's still an issue if we go back to T? and that's my issue with it.

That's a sticking point with a lot of people. I understand it. I just don't agree. I think it'll take a minute to wrap your head around it, but it's exactly the same wrapping that needs to be done for an unconstrained generic annotated with MaybeNull, except that it's significantly more discoverable and is the first instinct of most users.

That's not how we annotate things as being possibly null except for this case of unconstrained generics.

We annotate things as being possibly null with [MaybeNull]. We annotate things that ARE nullable types T?.

That's a misunderstanding of how NRT works. It would have been better if we marketed them as just nullable references: there's no such thing as a nullable type except Nullable<T>. In the output position, T? is the same thing as MaybeNull. In the input position, T? is the same thing as AllowNull. In the input/output position, it's the same thing as the combination of those attributes.
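A minimal sketch of the correspondence described above, as I read the rules (C# 9, nullable enabled; `Box` is invented for illustration):

```csharp
using System.Diagnostics.CodeAnalysis;

#nullable enable

class Box<T>
{
    private T _value = default!;

    // Output position: [return: MaybeNull] T and the C# 9 T? spell the
    // same thing: the result may be default/null.
    [return: MaybeNull]
    public T GetOrDefault() => _value;

    // Input position: [AllowNull] lets callers pass null even though the
    // declared parameter type is non-nullable T.
    public void Set([AllowNull] T value) => _value = value!;

    // Input/output position (a read-write property) combines both.
    [AllowNull, MaybeNull]
    public T Value { get => _value; set => _value = value!; }
}
```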

The biggest problem this entire feature is trying to solve is that we don't have a way to apply [MaybeNull] to the embedded type parameter, i.e. with Task<T> values. I think an approach that lets us do that instead would be better.

I still don't agree. Type parameters can be inputs or outputs, and if we were to allow attributes on individual substitutions we'd likely want to introduce a shorthand for [AllowNull, MaybeNull] T anyway. Task's T is effectively an output, but there are other places that you'd want to use T? such as List<T> which would need both. I think it would be an interesting exercise to see what problems we could solve if we allowed nested attributes like that, but I'm not convinced that there are real scenarios currently.

@mikernet

(updated post above with better examples)

@mikernet

mikernet commented Jul 12, 2020

I know NRTs aren't actual types, that's not my point. I'll rephrase in clearer terms:

We annotate things that could possibly output null depending on the type with [MaybeNull]. We annotate types that always can be null with T?. I think that's an important distinction that should be maintained.

Thank you for clarifying the ambiguity issue you were referring to. I'm fine with needing where T : unconstrained if it resolves ambiguities, but the rest is still very problematic IMO.

@mikernet

The biggest issue with the difference in what T? would mean now is that it will require significantly more cognitive load anytime you are calling a generic method. You can't see T? as a return value and just know that whatever you pass in for T will mean the method returns T?. Now you have to reference the end of the method signature, which could have a bunch of constraints on a bunch of different type parameters, find the one you're interested in and check if it is constrained to unconstrained. Then you can work out whether you should be passing int or int? as the type parameter.

I think that's bananas.

@mikernet

mikernet commented Jul 12, 2020

Last point because I'm spamming your inbox here.

A huge issue with this is that it can create subtle, hard to find bugs. Let's say you write this method:

Task<T?> GetValueOrDefaultAsync<T>() where T : unconstrained
{
    if (!HasValue)
        return Task.FromResult<T?>(default);

    // go fetch the value from somewhere...
}

Excellent. Now I'm using this method, see that the signature is Task<T?> GetValueOrDefault<T>(), think "okay I'm getting back null if there's no value" and proceed to do this:

public int? Value { get; set; }

Value = await GetValueOrDefaultAsync<int>();

if (Value == null)
    DoSomething();

Oops, DoSomething() is never going to get called - I looked at the return value and missed the where T : unconstrained at the end of the method, or just forgot that I need to mentally parse the generic constraints on every single generic method I call now. Having to refer to both the start and end of a method signature and then do some mental arithmetic just to figure out what it is going to return is just, uggghhhhh. Right now I see T? being returned, I know I'm getting back a nullable value, no mental arithmetic and constraint scanning needed.

This is a contrived and simple example obviously, but this makes writing correct code and finding incorrect code way more difficult, particularly when multiple generic types at multiple levels are involved.

If I see this:

Task<[MaybeNull] T> GetValueOrDefaultAsync<T>();

Then I know right away at a glance just by looking at the return value that if I want a nullable int back then I have to specify int? as the generic parameter. I also know that if I pass in int as a generic parameter then I'm getting back an int and not an int?.

SO MUCH "at a glance" information is lost by mangling the meaning of T?...literally ANY differentiating syntax would be better than reusing T?. T?? obviously would have achieved that but since that's a no-go then I don't care what the syntax is as long as it's different than T?.

@mikernet

mikernet commented Jul 12, 2020

How about something like:

public Task<default T> GetValueAsync<T>();

I feel like that perfectly conveys that the type may take on its default value and it eliminates any confusion about what type it actually represents in relation to the generic parameter.

@jcouv jcouv changed the title Proposal: improvements to nullable reference types Proposal: improvements to nullable reference types (VS 16.8, .NET 5) Sep 1, 2020
@MadsTorgersen MadsTorgersen modified the milestones: 9.0 candidate, 9.0 Sep 9, 2020
@333fred 333fred added the Implemented Needs ECMA Spec This feature has been implemented in C#, but still needs to be merged into the ECMA specification label Oct 16, 2020
@sjb-sjb

sjb-sjb commented Feb 16, 2021

Like @mikernet , I believe the context-dependency of T? for generic type parameters will be incredibly painful, counterintuitive and bug-prone. I agree with his view that it creates a huge burden to have to reason through what the type of T? is based on a review of the constraints on T.

Consider this example:

  class BV<TOut> where TOut : notnull
  {
      TOut? method(...) => ...;
  }

Given that TOut is not nullable, it seems like TOut? has a pretty clear meaning, which would be the nullable version of TOut. Nope! Actually TOut? is the same as TOut, which seems very counterintuitive.

Worse, suppose that later on I decide that actually I only want to use types that are in fact structs, so, I change the type constraint from notnull to struct. Now, given that all structs are notnull, it would seem fairly obvious that restricting the types to structs should not create more possibilities for the output. Nope! When I restrict to structs, suddenly the method's output type increases in scope to now include nullables.
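The constraint change described above can be sketched like this (C# 9; the types are made up, and the behavior noted in the comments is my reading of the rules rather than verified compiler output):

```csharp
#nullable enable

class WithNotNull<TOut> where TOut : notnull
{
    // Here TOut? means "TOut, possibly default": for TOut = string the
    // result is string?, but for TOut = int it is still int, not int?.
    public TOut? GetOrDefault() => default;
}

class WithStruct<TOut> where TOut : struct
{
    // After tightening the constraint to struct, TOut? is Nullable<TOut>:
    // for TOut = int the result type widens to int?.
    public TOut? GetOrDefault() => default;
}
```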

It seems to me that the ideas of defaultability and nullability are being conflated. These are two completely different mappings on types.

defaultable(T) ==> T  where T is a (non-nullable) value type
defaultable(T) ==> T  where T = Nullable<S> is a nullable value type
defaultable(T) ==> T? where T is a reference type marked as non-nullable
defaultable(T) ==> T  where T = S? is a reference type marked as nullable

vs

nullable(T) ==> T? where T is a (non-nullable) value type
nullable(T) ==> T  where T = Nullable<S> is a nullable value type
nullable(T) ==> T? where T is a reference type marked as non-nullable
nullable(T) ==> T  where T = S? is a reference type marked as nullable

Considering that these two concepts are pretty different from each other, it is shocking to me that '?' was chosen to indicate defaultability specifically in the context of unconstrained (and notnull) type parameters, while everywhere else in the language it refers to nullability -- including in the context of type parameters constrained to struct and class.

Meanwhile, there is a gap in the language which is that one cannot refer to the nullable version of an unconstrained generic type parameter. To express the nullable version of a type parameter one must constrain to either struct or class, which causes a lot of grief because if we want to handle both struct and class (and possibly class?) then we have to duplicate the code with minor variations -- a real pain if you have more than one or two generic parameters. That language gap should be addressed eventually and when it is, we will be forced to use some character other than "?" to refer to the nullable version of unconstrained generic type T because we've already used ? to mean the defaultable version of T. Say we use some other character in that situation, T# or whatever, to mean the nullable version of unconstrained T. We are headed toward an implausibly backwards situation: if T is unconstrained or notnull then T# means the nullable version of T while T? means the defaultable version of T; but if T is constrained to struct or class then T? means the nullable version of T while the defaultable version of T has to be explicitly specified as either T or T? depending on the constraint.

I know there is an argument that this language gap will never be closed. All that says to me though, is that the language may fail to grow because we've boxed ourselves into a corner. We would need a new language (maybe based on typed nullables) to advance.

I am disappointed because I've really loved C# for its beautiful and careful design, but I fear that we may have fubar'd ourselves on this one. What is the process for test driving these types of changes with a sizeable body of real-world programmers before they become carved in stone? Is it too late to get out of this situation?

@CyrusNajmabadi
Member

is that the language may fail to grow

I don't see the language failing to grow. It's likely just a wart that we'll live with (just like we've lived with it since C# 2.0).

@sjb-sjb

sjb-sjb commented Feb 17, 2021

@CyrusNajmabadi, good comment. Yes sometimes we just have to live with things.

Of course there is a balance between the desirability of making language improvements vs how wrenching it will be to deal with backward compatibility questions.

Also I appreciate your earlier comments that the purpose of introducing nullability checking was mostly to improve existing code. It worked for me, I did find some bugs by upgrading my code with nullability checking.

I do think sometimes about how fantastic it would be to have, at some point in the future, a language with non-nullability as a first-class type concept. This would allow a coherent resolution of the kind of challenges described above. It would permanently eliminate the problem of null values being stored in non-nullable references, and provide complete checking of nullability. A type-based approach would be cognitively far easier to understand and reason about. It would also meet the expectations that many of us "everyday" programmers came in with: that non-nullable reference types would actually be types.

On the other side of the scale is what seems to my innocent eyes to be a relatively straightforward task of ensuring backward compatibility. Could we not introduce a compiler context "#nullable typed" within which non-nullable references are given full type status? The difference between nullable and non-nullable reference types would just be that the new language design would eliminate at compile time all situations in which nulls are put into variables that have non-nullable type; they would be the same from a runtime perspective.

I know there would be some challenges in the language design but I believe these are surmountable. As for compatibility, I don't see why one could not call back and forth between such a language and C# 9 in a fairly direct way; the main catch would be the desirability of introducing run-time null checks when calling back into the new language from C# 9.

@roji
Member

roji commented Feb 17, 2021

@sjb-sjb FWIW these questions were raised and discussed extensively, when the nullable reference types feature was being designed - that ship has pretty much sailed. Among other things, introducing nullable reference types as full types would entail lots of breakage, as existing APIs which accept string would not be able to accept the new nullable string (and introducing conversions is definitely not direct/simple).

@sjb-sjb

sjb-sjb commented Feb 17, 2021

@roji I am talking about the future here, I'm not suggesting changing the decisions that were made leading to C#9.

Under the approach I described, existing APIs would continue to be compiled with C#9 and would then be called from the new language ("C#n" I guess) by pushing the C#n string (or string?) argument directly into the C#9 parameter, with no modification. Both the C#9 and C#n storage would contain exactly the same thing, i.e. a memory address referring to a String (or it could be null if the C#n argument was type string?). What do you think the problem would be?

P.S. The existing C#9 API could of course always be upgraded to C#n if it was felt worthwhile to do so.

@roji
Member

roji commented Feb 17, 2021

If string? is a CLR type distinct from string, then at least with the current runtime and rules you can't just push a string? argument into a function accepting a string. It's not about the memory address - it's about how the types work in the runtime etc. My point was more that options like what you're proposing were visited in the early stages of the design for nullable reference types, and the decision was to go in another way.

@sjb-sjb

sjb-sjb commented Feb 17, 2021

I think it would be possible to map both string and string? in C#n to the same CLR type. On a simple approach, you would just not have a guarantee that non-nullables returned from earlier (non-C#n) language assemblies were actually non-null -- it would be a bit like calling into 'unsafe' code. However it would make more sense to me to have a different CLR type for non-nullable string and cast them back and forth, since this would make things explicit in the IL and provide a way to introduce null-checking of values produced by compilation environments that do not guarantee null safety. You would need casting in both directions when going between null-respecting and non-null-respecting assemblies, with one direction possibly throwing an exception -- I know that exceptions are not normally used for implicit casts but it might make sense for this purpose. There might be a change needed to the castclass instruction to support this, but if so then the change would probably only apply when a non-nullable reference type was involved in the instruction. Hopefully that would provide backward compatibility given that currently there aren't any non-nullable reference types in the CLR.

Disclosure, I'm not an expert on compilers, CLR or CIL by any means. I'm just a lay programmer :-)

On the topic of the earlier discussions that were held, I have been browsing through the 2015, 2016, 2017, 2018 language design meeting minutes. If you look at it, even the very earliest discussions were about how to benefit existing code by improving null checking, and pretty much saying that the idea of evolving C# to include typed non-nullables was a great ideal but too difficult to accomplish, i.e. at least at that time. I do think the decision to focus on existing code made sense at the time and it has brought us a lot closer to the end goal of having a system with non-null references. What I'm saying though is it is time to plan for the next stage which is to get to a language in which we have truly non-null types.

@HaloFour
Contributor

@sjb-sjb

The goal is not to avoid null, it's to avoid NREs, and NRT does a good job of that. It's not academically ideal, but it doesn't have to be. So the question is: given NRTs, is the risk/reward of gutting the type system in the runtime and bifurcating the ecosystem worth it to chase down the remaining cases and/or to achieve an academic degree of type safety? In my opinion, it's much less worth it to consider now than it was five years ago.

@sjb-sjb

sjb-sjb commented Feb 18, 2021

Well I can see you're not likely to be convinced, so I guess we'll have to leave it at a philosophical difference. I do think it would be better, not just academically but also in the fact that I would not have to, for example, duplicate my code multiple times in order to refer to the nullable version of a type parameter (struct or class parameter) or deduce what T? means by referring back to the type constraints.

@HaloFour
Contributor

@sjb-sjb

Well I can see you're not likely to be convinced

Because the debate really hasn't changed. But the circumstances have, in that we now have a solution that, while certainly imperfect, covers the vast majority of use cases. So while the risk is just as high as it was five years ago the reward is substantially less.

but also in the fact that I would not have to, for example, duplicate my code multiple times in order to refer to the nullable version of a type parameter

If anything making NRTs a part of the runtime would likely mean that you have to write that code even more times than you do now as NRTs would still be incompatible with NVTs and they would be additionally incompatible with nullable reference types.

@CyrusNajmabadi
Member

@sjb-sjb The benefits are understood. But it's not as simple as just changing things as you mention. All this work has to happen in the context of the actual ecosystem out there that we absolutely do not want to destabilize or bifurcate.

@sjb-sjb

sjb-sjb commented Feb 20, 2021

@CyrusNajmabadi, @HaloFour Which is why it's great to have people like you folk around who understand how it all fits together. I just happen to be long on the beauty / esthetics side of these things ... maybe it's because I don't have to actually deal with implementing the language :-).

Since I'm not a compiler expert I don't see why one couldn't take the approach I described of zero change to the CLR, using null checks for references coming back from nullability-unsafe environments. I guess the point is you would then have to maintain the old version of the compiler as well as the new one in order to support both #nullable enable and #nullable typed. Later on, after there were some more language features added to #nullable typed but not to #nullable enable, you could have people complain that they also want those new features in #nullable enable. Easy for me to say, "just say no" :-) but I guess you as owners of all this may not want to live with that situation.

@stijnherreman

Regarding [MemberNotNull]:

  • Is it doable to detect such a call done by a parent method?
    string? foo;
    
    [MemberNotNull(nameof(foo))]
    private void InitFoo()
    {
    	foo = "foo";
    }
    
    public void DoSomething()
    {
    	InitFoo();
    	var _ = foo.Length; // OK
    }
    
    public void DoSomethingElse()
    {
    	InitFoo();
    	GetFooLength();
    }
    
    private void GetFooLength()
    {
    	var _ = foo.Length; // warning CS8602: Dereference of a possibly null reference.
    }
  • Is there a chance of getting support for async methods, through the same or a new attribute? The compiler (rightfully) warns that the member may still be null when exiting. See also: API Proposal: MemberNotNullWhenCompleteAttribute #5657
    string? foo;
    
    private Task<string> GetFoo() => Task.FromResult("foo");
    
    [MemberNotNull(nameof(foo))]
    private async Task LoadFooAsync()
    {
    	foo = await GetFoo(); // warning CS8774: Member 'foo' must have a non-null value when exiting.
    }

@TahirAhmadov

@stijnherreman I think you would need something like this:

string? foo;

[MemberNotNull(nameof(foo))]
private void InitFoo()
{
	foo = "foo";
}

public void DoSomething()
{
	InitFoo();
	var _ = foo.Length; // OK
}

public void DoSomethingElse()
{
	GetFooLength(); // warning
	InitFoo();
	GetFooLength(); // no warning
}

[ExpectsMemberNotNull(nameof(foo))] // can somebody clarify if this already exists?
private void GetFooLength()
{
	var _ = foo.Length; // no warning
}
