Issue with Never vs LSP #1458
I don't see a bug here. The statement in Kevin Millikin's doc is a little imprecise, and there is discussion in the comments about exactly what it should mean, but it seems most of the confusion is about the interaction between `Never` and `Any`, which isn't at issue here.
`Never` (and all other bottom types in type systems) is not a subtype of any other type. It is the uninhabited type. I can link to formal type theory as well if you'd prefer?
Please note that you have `Never` in the return type there, which will never result in a value, only an exception. Inverting this makes it impossible to correctly call the function. Reliance on an uninhabited type in any reachable code is verifiably an error, and formalized type theories all agree on this. This should absolutely be an error, as otherwise this is just a sledgehammer that allows doing anything.

```python
from typing import Never

class A:
    x: int

class B:
    x: str

class C(A, B):
    x: Never
```

We've created a class for which it is impossible to ever have an attribute `x`, but if we are expecting an `A` or a `B`, we're expecting that attribute. This is unsafe replacement, and the difference between an uninhabited type and a universal subtype matters.
@DiscordLiz, your definition of `Never` differs from the one in use here. It also differs from how TypeScript handles `never`:

```typescript
interface Parent {
    x: number;
}

interface Child extends Parent {
    x: never; // No type violation
}
```

I agree with Jelle that this isn't a bug, at least in terms of how the type system currently behaves.
I think this is poorly considered in the face of both formal theory and obvious cases where this just allows nonsensical subclasses. It was not possible for users to create a concrete dependence on `Never` prior to 3.11, where this was added without a PEP and without such consideration. `NoReturn` was handled correctly: it was correctly determined that there can't be a correct result relying on the (lack of) type. Additionally, `typing.assert_never`, which was added at the same time, correctly shows that relying on a type being `Never` is equivalent to having unreachable code.
The first line of the Wikipedia article on bottom types says:
I think this also makes intuitive sense: the bottom type is the type that is not inhabited, so it's kind of like the empty set, and the empty set is a subset of all other sets. So, why does your example seemingly break LSP? It doesn't actually, because you can't actually construct an instance of `Never`. Incidentally, here is the union of any type with `Never`:

```python
from typing import Never

a: int | Never
reveal_type(a)  # N: Revealed type is "Union[builtins.int, <nothing>]"
```
Following over from where this came up as well: while I can see a definition of `Never` that allows this, @DiscordLiz is correct in that formal set-theoretic type systems based on subtyping relationships, including those with gradual typing, do not view the bottom type as a subtype of other types.*

*Caveat on not partaking in subtyping: there is a definition of subtyping that they do partake in, but this definition of subtyping is not how type checkers currently handle subtyping. We could adopt that definition, but doing so is equivalent to saying the current behavior is wrong, so I didn't want to claim "it does participate in subtyping, Python just does subtyping wrong too". There are many ways to define subtyping, and we don't have formal, codified, accepted definitions; we have people pointing to what implementations are doing.

In such formal theories, on which much of the work being done in formalization is based, the top and bottom type are not types or even sets of types, but sets of sets of types. They exist at a higher order and are conceptually useful for indicating that something could be any possible compatible type, or can't be satisfied by any type.

I don't think allowing `Never` as a universal subtype is useful, and allowing it arguably completely changes expectations of what is a valid subtype, for behavior that was added without a PEP. If this is the intended interpretation, then adding it without a PEP was breaking, and now it's being suggested that changing it would require a notice period. (Prior to the addition of `Never` in 3.11, we only had `NoReturn`, which, being documented as only for use in a function return, happened to be handled correctly incidentally: even if people assumed `Never` to be a subtype of other types, it could only indicate an exception as a return type, not actually replace a concrete type where there was a valid expectation of one.)
To wit, on the uselessness of this: if we accept this absurd premise, all attribute access, even when we've specified a type that says an attribute exists, is now subject to that attribute not existing, all because of an addition to the type system without a clear definition and without going through the process for additions, where there would have been visibility on this. I also don't think that "well, other languages allow it" means it is useful or is a good justification. All that means is that multiple people reached the same conclusion, not that the conclusion was reached correctly in either case.
Wikipedia is not correct here without a specific definition of subtyping; there are definitions of subtyping for which this can be shown not to hold. I may take the time to update it with proper sourcing correcting this later, but I've got a lot of things I'm discussing in relation to this, and that's a pretty low priority. You may want to read Abstracting Gradual Typing (AGT) (Garcia et al., 2016) for more rigorous definitions that have been proven correct.
mypy doesn't error at constructing an instance of B, so it allows it even though that requires an instance of B.foo whose type is uninhabited, as shown in the mypy playground link. Even if mypy didn't allow constructing an instance of B, this would still be an issue when it comes to accepting things of this type where the base type is expected.
The crux of the issue here (functionally) is this:

As for formal treatments, there are papers galore, but Python is unfortunately not so well formalized at this time.
Other type checkers are and have been allowing this, and this may need to be discussed more as an issue for type safety overall. I don't think having a way to intentionally break compatibility and yet claim compatibility is a good thing here, and the difference between `NoReturn` (as a return type annotation) and `Never` everywhere in general changes the effects of the existence of a user-specified `Never` that is treated as participating in subtype relationships.
It was pointed out to me in a Discord discussion that this behavior has been in place for multiple versions. I don't think this changes the need to discuss whether it should be allowed or not. My personal opinion here is that `Never` should be viewed as the absence of a valid type and uninhabited, not as a universal subtype that is uninhabitable, for the reasons above involving replacement as well as long-term goals of formalization. But checking for the impact of such a change (which would already be important) is much more important with the context that this has been supported for multiple versions.
Yes, that's correct. Thanks for pointing that out.

In Python 3.11, `Never` was added to `typing`. At the same time, `assert_never` was added.

As for the broader issue being discussed here, I'm trying to get a sense for whether this is simply a nomenclature issue or whether there's a meaningful difference of opinion. One area of contention is whether the concept called a "bottom type" participates in subtype relationships. It makes logical sense to me that it would, given that it satisfies all of the set-theoretic properties of subtyping, but it's possible I'm missing something here. If we're not able to agree on this, it will be difficult to make progress. The terms "bottom type" and "top type" have been used pretty consistently throughout the development of the Python type system (included in PEP 483), but some of the other terms being used in this thread have not.

It was pointed out in the thread above that mypy doesn't generate an error when a class with an attribute of type `Never` is constructed:

```python
from typing import Never

class Foo:
    x: Never

f = Foo()  # No type error
f.x = ""   # Type violation

foo: Never  # No type error
foo = ""    # Type violation
```

This is because Python allows attributes and variables to be declared without being assigned. This is in contrast to some (stricter) programming languages where it's considered a static error to declare a symbol and leave it unassigned.
I agree that the fact that attributes can remain unassigned is an unrelated issue from this discussion. If you force the attribute to be assigned, then type checkers will complain in the correct way:

```python
from dataclasses import dataclass
from typing import Never

@dataclass
class Foo:
    x: Never

f = Foo("")  # type error no matter what you try to pass in
```
Yes, I believe that is the case.
Well, I can't agree with that. The set-theoretic view, as I've understood it and seen it presented formally in papers where people have actually shown formal proofs, is that the bottom and top types are not even really part of the same hierarchy as other static and gradual types. They exist at a higher order. The bottom type isn't a static type, but the absence of valid types. The top type isn't a singular static or gradual type (even though it behaves like a gradual type), but the potential for any compatible type. Neither participates in subtyping relationships [in the way Python currently treats subtyping] in this model, but it is possible to go from the top type to a more specific type when presented with evidence that the more specific gradual or static type is correct. It is also possible to go from having a type to eliminating all possible types (such as in the case of exhaustive matching).
Tangential comments about process (off-topic)
Which is type checkers changing the behavior via... their issue tracker, instead of by the specification being amended. Sigh. It was years ago, so it's a little late to do much about that, but I don't think this is generally a good way for changes to happen. The specification should drive what is correct, not the implementation. If an implementation has a reason to need changes, there's a way to do that.
The thing is, typing-sig isn't the most noticeable place for behavioral changes to be discussed. I can sign up and follow the mailing lists, I guess, but I don't really like specifications changing via a small thread on a mailing list that may not receive attention. Specification changes have very long-term and wide-reaching impact.
(Sorry if this is a lot; I've split my comments by concern to avoid a singular wall of text.)

I don't know how breaking it would be to change the definition of `Never` to be more in line with what proven theory says is correct for the bottom type, and I'm not in a great place to implement something to that effect right now to run against mypy_primer, but I think it is worth considering. Notably, the only actual behavioral changes are these: `Never` can still arise from type narrowing removing all other possible types, but any code depending on an instance of `Never` can't be expected to be validly reached at runtime, and `Never` can't be used to replace other types except by removing the other types.

With this change, the only function which should ever receive `Never` as an argument is `assert_never` (or any other future function intended to handle how runtime realities may diverge from static analysis due to misuse or the uncertainties of real-world code), and only at type checking time. It would still be valid as a return type (indicating the lack of a return).

`assert_never`'s signature should also probably be updated:

```python
@overload
def assert_never(obj: object) -> Never: ...  # raises at runtime if called with any real object; purpose is `unreachable` code
@overload
def assert_never(obj: Never) -> None: ...  # the point is ensuring the type has been exhausted of all possible types
def assert_never(obj):
    ...
```

I know there's existing work on formalization going on elsewhere, but we're running into issues with the current, less formal definitions now, and with ongoing work in trying to get the Intersections PEP ready.
I disagree that `Never` is consistent with other types, but it's probably useful to understand how people seem to have come to the conclusion that it is.

Let's start with something basic. If I specify a singular static type in a function parameter annotation, it isn't just that singular static type which is correct to give the function: anything which can suitably stand in as a replacement for that type is also valid. So the parameter annotation doesn't represent one type (except in cases of invariant parameters) even though it is spelled as one type. This conceptual set of possible types which can be used in the hypothetical function exists at the same higher order @mikeshardmind referred to when describing how the top and bottom type don't exist at the same level as static and gradual types in the set-theoretic model. In Python, we have two ways that are supported by the type system for this: inheritance (nominal typing) and Protocols (structural typing).

`Never` can't suitably stand in for any type. It lacks the structure of every type, as it can't even exist, and it isn't a subclass of any other type either. It isn't a gradual type; it's the lack of a type altogether. So why is `Never` a suitable replacement in a return type? Well, it isn't, because we aren't returning a type anymore, at least not in a way that typing checks.* Python does not have the same code path for returning values and raising exceptions, so a function which unconditionally raises can have its return annotation used to mark the lack of a return value without this being a replacement of the return type; rather, it's specifying the lack of a type being returned. I believe the original name of `NoReturn` was actually more correct** for most places users could actually specify it, and some of the original insight of those who named it that may have fallen away for lack of being recorded anywhere important.
The one case where `Never` is more correct as an annotation is in the special-purpose function `assert_never`, whose goal is also to say that there is no longer a valid type. I think it's how errors are handled in Python and many other languages, and `Never` appearing to work as a replacement in one place (return values), that misleads people into believing it should work everywhere and be consistent with all types.

*Exceptions are really still just values with special control flow, but absent a proposal to add checked exceptions to Python's type system, this distinction doesn't matter. I don't care about or want checked exceptions in Python, but adding them to the type system would be an ordeal, and anyone who does want that would find issues arising from `Never` as a replacement annotation without additional syntax.

** `Never` is still a good and useful name though. `Never` can still correctly arise in inference with elimination of possible types, and it would be weird to have `NoReturn` as the name in inference in that case.
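The return-annotation case described above can be sketched minimally (function name is illustrative): `NoReturn` marks the absence of a returned value rather than substituting a return type, since control only ever leaves via an exception.

```python
from typing import NoReturn

def fail(msg: str) -> NoReturn:
    # No value is ever produced; control always leaves via an exception,
    # so the annotation describes the *lack* of a return, not a type.
    raise ValueError(msg)
```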
I think it is debatable whether constructing an instance of such a class should be allowed at all. Consider:

```python
from typing import NoReturn

class A:
    def foo(self, x: int) -> int: ...

class B(A):
    def foo(self, x: int) -> NoReturn: ...
```

mypy allows this, but this arguably violates LSP for the same reasons as the example in the OP: it introduces unreachable code into code using `A`. Also note that there is related prior discussion in mypy [1].
[1] python/mypy#4116
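To make the LSP concern in the override example concrete (a sketch; bodies filled in for runnability), code typed against `A` contains statements that can never execute when handed a `B`:

```python
from typing import NoReturn

class A:
    def foo(self, x: int) -> int:
        return x

class B(A):
    def foo(self, x: int) -> NoReturn:
        raise RuntimeError("never returns")

def use(a: A) -> int:
    y = a.foo(1)
    # Statically `y` is an int, but with a B instance this line
    # (and everything after the call) is unreachable at runtime.
    return y + 1
```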
I think a type theorist would reply that yes, you are allowed to construct an instance of this class. But Python's static typing obviously doesn't have to follow type theory exactly, so I won't object to potential changes on this ground. (I just object to the OP's framing of this issue as mypy violating type theory. In type theory, the bottom type is definitely a subtype of all other types. You can advocate for deviating from this, but I don't think you can say that it's more theoretically grounded that way.)
This is correct, but also not useful IMO. It is not different from passing in an object where `Never` is expected. I agree that the bottom type is by definition compatible with every other type. I think the OP's argument is that this should not be the definition used.
In the set-theoretic model of typing which current formalization efforts are largely being built on top of, the bottom and top types don't participate in subtyping behavior (at least in the way Python type checkers are defining what a subtype is; see other posts that point out a reconciliation). Relations with these types are defined by other properties. In these models, the bottom type is uninhabitable and does not participate in subtyping behavior directly with static or gradual types; instead it participates, with a well-defined set of behaviors (not equivalent to subtyping for the top and bottom types), in the relation of type specifications, which are the sets of types that can be used where a static or gradual type is requested. This distinction is subtle but important. (Note that some papers have defined this as subtyping as well, but those definitions do not match Python's current behavior.)

Outside of two things, Python's type system fully follows the set-theoretic model's definitions, specifically those built up in the literature referenced above. If that doesn't change your perspective, I'd be interested to hear how you reconcile this, as well as the effects on LSP, as it could be illuminating for how we handle this going forward.
@eltoder See above, but no: the argument is that Python actually follows the set-theoretic definitions for nearly everything else, and that it should for `Never` as well (the set-theoretic ones have the bottom type as uninhabited, as you nodded to with "some theories"). Not doing so creates inconsistency, and creates a situation where established rules for safe replacement do not work. (Note: this is still being discussed in the formalization happening in the background as well; there isn't good agreement anywhere yet on which definitions should be used, but there are issues we can show with treating `Never` as a type that participates in subtyping.)
As for the current behavior concretely violating LSP, @randolf-scholz raised this recently in the related conversation pertaining to intersections.

It makes an attempt to square the two, but notably, it creates a specific exception that reaches the same result as changing the general definition as argued here.
@mikeshardmind I think it is hard to argue that "python actually follows the set-theoretic definitions for nearly everything". The typing PEPs are generally not precise enough, so mypy is used as a de facto interpretation of those PEPs. But mypy has over 2000 open issues and also includes behaviors that address style or practical concerns and are far from theory. I expect that if you examine the actual behavior of mypy in enough detail, you'll find a lot of places where it deviates from theory. OTOH, the current definition of `Never` matches how other languages describe their bottom types [1].

[1] https://en.wikipedia.org/wiki/Bottom_type#In_programming_languages
@eltoder The discussion in this thread is overflowing from over a month of exploring issues with a potential addition to the type system which is based on theory. I don't lightly bandy about the claim that Python mostly follows these definitions; this has been heavily discussed, and it led to a lot of discussion that ultimately needed to come here due to a poor interaction between how type checkers are currently interpreting `Never` and other assumptions people have about type safety. I also think we would benefit from better definitions that don't rely on "well, this is what type checkers are currently doing" to define what is correct, or the end result is "type checkers, including any errors in their implementation, are the source of truth, not the specified rules."
Funnily enough, what you linked isn't quite making the argument you seem to think it is. For example, Haskell's and Rust's definitions both match more closely what is described in this issue than what type checkers are currently doing. Haskell:
Rust:
Rust and Haskell do not allow the bottom type where Python type checkers currently do, only allowing it where there is no value, and correctly treating reaching code that has a type of `Never` as an error. @DiscordLiz gave a good explanation above of how easy it would be for someone without the type theory background to make the incorrect leap that many have made, and why. Additionally, when comparing TypeScript's use of intersections with what is being proposed for Python, it was found that TypeScript has an arbitrary, undocumented sorting algorithm that has an impact on the outcome of intersections. Should we conclude that intersections must have this feature because another language did this? Just because another language does something does not mean that it has had the considerations appropriate for other uses.
@mikeshardmind just a slight clarification for anyone confused: Haskell allows specifying such code but treats the use of it as an impossible-to-recover-from error. This is strictly a mathematical outcome that could arise due to composition of functions, and not an intent to allow such code to exist or be used. You are still correct that this is not treating `Never` as a subtype, but as the absence of a valid type.
Hopefully mniip doesn't mind this being shared here to make the point that this is possible, but it's possible to actually check this by encoding the rules of a specific model of working with types into a formal proof engine and comparing it to what is currently the "accepted consensus behavior", since you are arguing that the PEPs aren't precise. I similarly share the above opinion that the imprecision in PEPs meant to specify is a problem.

Coq formalization of "Gradual Typing with Union and Intersection Types" (https://www.irif.fr/~gc/papers/icfp17.pdf), starting from https://gist.github.com/mniip/0bfb52f35da3ad6c96878082e6af37e6 but modified further:

```coq
Require Import Coq.Lists.List.
Require Import Coq.Bool.Bool.
Require Import Coq.Relations.Relations.
Require Import Coq.Compat.AdmitAxiom.
Import ListNotations.
Infix "<$>" := map (at level 65, left associativity).
Notation "k >>= f" := (flat_map f k) (at level 66, left associativity).
Definition allb {A} (f : A -> bool) (xs : list A) : bool :=
fold_right andb true (map f xs).
Definition anyb {A} (f : A -> bool) (xs : list A) : bool :=
fold_right orb false (map f xs).
Definition apList {A B} (fs : list (A -> B)) (xs : list A) : list B :=
fs >>= fun f => f <$> xs.
Infix "<*>" := apList (at level 65, left associativity).
Definition null {A} (xs : list A) : bool :=
match xs with
| [] => true
| _ => false
end.
(* Subtyping decision procedure for base types. We parameterize over it here. *)
Class SubtypeDecidable (BaseType : Type) : Type :=
{ bsubDecide : list BaseType -> list BaseType -> bool
(* `bsubDecide xs ys` is whether the intersection of xs lies within the union of ys *)
; BSub := fun t s => bsubDecide [t] [s] = true
; bsub_refl : reflexive _ BSub
; bsub_trans : transitive _ BSub
}.
Section Types.
Context {BaseType : Type} `{SubtypeDecidable BaseType}.
Inductive SType : Type :=
| SBottom : SType
| STop : SType
| SNot : SType -> SType
| SArrow : SType -> SType -> SType
| SBase : BaseType -> SType
| SAnd : SType -> SType -> SType
| SOr : SType -> SType -> SType.
Inductive GType : Type :=
| GBottom : GType
| GTop : GType
| GAny : GType
| GNot : SType -> GType
| GArrow : GType -> GType -> GType
| GBase : BaseType -> GType
| GAnd : GType -> GType -> GType
| GOr : GType -> GType -> GType.
Declare Scope stype_scope.
Delimit Scope stype_scope with stype.
Bind Scope stype_scope with SType.
Notation "𝟘" := SBottom : stype_scope.
Notation "𝟙" := STop : stype_scope.
Notation "¬ t" := (SNot t) (at level 75) : stype_scope.
Infix "→" := SArrow (at level 99, right associativity) : stype_scope.
Infix "∧" := SAnd (at level 85, right associativity) : stype_scope.
Infix "∨" := SOr (at level 85, right associativity) : stype_scope.
Notation "⋀ xs" := (fold_right SAnd STop xs) (at level 85) : stype_scope.
Notation "⋁ xs" := (fold_right SOr SBottom xs) (at level 85) : stype_scope.
Declare Scope gtype_scope.
Delimit Scope gtype_scope with gtype.
Bind Scope gtype_scope with GType.
Notation "𝟘" := GBottom : gtype_scope.
Notation "𝟙" := GTop : gtype_scope.
Notation "?" := GAny : gtype_scope.
Notation "¬ t" := (GNot t) (at level 75) : gtype_scope.
Infix "→" := GArrow (at level 99, right associativity) : gtype_scope.
Infix "∧" := GAnd (at level 85, right associativity) : gtype_scope.
Infix "∨" := GOr (at level 85, right associativity) : gtype_scope.
Notation "⋀ xs" := (fold_right GAnd GTop xs) (at level 85) : gtype_scope.
Notation "⋁ xs" := (fold_right GOr GBottom xs) (at level 85) : gtype_scope.
Reserved Infix "≤" (at level 70, no associativity).
Inductive Sub : SType -> SType -> Prop :=
| sub_base : forall b c, BSub b c -> SBase b ≤ SBase c
| sub_not_mono : forall t s, s ≤ t -> (¬t) ≤ (¬s)
| sub_arrow_mono : forall t1 t2 s1 s2, s1 ≤ t1 -> t2 ≤ s2 -> (t1 → t2) ≤ (s1 → s2)
| sub_bottom : forall t, 𝟘 ≤ t
| sub_top : forall t, t ≤ 𝟙
| sub_and_l : forall t1 t2, (t1 ∧ t2) ≤ t1
| sub_and_r : forall t1 t2, (t1 ∧ t2) ≤ t2
| sub_and : forall t s1 s2, t ≤ s1 -> t ≤ s2 -> t ≤ (s1 ∧ s2)
| sub_or_l : forall t1 t2, t1 ≤ (t1 ∨ t2)
| sub_or_r : forall t1 t2, t2 ≤ (t1 ∨ t2)
| sub_or : forall t1 t2 s, t1 ≤ s -> t2 ≤ s -> (t1 ∨ t2) ≤ s
| sub_trans : forall t s r, t ≤ s -> s ≤ r -> t ≤ r
where "t1 ≤ t2" := (Sub t1 t2).
Section SubDecide.
(* Either conjuncts or disjuncts depending on context, categorized by whether
it's a BaseType or an arrow, and by whether it's negated *)
Record Junct := mkJunct
{ bases : list BaseType
; arrows : list (SType * SType)
; notBases : list BaseType
; notArrows : list (SType * SType)
}.
Definition unitJunct : Junct :=
{| bases := []
; arrows := []
; notBases := []
; notArrows := []
|}.
Definition mergeJuncts (j1 j2 : Junct) : Junct :=
{| bases := bases j1 ++ bases j2
; arrows := arrows j1 ++ arrows j2
; notBases := notBases j1 ++ notBases j2
; notArrows := notArrows j1 ++ notArrows j2
|}.
Definition invertJunct (j : Junct) : Junct :=
{| bases := notBases j
; arrows := notArrows j
; notBases := bases j
; notArrows := arrows j
|}.
Fixpoint udnf (t : SType) : list Junct :=
match t with
| SBase b => [{| bases := [b]; arrows := []; notBases := []; notArrows := [] |}]
| t1 → t2 => [{| bases := []; arrows := [(t1, t2)]; notBases := []; notArrows := [] |}]
| 𝟘 => []
| 𝟙 => [unitJunct]
| ¬t => invertJunct <$> ucnf t
| t1 ∨ t2 => udnf t1 ++ udnf t2
| t1 ∧ t2 => mergeJuncts <$> udnf t1 <*> udnf t2
end%stype
with ucnf (t : SType) : list Junct :=
match t with
| SBase b => [{| bases := [b]; arrows := []; notBases := []; notArrows := [] |}]
| t1 → t2 => [{| bases := []; arrows := [(t1, t2)]; notBases := []; notArrows := [] |}]
| 𝟘 => [unitJunct]
| 𝟙 => []
| ¬t => invertJunct <$> udnf t
| t1 ∧ t2 => ucnf t1 ++ ucnf t2
| t1 ∨ t2 => mergeJuncts <$> ucnf t1 <*> ucnf t2
end%stype.
Definition junctDecide (jt js : Junct) : bool :=
match bases jt ++ notBases js, arrows jt ++ notArrows js with
| lbases, [] => bsubDecide lbases (bases js ++ notBases jt)
| [], _ => match proof_admitted with
(* How to decide subtyping of a bunch of arrows? Not sure. *)
end
| _, _ => match proof_admitted with
(* How to decide whether base types intersect arrow types?
This depends on the target language in some way. *)
end
end.
Definition subDecide (t s : SType) : bool :=
allb id (junctDecide <$> udnf t <*> ucnf s).
End SubDecide.
Infix "≤!" := subDecide (at level 70, no associativity).
Notation "t ∈ S" := (S t) (at level 70, only parsing).
Fixpoint γ (τ : GType) : SType -> Prop :=
match τ with
| ? => fun _ => True
| 𝟘 => eq 𝟘%stype
| 𝟙 => eq 𝟙%stype
| GBase b => eq (SBase b)
| ¬t => eq (¬t)%stype
| τ1 ∧ τ2 => fun t => match t with
| t1 ∧ t2 => t1 ∈ γ τ1 /\ t2 ∈ γ τ2
| _ => False
end%stype
| τ1 ∨ τ2 => fun t => match t with
| t1 ∨ t2 => t1 ∈ γ τ1 /\ t2 ∈ γ τ2
| _ => False
end%stype
| τ1 → τ2 => fun t => match t with
| t1 → t2 => t1 ∈ γ τ1 /\ t2 ∈ γ τ2
| _ => False
end%stype
end%gtype.
Definition lift2 (P : SType -> SType -> Prop) : GType -> GType -> Prop
:= fun τ1 τ2 => exists t1 t2, t1 ∈ γ τ1 /\ t2 ∈ γ τ2 /\ P t1 t2.
Fixpoint gradualMinimum (τ : GType) : SType :=
match τ with
| ? => 𝟘%stype
| 𝟘 => 𝟘%stype
| 𝟙 => 𝟙%stype
| GBase b => SBase b
| ¬t => (¬t)%stype
| τ1 ∧ τ2 => (gradualMinimum τ1 ∧ gradualMinimum τ2)%stype
| τ1 ∨ τ2 => (gradualMinimum τ1 ∨ gradualMinimum τ2)%stype
| τ1 → τ2 => (gradualMaximum τ1 → gradualMinimum τ2)%stype
end%gtype
with gradualMaximum (τ : GType) : SType :=
match τ with
| ? => 𝟙%stype
| 𝟘 => 𝟘%stype
| 𝟙 => 𝟙%stype
| GBase b => SBase b
| ¬t => (¬t)%stype
| τ1 ∧ τ2 => (gradualMaximum τ1 ∧ gradualMaximum τ2)%stype
| τ1 ∨ τ2 => (gradualMaximum τ1 ∨ gradualMaximum τ2)%stype
| τ1 → τ2 => (gradualMinimum τ1 → gradualMaximum τ2)%stype
end%gtype.
Notation "τ ⇓" := (gradualMinimum τ) (at level 65).
Notation "τ ⇑" := (gradualMaximum τ) (at level 65).
Infix "̃≤" := (lift2 Sub) (at level 70, no associativity).
Definition consSubDecide (σ τ : GType) : bool := σ⇓ ≤! τ⇑.
Infix "̃≤!" := consSubDecide (at level 70, no associativity).
Infix "̃≰" := (lift2 (fun s t => ~(s ≤ t))) (at level 70, no associativity).
Definition consNotSubDecide (σ τ : GType) : bool := negb (σ⇑ ≤! τ⇓).
Infix "̃≰!" := consNotSubDecide (at level 70, no associativity).
(* ??? The paper completely glances over that it treats SType as a subset of
GType in the definition of γA. γA+ restricted to SType is γApos' here. *)
Fixpoint stype2gtype (t : SType) : GType :=
match t with
| 𝟘 => 𝟘%gtype
| 𝟙 => 𝟙%gtype
| ¬t => (¬t)%gtype
| t1 → t2 => (stype2gtype t1 → stype2gtype t2)%gtype
| SBase b => GBase b
| t1 ∧ t2 => (stype2gtype t1 ∧ stype2gtype t2)%gtype
| t1 ∨ t2 => (stype2gtype t1 ∨ stype2gtype t2)%gtype
end%stype.
Fixpoint γAneg (t : SType) : list (list (GType * GType)) :=
match t with
| t1 ∨ t2 => (fun T1 T2 => T1 ++ T2) <$> γAneg t1 <*> γAneg t2
| t1 ∧ t2 => γAneg t1 ++ γAneg t2
| _ → _ => [[]]
| ¬t => γApos' t
| 𝟘 => [[]]
| 𝟙 => []
| SBase _ => [[]]
end%stype
with γApos' (t : SType) : list (list (GType * GType)) :=
match t with
| t1 ∨ t2 => γApos' t1 ++ γApos' t2
| t1 ∧ t2 => (fun T1 T2 => T1 ++ T2) <$> γApos' t1 <*> γApos' t2
| t1 → t2 => [[(stype2gtype t1, stype2gtype t2)]]
| ¬t => γAneg t
| 𝟘 => []
| 𝟙 => [[]]
| SBase _ => [[]]
end%stype.
Fixpoint γApos (τ : GType) : list (list (GType * GType)) :=
match τ with
| τ1 ∨ τ2 => γApos τ1 ++ γApos τ2
| τ1 ∧ τ2 => (fun T1 T2 => T1 ++ T2) <$> γApos τ1 <*> γApos τ2
| τ1 → τ2 => [[(τ1, τ2)]]
| ¬t => γAneg t
| 𝟘 => []
| 𝟙 => [[]]
| GBase _ => [[]]
| ? => [[(?, ?)]]
end%gtype.
Definition dom (τ : GType) : SType :=
⋀ (fun S => ⋁ (fun '(ρ, _) => ρ⇑) <$> S)%stype <$> γApos τ.
Fixpoint assignments {X : Type} (xs : list X) : list (list (X * bool)) :=
match xs with
| [] => [[]]
| (x :: xs) => cons <$> [(x, false); (x, true)] <*> assignments xs
end.
Definition partitions {X : Type} (xs : list X) : list (list X * list X) :=
(fun ys => (fst <$> filter snd ys, fst <$> filter (fun p => negb (snd p)) ys))
<$> assignments xs.
Definition appResult (τ σ : GType) : GType := ⋁ (fun S =>
⋁ (fun '(Q, Qbar) => ⋀ snd <$> Qbar) <$> filter
(fun '(Q, Qbar) => negb (null Qbar) && (σ ̃≰! ⋁ fst <$> Q)
&& negb ((σ⇑ ∧ ⋁ (fun '(ρ, _) => ρ⇑) <$> Qbar) ≤! 𝟘))
(partitions S)
)%gtype <$> γApos τ.
Infix "∘" := appResult (at level 90, no associativity).
Section SomeTheorems.
Local Theorem sub_and_mono t1 t2 s1 s2 : t1 ≤ s1 -> t2 ≤ s2 -> (t1 ∧ t2) ≤ (s1 ∧ s2).
Proof.
intros p q.
apply sub_and.
- refine (sub_trans _ _ _ _ p).
apply sub_and_l.
- refine (sub_trans _ _ _ _ q).
apply sub_and_r.
Qed.
Local Theorem sub_or_mono t1 t2 s1 s2 : t1 ≤ s1 -> t2 ≤ s2 -> (t1 ∨ t2) ≤ (s1 ∨ s2).
Proof.
intros p q.
apply sub_or.
- refine (sub_trans _ _ _ p _).
apply sub_or_l.
- refine (sub_trans _ _ _ q _).
apply sub_or_r.
Qed.
Local Theorem sub_refl t : t ≤ t.
Proof.
pose proof sub_and_mono.
pose proof sub_or_mono.
pose proof bsub_refl.
induction t; auto; now constructor.
Qed.
Local Theorem gradual_extrema_in_concretization τ : τ⇓ ∈ γ τ /\ τ⇑ ∈ γ τ.
Proof.
induction τ as [ | | | | ? [] ? [] | | ? [] ? [] | ? [] ? [] ]; simpl; auto.
Qed.
Local Theorem gradual_extrema_bounds τ : forall t, t ∈ γ τ -> τ⇓ ≤ t /\ t ≤ τ⇑.
Proof.
induction τ as [ | | | | ? IHl ? IHr | | ? IHl ? IHr | ? IHl ? IHr ]; simpl.
- intros _ <-. repeat constructor.
- intros _ <-. repeat constructor.
- repeat constructor.
- intros _ <-. repeat constructor.
+ apply sub_refl.
+ apply sub_refl.
- intros [ | | | t1 t2 | | | ]; try contradiction.
intros [].
destruct (IHl t1); auto.
destruct (IHr t2); auto.
repeat constructor; auto.
- intros _ <-. repeat constructor.
+ apply bsub_refl.
+ apply bsub_refl.
- intros [ | | | | | t1 t2 | ]; try contradiction.
intros [].
destruct (IHl t1); auto.
destruct (IHr t2); auto.
split.
+ apply sub_and_mono; auto.
+ apply sub_and_mono; auto.
- intros [ | | | | | | t1 t2 ]; try contradiction.
intros [].
destruct (IHl t1); auto.
destruct (IHr t2); auto.
split.
+ apply sub_or_mono; auto.
+ apply sub_or_mono; auto.
Qed.
Local Theorem cons_sub_reduction σ τ : σ ̃≤ τ <-> σ⇓ ≤ τ⇑.
Proof.
unfold lift2. split.
- intros [s [t [? []]]].
apply (sub_trans _ s _).
+ now apply gradual_extrema_bounds.
+ apply (sub_trans _ t _).
* assumption.
* now apply gradual_extrema_bounds.
- intros ?.
exists (σ⇓), (τ⇑).
split; try split.
+ apply gradual_extrema_in_concretization.
+ apply gradual_extrema_in_concretization.
+ assumption.
Qed.
Theorem sub_decide_agrees t s : t ≤ s <-> t ≤! s = true.
(* Depends on the contents of junctDecide *)
Admitted.
Theorem cons_sub_decide_agrees σ τ : σ ̃≤ τ <-> σ ̃≤! τ = true.
Proof.
transitivity (σ⇓ ≤ τ⇑).
- apply cons_sub_reduction.
- unfold consSubDecide.
apply sub_decide_agrees.
Qed.
Theorem cons_not_sub_reduction σ τ : σ ̃≰ τ <-> ~(σ⇑ ≤ τ⇓).
Proof.
unfold lift2. split.
- intros [s [t [? [? p]]]] n.
apply p.
apply (sub_trans _ (σ⇑) _).
+ now apply gradual_extrema_bounds.
+ apply (sub_trans _ (τ⇓) _).
* assumption.
* now apply gradual_extrema_bounds.
- intros n.
exists (σ⇑), (τ⇓).
split; try split.
+ apply gradual_extrema_in_concretization.
+ apply gradual_extrema_in_concretization.
+ assumption.
Qed.
Theorem cons_not_sub_decide_agrees σ τ : σ ̃≰ τ <-> σ ̃≰! τ = true.
Proof.
transitivity (~(σ⇑ ≤ τ⇓)).
- apply cons_not_sub_reduction.
- unfold consNotSubDecide.
transitivity (σ⇑ ≤! τ⇓ <> true).
+ apply not_iff_compat.
apply sub_decide_agrees.
+ transitivity (σ⇑ ≤! τ⇓ = false).
* apply not_true_iff_false.
* symmetry. apply negb_true_iff.
Qed.
End SomeTheorems.
Section SomeContradictions.
Inductive PartialEvalResult : Type :=
| EvBottom
| EvUnknown
| EvTop.
Local Fixpoint partialEval (t : SType) : PartialEvalResult :=
match t with
| 𝟘 => EvBottom
| 𝟙 => EvTop
| SBase _ => EvUnknown
| _ → _ => EvUnknown
| ¬t => match partialEval t with
| EvBottom => EvTop
| EvUnknown => EvUnknown
| EvTop => EvBottom
end
| t1 ∧ t2 => match partialEval t1, partialEval t2 with
| EvBottom, _ => EvBottom
| _, EvBottom => EvBottom
| EvUnknown, _ => EvUnknown
| _, EvUnknown => EvUnknown
| _, _ => EvTop
end
| t1 ∨ t2 => match partialEval t1, partialEval t2 with
| EvTop, _ => EvTop
| _, EvTop => EvTop
| EvUnknown, _ => EvUnknown
| _, EvUnknown => EvUnknown
| _, _ => EvBottom
end
end%stype.
Local Theorem partial_eval_mono t s :
t ≤ s -> match partialEval t, partialEval s with
| EvBottom, _ => True
| _, EvTop => True
| EvUnknown, EvBottom => False
| EvUnknown, _ => True
| EvTop, _ => False
end.
Proof.
intros p.
induction p.
all: repeat first [ progress (simpl; auto) | destruct (partialEval _) ].
Qed.
Local Theorem top_nle_bottom : ~(𝟙 ≤ 𝟘).
Proof.
intros n. apply (partial_eval_mono _ _ n).
Qed.
Theorem cons_sub_nontrans : ~(forall g h k, g ̃≤ h -> h ̃≤ k -> g ̃≤ k).
Proof.
intros n.
enough (p : 𝟙 ̃≤ 𝟘).
- apply top_nle_bottom.
unfold lift2 in p. simpl in p.
now destruct p as [? [? [-> [->]]]].
- apply (n _ ?%gtype _).
+ unfold lift2. simpl.
exists 𝟙%stype, 𝟙%stype.
split; auto. split; auto.
constructor.
+ unfold lift2. simpl.
exists 𝟘%stype, 𝟘%stype.
split; auto. split; auto.
constructor.
Qed.
End SomeContradictions.
End Types.
(* Have to redeclare all notations now that the section is closed *)
Declare Scope stype_scope.
Delimit Scope stype_scope with stype.
Bind Scope stype_scope with SType.
Notation "𝟘" := SBottom : stype_scope.
Notation "𝟙" := STop : stype_scope.
Notation "¬ t" := (SNot t) (at level 75) : stype_scope.
Infix "→" := SArrow (at level 99, right associativity) : stype_scope.
Infix "∧" := SAnd (at level 85, right associativity) : stype_scope.
Infix "∨" := SOr (at level 85, right associativity) : stype_scope.
Notation "⋀ xs" := (fold_right SAnd STop xs) (at level 85) : stype_scope.
Notation "⋁ xs" := (fold_right SOr SBottom xs) (at level 85) : stype_scope.
Declare Scope gtype_scope.
Delimit Scope gtype_scope with gtype.
Bind Scope gtype_scope with GType.
Notation "𝟘" := GBottom : gtype_scope.
Notation "𝟙" := GTop : gtype_scope.
Notation "?" := GAny : gtype_scope.
Notation "¬ t" := (GNot t) (at level 75) : gtype_scope.
Infix "→" := GArrow (at level 99, right associativity) : gtype_scope.
Infix "∧" := GAnd (at level 85, right associativity) : gtype_scope.
Infix "∨" := GOr (at level 85, right associativity) : gtype_scope.
Notation "⋀ xs" := (fold_right GAnd GTop xs) (at level 85) : gtype_scope.
Notation "⋁ xs" := (fold_right GOr GBottom xs) (at level 85) : gtype_scope.
Infix "≤" := Sub (at level 70, no associativity).
Infix "≤!" := subDecide (at level 70, no associativity).
Notation "τ ⇓" := (gradualMinimum τ) (at level 65).
Notation "τ ⇑" := (gradualMaximum τ) (at level 65).
Infix "̃≤" := (lift2 Sub) (at level 70, no associativity).
Infix "̃≤!" := consSubDecide (at level 70, no associativity).
Infix "̃≰" := (lift2 (fun s t => ~(s ≤ t))) (at level 70, no associativity).
Infix "̃≰!" := consNotSubDecide (at level 70, no associativity).
Infix "∘" := appResult (at level 90, no associativity).
Section Example.
Inductive BaseType :=
| object
| int
| str
| bytes
| float.
Scheme Equality for BaseType.
Definition SBase' := @SBase BaseType.
Coercion SBase' : BaseType >-> SType.
Definition GBase' := @GBase BaseType.
Coercion GBase' : BaseType >-> GType.
Definition decideSubclass (b c : BaseType) : bool :=
BaseType_beq b c || match b, c with
| _, object => true
| _, _ => false
end.
Program Instance subtype_decidable_basetype : SubtypeDecidable BaseType :=
{| bsubDecide := fun conjs disjs =>
anyb id (decideSubclass <$> conjs <*> disjs)
|| anyb (fun c => match c with object => true | _ => false end) disjs
|| (anyb (fun c => match c with int => true | _ => false end) conjs
&& anyb (fun c => match c with str => true | _ => false end) conjs)
|}.
Next Obligation.
unfold reflexive.
intros []; simpl; auto.
Qed.
Next Obligation.
unfold transitive, anyb.
intros b c d.
destruct b, d; simpl; auto; destruct c; simpl; auto.
Qed.
Eval cbn in (int → float) ∧ (str → bytes) ∘ int.
(*
= (((float ∧ bytes ∧ 𝟙) ∨ (float ∧ 𝟙) ∨ 𝟘) ∨ 𝟘)%gtype
: GType
*)
End Example. Note: this doesn't contradict what was said about the set-theoretic view not allowing this despite the subtyping relation. The rules proposed in this paper, and encoded in the Coq program above, treat the bottom type not as participating in subtyping as most people understand the term; rather, they redefine subtyping more generally in a way that does make it apply. |
Not really. Haskell allows

```haskell
fn :: Int -> Int
fn x = 1

main = print $ fn undefined
```

It is considered a feature: https://wiki.haskell.org/Undefined

A similar example works in Rust:

```rust
use std::process;

fn test(x: u32) -> u32 {
    x
}

fn main() {
    #[allow(unreachable_code)]
    test(process::exit(1));
}
```

You have to allow unreachable code, but this is not a type error. This example works in Scala, Kotlin, TypeScript and probably others (with respective syntax changes):

```scala
trait Trait:
  def foo: Int

class Impl extends Trait:
  override def foo: Nothing = throw Exception("test")

object Hello {
  def main(args: Array[String]) = {
    val obj: Trait = new Impl
    print(obj.foo)
  }
}
```
|
The very first line of that page states: "You can put it anywhere and it will compile. When evaluated, it throws an exception "undefined". But thanks to laziness you can put it anywhere you've not written yet:" This is not the same as type checkers allowing this and claiming it to be safe; in fact, this is Haskell specifically telling you that using it isn't safe.

This isn't an example of it being a value. Please read again about the difference between a value and a lack of a value; it matters. On top of this, Rust correctly detects unreachable code, and you are visibly opting out of the safety it provides here.

This is also an example of something throwing an exception and not returning a value at all, not of something having a value of the bottom type. |
@mikeshardmind nudged me to put this comment separately rather than in a collapsible section over on Discord, but what definition of "subtyping" is used matters here too. The set-theoretic model can be reconciled (see the proof above) with a general type connective which the paper it was proved in defines as its version of subtyping, but this use isn't consistent with what current type checkers are allowing. (And indeed the paper takes the time to lay out how prior definitions of subtyping are lacking.) We have a very complex issue here where there is a lot of good, well-proven theory, and no commonly accepted definitions in Python to actually communicate the issue, due to the lack of specification in the PEPs. |
I didn't miss it, but it didn't make sense for me to address it, as it wasn't one of the assumptions of the proof.

If you can't disagree with the assumptions but you disagree with the outcome, you need to show that the proof is incorrect, or accept that it is your mental model that is wrong about the outcome.
This is one of the two papers specifically referenced, and the one linked in the thread, along with a machine-checked version of the proof that shows how these definitions interact with Python's types.

Paper: https://www.irif.fr/~gc/papers/icfp17.pdf

The other is titled "Abstracting Gradual Typing" and is a 2016 paper attributable to Garcia. I don't have a link to it on hand, and I'm on mobile right now. |
this one: |
@diabolo-dan yes, though the one previously linked in the thread is the one on which the specific claims were made; that paper is referenced and built upon in the other.

All of that said, I think the "but this is proven correct" part is only tangentially important right now. That can all be addressed in any formalization effort. The real issue here, most relevant to what started all of this, is that mutable instance variables are not treated as invariant. This has a relevant open issue dating back to 2017, linked above. We can't compare directly to the decisions of other languages that don't have nominal subtyping (TypeScript) or that don't have the same concerns about mutability (Haskell) without addressing how the things those languages don't have to consider interact. |
I think there are some things not quite right here. (I only found out about this issue this morning.) A bottom type does participate in subtyping, by definition of what "bottom" means: it is a bottom type with respect to subtyping. What I intended to write in the draft spec was that Never was just such a static type.

An uninhabited type is one that, when interpreted as a set of values, is empty. In semantic subtyping, where subtyping is subset inclusion, an uninhabited type is a subtype of every type. (Trivially, every element of the empty set is an element of every other set.) Python's Never is an uninhabited static type (there are no values of type Never).

AGT ("abstracting gradual typing") represents ("concretizes") gradual types (possibly involving Any) as sets of static types. Any is the set of all static types. A static type (not involving Any) is concretized as the singleton set consisting of just that static type. So crucially, the bottom type Never is concretized as a singleton set containing the static type Never, and this is definitely completely distinct from a gradual type that was concretized as an empty set (if such a thing existed). This is definition 2 on page 9 in the paper (https://www.irif.fr/~gc/papers/icfp17.pdf). Notice that concretization here cannot produce an empty set of static types. To my knowledge, nobody has worried about what the gradual type concretized as an empty set of static types would be (the original AGT paper just says that abstracting such a set is undefined).

What I intended to write in the spec was that Never is most certainly a static type. If there were a hypothetical "fully statically typed" Python language, it would have Never as a static type (unlike Any). I was wishy-washy about whether Never was a static subtype of Any, as an open question. (I think it could be, and I'm not aware of any problem with that.) |
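The concretization described above can be sketched as a toy model (my own illustration with made-up type names, not the paper's formalization): `Any` maps to the set of all static types, while any static type, including `Never`, maps to the singleton set containing itself, which is never the empty set.

```python
# Toy universe of static types, represented as strings for illustration.
STATIC_TYPES = frozenset({"int", "str", "Never", "int -> str"})

def concretize(gradual: str) -> frozenset:
    # Any concretizes to every static type; a static type to just itself.
    return STATIC_TYPES if gradual == "Any" else frozenset({gradual})

# Never concretizes to a *singleton*, which is distinct from the empty set:
assert concretize("Never") == frozenset({"Never"})
assert concretize("Never") != frozenset()
assert concretize("int") < concretize("Any")  # a strict subset of Any's set
```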
And mutable fields are definitely invariant. One way to think of a mutable field of type `T` is as a pair of a getter returning `T` (a covariant position) and a setter taking `T` (a contravariant position); since `T` occurs in both positions, the field must be invariant in `T`. |
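That getter/setter decomposition can be sketched directly (a minimal illustration of mine, not from the draft spec):

```python
# A mutable field of type int, modeled explicitly as a getter/setter pair.
# The getter uses int covariantly, the setter contravariantly; satisfying
# both positions at once forces the field type to match exactly (invariance).
class Cell:
    def __init__(self, value: int) -> None:
        self._value = value

    def get(self) -> int:  # covariant position (output)
        return self._value

    def set(self, value: int) -> None:  # contravariant position (input)
        self._value = value

c = Cell(1)
c.set(2)
print(c.get())  # 2
```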
Under the definitions of that paper, and the definition of subtyping therein, Never participates in subtyping. The confusion here is that Python type checkers aren't checking certain things (such as the variance of mutable fields in subclasses), so given the way type checkers treat subtyping in their actual behavior, Never shouldn't be considered to participate in subtyping. One of the two needs to change for this problem to go away. |
Considering that we probably want it to remain the case that Never is uninhabited, I think the set-theoretic definition of Never is more useful for any formalization. This would have Never participate in subtyping as a subset of all type specifications (sets of types), but not as a subtype of individual static types (see what you pointed out about the undefinable nature of concretization for this case). The distinction may not matter if the other things that type checkers aren't checking get checked, but I think having that distinction makes the definitions of other things simpler, such as those of Union, Intersection (not yet added), and Not (not yet added), as the behavior of Never becomes a natural dual to the behavior of |
I think there is still a misunderstanding. The set-theoretic interpretation of static types is that static types loosely denote sets of values. A static type
It's really kind of an open question of What I wrote in the draft spec was that I'm not sure what you mean by "type specifications", I think we need to define the term. And AGT-style concretization is simple for |
I think the clear solution for a Python typing spec is that class signatures are checked for invalid overrides and that the types of mutable fields are invariant. This is sort of recognized already for special cases, like Checkers are of course free to implement whatever they want. |
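A related sketch (my illustration, not part of the comment above): if an attribute is read-only, e.g. exposed as a `@property`, narrowing its type in a subclass becomes safe, because only the covariant getter position remains, which is why invariance specifically targets *mutable* fields.

```python
class Base:
    @property
    def x(self) -> int:
        return 0

class Narrow(Base):
    @property
    def x(self) -> bool:  # bool is a subtype of int; safe, since callers only read x
        return True

print(Narrow().x)  # True
```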
In the intersection discussions we came to the conclusion that

```python
from typing_extensions import Any, Never

def foo(x: Any | int, y: Any | Never, z: int | Never):
    reveal_type(x)  # mypy: Any | int        pyright: Any | int
    reveal_type(y)  # mypy: Any | <nothing>  pyright: Never
    reveal_type(z)  # mypy: int | <nothing>  pyright: int
    pass
```
|
To come back to the issue: the point is that LSP, a.k.a. behavioral subtyping, is much stronger than mere structural subtyping. In particular the rules

are blatantly violated when a subclass annotates a return value as `Never`. What is promised when one writes:

```python
class Foo:
    def bar(self) -> str: ...
```

The current way things are checked corresponds to ③, because subclasses can override the return type with `Never`. Implementing full (LSP-exception) would require typed exceptions, which are possibly not wanted. But a simple solution would be an augmented rule:

Despite |
I actually would not try to define any reductions on types. We can simply say that The subtyping rule for unions that I wrote in the spec uses (plain) subtyping, so for a type such as I don't really know what you mean by "simplification rules for unions only apply to ... gradual types with gradual types". This is a union of two gradual types: (And it seems warty that we wouldn't do the same thing, whatever it is, to Special cases add up and I don't see much value in terms of what the type system can do. |
Oh, I thought in the new document, gradual types refer to
so
But the draft does define a reduction rule:
In the example |
Python types don't say anything about exceptions or postconditions, so we really can't talk about them meaningfully here. I wouldn't call either of those "blatantly" violated by changing a return type to `Never`.

Are you proposing to allow:

```python
class A:
    def foo(self) -> str: return 'A'

class B(A):
    def foo(self) -> str: raise NotImplementedError('B')
```

but forbid:

```python
class C(A):
    def foo(self) -> Never: raise NotImplementedError('C')
```

I don't think that really makes sense. The typing of |
Obviously I have to be even more careful with my writing. The draft spec does not intend to define such a reduction rule. I give two rules for subtyping of unions. Then I argue in non-normative text that these are enough to subsume the more complicated properties defined in the PEPs, because all of those properties are consequences of the simple definition. "Note that we don't say that subtypes must be removed...." |
Is it perfectly good, though? You can't use

```python
class B(A):
    def foo(self) -> str:
        raise NotImplementedError('B')  # type: ignore[no-substitute-never]

class C(A):
    def foo(self) -> Never:  # type: ignore[no-substitute-never]
        raise NotImplementedError('C')
```

Notice how the error should be raised in both cases, but at different lines. So the proposed rule is actually allowing neither.

```python
class D(A):
    foo: Never
    def __getattribute__(self, key):
        if key == "foo":
            raise RuntimeError("Totally forbidden.")
        raise AttributeError
```

Is this also a perfectly good override? |
Well, that's what I was arguing in #1458 (comment): that most developers, most of the time, likely want an annotation to mean that there is no violation. It feels weird that you argue that the status quo states X (=③) when the argument is that it should state Y (=①) instead. I was expecting arguments in favor of *why* ③ is a good idea. |
We're just talking in circles. Take a step back and consider: the things that we can statically decide about Python programs are limited (your #1 isn't even decidable). The type system we have is unsound, and by design. We can complicate the type system arbitrarily much, but there is a point of vanishing returns. We really want an "ergonomic" design that is easy to explain, easy to understand, and easy to implement correctly. I am advocating to keep the type system simple and understandable for both programmers and implementers. Avoid special cases. (Example: we can't in general decide if a function raises different exceptions than another, so we shouldn't conclude as a special case that a function with |
It's about what promise is made by a type annotation. The promise made by ③ is pretty useless in practice; I don't know anyone who uses it as their mental model to reason about code. Decidability is irrelevant here: a type checker has no business attempting to infer whether code terminates and actually returns a value. That is done by the human who wrote the type annotation. The argument is that an annotation like

You are essentially arguing that the static type checker shouldn't warn me even if it can detect a program failure ahead of time. |
@kmillikin The problem comes in with mutable attributes. Subclasses should not be able to be more or less specific in the typing of mutable fields. As far as I can tell, all current type checkers allow this, and mypy has an open issue going back to 2017 stating that it should not be allowed. Annotations are sometimes a two-way street: it isn't always just about what is provided, but also about what can be assigned. This should be fine; it breaks no expectations:

```python
class AFine:
    def foo(self) -> int:
        ...

class BFine(AFine):
    def foo(self) -> IntSubclassFromNumericLibrary:
        ...
```

But

```python
class ANotFine:
    foo: int

class BNotFine(ANotFine):
    foo: IntSubclassFromNumericLibrary
```

is not, because

```python
def breaks(x: ANotFine):
    x.foo = 1  # not the subclass
```

breaks any reliance BNotFine would have on the subclass. So the definition of that subclass itself is unsound as a drop-in replacement for the base class, and people should not be allowed to do this, but should instead use a generic for this pattern:

```python
class AGeneric[T: int]:
    foo: T

AliasB = AGeneric[IntSubclassFromNumericLibrary]
```

And this would only be safe for use in functions expecting an invariant match on the generic. This example extends to Never (swap in

The problem of Never's (lack of) specification may be a red herring here. The larger issue, which has been left known-broken by mypy since 2017 and which other type checkers have replicated, has clouded how people were interpreting subtyping behavior absent a specification. |
I do understand that. As I wrote above,

You will not fix this problem by tweaking the semantics of |
@kmillikin so what's the path forward here for people working on anything more advanced that needs to interact with this, if the whole ecosystem is wrong about something so basic and there isn't a specification to work from, only informal documents and various people's perspectives on which models of type systems best describe Python? I'd like to see intersections added to Python, but having good examples of how they should behave, and even having good rules without special cases, is hard when there is no specification and the implementations are all wrong about more basic things. |
@DiscordLiz We get the ecosystem fixed on the basics (I've just now opened an issue for pyright, there's an open issue already for mypy, and I'll look into triple-checking that pytype and pyre also have this issue), and we work on the underlying specification first. We have to spend time on the foundations first. |
@DiscordLiz that was the subject of my talk at the Pycon US Typing Summit this year. We are in a bad position where we have multiple implementations and no specifications. This is not good for users, it's not good for implementers, and it's not good for designers of new typing features. Unfortunately, I don't think we were able to get recordings of the Typing Summit talks. My suggestion still is: we finish the simple subtyping specification, get a sponsor, make it a PEP, get it approved. Then (actually in parallel) we do the same thing for Part II: signature checking. This includes signature subtyping, override checking, and specification of how generic functions are implicitly specialized. We do this with input and buy-in from the implementations. |
@kmillikin Well, I'm still somewhat willing to help with an effort on that, but I raised a few points about it earlier. Most importantly, I think the type system needs a living reference doc that is kept up to date and that is strict in its wording. Not just a collection of PEPs: each PEP that adds to or modifies the behavior of the type system should also update the living specification. This gives a clear single source of truth, without ambiguity about how the various PEPs resolve against each other. If something new is added to the document, it is up to those adding it to ensure it isn't in conflict. This keeps people from looking to potentially flawed implementations for the correct behavior. Quoting myself from earlier rather than just referencing it: (this was a set of "I'm interested in helping with formalization, but conditional on these things")
@erictraut also indicated earlier in this thread that formalization of the existing parts of the type system should be done holistically, not in parts, so I'm not sure how that interacts with your idea of splitting it into subtyping behavior first and then everything else. I think it is more natural to do everything at once to ensure consistency of definitions. Days later edit... |
That's a very narrow view, in my opinion. Subclasses are used for many things, drop-in replacement being only one of them. Then again, arguably LSP doesn't quite mesh perfectly with dynamic, duck-typed languages. It's a tool to think about a program, but it's not the only tool? |
Never was added in 3.11, without a PEP, as a way to directly indicate that something is uninhabited.
This brings up something from CarliJoy/intersection_examples#5 (comment): we now have a way to express what appears to be an LSP violation that (incorrectly) passes type checking via structural subtyping, because Never is mishandled as a subtype of other types. Never is not a subtype of all other types. As the bottom type, it is uninhabited and does not follow subtyping rules.
https://mypy-play.net/?mypy=latest&python=3.11&flags=strict&gist=a4279b36d82c1a28d7b17be9a4bbcbdf
I believe that under LSP, B is no longer a safe drop-in replacement for A. It's important to note that the ongoing work to formalize the type system, including Never, does not treat Never as a subtype of all other types; see the ongoing work here: http://bit.ly/python-subtyping