The current null analysis in ECJ does not recognize a certain class of patterns in which the nullness of one variable could, in theory, be inferred more precisely from its correlation with another variable.
Simple example:
```java
String m(boolean f) {
	String s = null;
	if (f)
		s = "good";
	// more stuff here
	if (f)
		return s.toUpperCase(); // expect no warning
	return "";
}
```
Our current implementation has a very compact representation of nullness information, consisting basically of 6 bits per variable (at each point in the flow). This fixed-size representation is not able to capture any correlation information concerning an unknown number of other variables. Hence in the above example at the marked location, ECJ can only see that "s" can be either null or nonnull on some paths and it will raise a warning against the dereference.
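The loss of precision described above can be illustrated with a toy model. The sketch below is purely illustrative and is not ECJ's actual representation: it models nullness as a small per-variable lattice value that is merged at control-flow joins, showing how the merge discards any link to the branch condition `f`.

```java
// Hypothetical toy model (not ECJ's actual data flow code): a fixed-size
// per-variable nullness value, merged at control-flow join points.
public class NullnessDemo {
    enum Nullness {
        NULL, NON_NULL, POTENTIALLY_NULL;

        // Join of two branch states: any disagreement degrades to POTENTIALLY_NULL.
        Nullness merge(Nullness other) {
            return this == other ? this : POTENTIALLY_NULL;
        }
    }

    public static void main(String[] args) {
        // On the branch where f == false, s still holds its initializer:
        Nullness sWhenFalse = Nullness.NULL;
        // On the branch where f == true, s was assigned "good":
        Nullness sWhenTrue = Nullness.NON_NULL;

        // At the join point the per-variable state records nothing about f,
        // so the two branches collapse into POTENTIALLY_NULL ...
        Nullness sAfterJoin = sWhenTrue.merge(sWhenFalse);
        System.out.println(sAfterJoin); // prints POTENTIALLY_NULL

        // ... and the later "if (f)" cannot recover the lost correlation,
        // hence the warning against s.toUpperCase().
    }
}
```

Because the merged value carries no reference to other variables, no amount of later branching on `f` can restore the information that `s` is non-null exactly when `f` is true.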
This RFE captures the request to implement a new null analysis that would capture correlation information in order to 'understand' that examples like the above are indeed null-safe.
I'm saying "new null analysis" because I don't believe that extending the current implementation to include correlation analysis is a viable strategy. Aside from matters of code complexity, it should be kept in mind that the current analysis was designed for a small footprint (memory and CPU), good enough to be invoked on every keystroke as you type in a Java editor. The new null analysis will probably not fulfill this criterion.
This could imply that, in addition to a new analysis, we'd also need new infrastructure to invoke this analysis at selected points in time.
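One way such correlation could be captured, shown here as an illustrative sketch only and not as any planned JDT design, is to track a set of per-path abstract states instead of a single merged value per variable. The `PathState` record and the method bodies below are hypothetical; the sketch also hints at why the footprint concern above is real, since the number of tracked states can grow with each branching condition.

```java
// Hypothetical sketch (not an actual JDT design): track a set of
// (value of f, nullness of s) pairs, one per feasible path.
import java.util.LinkedHashSet;
import java.util.Set;

public class CorrelationDemo {
    // One abstract state per path through the method from the example.
    record PathState(boolean f, boolean sIsNull) {}

    public static void main(String[] args) {
        Set<PathState> states = new LinkedHashSet<>();
        // Path where f == true: s was assigned "good".
        states.add(new PathState(true, false));
        // Path where f == false: s is still null.
        states.add(new PathState(false, true));

        // At "if (f) return s.toUpperCase();" only states with f == true
        // survive the condition; in none of them is s null, so the analysis
        // could prove the dereference safe and omit the warning.
        boolean warningNeeded = states.stream()
                .filter(PathState::f)
                .anyMatch(PathState::sIsNull);
        System.out.println(warningNeeded); // prints false
    }
}
```

Keeping one state per path is what makes the analysis precise here, but it is also what makes it potentially expensive: in the worst case the state set doubles at every independent branch.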
I should mention that no-one in the JDT team currently has the time to even start working on a detailed design of this feature, not to mention implement it.
This issue was created as a duplicate of the corresponding RFE in Bugzilla. Its only purpose as of today is to make that RFE visible on GitHub so we can mark other GitHub issues as duplicates of this one.
It is closed immediately because no one currently plans to work on it.
If later some party has the energy to address this, they are free to reopen this issue.
Java language lead architect Brian Goetz has announced "null-restricted (value) types" to emerge from the Valhalla project.
See e.g. https://mail.openjdk.org/pipermail/valhalla-spec-observers/2023-May/002243.html. So null analysis in tools may become less important in the future. But it is of course still valuable for legacy code and for cases with unknown/unspecified nullability. In any case, this new language feature will probably require a major overhaul of existing null analysis tool support.
Continued from https://bugs.eclipse.org/bugs/show_bug.cgi?id=538421.