Byte arrays are allocated in LOH #21766
@windhandel EF Core has no control over what gets allocated into the LOH. If you think the behavior you are seeing is incorrect, open an issue on https://github.com/dotnet/runtime
@AndriySvyryd without symbols loaded into PerfView for the necessary DLLs, you can't tell the source of the allocations. Without identifying the source, you can't say whether this issue is or isn't caused by EF Core. As a developer, you have complete control over whether things like byte arrays are allocated into the LOH or whether they use a more relevant data type such as `Span<T>`.
Correct; as I've said before, we would need a repro project to investigate this.
Data structures used when reading the data returned by a query are managed by the ADO.NET provider; if you are using SQL Server, this would be Microsoft.Data.SqlClient. That's why I suggested creating a repro without EF Core.
No, you don't have control over where things are allocated; the runtime has rules that it follows. If you create a large object, then it'll be put on the LOH, end of story. Spans are pretty irrelevant here because they'd have to point at something, and that something would still be a heap-allocated byte array unless you go really hardcore and write your own MemoryManager implementation.

Reading through the original thread, it seems you were unable to explain the discrepancy between the allocated bytes as reported in Management Studio and the PerfView trace you saw. I probably can, if you can provide an isolated reproduction that I can profile and investigate. It is highly unlikely that EF is doing anything with large objects, because that just isn't the sort of thing it deals with. As @AndriySvyryd points out, that's much more likely to be the SqlClient driver. I've made numerous performance enhancements to that library, and if, as already stated, you can provide me with an isolated reproduction of your problem, I can investigate and possibly identify further improvements.
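The runtime rule being described can be demonstrated directly: arrays whose total size reaches the 85,000-byte threshold are placed on the LOH, which the GC reports as generation 2 even for a brand-new allocation, and a `Span<T>` over such an array is only a view into it. A minimal sketch:

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        // Objects of 85,000 bytes or more go straight to the Large Object Heap.
        byte[] small = new byte[84_000];
        byte[] large = new byte[85_000];

        Console.WriteLine(GC.GetGeneration(small)); // a fresh small array starts in gen 0
        Console.WriteLine(GC.GetGeneration(large)); // LOH objects are reported as gen 2

        // A Span<T> does not move the data anywhere: slicing the array
        // produces a view over the same LOH-resident byte array.
        Span<byte> view = large.AsSpan(0, 1024);
        Console.WriteLine(view.Length); // 1024
    }
}
```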
Thanks, @Wraith2. My point on the LOH was that the developer has complete control over whether the objects they create are large and over the duration/durability of such an object (whether it's pooled or not). I'm aware of the 85k threshold; that's what we must work around to create optimizations that don't drastically impact performance. I'll try to create a repro with one of the pre-existing samples for the previous issue and get back to you. Kind regards,
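The pooling mentioned above can be sketched with `ArrayPool<T>`: a large buffer is allocated on the LOH once and then recycled, instead of producing fresh LOH garbage on every operation.

```csharp
using System;
using System.Buffers;

class PoolingDemo
{
    static void Main()
    {
        // Rent reuses previously returned buffers where possible,
        // amortizing the one-time LOH allocation across many calls.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(1_000_000);
        try
        {
            // Note: Rent may return an array larger than requested.
            Console.WriteLine(buffer.Length >= 1_000_000); // True
        }
        finally
        {
            // Returning the buffer makes it available for the next Rent.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```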
This is not true. If you ask the SQL driver for a 2 MB chunk of text or binary, then it has to make a contiguous allocation to fulfil your request. That's going to go on the LOH.
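One way to avoid that single contiguous allocation is to stream the value instead of materializing it. A hedged sketch using Microsoft.Data.SqlClient (the connection string, table, and column names are placeholders, not from this thread):

```csharp
using System;
using System.Data;
using System.IO;
using Microsoft.Data.SqlClient;

class StreamingRead
{
    static void Main()
    {
        // Placeholder connection string and query for illustration only.
        using var conn = new SqlConnection("<your-connection-string>");
        conn.Open();

        using var cmd = new SqlCommand("SELECT Payload FROM Blobs WHERE Id = 1", conn);
        // SequentialAccess tells the driver not to buffer the whole row,
        // so the 2 MB value never needs one contiguous byte[] on the LOH.
        using var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);

        if (reader.Read())
        {
            using Stream blob = reader.GetStream(0);
            var buffer = new byte[81_920]; // stays under the 85,000-byte LOH threshold
            int read;
            while ((read = blob.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Process `read` bytes of the value incrementally here.
            }
        }
    }
}
```

Whether this is applicable depends on the consuming code being able to process the value in chunks; if the application itself demands a single 2 MB array, the contiguous allocation is unavoidable.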
@AndriySvyryd, @ajcvickers, @roji: I apologize for my tone on the prior thread. I wasn't intending to be argumentative and confrontational, although, in hindsight, I can completely see how it came off that way. @Wraith2, what you're saying about the driver does not negate my statement. Both can be true; my statement is always true. Developers always have control over their allocations. However, if what you're saying is true, then possibly what we're discussing is a deficit in the driver itself for allocating memory in a way that is non-performant and does not necessarily meet performance requirements (assuming we require them as such). It's best to delineate the degree to which an issue is difficult to resolve from the truth of its cause. I think what you're saying is: "If this issue goes back to the driver, it will be difficult to resolve." That makes a lot of sense, but it doesn't change the cause.
Let's get that test case and see what the numbers are like; then we can talk specifics wherever is most appropriate.
@Wraith2 - that makes sense. I'll try to have something to you by EOW. Thanks for the help. |
EF Team Triage: Closing this issue as the requested additional details have not been provided and we have been unable to reproduce it. BTW, this is a canned response and may have info or details that do not directly apply to this particular issue. While we'd like to spend the time to uniquely address every incoming issue, we get a lot of traffic on the EF projects and that is not practical. To ensure we maximize the time we have to work on fixing bugs, implementing new features, etc., we use canned responses for common triage decisions.
@ajcvickers personally, I'm less frustrated than I am amused. It appears that debating customers over the expected behavior is more the goal than customer (developer) expectations and/or satisfaction, based on the duration, depth, and repetitiveness of these conversations over the years.
One can't help but wonder whether the cumulative time spent debating developers on the behavior could have resolved this issue 2 or 3 times over at this point.
I've recognized that the industry, as a whole, has a long way to go when it comes to properly asserting against a variety of these types of qualitative measures. We all have a long way to go; hence my creating my business to aid in this process.
I see what you were saying when you pointed to references within the object model. You're saying that Gen 2 GCs are not caused by the fixup behavior; that's a separate issue that should be addressed, as @AndriySvyryd mentioned.
At the end of last year, I worked with the .NET Core team to resolve LOH allocations within String.Split. The issue was quickly addressed once its severity was determined.
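The general technique for avoiding split-style allocations is to slice a `ReadOnlySpan<char>` over the original string rather than producing a `string[]` of substrings. A minimal sketch (illustrative only, not the actual String.Split fix referenced above):

```csharp
using System;

class SpanTokenizer
{
    static void Main()
    {
        // Each token is a slice over the original string's memory,
        // so no string[] and no substring copies are allocated.
        ReadOnlySpan<char> input = "alpha,beta,gamma".AsSpan();
        while (!input.IsEmpty)
        {
            int comma = input.IndexOf(',');
            ReadOnlySpan<char> token = comma < 0 ? input : input.Slice(0, comma);

            // ToString() here allocates for display only; real code
            // would consume the span directly to stay allocation-free.
            Console.WriteLine(token.ToString());

            input = comma < 0 ? ReadOnlySpan<char>.Empty : input.Slice(comma + 1);
        }
    }
}
```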
Operations such as this that cause multiple LOH allocations are high severity. Are you saying that EF Core causing LOH allocations is of no consequence?
Again: Why isn't your existing performance testing process picking up these obvious GC issues?
Originally posted by @windhandel in #11564 (comment)