This repository has been archived by the owner on Jan 23, 2023. It is now read-only.
dotnet/corefx (public archive)

Add MemoryPool apis #26563

Merged: 1 commit merged into dotnet:master from mempool on Jan 25, 2018

Conversation


@ghost ghost commented Jan 24, 2018

@ghost ghost added the area-System.Memory label Jan 24, 2018
@ghost ghost self-assigned this Jan 24, 2018
@ghost ghost requested a review from ahsonkhan January 24, 2018 16:05

public sealed override bool Release()
{
    int newRefCount = Interlocked.Decrement(ref _referenceCount);
Member

What's the goal of the interlockeds being used here? As it stands, for example, even if Release throws, the _referenceCount will already have gone negative, and code could subsequently continue to use the object that now has a negative reference count... so then, for example, another thread could call Retain, have it succeed, but the count could still be <= 0 and IsRetained would return false.

Seems like either we shouldn't care about thread-safety, in which case we don't need to use interlockeds, or we should care, in which case this should be more robust and use a standard Interlocked.CompareExchange loop for these kinds of operations.
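
For illustration, the CompareExchange-loop shape being suggested could look like this for Release (a sketch only, not the code in this PR; it assumes _referenceCount never legitimately drops below zero):

public sealed override bool Release()
{
    while (true)
    {
        int current = Volatile.Read(ref _referenceCount);
        if (current <= 0)
        {
            // Unmatched Release: fail without ever publishing a negative count.
            throw new InvalidOperationException();
        }
        if (Interlocked.CompareExchange(ref _referenceCount, current - 1, current) == current)
        {
            // true while other retains are still outstanding
            return current > 1;
        }
    }
}

Retain would use the same loop shape, checking IsDisposed before attempting the increment.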

Author

@ahsonkhan, @KrzysztofCwalina - what are the thread-safety goals here? These interlocked calls were all part of the original prototype in corefxlab.

Member

The main goal of the implementation is to detect use after free. "Detect" is the key word here. I don't think we can make it 100% thread safe without an unacceptable perf hit. For that to happen, we would need to do what @stephentoub says plus synchronize access to the array (the Span property, for example).

BTW, I don't mind making this method more robust, I am just not sure it will add value without also changing get_Span, which I don't think we can change without a significant perf hit.

Author

Ok - I'm inclined to drop any pretense of thread-safety then. It's pretty well established that .NET objects are thread-unsafe in the 90% case and synchronization is outsourced to the users.

@KrzysztofCwalina (Member) Jan 24, 2018

Wait, get_Span is synchronized as it checks IsDisposed first. Maybe we are not too far from having valuable "more safety". Still, there is the issue of Memory callers not having to call Retain, but at least I think we can make it totally safe for those who want to religiously use Retain/Release.

Author

It's not a thread-safe check though - when I see "synchronized", I think thread-safe.

}

public sealed override bool Release()
{
Member

Retain checks IsDisposed. Release doesn't need to? And we don't need disposal to be coordinated with the interlockeds used by Retain/Release? (Goes back to my question about what the thread-safety goals are.)

@ghost (Author) Jan 24, 2018

@ahsonkhan, @KrzysztofCwalina - care to address this? I noticed this asymmetry in the corefxlab code too, but figured there may be a reason (perhaps because, if someone didn't follow the normal dispose pattern, Release can happen as part of a MemoryHandle finalization, which could run after the OwnedMemory it came from got disposed - and we don't want an exception thrown there?)

Member

If somebody calls "release" they simply communicate that they won't be using this OwnedMemory (OM) anymore. I am not sure why we would want to throw on already disposed objects. It does not seem to add to the safety of the system, and it just makes the Release callsite subject to exceptions. Having said that, I don't mind changing it if we feel it adds value.

@jkotas (Member) Jan 24, 2018

If you are calling Release() on an already disposed object, it means that you have a use-after-free security bug. The calls to Rent and Release always have to be matched.

The C/C++ memory allocators fail fast right away and abort the program when they detect use after free. We may want to do the same instead of throwing a catchable exception.
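
Expressed in code, the fail-fast alternative might look roughly like this (a sketch; Environment.FailFast aborts the process, whereas the PR as written throws):

public sealed override bool Release()
{
    int newRefCount = Interlocked.Decrement(ref _referenceCount);
    if (newRefCount < 0)
    {
        // Use-after-free detected: tear the process down instead of raising a catchable exception.
        Environment.FailFast("Release called on a buffer that was already released or disposed.");
    }
    return newRefCount > 0;
}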


protected sealed override void Dispose(bool disposing)
{
    T[] array = Interlocked.Exchange(ref _array, null);
Member

What if it's currently retained?

Author

Then it throws an InvalidOperationException. That check is encapsulated in the OwnedMemory class itself.

@stephentoub (Member) Jan 24, 2018

What throws an InvalidOperationException? If you Dispose of this while it's already retained, then the array is likely already in use, but we're returning it to the pool here.

Author

Right here:

ThrowHelper.ThrowInvalidOperationException_OutstandingReferences();

Member

But you're not calling base.Dispose(disposing). So who calls that?

Member

Oh, sorry, different overload. I see.
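
For readers following the exchange: the outstanding-references check lives in the base class's parameterless Dispose(), which then calls the protected Dispose(bool) overload shown above. Roughly (a sketch of the shape, not the verbatim OwnedMemory<T> source):

public void Dispose()
{
    if (IsRetained)
        ThrowHelper.ThrowInvalidOperationException_OutstandingReferences();
    Dispose(disposing: true);
}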

{
    if (minimumBufferSize == -1)
        minimumBufferSize = DefaultSize;
    else if (minimumBufferSize > MaxBufferSize || minimumBufferSize < 1)
Member

Does MaxBufferSize get devirtualized? Otherwise, this is another virtual call.

Author

I would hope the JIT can do that considering the class is sealed at this level. Though it's easy enough to substitute a const to be safe.
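
A const-based form might look like this (a sketch; MaximumBufferSize is a placeholder name, and a later revision in this PR does introduce a private const alongside the property):

private const int MaximumBufferSize = int.MaxValue;

public sealed override int MaxBufferSize => MaximumBufferSize;

public sealed override OwnedMemory<T> Rent(int minimumBufferSize = -1)
{
    if (minimumBufferSize == -1)
        minimumBufferSize = DefaultSize;
    else if (minimumBufferSize > MaximumBufferSize || minimumBufferSize < 1)   // compares against the const, no virtual call
        ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.minimumBufferSize);

    return new ArrayMemoryPoolBuffer(minimumBufferSize);
}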

Member

Why not allow minimumBufferSize == 0?

else if (minimumBufferSize > MaxBufferSize || minimumBufferSize < 1)
    ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.minimumBufferSize);

return new ArrayMemoryPoolBuffer(minimumBufferSize);
Member

Every rental allocates?

Author

It's not clear to me how that could be avoided. The OwnedMemory wrapper object is meant to be used in a "using () {}" type-pattern. If we tried to pool those as well, we'd be handing out IDisposable objects that the caller isn't meant to dispose.
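
For reference, the consumption pattern being described looks like this from the caller's side (illustrative, not code from this PR):

using (OwnedMemory<byte> owned = MemoryPool<byte>.Shared.Rent(4096))
{
    Span<byte> span = owned.Span;
    // ... fill or read the rented buffer ...
}   // Dispose returns the underlying array to the pool; the wrapper object itself becomes garbage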

Member

I'm not really commenting on the implementation so much as the MemoryPool design. Seems problematic if the default implementation that most folks will use allocates on every Rent.

Member

This depends on what the MemoryPool abstraction is meant to be used for.

I see it as designed for large buffers. You get a large buffer and allocate a little bit in the process. I do not think that allocation is problematic in this scenario.

Member

The default size here is 4096, which is also a common size used with ArrayPool, and for ArrayPool, there have been complaints about things as small as the virtual dispatch cost associated with calling ArrayPool.Shared.Rent/Release; even a small allocation and the associated GC impact is at least as much as that. I'm skeptical this will only be used for large buffers; of course, if we add costs like such allocations on every Rent/Release, we may certainly force it in that direction.

Member

If you want bare-bones malloc/free, you should use ArrayPool. MemoryPool is a higher-level abstraction.

And yes - we need to fix ArrayPool to not have the unnecessary overheads and to not leak.
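
For comparison, the "bare-bones" ArrayPool pattern being referred to (standard ArrayPool usage, not part of this PR):

byte[] buffer = ArrayPool<byte>.Shared.Rent(4096);
try
{
    // ... use buffer; note it may be longer than requested ...
}
finally
{
    // No per-rental allocation, but also nothing detects use after Return.
    ArrayPool<byte>.Shared.Return(buffer);
}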

Member

Still seems like it's going to be a problem. I guess we'll have to agree to disagree.

Member

The memory pool in Kestrel pools the OwnedMemory<byte> objects 😄

Member

You can certainly pool OwnedMemory objects if you know what you are doing. It increases your security risk profile.

Member

We used to have a cookie in OM to allow it to be pooled safely. We removed the cookie during the performance push, but I am not sure it actually helps performance, yet it makes it so much more difficult to pool OM safely.

{
    private const int DefaultSize = 4096;

    private static readonly ArrayMemoryPool<T> s_shared = new ArrayMemoryPool<T>();
Member

This is an unnecessary extra layer; MemoryPool<T>.Shared can allocate the ArrayMemoryPool directly.

@KrzysztofCwalina (Member) Jan 24, 2018

Yeah, I think we should use the shared array pool directly.
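
In code, the suggested simplification could look like this (a sketch; it assumes the nested buffer type is otherwise unchanged):

public abstract class MemoryPool<T> : IDisposable
{
    // The base class holds the singleton directly, with no ArrayMemoryPool<T>.Shared layer in between.
    public static MemoryPool<T> Shared { get; } = new ArrayMemoryPool<T>();

    public abstract OwnedMemory<T> Rent(int minBufferSize = -1);
}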

private static readonly ArrayMemoryPool<T> s_shared = new ArrayMemoryPool<T>();
public static new ArrayMemoryPool<T> Shared => s_shared;

public sealed override int MaxBufferSize => 1024 * 1024 * 1024;
Member

Why is this 1GB and not 2GB?

Member

Yeah, I think we should change it to 2GB

{
    internal sealed partial class ArrayMemoryPool<T> : MemoryPool<T>
    {
        private const int DefaultSize = 4096;
@jkotas (Member) Jan 24, 2018

Is this a good default size even for large Ts? Should this rather be 1 + (4095 / sizeof(T))?
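
That suggestion in code (a sketch; Unsafe.SizeOf<T>() stands in for sizeof(T), which C# does not allow on an arbitrary T):

// Default to roughly a 4 KB rental regardless of the element type.
private static int DefaultLength => 1 + (4095 / Unsafe.SizeOf<T>());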

public abstract class MemoryPool<T> : IDisposable
{
    public static System.Buffers.MemoryPool<T> Shared { get; }
    public abstract System.Buffers.OwnedMemory<T> Rent(int minBufferSize=-1);
Member

We had a discussion at the API review about the default here. Did we really settle on -1? Why not have the default be 0?

@stephentoub (Member) Jan 24, 2018

Why not have the default be 0?

FWIW, ArrayPool.Shared.Rent(0) gives back an empty array. Then again, it also throws for -1.

Member

IIRC, we have settled on -1.

Member

It's on YouTube; it was -1.

Member

you mean "it's on the internet"? :-)

private sealed class ArrayMemoryPoolBuffer : OwnedMemory<T>
{
    private T[] _array;
    private bool _disposed;
Member

Can we use _array == null to signal disposal instead of an extra bool flag?

Author

Seems reasonable.
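
That is, something along these lines (a sketch; the override modifiers are assumed from the surrounding code):

public override bool IsDisposed => _array == null;

protected sealed override void Dispose(bool disposing)
{
    // A null _array is the disposed state; no separate bool flag is needed.
    T[] array = Interlocked.Exchange(ref _array, null);
    if (array != null)
        ArrayPool<T>.Shared.Return(array);
}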

{
    get
    {
        yield return new object[] { 0 };
@KrzysztofCwalina (Member) Jan 24, 2018

Why is 0 bad? ArrayPool allows renting empty buffers.

Author

Another artifact from the corefxlab code. Since we've settled on -1 as the sentinel, I'll remove that restriction.

}

[Fact]
public static void RefCounting()
@KrzysztofCwalina (Member) Jan 24, 2018

Could you add some stress tests too, to ensure we don't let people access the array (the Span) after Dispose on a separate thread? Also, to ensure that Dispose throws if there are outstanding Retains (called on a different thread).
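
A rough sketch of the kind of stress test being requested (the test name is made up, and it assumes a reader that loses the race observes ObjectDisposedException, which is exactly the guarantee being debated below):

[Fact]
public static void SpanAfterDisposeOnAnotherThreadStress()
{
    for (int i = 0; i < 10_000; i++)
    {
        OwnedMemory<byte> block = MemoryPool<byte>.Shared.Rent();
        Task reader = Task.Run(() =>
        {
            try
            {
                Span<byte> span = block.Span;   // should not observe a buffer that was already returned
                span[0] = 42;
            }
            catch (ObjectDisposedException)
            {
                // Acceptable outcome: the racing Dispose won.
            }
        });
        block.Dispose();
        reader.Wait();
    }
}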

Author

Dispose on a separate thread

That would be contingent on whether we want to guarantee thread-safety. I'm inclined against that.

On a separate note, is CI really the place to be doing stress tests? CI should be about preventing clearly bad stuff from going in, not randomly holding up PRs because of intermittent stress failures.

The Dispose throwing on an outstanding ref-count is already tested in the OwnedMemory tests (and the OwnedMemory class is the one that does that check, so checking individual variants doesn't add any code coverage). It's an invariant of OwnedMemory objects in general and not specific to MemoryPool.

@@ -33,6 +33,9 @@
<Compile Include="System\Buffers\Binary\Writer.cs" />
<Compile Include="System\Buffers\Binary\WriterBigEndian.cs" />
<Compile Include="System\Buffers\Binary\WriterLittleEndian.cs" />
<Compile Include="System\Buffers\MemoryPool.cs" />
<Compile Include="System\Buffers\ArrayMemoryPool.cs" />
<Compile Include="System\Buffers\ArrayMemoryPool.ArrayMemoryPoolBuffer.cs" />
@ahsonkhan (Member) Jan 25, 2018

nit: fix the sort order; put it above
<Compile Include="System\Buffers\OperationStatus.cs" />

Also, the file name ArrayMemoryPool.ArrayMemoryPoolBuffer.cs is confusing. Do we have this type of naming based on nested classes elsewhere?

Author

The double-dot notation is generally used when partial classes are involved. This is just another instance of that.
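
For example, this PR's layout follows that convention (file and class names are taken from the diff above; the exact split of members is illustrative):

// ArrayMemoryPool.cs
internal sealed partial class ArrayMemoryPool<T> : MemoryPool<T>
{
    // Rent, MaxBufferSize, Dispose
}

// ArrayMemoryPool.ArrayMemoryPoolBuffer.cs
internal sealed partial class ArrayMemoryPool<T>
{
    private sealed class ArrayMemoryPoolBuffer : OwnedMemory<T>
    {
        // Span, Pin, Retain, Release, Dispose
    }
}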

_array = ArrayPool<T>.Shared.Rent(size);
}

public sealed override int Length => _array.Length;
Member

Do properties and methods on a sealed class need to be marked as sealed as well?

Member

Do properties and methods on a sealed class need to be marked as sealed as well?

No

unsafe
{
    Retain(); // this checks IsDisposed
    GCHandle handle = GCHandle.Alloc(_array, GCHandleType.Pinned);
Member

Add argument checks for byteOffset?

Similar to: https://github.com/dotnet/corefx/blob/master/src/System.Memory/tests/Memory/CustomMemoryForTest.cs#L47

if (byteOffset < 0 || (byteOffset/Unsafe.SizeOf<T>()) > _array.Length) throw

Author

This check isn't right - for an array of 10 ints, "40" should be a valid input, but not 41, 42 and 43.

This seems like a low-level API to be doing this kind of check - I'll put it in, but it gets tricky since MemoryPool owners don't have strict control over the sizes they get back. The byteOffset might not actually fit in an Int32.
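
One straightforward way to express that inclusive bound (a sketch; widening to long sidesteps overflow for large arrays of large T):

if (byteOffset < 0 || (long)byteOffset > (long)_array.Length * Unsafe.SizeOf<T>())
    throw new ArgumentOutOfRangeException(nameof(byteOffset));
// For an array of 10 ints this accepts 0..40 and rejects 41, 42, 43.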

public static MemoryPool<T> Shared => ArrayMemoryPool<T>.Shared;

/// <summary>
/// Returns a memory block capable of holding at least <paramref name="minBufferSize">minBufferSize</paramref> elements of T.
Member

Returns a memory block capable of holding at least <paramref name="minBufferSize"/> elements of T.?

/// <summary>
/// Returns a memory block capable of holding at least <paramref name="minBufferSize">minBufferSize</paramref> elements of T.
/// </summary>
/// <param name="minBufferSize">The minimum required size of the buffer in elements. If -1 is passed, this is set to a default value for the pool.</param>
Member

nit: This is awkward to read. Do we need to mention "in elements"?

Author

I'll just nuke that part of the phrase - the summary plus the param name makes it obvious.

/// Returns a memory block capable of holding at least <paramref name="minBufferSize">minBufferSize</paramref> elements of T.
/// </summary>
/// <param name="minBufferSize">The minimum required size of the buffer in elements. If -1 is passed, this is set to a default value for the pool.</param>
/// <returns></returns>
Member

Remove empty returns tag.

internal static void ThrowInvalidOperationException_OutstandingReferences() { throw CreateInvalidOperationException_OutstandingReferences(); }
[MethodImpl(MethodImplOptions.NoInlining)]
private static Exception CreateInvalidOperationException_OutstandingReferences() { return new InvalidOperationException(SR.OutstandingReferences); }

internal static void ThrowObjectDisposedException(string objectName) { throw CreateObjectDisposedException(objectName); }
Member

We recently removed the objectName argument from ThrowObjectDisposedException_MemoryDisposed.
3f03814#diff-4021c6d7a5c4f084b4e8867d8cb0f973L67
@jkotas, why was that?

Member

Member

Thanks for the context! Should we keep passing the object name to the newly added ThrowObjectDisposedException method or remove that argument here as well?

OwnedMemory<int> block = MemoryPool<int>.Shared.Rent(42);
Assert.False(block.IsDisposed);
block.Dispose();
Assert.True(block.IsDisposed);
Member

Should we add another block.Dispose() and check that block.IsDisposed remains true?
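
That is, extending the quoted test roughly like so (a sketch; it assumes a second Dispose is a no-op rather than a throw):

block.Dispose();                // second Dispose: expected to be a no-op
Assert.True(block.IsDisposed);  // and IsDisposed should remain true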

}

[Fact]
public static void NoPinAfterDispose()
Member

How are we testing Pin after Dispose here?
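
For context, an assertion with that shape might look like this (hypothetical; the PR's actual test body is not shown in this excerpt, and the exception type is an assumption based on the "Retain(); // this checks IsDisposed" line elsewhere in the diff):

OwnedMemory<int> block = MemoryPool<int>.Shared.Rent(42);
block.Dispose();
Assert.Throws<ObjectDisposedException>(() => { block.Pin(); });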

{
    internal sealed partial class ArrayMemoryPool<T> : MemoryPool<T>
    {
        private const int s_maxBufferSize = int.MaxValue;
Member

Nit: s_ prefix is used for statics; consts should be PascalCased.

Author

I know - this is kind of a tough case since the "best" name is already taken by the property. MaxBufferSizeConst just sounds awkward.

@ahsonkhan (Member)

@atsushikan, can you please maintain the commit history during PR review? It makes it easier to view only the diffs between commits.

@ghost (Author) commented Jan 25, 2018

can you please maintain the commit history during PR review?

That's not so easy on GitHub when a PR stretches across days. I merge from upstream at least once a day, so you either get a commit history that includes massive merges (and thus a main file list containing hundreds of unrelated changes, which someone complained about earlier) or a multi-commit rebase (which is treacherous and a pain in the neck, and still results in a force push with new hashes, so you have to take it on faith that the "older" commits are actually the commits you reviewed previously). Either way, there are drawbacks that make someone unhappy.

@ahsonkhan (Member) commented Jan 25, 2018

I merge from upstream at least once a day so you either get a commit history that includes massive merges (and thus the main file list containing hundreds of unrelated changes, which someone complained about earlier)

Why would merging from upstream result in unrelated changes showing up? Isn't the diff view against what is in master? So we may see the "Merge branch..." commits in the PR, but the changes themselves shouldn't show up.

I would imagine it would be something like this (from #26232), which only shows my changes even though I have merged from upstream:
[screenshot of the PR file diff]

@ghost (Author) commented Jan 25, 2018

Why would merging from upstream result in unrelated changes to show up?

That was the experience about a year ago - people complained so I switched to the rebase-and-force-push method. Maybe GitHub is smarter now - I'd be happy to switch back (until someone complains again...)

@ghost (Author) commented Jan 25, 2018

@ahsonkhan - any further blocking issues?

{
    Retain(); // this checks IsDisposed

    if (byteOffset != 0 && (((uint)byteOffset) - 1) / Unsafe.SizeOf<T>() >= _array.Length)
Member

We should allow a byteOffset that points right at the end of the array.

Member

(See discussion at #25770 (comment))

Author

Member

Yes, you are right.
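
For readers tracing the check above against the earlier 10-int example (sizeof(T) == 4):

byteOffset == 40 (the end of the array): ((uint)40 - 1) / 4 == 9, and 9 >= 10 is false, so it is allowed.
byteOffset == 41: ((uint)41 - 1) / 4 == 10, and 10 >= 10 is true, so it is rejected.
byteOffset == 0: short-circuited by the byteOffset != 0 test, so it is allowed.
byteOffset < 0: the uint cast wraps to a huge value, so it is rejected.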

@ahsonkhan (Member) left a comment

LGTM.

@ghost ghost merged commit a398684 into dotnet:master Jan 25, 2018
@ghost ghost deleted the mempool branch January 26, 2018 15:11
@karelz karelz added this to the 2.1.0 milestone Feb 4, 2018
picenka21 pushed a commit to picenka21/runtime that referenced this pull request Feb 18, 2022