Improve a vector implementation to support alignment and non-temporal stores #93296
Conversation
Tagging subscribers to this area: @dotnet/area-system-numerics
Force-pushed from e948b16 to 51da7cb
Force-pushed from 51da7cb to 0547c8a
src/libraries/System.Numerics.Tensors/src/System/Numerics/Tensors/TensorPrimitives.netcore.cs
If a function is idempotent (which most of them are), a single load is enough to align data (like I did in https://github.com/dotnet/runtime/pull/93214/files) without any unrolled loops, etc. Presumably, you can expose a
Here are the before/after results for
Similar results for
@@ -12,6 +12,8 @@ namespace System.Numerics.Tensors
 {
     public static partial class TensorPrimitives
     {
+        private const nuint NonTemporalByteThreshold = 256 * 1024;
Can you add a comment about what this is and how the value was chosen?
/// <summary>Defines the threshold, in bytes, at which non-temporal stores will be used.</summary>
/// <remarks>
/// A non-temporal store is one that allows the CPU to bypass the cache when writing to memory.
///
/// This can be beneficial when working with large amounts of memory where the writes would otherwise
/// cause large amounts of repeated updates and evictions. The hardware optimization manuals recommend
/// the threshold to be roughly half the size of the last level of on-die cache -- that is, if you have approximately
/// 4MB of L3 cache per core, you'd want this to be approx. 1-2MB, depending on if hyperthreading was enabled.
///
/// However, actually computing the amount of L3 cache per core can be tricky or error prone. Native memcpy
/// algorithms use a constant threshold that is typically around 256KB and we match that here for simplicity. This
/// threshold accounts for most processors in the last 10-15 years that had approx. 1MB L3 per core and support
/// hyperthreading, giving a per core last level cache of approx. 512KB.
/// </remarks>
private const nuint NonTemporalByteThreshold = 256 * 1024;
if (canAlign)
{
    // Compute by how many elements we're misaligned and adjust the pointers accordingly
    //
    // Noting that we are only actually aligning dPtr. This is because unaligned stores
    // are more expensive than unaligned loads and aligning both is significantly more
    // complex.

    nuint misalignment = ((uint)(sizeof(Vector128<float>)) - ((nuint)(dPtr) % (uint)(sizeof(Vector128<float>)))) / sizeof(float);

    xPtr += misalignment;
    dPtr += misalignment;

    Debug.Assert(((nuint)(dPtr) % (uint)(sizeof(Vector128<float>))) == 0);

    remainder -= misalignment;
}

Vector128<float> vector1;
Vector128<float> vector2;
Vector128<float> vector3;
Vector128<float> vector4;

if (canAlign && (remainder > (NonTemporalByteThreshold / sizeof(float))))
We have two checks on canAlign here. Would it help to eliminate a branch if this were instead structured as:
if (canAlign)
{
... // do alignment
if (remainder > (NonTemporalByteThreshold / sizeof(float)))
{
... // handle non-temporal path
goto AdjustingRefs;
}
}
... // what's currently in the else block
AdjustingRefs:
... // stuff currently after the else block
?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I don't think the additional complexity is worth it here, and it can cause subtle issues with control flow analysis that are probably undesirable.
If we were concerned, I'd prefer switching the order so it's if ((remainder > (NonTemporalByteThreshold / sizeof(float))) && canAlign) instead.
@@ -20,7 +20,7 @@ public static partial class TensorPrimitivesTests
     TensorLengths.Concat(new object[][] { [0] });

 public static IEnumerable<object[]> TensorLengths =>
-    from length in Enumerable.Range(1, 128)
+    from length in Enumerable.Range(1, 256)
This is good, but it's also going to double the number of test cases we're running. Not a big deal except that on netfx it seems to struggle with the theory count. We should probably subsequently change some of those theories to be loops, e.g. instead of:
[Theory]
[MemberData(nameof(TensorLengths))]
public void Foo(int tensorLength)
{
...
}
do:
[Fact]
public void Foo()
{
foreach (int tensorLength in TensorLengths)
{
...
}
}
I'll handle this in the immediately following PR covering binary/ternary.
This appears to have broken the outerloop tests
@lewing is there an issue or a link that can be shared so it can be investigated?
#93412 fixes the issue. Another PR went in that removed
… tores (dotnet#93296)
* Improve a vector implementation to support alignment and non-temporal stores
* Fix a build error and mark a couple methods as AggressiveInlining
* Fix the remaining block count computation
* Ensure overlapping for small data on the V256/512 is handled
* Ensure we only go down the vectorized path when supported for netstandard

* Use FMA in TensorPrimitives (#92205)
* Simplify TensorPrimitive's AbsoluteOperator (#92577)
  Vector{128/256/512} all provide Abs; no need to do this manually.
* Reduce some boilerplate in TensorPrimitive's IBinaryOperator (#92576)
  Change a few of the static abstract interface methods to be virtual, as most implementations throw from these methods; we can consolidate that throwing to the base.
* Minor code cleanup in TensorPrimitives tests (#92575)
  * Normalize some test naming
  * Alphabetize tests
  * Improve mismatched length tests with all positions of the shorter tensor
  * Alphabetize methods in TensorPrimitives.cs
* Vectorize TensorPrimitives.Min/Max{Magnitude} (#92618)
  * Use AdvSimd.Max/Min
  * Rename some parameters/locals for consistency
  * Improve HorizontalAggregate
  * Move a few helpers
  * Avoid scalar path for returning found NaN
* Update TensorPrimitives aggregations to vectorize handling of remaining elements (#92672)
  * Update TensorPrimitives.CosineSimilarity to vectorize handling of remaining elements
  * Vectorize remainder handling for Aggregate helpers
* Flesh out TensorPrimitives XML docs (#92749)
  * Address PR feedback
    - Remove use of FusedMultiplyAdd from all but CosineSimilarity
    - Remove comments about platform/OS-specific behavior from Add/AddMultiply/Subtract/Multiply/MultiplyAdd/Divide/Negate
    - Loosen comments about NaN and which exact one is returned
* Vectorize TensorPrimitives.ConvertToHalf (#92715)
* Enable TensorPrimitives to perform in-place operations (#92820)
  Some operations would produce incorrect results if the same span was passed as both an input and an output. When vectorization was employed but the span's length wasn't a perfect multiple of a vector, we'd do the standard trick of performing one last operation on the last vector's worth of data; however, that relies on the operation being idempotent, and if a previous operation has overwritten input with a new value due to the same memory being used for input and output, some operations won't be idempotent. This fixes that by masking off the already processed elements. It adds tests to validate in-place use works, and it updates the docs to carve out this valid overlapping.
* Vectorize TensorPrimitives.ConvertToSingle (#92779)
* Throw exception in TensorPrimitives for unsupported span overlaps (#92838)
* Vectorize TensorPrimitives.Log2 (#92897)
  * Add a way to support operations that can't be vectorized on netstandard
  * Update TensorPrimitives.Log2 to be vectorized on .NET Core
  * Update src/libraries/System.Numerics.Tensors/src/System/Numerics/Tensors/TensorPrimitives.netstandard.cs
  * Ensure we do an arithmetic right shift in the Log2 vectorization
  * Ensure the code can compile on .NET 7
  * Ensure that edge cases are properly handled and don't resolve to `x`
  * Ensure that Log2 special results are explicitly handled.
  Co-authored-by: Stephen Toub <[email protected]>
* Adding Log2 tests covering some special values (#92946)
* [wasm] Disable `TensorPrimitivesTests.ConvertToHalf_SpecialValues` (#92953)
  Failing test: `System.Numerics.Tensors.Tests.TensorPrimitivesTests.ConvertToHalf_SpecialValues`
  Issue: #92885
* Adding a vectorized implementation of TensorPrimitives.Log (#92960)
* Consolidate some TensorPrimitivesTests logic around special values (#92982)
* Vectorize TensorPrimitives.Exp (#93018)
* Vectorize TensorPrimitives.Sigmoid and TensorPrimitives.SoftMax (#93029)
  - Adds a SigmoidOperator that just wraps the ExpOperator
  - Vectorizes both passes of SoftMax, on top of ExpOperator. Simplest way to do this was to augment the existing InvokeSpanScalarIntoSpan to take a transform operator.
  - In doing so, found some naming inconsistencies I'd previously introduced, so I did some automatic renaming to make things more consistent.
  - Added XML comments to all the internal/private surface area.
  - Fleshes out some tests (and test values).
  - Disable tests on mono
* Vectorize TensorPrimitives.Tanh/Cosh/Sinh (#93093)
  Tanh and Cosh are based on AOCL-LibM. AOCL-LibM doesn't appear to have a sinh implementation, so this Sinh is just based on the sinh formula using exp(x). I also augmented the tests further, including:
  - Added more tests for sinh/cosh/tanh
  - Add an equality routine that supports comparing larger values with a tolerance
  - Tightened the tolerance for most functions
  - Changed some tests to be theories to be consistent with style elsewhere in the tests
  - Fixed some use of Math to be MathF
  * Remove unnecessary special-handling path from cosh
  * Remove unnecessary special-handling path from tanh
  * Redo sinh based on cosh
  * Replace confusing new T[] { ... }
* Remove a few unnecessary `unsafe` keyword uses in TensorPrimitives (#93219)
* Consolidate a few exception throws in TensorPrimitives (#93168)
* Fix TensorPrimitives.IndexOfXx corner-case when first element is seed value (#93169)
  Found as part of adding more tests for Min/Max{Magnitude} to validate they match their IndexOfXx variants.
* Improve a vector implementation to support alignment and non-temporal stores (#93296)
  * Fix a build error and mark a couple methods as AggressiveInlining
  * Fix the remaining block count computation
  * Ensure overlapping for small data on the V256/512 is handled
  * Ensure we only go down the vectorized path when supported for netstandard
* Mark TensorPrimitives as unsafe (#93412)
* Use the improved vectorization algorithm for binary and ternary TensorPrimitives operations (#93409)
  * Update InvokeSpanSpanIntoSpan<TBinaryOperator> for TensorPrimitives to use the better SIMD algorithm
  * Update InvokeSpanScalarIntoSpan<TTransformOperator, TBinaryOperator> for TensorPrimitives to use the better SIMD algorithm
  * Update InvokeSpanSpanSpanIntoSpan<TTernaryOperator> for TensorPrimitives to use the better SIMD algorithm
  * Update InvokeSpanSpanScalarIntoSpan<TTernaryOperator> for TensorPrimitives to use the better SIMD algorithm
  * Update InvokeSpanScalarSpanIntoSpan<TTernaryOperator> for TensorPrimitives to use the better SIMD algorithm
  * Improve codegen slightly by using case 0, rather than default
  * Adjust the canAlign check to be later, to reduce branch count for data under the threshold
  * Add a comment explaining the NonTemporalByteThreshold
  * Make sure xTransformOp.CanVectorize is checked on .NET Standard
* Use the improved vectorization algorithm for aggregate TensorPrimitives operations (#93695)
  * Improve the handling of the IAggregationOperator implementations
  * Update Aggregate<TTransformOperator, TAggregationOperator> for TensorPrimitives to use the better SIMD algorithm
  * Update Aggregate<TBinaryOperator, TAggregationOperator> for TensorPrimitives to use the better SIMD algorithm
* [wasm] Remove more active issues for #92885 (#93596)
* Adding patch from PR #93556
* Vectorize IndexOfMin/Max/Magnitude (#93469)
  * Vectorized on .NET Core and .NET Standard
  * Fix IndexOfMaxMagnitudeOperator, including on netcore
  * Add reference assembly exclusions
  * Improve the naive approach; add inlining; formatting and comment cleanup
  Co-authored-by: Eric StJohn <[email protected]>

Co-authored-by: Stephen Toub <[email protected]>
Co-authored-by: Tanner Gooding <[email protected]>
Co-authored-by: Ankit Jain <[email protected]>
Co-authored-by: Radek Doulik <[email protected]>
Co-authored-by: Eric StJohn <[email protected]>