
Faster MethodInfo.Invoke #7560

Closed
davidfowl opened this issue Mar 4, 2017 · 28 comments
Labels: area-System.Reflection · enhancement (Product code improvement that does NOT require public API changes/additions) · tenet-performance (Performance related issue) · untriaged (New issue has not been triaged by the area owner)
@davidfowl
Member

When writing a framework that dispatches to user methods, it's quite common to reflect over all methods of a particular shape on an object and store a MethodInfo for later use. The scan-and-store is typically a one-time cost (done up front), but these methods are invoked many times (one or more times per HTTP request, for example). Historically, to make invocation fast, you would generate a DynamicMethod using reflection emit, or compile an expression tree, to invoke the method instead of using MethodInfo.Invoke. ASP.NET does this all the time, so much so that we now have a shared component for it (as some of the code can be a bit tricky).

With the advent of newer runtimes like Project N and CoreRT, it would be great if there were first-class support for creating a thunk to a method. This is what I think it could look like:

class Program
{
    // First-class delegate type in the CLR used to express the stub that calls into the
    // underlying method
    public delegate object Thunk(object target, object[] arguments);

    static void Main(string[] args)
    {
        var methods = typeof(Controller).GetMethods();
        var map = new Dictionary<string, Thunk>();
        foreach (var methodInfo in methods)
        {
            map[methodInfo.Name] = methodInfo.CreateThunk();
        }

        // Invoke the Index method (result is null)
        var result = Invoke(map, typeof(Controller), "Index");

        // Result is 3 (yes things get boxed)
        result = Invoke(map, typeof(Controller), "Add", 1, 2);
    }

    private static object Invoke(Dictionary<string, Thunk> map, Type type, string method, params object[] arguments)
    {
        var target = Activator.CreateInstance(type);
        return map[method](target, arguments);
    }
}

public class Controller
{
    public void Index()
    {

    }

    public int Add(int a, int b)
    {
        return a + b;
    }
}

Few notes:

  • We could disable type checking in the stub; if the caller passes the wrong arguments, they would get an InvalidCastException.
  • The arguments and return values are always boxed (that's not a huge deal for our use cases at least).
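For context, here is a minimal sketch of the compiled-expression-tree technique mentioned above (roughly what ASP.NET's shared ObjectMethodExecutor-style helpers do today). `ThunkCompiler` is an illustrative name, and the sketch assumes a public instance method on a class:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

static class ThunkCompiler
{
    // Builds an (object target, object[] args) -> object invoker for an
    // instance method by compiling the marshalling once, up front.
    public static Func<object, object[], object> Compile(MethodInfo method)
    {
        var target = Expression.Parameter(typeof(object), "target");
        var args = Expression.Parameter(typeof(object[]), "args");

        // Cast each boxed argument to the parameter's declared type.
        var parameters = method.GetParameters()
            .Select((p, i) => Expression.Convert(
                Expression.ArrayIndex(args, Expression.Constant(i)),
                p.ParameterType));

        Expression call = Expression.Call(
            Expression.Convert(target, method.DeclaringType), method, parameters);

        // void methods return null; value-type results get boxed.
        Expression body = method.ReturnType == typeof(void)
            ? Expression.Block(call, Expression.Constant(null, typeof(object)))
            : Expression.Convert(call, typeof(object));

        return Expression.Lambda<Func<object, object[], object>>(body, target, args)
            .Compile();
    }
}
```

After paying the one-time Compile cost, `thunk(controller, new object[] { 1, 2 })` is an ordinary delegate call plus casts, with none of the per-call reflection overhead.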
@jkotas
Member

jkotas commented Mar 5, 2017

You are really just asking for a faster MethodInfo.Invoke - to make it precompiled, not interpreted. This performance improvement can be done without introducing new APIs.

@jkotas jkotas changed the title First class "thunk" support Faster MethodInfo.Invoke Mar 5, 2017
@davidfowl
Member Author

Sounds good. That would solve world hunger. Can you elaborate on why MethodInfo.Invoke is slow today? What's the difference between Delegate.CreateDelegate() -> delegate.Invoke vs MethodInfo.Invoke?

@jkotas
Member

jkotas commented Mar 5, 2017

MethodInfo.Invoke has to marshal the argument values from the boxed objects and put them into registers for the method call. Today this is done via an interpreter. You can speed it up by precompiling the marshalling. It is essentially the same thing as what you are doing with expression trees today, except that it would be done in corelib using Reflection.Emit or something similar.
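A hedged sketch of what "precompiling the marshalling" with Reflection.Emit could look like, assuming a public instance method on a class (`EmitThunk` is an illustrative name, not a real corelib type):

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

static class EmitThunk
{
    // Emits the unbox/cast work into IL once, instead of interpreting it
    // on every Invoke call.
    public static Func<object, object[], object> Create(MethodInfo method)
    {
        var dm = new DynamicMethod(
            "Thunk_" + method.Name,
            typeof(object),
            new[] { typeof(object), typeof(object[]) },
            method.DeclaringType.Module,
            skipVisibility: true);

        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);                               // target
        il.Emit(OpCodes.Castclass, method.DeclaringType);

        ParameterInfo[] ps = method.GetParameters();
        for (int i = 0; i < ps.Length; i++)
        {
            il.Emit(OpCodes.Ldarg_1);                           // args array
            il.Emit(OpCodes.Ldc_I4, i);
            il.Emit(OpCodes.Ldelem_Ref);                        // args[i]
            Type pt = ps[i].ParameterType;
            il.Emit(pt.IsValueType ? OpCodes.Unbox_Any : OpCodes.Castclass, pt);
        }

        il.Emit(OpCodes.Callvirt, method);
        if (method.ReturnType == typeof(void))
            il.Emit(OpCodes.Ldnull);                            // void -> null
        else if (method.ReturnType.IsValueType)
            il.Emit(OpCodes.Box, method.ReturnType);            // box value types
        il.Emit(OpCodes.Ret);

        return (Func<object, object[], object>)dm.CreateDelegate(
            typeof(Func<object, object[], object>));
    }
}
```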

@davidfowl
Member Author

@jkotas Would it make sense to change the existing methodInfo.Invoke? Compatibility concerns etc?

You can speed it up by precompiling the marshalling

Is there any concern with storing the "precompiled" state?

I'd like to play around with this 😄

@jkotas
Member

jkotas commented Mar 5, 2017

Would it make sense to change the existing methodInfo.Invoke?

It does not make sense to introduce new APIs to fix perf issues in the existing APIs. Otherwise, we would end up over time with Invoke, InvokeFast, InvokeFaster, InvokeFastest, ... .

Compatibility concerns etc?

Yes, it takes extra care to ensure that you do not break anything.

Is there any concern with storing the "precompiled" state?

I did not mean to AOT compile, just compile it in memory on demand. It can be regular DynamicMethod.

You can take a look how it is done in CoreRT: https://github.com/dotnet/corert/blob/master/src/Common/src/TypeSystem/IL/Stubs/DynamicInvokeMethodThunk.cs

@davidfowl
Member Author

davidfowl commented Mar 5, 2017

I did not mean to AOT compile, just compile it in memory on demand. It can be regular DynamicMethod.

I wasn't talking about AOT compilation, I was just asking if storing a dynamic method or whatever the extra state was on the MethodInfo would be a problem.

You can take a look how it is done in CoreRT: https://github.com/dotnet/corert/blob/master/src/Common/src/TypeSystem/IL/Stubs/DynamicInvokeMethodThunk.cs

Thanks! Now I just need to grok it and turn it into C++.

BTW, are you suggesting we generate a dynamic method (via whatever mechanism is available) when creating MethodInfos, or would it be a first-time-you-invoke thing? (Probably the latter.)

@mattwarren
Contributor

mattwarren commented Mar 5, 2017

What's the difference between Delegate.CreateDelegate() -> delegate.Invoke vs methodInfo.Invoke?

<shameless plug> This intrigued me as well, so I wrote a few blog posts about it; see Why is reflection slow? (section 'How does Reflection work?'). How do .NET delegates work? also has some related info.

I saw that MethodInfo.Invoke also has to do security checks and parameter validation every time, which I guess adds to the cost.

@jkotas
Member

jkotas commented Mar 5, 2017

Thanks! Now I just need to grok it and turn it into C++.

You do not need to. It can be in C# just fine.

@jkotas
Member

jkotas commented Mar 5, 2017

whatever the extra state was on the MethodInfo would be a problem

RuntimeMethodInfo has 12 fields already. Adding a 13th one should not be a problem; and if it is a problem - some of the existing 12 fields can be folded together.

@mattwarren
Contributor

Would this be something that is opt-in, or the default behaviour?

There may be some scenarios where the extra overhead (the time needed to create the delegate, and then the space to store it) could outweigh the cost of just using methodInfo.Invoke().

@mattwarren
Contributor

Also something similar was asked before, see #6968

@jkotas
Member

jkotas commented Mar 5, 2017

Right, it would need to be done only once the runtime sees the method invoked multiple times to be beneficial.

@davidfowl
Member Author

There may be some scenarios where the extra overhead of time needed to create the delegate and then space to store it, could outweigh the cost of just using methodInfo.Invoke()

That's why I prefer the two-stage approach: get the MethodInfo, then prepare it to get something back that is optimized for invocation. You pay that cost explicitly and up front. It also gets around any compatibility concerns with MethodInfo.Invoke.

Right, it would need to be done only once the runtime sees the method invoked multiple times to be beneficial.

We do something similar in our dependency injection system. After 2 invocations, we fire off a background thread for compilation so the next time it's faster.
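That tiered strategy can be sketched roughly as follows. This is a hedged illustration, not the actual DI implementation; `CompileThunk` stands in for an expression-tree or DynamicMethod compiler like the ones discussed above:

```csharp
using System;
using System.Reflection;
using System.Threading;
using System.Threading.Tasks;

// Use plain MethodInfo.Invoke for the first couple of calls, then kick off
// a background compilation of a faster thunk and swap it in when ready.
sealed class TieredInvoker
{
    private readonly MethodInfo _method;
    private Func<object, object[], object> _fast; // null until compiled
    private int _calls;

    public TieredInvoker(MethodInfo method) => _method = method;

    public object Invoke(object target, object[] args)
    {
        var fast = _fast;
        if (fast != null)
            return fast(target, args);

        if (Interlocked.Increment(ref _calls) == 2)
        {
            // Placeholder for a real compiler (expression trees, ref emit, ...).
            Task.Run(() => _fast = CompileThunk(_method));
        }
        return _method.Invoke(target, args);
    }

    private static Func<object, object[], object> CompileThunk(MethodInfo m)
        => (target, args) => m.Invoke(target, args); // stand-in
}
```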

@jkotas
Member

jkotas commented Mar 5, 2017

That's why I prefer the two-stage approach

Then this does not need to be built into the runtime. It can be a regular upstack NuGet package; e.g., the thunk creation method can be an extension method.

@davidfowl
Member Author

Part of the reason for baking this into the runtime was so that the implementation could adapt on AOT platforms...

@jkotas
Member

jkotas commented Mar 6, 2017

A NuGet package can adapt on AOT platforms too.

@Ciantic

Ciantic commented Mar 19, 2018

I think the hard work in https://github.com/aspnet/Common/blob/rel/2.0.0/shared/Microsoft.Extensions.ObjectMethodExecutor.Sources/ObjectMethodExecutor.cs could be turned into a NuGet package in the meantime.

Especially since it is the dreaded internal, which one can't use without copy-and-pasting.

@danielcrenna

danielcrenna commented Oct 29, 2018

@Ciantic Wouldn't Reflection.Emit yield better performance than Expression, the latter of which is used internally in ObjectMethodExecutor?

@Ciantic

Ciantic commented Oct 29, 2018

@danielcrenna I am not the right person to evaluate the performance. However, since I wrote that, I was given advice that those source packages are usable; one just has to include them in one's own project. I think there is even a tool or command in dotnet to include those source packages in your own project.

@msftgits msftgits transferred this issue from dotnet/coreclr Jan 31, 2020
@msftgits msftgits added this to the Future milestone Jan 31, 2020
@maryamariyan maryamariyan added the untriaged New issue has not been triaged by the area owner label Feb 26, 2020
@GrabYourPitchforks
Member

There's a lot to unpack here, so I'll do my best to try. :)

First, if you know the signature of the method you're trying to invoke, the easiest codegen-free way to get a thunk to it would be via MethodInfo.CreateDelegate. So let's for now assume that this issue handles only the case where you need to handle methods of unknown signature.
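To illustrate the known-signature case, using the Controller type from the issue's opening example:

```csharp
using System;
using System.Reflection;

// When the signature is known at compile time, CreateDelegate gives a
// strongly typed invocation path with no codegen and no boxing.
MethodInfo add = typeof(Controller).GetMethod("Add");
var controller = new Controller();

// Closed delegate: the controller instance is bound as the target.
var addDelegate = (Func<int, int, int>)add.CreateDelegate(
    typeof(Func<int, int, int>), controller);

int sum = addDelegate(1, 2); // 3
```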

The runtime offers multiple features now that weren't available when aspnet first started using ref emit back in the MVC 1.0 days. Back then, one of our primary reasons for using ref emit was to avoid having exceptions wrapped in TargetInvocationException, as would occur during a normal MethodInfo.Invoke call. However, the runtime now offers BindingFlags.DoNotWrapExceptions to provide more granular control over this behavior. This removes the need for ref emit as a vehicle for preserving the original stack trace.
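For example (again using the Controller type from the opening example):

```csharp
using System;
using System.Reflection;

MethodInfo method = typeof(Controller).GetMethod("Add");

// With DoNotWrapExceptions, an exception thrown inside Add surfaces
// directly with its original stack trace, instead of being wrapped in
// TargetInvocationException.
object result = method.Invoke(
    new Controller(),
    BindingFlags.DoNotWrapExceptions,
    binder: null,
    parameters: new object[] { 1, 2 },
    culture: null);
```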

For environments without access to ref emit or where ref emit is interpreted (thus potentially slow), the APIs described at #25959 can be used to check for this condition. This allows callers to know ahead of time whether using ref emit at all would result in reduced performance.

There are other outstanding issues to improve the performance of MethodInfo.Invoke. See for instance #12832. When that work comes through, all consumers of reflection should benefit regardless of which entry point they use.

Finally, it appears the thunk mechanism described here is opinionated. At an initial glance it seems the desire is for this thunk not to provide support for in / ref / out parameters or type coercion. It's also not really defined how overload resolution would work. This would ultimately have the effect of creating a parallel reflection stack whose behaviors don't necessarily match the existing stack's behaviors. We would not be able to guarantee that the behaviors of this parallel stack will match the behaviors people will want 5 years down the road as new programming paradigms are introduced. I don't think this is a direction we want to go in the runtime.

My recommendation would be to continue to use ref emit if you're in a suitable environment and if it meets your performance needs. If ref emit is unavailable, try using the standard reflection APIs and passing whatever flags are appropriate for your scenario. If you need different behaviors than typical reflection allows, that's a good candidate for creating a standalone package which implements your desired custom behaviors.

@davidfowl
Member Author

I mean, we may as well close this issue if the answer is to keep using ref emit 😄 .

@GrabYourPitchforks
Member

Well, ref emit isn't the only option. If you're performing AOT compilation, you could emit the thunk directly into the compilation unit.

public class MyController
{
    public int Add(int a, int b)
    {
        /* user-written code */
    }

    internal static object <>k_CompilerGeneratedThunk_Add(object @this, object[] parameters)
    {
        return ((MyController)@this).Add((int)parameters[0], (int)parameters[1]);
    }
}

The runtime could use standard reflection to discover these thunks and link to them via MethodInfo.CreateDelegate with the common signature (object, object[]) -> object.
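A hedged sketch of that discovery-and-bind step; the thunk name simply follows the illustrative example above and is not a real naming convention:

```csharp
using System;
using System.Reflection;

public delegate object InvokeThunk(object target, object[] parameters);

class ThunkDiscovery
{
    static InvokeThunk BindAddThunk()
    {
        // Locate the (hypothetical) compiler-generated static thunk.
        MethodInfo thunkMethod = typeof(MyController).GetMethod(
            "<>k_CompilerGeneratedThunk_Add",
            BindingFlags.Static | BindingFlags.NonPublic);

        // Bind it to the common (object, object[]) -> object shape.
        return (InvokeThunk)thunkMethod.CreateDelegate(typeof(InvokeThunk));
    }
}
```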

@MichalStrehovsky
Member

Well, ref emit isn't the only option. If you're performing AOT compilation, you could emit the thunk directly into the compilation unit.

FWIW, .NET Native/CoreRT does pretty much that. Last time I looked, reflection invoke in .NET Native was about 4x faster than in CoreCLR.

The AOT generated stubs are a bit more complex than what's suggested here because they handle the annoying things like automatic widening, Type.Missing, and the like. It can be faster if we can avoid these conveniences.


@davidfowl
Member Author

@MichalStrehovsky how are those stubs discovered at runtime?

@jkotas
Member

jkotas commented Apr 17, 2020

Via the CoreRT-specific native AOT metadata.

@steveharter
Member

Assuming #45152 supersedes this.

@ghost ghost locked as resolved and limited conversation to collaborators Dec 25, 2020