NotSupportedException is thrown during ConfigureBuilder #3753
@CodeBlanch Can you take a look? |
Sadly, this is working as designed! The OP has hinted at a potential solution, but let me add it to the full snippet:
services.AddOpenTelemetryTracing(builder =>
{
// Register delegate to configure options while the IServiceCollection is available...
builder.ConfigureServices(services => {
services.Configure<AspNetCoreInstrumentationOptions>(options => {
options.Filter = context => true;
});
});
// Do IServiceProvider work...
builder.ConfigureBuilder(static (provider, nestedBuilder) =>
{
var tracingOptions = provider.GetRequiredService<IOptions<TracingOptions>>().Value;
if (!tracingOptions.Enabled)
{
return;
}
nestedBuilder.AddConsoleExporter();
var resourceBuilder = ResourceBuilder
.CreateDefault()
.AddService("name");
nestedBuilder
.AddSource("name")
.SetResourceBuilder(resourceBuilder)
.AddAspNetCoreInstrumentation();
});
});

@hankovich It looks like you want to disable tracing based on your runtime configuration. This really isn't the use case ConfigureBuilder was designed for; gating the registration itself would be a better fit:
if (builder.Configuration.GetValue<bool>("OpenTelemetry:Tracing:Enabled", true))
{
builder.Services.AddOpenTelemetryTracing(options =>
{
// Configure only if enabled
});
}

Just FYI, we would like to have a built-in mechanism for disabling the SDK, but we're not there yet! We're kicking around ideas, for example: #3639 |
Folks......... please, you gotta be careful with these random breaking changes... I had code since May that was relying on this behavior.

I just spent nearly 2 entire days debugging why my traces were not showing up for a specific project, doing all sorts of crazy tests and trying to eliminate as many factors as possible. I thought it was something in the Azure AppService environment that might have changed with the shift to .NET 7, but no... a breaking change in this library started to silently throw an exception that caused my traces to vanish completely. It took me a very long time to finally be able to reproduce the issue locally, because it was related to a conditional call based on "Environment != 'Development'". I even set up an Ubuntu distro locally using WSL to see if running in that environment would reproduce the problem I was seeing... When I finally changed the environment to "Production" locally, I immediately saw this exception and thought "huh, maybe this new version changed something".

To top it all off, you decide to fucking swallow exceptions like this... why would you do that? That's an incredibly bad practice. This is a critical mistake (the exception message clearly says "you shouldn't do this"); by swallowing this exception you make it nearly impossible to detect the problem. I totally understand ignoring some configuration things, using the standard path if things are not set up, etc, but you can't just swallow critical exceptions like this.

So frustrating... I just can't believe I wasted so much time on something that would be trivially found if the exception was just thrown and the app crashed. I've mentioned earlier how terrible I thought the EventSource-based custom diagnostics thing was (I still hope you'll abandon all of that at some point and migrate towards ILogger-based logging). |
Wow that was spirited! Would you be able to provide some code which reproduces the problem? Something which works on 1.3.1 but not on 1.4.0. Happy to look into it. |
I'm sorry if that was a bit too harsh... I was just incredibly frustrated with the issue after spending so much time trying all sorts of crazy things.
When I went to try to reproduce the issue and find the exact version that caused a breaking change, I realized there was a bit more to the whole problem: turns out part of our functionality actually stopped working earlier than I suspected. Take a look at this repo here (worker project): Notes:
If I take the exact same codebase and update the package, the problem appears.
So, code that was configuring metrics suddenly made all OTEL stop working completely. This was made worse in our case because the issue would only occur when running the application in "Production" mode, as the telemetry setup sits behind a conditional call based on the environment. For reference, I included the final solution in that repo as well.

This is now the second time the OTEL library has made us waste more than 2 days of investigation work on weird defects. Whenever I find something on the web about issues, you guys always respond with "just enable the diagnostics and provide the logs". Please note that your custom diagnostics solution, which relies on the file system to both configure and save the logs, is not at all container/serverless friendly: our application here is hosted in Azure AppService, which doesn't normally have you messing around with the file system. I'll once again urge the team to reconsider this strategy of a deactivated-by-default diagnostics solution that doesn't rely at all on ILogger. |
@julealgon I discussed this with @alanwest and we agree that these types of issues should surface. I opened #4006; we'll see if the other maintainers agree 😄 For context, it isn't necessarily by choice that the exception was being swallowed. The OpenTelemetry Specification is super opinionated that SDKs should never crash the process once it is up and running. But I think in this case it is OK to throw because the process hasn't exactly started yet.
Using OpenTelemetry.Extensions.Hosting v1.0.0-rc9.6 (1.3 behavior):
Using OpenTelemetry.Extensions.Hosting v1.0.0-rc9.7 (1.4 behavior):
Agree with this! It used to silently fail and I fixed it recently so that it will correctly throw if you try to add a service after the IServiceProvider has been created.
This is a common ask! You can actually get the internal logs from AppService. But in other things, for example Functions, it is totally impossible. I think we will have a solution for this at some point. In the meantime you can use Macross.OpenTelemetry.Extensions if you really want to just dump the internal logging into ILogger. |
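For reference, the file-based self-diagnostics being discussed above is enabled by placing an OTEL_DIAGNOSTICS.json file in the process's current working directory; a sketch of the documented shape (the values here are just examples):
{
  "LogDirectory": ".",
  "FileSize": 32768,
  "LogLevel": "Warning"
}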
Hey! I have the same issue when I need to configure AddOtlpExporter; for example, I need to get the URL from IConfiguration and then pass it to otlpOptions:
private static void ConfigureTracing(TracerProviderBuilder configure)
{
configure.AddHttpClientInstrumentation();
configure.AddAspNetCoreInstrumentation();
configure.ConfigureBuilder((sp, builder) =>
{
var configuration = sp.GetRequiredService<IConfiguration>();
var exporter = configuration.GetValue<string>("OpenTelemetry:Traces:Exporter") ?? string.Empty;
switch (exporter.ToLowerInvariant())
{
case MetricsExporterResolver.Otlp:
// Adding the OtlpExporter creates a GrpcChannel.
// This switch must be set before creating a GrpcChannel/HttpClient when calling an insecure gRPC service.
// See: https://docs.microsoft.com/aspnet/core/grpc/troubleshoot#call-insecure-grpc-services-with-net-core-client
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
builder.AddOtlpExporter(otlpOptions =>
{
otlpOptions.Endpoint = new Uri(configuration.GetValue<string>("OpenTelemetry:CollectorEndpoint"));
});
break;
default:
break;
}
});
}

I saw in the examples retrieving values from IConfiguration. |
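A sketch of one way to avoid ConfigureBuilder for this, assuming the 1.4-style ConfigureServices hook shown elsewhere in this thread and that OtlpExporterOptions (namespace OpenTelemetry.Exporter) is bound through the regular options pipeline; the config keys are the ones from the snippet above, and the localhost fallback is an assumption:
private static void ConfigureTracing(TracerProviderBuilder configure)
{
    configure.AddHttpClientInstrumentation();
    configure.AddAspNetCoreInstrumentation();

    configure.ConfigureServices(services =>
    {
        // IConfiguration is already registered in the host's container, so it can be
        // injected into the options setup instead of being resolved inside ConfigureBuilder.
        services.AddOptions<OtlpExporterOptions>()
            .Configure<IConfiguration>((otlpOptions, config) =>
            {
                otlpOptions.Endpoint = new Uri(
                    config.GetValue<string>("OpenTelemetry:CollectorEndpoint") ?? "http://localhost:4317");
            });
    });

    // The exporter is added unconditionally here; gating it on configuration still requires
    // having the configuration available up front, as discussed further down in this thread.
    configure.AddOtlpExporter();
}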
I'm aware that the OTEL spec is very sensitive in this regard, but as you said, this type of exception is different: it is not an OTEL/collection issue, it is a preliminary setup problem that is part of the library and not of OTEL itself.
Agreed with having the exception always happen here. This would've allowed us to catch the problem in our case months earlier than we did.
Definitely looking forward to that.
This is absolutely fantastic! Thank you so much for sharing it. I'll for sure be using this in all of our services until official ILogger support is available. |
@alefcarlos OTEL already defines a well-known configuration setting that you can use to change the URL: the OTEL_EXPORTER_OTLP_ENDPOINT environment variable. As for how to conditionally add the exporter based on configuration, you might have to pass the configuration in from the outside. |
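For what it's worth, a sketch of the environment-variable route (the collector address is a placeholder, and tracerProviderBuilder stands in for your TracerProviderBuilder instance):
// With the spec-defined variable set on the host, e.g.
//   OTEL_EXPORTER_OTLP_ENDPOINT=http://my-collector:4317
// the exporter can be added with no endpoint code at all:
tracerProviderBuilder.AddOtlpExporter();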
@julealgon, thank you! But that used to work with version 1.3.0 :(
That's exactly the behaviour I have. To solve it I changed my extension; it now uses the WebApplicationBuilder configuration directly:
public static WebApplicationBuilder AddOpenTelemetryCommon(this WebApplicationBuilder appBuilder, string appName)
{
void ConfigureResource(ResourceBuilder r) => r.AddService(
serviceName: appName,
serviceVersion: Assembly.GetExecutingAssembly().GetName().Version?.ToString() ?? "unknown",
serviceInstanceId: Environment.MachineName);
void ConfigureTracing(TracerProviderBuilder configure)
{
var exporter = appBuilder.Configuration.GetValue<string>("OpenTelemetry:Traces:Exporter") ?? string.Empty;
switch (exporter.ToLowerInvariant())
{
case MetricsExporterResolver.Otlp:
// Adding the OtlpExporter creates a GrpcChannel.
// This switch must be set before creating a GrpcChannel/HttpClient when calling an insecure gRPC service.
// See: https://docs.microsoft.com/aspnet/core/grpc/troubleshoot#call-insecure-grpc-services-with-net-core-client
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
configure.AddOtlpExporter(otlpOptions =>
{
otlpOptions.Endpoint = new Uri(appBuilder.Configuration.GetValue<string>("OpenTelemetry:CollectorEndpoint"));
});
break;
default:
break;
}
configure.AddHttpClientInstrumentation();
}
void ConfigureMetrics(MeterProviderBuilder configure)
{
var exporter = new MetricsExporterResolver(appBuilder.Configuration);
configure.ConfigureServices(services =>
{
services.AddSingleton(exporter);
services.AddTransient<IStartupFilter, OpenTelemetryPrometheusStartupFilter>();
});
switch (exporter.Value.ToLowerInvariant())
{
case MetricsExporterResolver.Prometheus:
configure.AddPrometheusExporter();
break;
case MetricsExporterResolver.Otlp:
// Adding the OtlpExporter creates a GrpcChannel.
// This switch must be set before creating a GrpcChannel/HttpClient when calling an insecure gRPC service.
// See: https://docs.microsoft.com/aspnet/core/grpc/troubleshoot#call-insecure-grpc-services-with-net-core-client
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
configure.AddOtlpExporter(otlpOptions =>
{
otlpOptions.Endpoint = new Uri(appBuilder.Configuration.GetValue<string>("OpenTelemetry:CollectorEndpoint"));
});
break;
default:
break;
}
}
appBuilder.Services.AddOpenTelemetry()
.ConfigureResource(ConfigureResource)
.WithTracing(ConfigureTracing)
.WithMetrics(ConfigureMetrics)
.StartWithHost();
return appBuilder;
} |
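A minimal usage sketch for the extension above (assuming a minimal-hosting Program.cs; "my-service" is a placeholder name):
var builder = WebApplication.CreateBuilder(args);

// One call wires up resource, tracing and metrics via the extension defined above.
builder.AddOpenTelemetryCommon("my-service");

var app = builder.Build();
app.Run();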
@alefcarlos Thanks for sharing!
Agree with this. This is a breaking behavioral change, but I think the new features/tighter integrations are worth it. I don't think most users will run into this. The code above extending WebApplicationBuilder is a reasonable approach. You could also get your original code up and running by avoiding the Action<AspNetCoreInstrumentationOptions> delegate inside ConfigureBuilder and registering the options while the IServiceCollection is still available. |
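A condensed sketch of that split (the option value is a placeholder; this mirrors the snippet near the top of the thread):
services.AddOpenTelemetryTracing(builder =>
{
    // Options are registered while the IServiceCollection is still open for additions...
    builder.ConfigureServices(services =>
        services.Configure<AspNetCoreInstrumentationOptions>(options => options.Filter = context => true));

    builder.ConfigureBuilder((provider, nestedBuilder) =>
    {
        // ...so the instrumentation itself is added here without a delegate,
        // which is what avoids the NotSupportedException.
        nestedBuilder.AddAspNetCoreInstrumentation();
    });
});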
@CodeBlanch is there any particular reason why the team decided not to follow proper semantic versioning for these libraries? Ideally, minor version bumps shouldn't contain breaking API changes, only deprecations at most. It would certainly be nice if you followed semantic versioning, as that gives consumers more information about actual breaking changes just by looking at the version numbers. |
@julealgon I hear you. My thoughts on this... Define breaking API change for me 😄
All that being said I'm sure I'm going to get flamed on this issue hard once 1.4 comes out 🤣 |
Hello everyone. It looks like this library has its own way of doing things. Does anyone here read the aspnetcore source code or documentation? Because this library ignores all widely known conventions from the dotnet ecosystem. A summary:
If we forget about the points I've mentioned, we end up with the problems described here. I know that we cannot change the current API in a drastic way for now, but I hope we can do better in version 2.0. How can I help? Together we could build something truly amazing :) |
Hey @maciejw I can't think of anywhere we are starting anything in a |
@maciejw, adding hosted services to the IServiceCollection automatically starts them when the application is run. This is how the hosting extensions work by definition. What this library does is exactly that: it adds a hosted service that listens to the telemetry and then exports it (see the sketch further down in this comment).
Again, it relies on a hosted service. You are just misunderstanding how it works fundamentally.
I don't quite follow this one. Can you clarify?
This should work just fine though. Did you run into any issues trying to move configuration into
My understanding is that this works as well.
What conditional registration are you alluding to? In general I agree, but I'm not sure what you are referring to here in particular.
This looks like an exaggeration... what aspects do you think would need to be testable exactly?
See first points. This relies on hosted services, the same way AspNetCore itself does. Similarly for AspNetCore, you don't "see" the hosted service being registered anywhere but it is done behind the scenes by one of the extensions or builders.
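A minimal sketch of that pattern (the type name is illustrative, not the library's actual implementation):
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

internal sealed class TelemetryHostedService : IHostedService
{
    // The generic host calls StartAsync automatically when the application runs,
    // which is why nothing appears to "start" telemetry explicitly in user code.
    public Task StartAsync(CancellationToken cancellationToken) => Task.CompletedTask;

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}

// Somewhere inside an AddOpenTelemetry-style extension, the registration is simply:
// services.AddHostedService<TelemetryHostedService>();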
🤔 |
Linking to: |
This issue was marked stale due to lack of activity and will be closed in 7 days. Commenting will instruct the bot to automatically remove the label. This bot runs once per day. |
Closed as inactive. Feel free to reopen if this issue is still a concern. |
Bug Report
Packages:
Runtime version: .NET 7 RC 1
Symptom
I use a combination of AddOpenTelemetryTracing and ConfigureBuilder to emulate a missing overload mentioned in #3086 (I need access to IServiceProvider to register tracing). It works fine and I can add all exporters and instrumentations. However, the approach fails if I pass an Action<AspNetCoreInstrumentationOptions> to AddAspNetCoreInstrumentation.
What is the expected behavior?
No exceptions are thrown
What is the actual behavior?
NotSupportedException is thrown. I see it only in Debug; in Release it's swallowed, i.e. there are no logs, no errors, and no externally visible exceptions. OpenTelemetry just disables itself (no traces are exported, the application continues handling requests).
Reproduce
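A minimal sketch of the failing combination described above (the source name is a placeholder; this is reconstructed from the description rather than copied from the original repro):
services.AddOpenTelemetryTracing(builder =>
{
    builder.ConfigureBuilder((provider, nestedBuilder) =>
    {
        nestedBuilder
            .AddSource("name")
            // Passing an options delegate here attempts to register options after the
            // IServiceProvider has already been built, which triggers the NotSupportedException.
            .AddAspNetCoreInstrumentation(options => options.Filter = context => true);
    });
});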
As a workaround I can call .AddAspNetCoreInstrumentation() without the options delegate and register the options configuration separately.
Additional Context