
Environment variable & AppCtx switch to force-disable IPV6 #45893

Closed

wants to merge 1 commit

Conversation

@antonfirsov (Member) commented Dec 10, 2020

This is an alternative fix for #44686.

In some Azure App Service and AWS environments, IPv6 (and therefore dual-stack) connectivity is broken, but there is no way to detect this before an actual connection attempt is made.

Since we don't currently know what further checks would detect broken IPv6 reliably, I'm proposing to expose a switch that lets users and/or infrastructure operators explicitly disable IPv6 in .NET sockets and all layers that depend on them.

This way we can avoid hacky workarounds in HttpClient while still addressing the issue quickly, and at the same time provide a workaround for all users of the Socket(SocketType, ProtocolType) constructor, not just HttpClient.
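To illustrate the shape of the proposal, here is a minimal sketch of how such a switch could be consumed; the switch and variable names are illustrative, not this PR's actual identifiers. An AppContext switch set by the application takes precedence, and an environment variable set by infrastructure is the fallback:

```csharp
using System;

internal static class IPv6SwitchSketch
{
    // Computed once per process. The idea is that Socket(SocketType, ProtocolType)
    // would consult this and create an IPv4 (InterNetwork) socket instead of a
    // dual-stack IPv6 socket when the switch is on.
    internal static readonly bool DisableIPv6 = GetDisableIPv6();

    private static bool GetDisableIPv6()
    {
        // An explicitly set AppContext switch wins (application opt-in)...
        if (AppContext.TryGetSwitch("System.Net.DisableIPv6", out bool value))
        {
            return value;
        }

        // ...otherwise fall back to an environment variable (infrastructure opt-in).
        string env = Environment.GetEnvironmentVariable("DOTNET_SYSTEM_NET_DISABLEIPV6");
        return env == "1" || string.Equals(env, "true", StringComparison.OrdinalIgnoreCase);
    }
}
```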

@ghost commented Dec 10, 2020

Tagging subscribers to this area: @dotnet/ncl
See info in area-owners.md if you want to be subscribed.



@geoffkizer (Contributor) left a comment

LGTM

@karelz (Member) commented Dec 15, 2020

Closing the PR as we do not expect to take it at this moment -- see #44686 (comment) for details.

Of the two PRs proposed to fix the original issue, this one is the top candidate if we find out about more environments half-supporting IPv6 -- a "big red switch" that allows disabling IPv6 in the app. It is something we may also consider for other reasons (there have been occasional asks to disable IPv6 app-wide). We will evaluate that option later, once the main problem is also understood in the AWS environment.

@karelz closed this Dec 15, 2020
@mikaelliljedahl commented

This is really bad news, @karelz.

Could you at least consider adding support for an environment variable, such as "FORCE_OUTGOING_IPV4", that can be set during startup and would override the behavior of all external libraries that rely on this code? This is not the first time I've encountered bugs related to Azure VNets not working, and it seems hard for cloud VNet implementations to get IPv6 routing to work correctly.
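For concreteness, a sketch of what that startup override could look like, assuming the runtime (or the libraries involved) actually observes such a switch, which is exactly what this PR proposes; both names here are hypothetical:

```csharp
// In Program.Main, before any sockets or HttpClient instances are created.
// "FORCE_OUTGOING_IPV4" is the hypothetical variable name suggested above;
// nothing in .NET 5 observes it, so this has an effect only if the runtime
// (or every library on the call path) checks the corresponding switch.
if (Environment.GetEnvironmentVariable("FORCE_OUTGOING_IPV4") == "1")
{
    AppContext.SetSwitch("System.Net.DisableIPv6", true); // hypothetical switch name
}
```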

I was looking forward to updating to .NET 5.0 with this patch, since our whole environment relies on VNets for placing calls to protected resources such as Azure SQL, Blob Storage, and Key Vault. I haven't found a way to apply the suggested workaround to the libraries placing the calls (e.g. Azure.Security.KeyVault.Secrets, Azure.Storage.Blobs, and EF/EF Core for the database). The setup uses standard App Services connected to VNets with service endpoints.
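One path that may be worth trying (a sketch, not verified against this exact setup): the HttpClient-level workaround from #44686 can be plumbed into the HTTP-based Azure SDK clients, because Azure.Core allows replacing the transport via ClientOptions.Transport. The vault URL below is a placeholder, and this would not help EF/EF Core against Azure SQL, which speaks TDS rather than HTTP:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Sockets;
using Azure.Core.Pipeline;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// A SocketsHttpHandler that resolves and connects over IPv4 only,
// sidestepping broken dual-stack (IPv6) connectivity.
var ipv4Handler = new SocketsHttpHandler
{
    ConnectCallback = async (context, cancellationToken) =>
    {
        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
        {
            NoDelay = true
        };
        try
        {
            // Restricting the DnsEndPoint to InterNetwork limits name resolution
            // to IPv4 (A record) addresses.
            await socket.ConnectAsync(
                new DnsEndPoint(context.DnsEndPoint.Host, context.DnsEndPoint.Port, AddressFamily.InterNetwork),
                cancellationToken);
            return new NetworkStream(socket, ownsSocket: true);
        }
        catch
        {
            socket.Dispose();
            throw;
        }
    }
};

// Azure SDK clients accept a custom transport through their ClientOptions.
var options = new SecretClientOptions
{
    Transport = new HttpClientTransport(new HttpClient(ipv4Handler))
};
var secretClient = new SecretClient(
    new Uri("https://myvault.vault.azure.net/"), // placeholder vault URL
    new DefaultAzureCredential(),
    options);
```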

I created a case with Azure support, but I could not get any useful response from them regarding if/when it is going to be fixed. The responses I get are like talking to an AI. The last mail I sent them (after reading this thread) was:

"Hi, where do I find the patch status for this issue?
There is no info about it in the thread. And the .Net team will not create a workaround in the .net runtime so it is up to the Azure team to fix this.
"

The response was:

"
Thanks for your response.

Regarding where to get an information about the update for .net 5 please see the following url https://github.com/Azure/app-service-announcements/issues and then go to .NET 5 availability on App Service."

So I guess we will be stuck on 3.1 forever.

@jraadt commented Jan 9, 2021

I agree with @mikaelliljedahl. I understand this is an Azure/AWS issue, but it's very disheartening to see that .NET could fix this for us but chooses not to. Key areas of our enterprise applications, like our microservices and other service calls, no longer work.

I've created a simple test case with ASP.NET 5 running in Azure App Service and a Node.js server on a VM for it to call. While the workaround of forcing IPv4 works, we are still stuck for any libraries we don't control, like connecting to Elasticsearch using NEST.

@karelz added this to the 6.0.0 milestone Jan 26, 2021
@ghost locked as resolved and limited conversation to collaborators Feb 25, 2021