inbound: Discover policies from the control plane #1205
Conversation
This change makes the inbound accept stack generic over an implementation of `CheckPolicy`. This lets us start modifying the concrete PortPolicies type without changing the accept stack.
Supports policies on opaque transported connections.
Move the store into its own module to help hide/isolate implementation details. Move discover/initialization into Store::discover.
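To make the "generic over `CheckPolicy`" idea concrete, here is a minimal sketch of the shape such a trait might take. The trait name comes from the change itself, but the method signature, error type, and `AllowPolicy` fields below are illustrative assumptions rather than the proxy's actual API:

```rust
use std::net::SocketAddr;

// Illustrative types only; the real proxy defines its own policy and error types.
#[derive(Clone, Debug)]
pub struct AllowPolicy {
    pub authenticated_only: bool,
    pub opaque: bool,
}

#[derive(Debug)]
pub struct DeniedPort(pub u16);

/// A sketch of a `CheckPolicy`-style trait: the accept stack depends only on
/// this trait, so the concrete `PortPolicies` store can change (for example,
/// from static env config to control-plane discovery) without touching the
/// accept stack itself.
pub trait CheckPolicy {
    fn check_policy(&self, dst: SocketAddr) -> Result<AllowPolicy, DeniedPort>;
}

/// The accept stack is generic over any `CheckPolicy` implementation.
pub struct Accept<P> {
    policies: P,
}

impl<P: CheckPolicy> Accept<P> {
    pub fn accept(&self, dst: SocketAddr) -> Result<AllowPolicy, DeniedPort> {
        // Look up the policy for the connection's original destination
        // before building the rest of the server stack.
        self.policies.check_policy(dst)
    }
}
```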
We've recently introduced support for authorization policies. These policies are configured statically from the proxy's environment. This change adds an optional new mode to support dynamic configuration from the control plane.

The proxy now supports new environment variables, `LINKERD2_PROXY_POLICY_SVC_{ADDR,NAME}`, that may be set with the address and identity of the policy controller. When this is set, other static inbound port configurations (requiring identity, marking ports as opaque, etc.) are ignored in favor of the results returned by the API. Instead, the proxy uses the `LINKERD2_PROXY_INBOUND_PORTS` environment variable to discover the list of ports configured in the pod's spec, and it discovers the policies for each of these ports before admitting inbound connections. When a connection is received for a port not in this list, the `LINKERD2_PROXY_INBOUND_DEFAULT_POLICY` configuration is used to determine whether the connection may be permitted or whether it should be refused.

This change modifies the set of default policies to support only `deny`, `all-authenticated`, and `all-unauthenticated` (removing `all-mtls-unauthenticated`). Instead, a `LINKERD2_PROXY_INBOUND_PORTS_REQUIRE_TLS` configuration is introduced to enable TLS requirements for individual pods. This static configuration should only be necessary for the identity controller, in which case we need to use an unauthenticated default policy to permit Kubernetes health-checking probes; we can then use this new configuration to enforce a default policy on the gRPC API port.
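The port/default-policy fallback described above can be summarized with a small sketch. The environment variable names come from this change's description, but the parsing and the `DefaultPolicy` enum below are illustrative assumptions, not the proxy's actual configuration code:

```rust
use std::collections::HashSet;
use std::env;

// Illustrative enum reflecting the default policies described above.
#[derive(Clone, Copy, Debug)]
enum DefaultPolicy {
    Deny,
    AllAuthenticated,
    AllUnauthenticated,
}

fn parse_default(s: &str) -> Option<DefaultPolicy> {
    match s {
        "deny" => Some(DefaultPolicy::Deny),
        "all-authenticated" => Some(DefaultPolicy::AllAuthenticated),
        "all-unauthenticated" => Some(DefaultPolicy::AllUnauthenticated),
        _ => None,
    }
}

fn main() {
    // Ports listed in the pod spec; policies for these are discovered from
    // the policy controller before inbound connections are admitted.
    let ports: HashSet<u16> = env::var("LINKERD2_PROXY_INBOUND_PORTS")
        .unwrap_or_default()
        .split(',')
        .filter_map(|p| p.trim().parse().ok())
        .collect();

    // Connections to ports *not* in the list fall back to the default policy.
    let default = env::var("LINKERD2_PROXY_INBOUND_DEFAULT_POLICY")
        .ok()
        .as_deref()
        .and_then(parse_default)
        .unwrap_or(DefaultPolicy::Deny);

    let incoming_port = 9999u16;
    if ports.contains(&incoming_port) {
        println!("use the policy discovered for port {incoming_port}");
    } else {
        println!("port {incoming_port} is undocumented; apply default {default:?}");
    }
}
```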
I've tested this end-to-end by manually configuring the proxy for discovery. There's more we need to do testing-wise, but for now I think we should try to get this merged and then focus on integration testing in the linkerd2 repo.
```diff
@@ -75,7 +75,8 @@ pub async fn serve<M, S, I, A>(
             }
         }
     }
-    };
+    }
+    .in_current_span();
```
I'm surprised this is necessary but without it we don't see the inbound/outbound span on log messages from this future
oh, huh...it really seems like this shouldn't be necessary, since the future isn't being spawned, so it should all be in the current span of whatever context `serve` is awaited in. weird!
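For context, `in_current_span()` comes from the `tracing` crate's `Instrument` extension trait: it captures whatever span is current when it is called and enters that span on every poll of the future. A minimal, standalone illustration (not the proxy's code; the `inbound` span name here is just for the example):

```rust
use tracing::{info, info_span, Instrument};

async fn accept_loop() {
    // Depending on how the future is driven, events emitted here may not
    // carry the caller's span unless the future is instrumented.
    info!("accepted connection");
}

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt::init();

    let span = info_span!("inbound");

    // `in_current_span()` captures whatever span is current when it is
    // called; here we enter `inbound` just long enough to build the future.
    let fut = {
        let _enter = span.enter();
        accept_loop().in_current_span()
    };

    // Every poll of `fut` (and every event inside it) now records within
    // the `inbound` span.
    fut.await;
}
```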
```rust
Protocol::Http1 => {
    // HTTP/1 services may actually be transported over HTTP/2
    // connections between proxies, so we have to do detection.
    return Ok(svc::Either::B(Detect {
        timeout: detect_timeout,
        tls,
    }));
}
```
This was an oversight before: we need to do detection in this case to support protocol upgrade (found during testing)
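To illustrate why the HTTP/1 arm still falls through to detection, here is a simplified, self-contained sketch (the enum and function are hypothetical, not the proxy's actual types): an explicit HTTP/2 or opaque annotation lets detection be skipped, but an HTTP/1 server may still receive HTTP/2 from an upstream proxy during protocol upgrade, so its connections must still be detected.

```rust
// Hypothetical types for illustration only; the proxy's real protocol
// configuration and server stack are more involved.
#[derive(Clone, Copy, Debug)]
enum ServerProtocol {
    Detect,
    Http1,
    Http2,
    Opaque,
}

#[derive(Clone, Copy, Debug, PartialEq)]
enum Decision {
    /// The protocol is known from configuration; skip detection entirely.
    Known(&'static str),
    /// Run protocol detection before building the HTTP stack.
    Detect,
}

fn route(protocol: ServerProtocol) -> Decision {
    match protocol {
        // Explicit HTTP/2 or opaque annotations let us skip detection.
        ServerProtocol::Http2 => Decision::Known("h2"),
        ServerProtocol::Opaque => Decision::Known("opaque"),
        // HTTP/1 servers may still receive HTTP/2 from an upstream proxy
        // (protocol upgrade), so detection is still required.
        ServerProtocol::Http1 | ServerProtocol::Detect => Decision::Detect,
    }
}

fn main() {
    assert_eq!(route(ServerProtocol::Http1), Decision::Detect);
    assert_eq!(route(ServerProtocol::Http2), Decision::Known("h2"));
}
```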
The release build is right at the 14-15 minute mark (even with boxing). Not much we can do, I think, but wait for the fixes for rust-lang/rust#84873 to hit stable. Going to bump the timeout in the meantime.
Code-wise looks good. Going to walk through the logic a little more with the changes actually in use in an installation, but nothing major standing out to me right now.
noticed one thing that seemed like a potential concern, plus a couple small nits (not blockers). once that question's addressed, lgtm!
```rust
name: Some(name),
protocol,
} => {
    let allow = policies.check_policy(client.local_addr)?;
```
why does the case where the transport header includes a name and a port check the client's local addr, while the case where there's no name checks the client's local addr IP and the port from the transport header?
is this an oversight, or is it on purpose? and if it's on purpose, can we have a comment here explaining why it's different?
when it includes a name/port, it's a gateway configuration targeting some other service; so the policy check determines whether the client can contact the gateway itself. will add a comment
thanks for clarifying the gateway behavior, lgtm!
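To make the gateway distinction above concrete, here is a hedged sketch (hypothetical types, not the proxy's actual transport-header handling): when the header names another service, the connection is gateway traffic, so the policy check applies to the address the client connected to on the gateway itself; when there is no name, the header's port selects a local server, so the check uses the client's local IP together with that port.

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

// Hypothetical transport-header shape for illustration only.
struct TransportHeader {
    port: u16,
    name: Option<String>,
}

fn policy_target(client_local: SocketAddr, hdr: &TransportHeader) -> SocketAddr {
    match &hdr.name {
        // A named target means this is gateway traffic destined for another
        // service; authorization applies to the gateway itself, so we check
        // the address the client actually connected to.
        Some(_name) => client_local,
        // No name means the header's port selects a server on this pod, so we
        // check the policy for that port on the local IP.
        None => SocketAddr::new(client_local.ip(), hdr.port),
    }
}

fn main() {
    let local = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(10, 0, 0, 1)), 4143);
    let gateway = TransportHeader { port: 8080, name: Some("web.ns.svc".into()) };
    let direct = TransportHeader { port: 8080, name: None };
    assert_eq!(policy_target(local, &gateway).port(), 4143);
    assert_eq!(policy_target(local, &direct).port(), 8080);
}
```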
This release adds support for dynamic inbound policies. The proxy now discovers policies from Linkerd's policy-controller API for all application ports documented in a pod spec. Rejected connections are logged. Policies are not yet reflected in the proxy's metrics. These policies also allow the proxy to skip protocol detection when a server is explicitly annotated as HTTP/2 or when the server is documented to be opaque or application-terminated TLS.

---

* inbound: Use policy protocol configurations (linkerd/linkerd2-proxy#1203)
* build(deps): bump tokio from 1.9.0 to 1.10.0 (linkerd/linkerd2-proxy#1204)
* build(deps): bump tracing-subscriber from 0.2.19 to 0.2.20 (linkerd/linkerd2-proxy#1207)
* inbound: Discover policies from the control plane (linkerd/linkerd2-proxy#1205)
* build(deps): bump httparse from 1.4.1 to 1.5.1 (linkerd/linkerd2-proxy#1208)