
inbound: Discover policies from the control plane #1205

Merged 49 commits from ver/policy-discover into main on Aug 19, 2021
Conversation

@olix0r (Member) commented Aug 18, 2021

We've recently introduced support for authorization policies. These policies are configured statically from the proxy's environment. This change adds an optional new mode to support dynamic configuration from the control plane.

The proxy now supports new environment variables, `LINKERD2_PROXY_POLICY_SVC_{ADDR,NAME}`, which may be set with the address and identity of the policy controller. When these are set, other static inbound port configurations (requiring identity, marking ports as opaque, etc.) are ignored in favor of the results returned by the API. Instead, the proxy uses the `LINKERD2_PROXY_INBOUND_PORTS` environment variable to discover the list of ports configured in the pod's spec, and it discovers the policies for each of these ports before admitting inbound connections. When a connection is received for a port not in this list, the `LINKERD2_PROXY_INBOUND_DEFAULT_POLICY` configuration is used to determine whether the connection may be permitted or whether it should be refused.
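
As a rough illustration, a minimal sketch of how the proxy might select its policy source from these variables (only the environment variable names come from this change; the types and helper below are assumptions, not the proxy's actual code):

```rust
use std::env;

/// Hypothetical policy-source selection; illustrative only.
enum PolicySource {
    /// Discover per-port policies from the policy controller.
    Discover { addr: String, name: String, ports: Vec<u16> },
    /// Fall back to the static inbound configuration from the environment.
    Static,
}

fn policy_source_from_env() -> PolicySource {
    let addr = env::var("LINKERD2_PROXY_POLICY_SVC_ADDR").ok();
    let name = env::var("LINKERD2_PROXY_POLICY_SVC_NAME").ok();
    match (addr, name) {
        (Some(addr), Some(name)) => {
            // Ports documented in the pod's spec; policies for each are
            // discovered before inbound connections are admitted.
            let ports = env::var("LINKERD2_PROXY_INBOUND_PORTS")
                .unwrap_or_default()
                .split(',')
                .filter_map(|p| p.trim().parse().ok())
                .collect();
            PolicySource::Discover { addr, name, ports }
        }
        // Without a policy controller address, the static per-port settings
        // (require-identity, opaque ports, etc.) remain in effect.
        _ => PolicySource::Static,
    }
}
```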

This change modifies the set of default policies to support only `deny`, `all-authenticated`, and `all-unauthenticated` (removing `all-mtls-unauthenticated`). Instead, a `LINKERD2_PROXY_INBOUND_PORTS_REQUIRE_TLS` configuration is introduced to enable TLS requirements on individual ports. This static configuration should only be necessary for the identity controller, where we need an unauthenticated default policy to permit Kubernetes health-check probes, and we can use this new configuration to enforce TLS on the gRPC API port.
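
For illustration, the reduced set of default policies could be modeled roughly like this (the enum and parsing below are a sketch, not the proxy's actual types):

```rust
use std::str::FromStr;

/// Hypothetical representation of the default inbound policies named above.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum DefaultPolicy {
    Deny,
    AllAuthenticated,
    AllUnauthenticated,
}

impl FromStr for DefaultPolicy {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "deny" => Ok(Self::Deny),
            "all-authenticated" => Ok(Self::AllAuthenticated),
            "all-unauthenticated" => Ok(Self::AllUnauthenticated),
            // `all-mtls-unauthenticated` is no longer accepted.
            other => Err(format!("unsupported default policy: {}", other)),
        }
    }
}
```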

Additional changes in this PR's commits:

* Make the inbound accept stack generic over an implementation of `CheckPolicy`, so the concrete `PortPolicies` type can change without touching the accept stack (see the sketch below).
* Support policies on opaque transported connections.
* Move the store into its own module to hide/isolate implementation details.
* Move discovery/initialization into `Store::discover`.
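
A rough sketch of the `CheckPolicy` abstraction described above (the trait shape and the `FixedPorts` stand-in are assumptions, not the crate's exact API):

```rust
use std::collections::HashSet;
use std::net::SocketAddr;

/// Sketch of a policy-check abstraction: the accept stack only needs to ask
/// "is traffic to this local address allowed?" without knowing whether the
/// answer comes from static configuration or from discovery.
trait CheckPolicy {
    type Allowed;
    type Error;

    fn check_policy(&self, dst: SocketAddr) -> Result<Self::Allowed, Self::Error>;
}

/// A static implementation backed by a fixed set of permitted ports,
/// standing in for a concrete type like `PortPolicies`.
struct FixedPorts(HashSet<u16>);

impl CheckPolicy for FixedPorts {
    type Allowed = ();
    type Error = String;

    fn check_policy(&self, dst: SocketAddr) -> Result<(), String> {
        if self.0.contains(&dst.port()) {
            Ok(())
        } else {
            Err(format!("connections to port {} are denied", dst.port()))
        }
    }
}
```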
Review comments (since resolved) were left on: linkerd/app/inbound/src/detect.rs, linkerd/app/inbound/src/policy/discover.rs, linkerd/app/inbound/src/policy/mod.rs, linkerd/app/inbound/src/policy/store.rs, linkerd/app/inbound/src/policy/tests.rs (three threads), and linkerd/app/src/env.rs (three threads).
@olix0r marked this pull request as ready for review on August 19, 2021 00:00.
@olix0r (Member Author) commented Aug 19, 2021
I've tested this end-to-end by manually configuring the proxy for discovery. There's more we need to do testing-wise, but for now I think we should try to get this merged and then focus on integration testing in the linkerd2 repo.

```diff
@@ -75,7 +75,8 @@ pub async fn serve<M, S, I, A>(
                 }
             }
         }
-    };
+    }
+    .in_current_span();
```
@olix0r (Member Author):

I'm surprised this is necessary but without it we don't see the inbound/outbound span on log messages from this future

@hawkw (Member):

oh, huh...it really seems like this shouldn't be necessary, since the future isn't being spawned, so it should all be in the current span of whatever context serve is awaited in. weird!
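
For context, a minimal illustration of what `in_current_span()` (from tracing's `Instrument` extension trait) does; this is a standalone sketch, not the proxy's code:

```rust
use tracing::{info_span, Instrument};

// `in_current_span()` wraps a future with the span that is current when the
// combinator is applied, so events emitted while the future runs are
// recorded inside that span even if it is awaited from another context.
fn serve() -> impl std::future::Future<Output = ()> {
    async {
        tracing::info!("accepted connection"); // carries the "inbound" span
    }
    .in_current_span()
}

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt::init();

    let fut = {
        let span = info_span!("inbound");
        let _enter = span.enter();
        serve() // "inbound" is current here, so the future captures it
    };

    // The span guard is gone by the time we await, but the event above is
    // still recorded within `inbound` because the future was instrumented
    // at construction time.
    fut.await;
}
```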

Comment on lines +198 to +205
```rust
Protocol::Http1 => {
    // HTTP/1 services may actually be transported over HTTP/2
    // connections between proxies, so we have to do detection.
    return Ok(svc::Either::B(Detect {
        timeout: detect_timeout,
        tls,
    }));
}
```
@olix0r (Member Author):

This was an oversight before: we need to do detection in this case to support protocol upgrade (found during testing)

@olix0r (Member Author) commented Aug 19, 2021

The release build is right at the 14-15 minute mark (even with boxing). Not much we can do, I think, but wait for the fixes for rust-lang/rust#84873 to hit stable. Going to bump the timeout in the meantime

@kleimkuhler (Contributor) left a comment:

Code-wise looks good. Going to walk through the logic a little more with the changes actually in use in an installation, but nothing major standing out to me right now.

@hawkw (Member) left a comment:

noticed one thing that seemed like a potential concern, plus a couple small nits (not blockers). once that question's addressed, lgtm!

```rust
    name: Some(name),
    protocol,
} => {
    let allow = policies.check_policy(client.local_addr)?;
```
@hawkw (Member):

why does the case where the transport header includes a name and a port check the client's local addr, while the case where there's no name checks the client's local addr IP plus the port from the transport header?

is this an oversight, or is it on purpose? and if it's on purpose, can we have a comment here explaining why it's different?

@olix0r (Member Author):

when it includes a name/port, it's a gateway configuration targeting some other service; so the policy check determines whether the client can contact the gateway itself. will add a comment
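
A rough sketch of the distinction being discussed (the types and field names below are stand-ins, not the proxy's actual code):

```rust
use std::net::SocketAddr;

// Hypothetical stand-ins, just to illustrate the two cases.
struct TransportHeader { name: Option<String>, port: u16 }
struct Client { local_addr: SocketAddr }
struct Policies;

impl Policies {
    fn check_policy(&self, _target: SocketAddr) -> Result<bool, String> {
        Ok(true)
    }
}

fn authorize(header: &TransportHeader, client: &Client, policies: &Policies) -> Result<bool, String> {
    match &header.name {
        // A named target means this is a gateway connection destined for
        // another service, so the check asks whether the client may reach
        // the gateway itself (its local address as-is).
        Some(_name) => policies.check_policy(client.local_addr),
        // No name: the connection targets a local server port advertised in
        // the transport header, so check the policy for that port.
        None => {
            let target = SocketAddr::new(client.local_addr.ip(), header.port);
            policies.check_policy(target)
        }
    }
}
```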

@hawkw (Member) left a comment:

thanks for clarifying the gateway behavior, lgtm!

@olix0r olix0r merged commit 822cd15 into main Aug 19, 2021
@olix0r olix0r deleted the ver/policy-discover branch August 19, 2021 19:39
olix0r added a commit to linkerd/linkerd2 that referenced this pull request Aug 19, 2021
This release adds support for dynamic inbound policies. The proxy now
discovers policies from Linkerd's policy-controller API for all
application ports documented in a pod's spec. Rejected connections are
logged. Policies are not yet reflected in the proxy's metrics.

These policies also allow the proxy to skip protocol detection when a
server is explicitly annotated as HTTP/2, or when the server is
documented to serve opaque traffic or application-terminated TLS.

---

* inbound: Use policy protocol configurations (linkerd/linkerd2-proxy#1203)
* build(deps): bump tokio from 1.9.0 to 1.10.0 (linkerd/linkerd2-proxy#1204)
* build(deps): bump tracing-subscriber from 0.2.19 to 0.2.20 (linkerd/linkerd2-proxy#1207)
* inbound: Discover policies from the control plane (linkerd/linkerd2-proxy#1205)
* build(deps): bump httparse from 1.4.1 to 1.5.1 (linkerd/linkerd2-proxy#1208)