
Restrict accessing host for user in the initializer job #733

Closed
tennix opened this issue Aug 6, 2019 · 6 comments
Labels
enhancement New feature or request good first issue Good for newcomers status/help-wanted Extra attention is needed

Comments

@tennix
Member

tennix commented Aug 6, 2019

Feature Request

Is your feature request related to a problem? Please describe:

The initializer job always creates users with host `%`, which allows connections to the TiDB cluster from any host IP. This is insecure.

Describe the feature you'd like:

Restrict the host to 127.0.0.1, and add instructions telling users to access the cluster via kubectl port-forward -n <namespace> svc/<cluster-name>-tidb 4000:4000, along with how to change the allowed host IP ranges.
Note: the client source IP seen by TiDB may differ depending on how the cluster is accessed.
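If the default becomes a 127.0.0.1-restricted account, access through a port forward could look roughly like the sketch below. The namespace, service name, account name, and password are placeholders; this is an illustration of the idea, not the actual initializer behavior.

```shell
# Forward local port 4000 to the TiDB service (placeholder names).
kubectl port-forward -n <namespace> svc/<cluster-name>-tidb 4000:4000 &

# Through the forward, the connection reaches the TiDB pod from 127.0.0.1,
# so an account restricted to that host can still log in:
mysql -h 127.0.0.1 -P 4000 -u root -p -e "
  CREATE USER 'app'@'127.0.0.1' IDENTIFIED BY '<password>';
  GRANT ALL PRIVILEGES ON *.* TO 'app'@'127.0.0.1';
"
```

As the note above says, other access paths (load balancer, NodePort) may present a different source IP, so the host part of the account would need to match that path.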

Describe alternatives you've considered:

Using a network policy to restrict access; but this is complicated to set up, and many Kubernetes clusters do not have network policy enforcement enabled.
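For comparison, the NetworkPolicy route might look roughly like this. The pod labels below are assumptions for illustration, not necessarily the labels tidb-operator actually sets, and the policy only takes effect on clusters whose network plugin enforces NetworkPolicy:

```shell
kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-tidb-access
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: tidb   # assumed label for TiDB pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              access-tidb: "true"         # only pods carrying this label may connect
      ports:
        - protocol: TCP
          port: 4000                      # TiDB MySQL port
EOF
```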

Teachability, Documentation, Adoption, Migration Strategy:

This enhances the security of the TiDB cluster by preventing access from unknown hosts.

@tennix tennix added enhancement New feature or request good first issue Good for newcomers status/help-wanted Extra attention is needed labels Aug 6, 2019
@gregwebs
Contributor

gregwebs commented Aug 6, 2019

Should users configure the initializer job itself with the desired root access? So this is just the default if they don't configure anything?

What is the threat model? Currently we advise users to put their password into a K8s Secret that the initializer job can access.

This may create usability issues for non-production setups (new users testing things out). I think it would break some of our tutorials.

@kolbe
Contributor

kolbe commented Aug 6, 2019

If we base tutorials around port forwarding and other mechanisms that give users localhost access to TiDB nodes, we can use accounts restricted to 127.0.0.1 without creating usability problems. For setups that require access through a bastion or load balancer, there is probably a way to identify the range of addresses a user would be connecting from.

@gregwebs
Contributor

gregwebs commented Aug 6, 2019

Kubernetes access restriction is fundamentally based around namespace restriction (other more fine-grained restrictions may be circumventable). We could restrict access to the TiDB cluster namespace (with a host wildcard), particularly if the installation is into a new namespace.

For the bastion we can probably create it with a stable IP address and whitelist that IP. But I don't think it makes sense to open up that access point if kubectl port-forward is going to be available.
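If the bastion gets a stable IP, the whitelist could be expressed as an account restricted to that address. The IP, account name, database, and password below are placeholders:

```shell
mysql -h 127.0.0.1 -P 4000 -u root -p -e "
  -- Hypothetical: only connections originating from the bastion's IP match.
  CREATE USER 'bench'@'10.0.1.5' IDENTIFIED BY '<password>';
  GRANT ALL PRIVILEGES ON sbtest.* TO 'bench'@'10.0.1.5';
"
```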

@tennix
Member Author

tennix commented Aug 7, 2019

We may use the bastion machine to run sysbench for benchmarking; I don't think kubectl port-forward is appropriate for that. So we need to whitelist the bastion IP in the default setup.

@gregwebs
Contributor

gregwebs commented Aug 7, 2019

I don't think we can use the bastion for sysbench. Sysbench needs resources for its task, but the bastion is always on and should be as small as possible. I run sysbench as a K8s job.
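Running sysbench as a K8s Job might be sketched like this; the image, account, and connection parameters are assumptions for illustration:

```shell
kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: sysbench-oltp
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: sysbench
          image: severalnines/sysbench          # assumed public sysbench image
          args:
            - sysbench
            - oltp_read_write
            - --mysql-host=<cluster-name>-tidb  # in-cluster service DNS name
            - --mysql-port=4000
            - --mysql-user=bench
            - --mysql-password=<password>
            - run
EOF
```

Because the Job's pod runs inside the cluster, its source IP is a pod IP, which is another case where a host whitelist (or a pod-selector network policy) has to account for the access path.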

@tennix
Member Author

tennix commented Oct 8, 2019

With #779, it is now possible to restrict the access host.

@tennix tennix closed this as completed Oct 8, 2019