Enhancements in clarity to access-application-cluster/connecting-frontend-backend.md #25927
Conversation
Welcome @murillodigital!
@Aut0R3V @sftim I've made changes to the overall document towards a more accurate description and better clarity, including renaming the manifests and config files to better match their place inside the tutorial. I do have one question: the nginx config in the docs is already baked inside an already published Docker image.
✔️ Deploy preview for kubernetes-io-master-staging ready!
🔨 Explore the source changes: 542e4a0
🔍 Inspect the deploy logs: https://app.netlify.com/sites/kubernetes-io-master-staging/deploys/5ff9d6310e08c10008630bbc
😎 Browse the preview: https://deploy-preview-25927--kubernetes-io-master-staging.netlify.app
```diff
-At this point, you have a backend Deployment running, and you have a
-Service that can route traffic to it.
+At this point, you have a `backend` Deployment running seven replicas of your `hello`
```
Sounds to me like 7 replicas is overkill. Can/shall we reduce it to 3?
Agreed, reduced to 3 (here, as well as in the deployment manifest and sample apply output).
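For context, the replica count under discussion lives in the Deployment manifest's `spec.replicas` field. A minimal sketch of what the relevant part of the backend Deployment looks like (the name, labels, and image below are assumptions for illustration, not values taken from this PR):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend            # assumed name
spec:
  replicas: 3              # reduced from 7, per the review feedback above
  selector:
    matchLabels:
      app: hello           # assumed label
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example.com/hello:1.0   # placeholder image for illustration
```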
```diff
 The frontend connects to the backend worker Pods by using the DNS name
 given to the backend Service. The DNS name is "hello", which is the value
 of the `name` field in the preceding Service configuration file.
```
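The DNS name the frontend uses comes from the Service's `metadata.name`. A minimal sketch of such a Service (the selector and ports are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello          # cluster DNS exposes this Service as "hello"
spec:
  selector:
    app: hello         # assumed label; must match the backend Pods
  ports:
  - protocol: TCP
    port: 80           # port other Pods connect to
    targetPort: 8080   # assumed container port
```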
```diff
+Now that you have your backend running, you can create a frontend that is available
```
"available" may be a little ambiguous?
Suggested change:

```diff
-Now that you have your backend running, you can create a frontend that is available
+Now that you have your backend running, you can create a frontend that is accessible from
```
```diff
 Similar to the backend, the frontend has a Deployment and a Service. An important
 difference to notice between the backend and frontend services, is that the
 configuration for the frontend Service has `type: LoadBalancer`, which means that
+the Service uses the default load balancer of your cloud provider and will be
```
Suggested change:

```diff
-the Service uses the default load balancer of your cloud provider and will be
+the Service uses a load balancer of your cloud provider and will be accessible from outside the cluster.
```
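For reference, the `type: LoadBalancer` setting being discussed sits in the frontend Service spec. A minimal sketch (selector and ports are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer   # asks the cloud provider for an external load balancer
  selector:
    app: frontend      # assumed label; must match the frontend Pods
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```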
Will the new "nginx.conf" override the defaults in the Docker image? If not, this problem may become a blocker.
The Dockerfile does indeed replace the default conf file with our frontend-nginx.conf.
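Roughly, a config-swapping Dockerfile of this kind looks like the following sketch (the base image tag and destination path are assumptions, not the actual file from the docs repo):

```dockerfile
# Sketch: an nginx image whose only change is a replaced config file
FROM nginx
COPY frontend-nginx.conf /etc/nginx/conf.d/default.conf
```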
The problem is, the current image will attempt to proxy traffic to a hostname named
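For context, a frontend nginx config of this kind proxies to the backend Service by its DNS name, so the name baked into the image has to match the Service. A sketch, assuming the backend Service is named `hello` (the actual config inside the published image may differ):

```nginx
# Sketch: proxy all traffic to the backend Service by its DNS name.
# The hostname here must match the backend Service's metadata.name.
upstream hello {
    server hello;
}

server {
    listen 80;

    location / {
        proxy_pass http://hello;
    }
}
```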
In that case, we may need to change the backend service name back to
Either that, or replace the current image or update it with a newer version, which is what I would prefer. My preferred approach is to build and push a new image.
Doable, but we need to find the person for the image upgrade, and we will need to hold this PR until the new image is published.
Patience is a virtue 🙂. How would I escalate/get a hold of somebody with access and cycles to support? Is there something other than delay that I may be overlooking?
Just tried to google the source for this image but failed. From gcr, I can see the image was created in 2016 and never updated since its initial version. Anyway, I'm okay with upgrading the image too, so long as the whole example works.
Is this the source code? https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/tree/master/hello-app
You don't need to change the actual hello-app, only the nginx frontend. The Dockerfile to build it is in the docs repo itself; it's nothing but an nginx image with a swapped config file. The Dockerfile is at
@murillodigital I see. Thanks.
Hi there @tengqm! Just doing a quick drive-by to see if you had been able to make some progress on the image building. Let me know if there's anything I can do to help out. Hope you have a phenomenal weekend!
@murillodigital Unfortunately, I don't have access to the image repo.
Not a problem. Do you think it's reasonably possible to find somebody with access, or would you advise that, although less than ideal from my perspective, we simply revert the service name (all other changes would still apply)?
@murillodigital I don't know who has this access. Maybe keep the service name as
@tengqm - Cool, just renamed the backend service back to
@tengqm - quick ping on this; it should be good to go with my last change.
/label tide/merge-method-squash
/lgtm
LGTM label has been added. Git tree hash: 3af30a92194097a89b8e65c113c1ef8aac8d2aa2
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: tengqm. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
fixes #25818