Add k8s master code new #15716
Conversation
The k8s master image will run on a PilotFish host as a VM, so there is no need to worry about disk space. We will not include this k8s master feature in any released SONiC image. More details:
Our k8s master image needs this docker image for now; we use these Geneva tools to send metrics and logs in our k8s master cluster. We will not include this docker image in any released SONiC images, so the risk should not be that serious. I heard @fengpan is going to add a Geneva container; after SONiC supports Geneva, I can remove this from the k8s master feature and use the original SONiC Geneva feature.
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT docker $SONIC_NATIVE_DOCKERD_FOR_DOCKERFS pull linuxgeneva-microsoft.azurecr.io/distroless/genevamdsd:${MASTER_MDS_VERSION}
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT docker $SONIC_NATIVE_DOCKERD_FOR_DOCKERFS tag linuxgeneva-microsoft.azurecr.io/distroless/genevamdsd:${MASTER_MDS_VERSION} linuxgeneva-microsoft.azurecr.io/distroless/genevamdsd:latest
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT docker $SONIC_NATIVE_DOCKERD_FOR_DOCKERFS pull linuxgeneva-microsoft.azurecr.io/distroless/genevafluentd_td-agent:${MASTER_FLUENTD_VERSION}
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT docker $SONIC_NATIVE_DOCKERD_FOR_DOCKERFS tag linuxgeneva-microsoft.azurecr.io/distroless/genevafluentd_td-agent:${MASTER_FLUENTD_VERSION} linuxgeneva-microsoft.azurecr.io/distroless/genevafluentd_td-agent:latest
echo "kubernetes master docker images pull complete"
# Install python package for mdm service usage
Since you will not install the mdm debian package, are psutil and statsd not needed? #Closed
No, I install the mdm docker image to replace the mdm debian package. I still need to start the mdm service via docker to send metrics, so psutil and statsd are still needed. I also changed the comment to "# Install python package for mdm metric collection service usage".
mkdir -p $mount_point
mount $disk $mount_point
# check whether it is the first time to boot
first_boot_flag_file="/from_host/first_boot_flag"
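The first-boot guard this flag file enables can be sketched as follows. The flag path matches the diff; the function wrapper and the setup placeholder are assumptions for illustration, and the demo runs against a temporary file instead of /from_host:

```shell
# Sketch of the first-boot check: run master setup only when the flag
# file on the mounted host disk does not exist yet, then create it so
# later boots skip the setup. Wrapped in a function so the flag path
# is injectable; the real script uses /from_host/first_boot_flag.
run_first_boot_check() {
    flag="$1"
    if [ ! -f "$flag" ]; then
        echo "first boot"
        # hypothetical placeholder: invoke the k8s master setup here
        touch "$flag"
    else
        echo "not first boot"
    fi
}

# Demo against a temporary flag file
demo_flag="$(mktemp -d)/first_boot_flag"
run_first_boot_check "$demo_flag"   # prints: first boot
run_first_boot_check "$demo_flag"   # prints: not first boot
```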
Just want to check whether it's the first boot here; if it is, we need to call the master script. If there were already an equivalent feature, I could reuse it, but after checking I don't think we can leverage the existing flag. Currently we run "sudo sed -i '/^exit 0/i\bash /usr/sbin/kubernetes_master_entrance.sh' $FILESYSTEM_ROOT/etc/rc.local" when the k8s master feature is included. I see a "firsttime" flag in the rc.local file as well, but it is for grub stuff. It is not good to leverage it unless we change the rc.local file directly, and I don't think the k8s master feature justifies changing rc.local directly.
I mean it's not good to use the existing flag. The reason is that we call the k8s master script in the last line of rc.local, but we don't change the rc.local file directly; we insert the call with a sed command when include_kubernetes_master=y. If we wanted to hook the k8s master script into the existing firsttime flag logic, we would have to modify rc.local directly, since it would no longer be easy to add those lines via sed. If we agree to change rc.local directly, I can try to use the existing flag.
I see the point of first_boot_flag, kubernetes_master_entrance.sh, and sed -i '/^exit 0/i\bash /usr/sbin/kubernetes_master_entrance.sh' $FILESYSTEM_ROOT/etc/rc.local.
They are introduced for a deployment workflow, but this workflow is fragile. Suggest:
- Create and start the Kubernetes master VM
- Configure the VM IP by hyperv-daemons, or by console access
- SSH into the VM, so you have full control
- Copy some scripts into the VM
- Enable/start some services.
I created #15849, and please consider it in future PR.
# mount disk from host
mount_point="/from_host"
disk="/dev/sdb1" |
We have a contract with the VMM service: when VMM creates the master VM, it attaches a disk to the VM at a fixed location with a fixed name (/dev/sdb1), so I hard-coded it here.
You are right; actually we can't ensure the device name is sda or sdb, so I updated this part of the code. We can specify the disk's location number when attaching the disk to the VM's SCSI controller. Inside the VM, we can find the disk by its disk-path file, because the path file name includes the location number (lun-x). Thus we can always mount the correct disk.
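The lookup described here can be sketched as below. The by-path naming with a lun-x component follows the comment above; the exact link name and LUN number are assumptions, and the demo uses a fake by-path directory instead of real devices:

```shell
# Sketch: resolve the data disk through its by-path symlink instead of
# hard-coding /dev/sdb1. The kernel's sda/sdb naming is not stable,
# but the by-path name encodes the LUN chosen when the disk was
# attached to the VM's SCSI controller.
find_disk_by_lun() {
    by_path_dir="$1"   # normally /dev/disk/by-path
    lun="$2"
    for link in "$by_path_dir"/*-lun-"$lun"-part1; do
        [ -e "$link" ] || continue
        readlink -f "$link"   # print the resolved device node
        return 0
    done
    return 1
}

# Demo with a fake by-path layout (no real devices needed)
tmp=$(mktemp -d)
mkdir "$tmp/by-path"
touch "$tmp/sdb1"
ln -s ../sdb1 "$tmp/by-path/acpi-VMBUS-scsi-0:0:0:1-lun-1-part1"
find_disk_by_lun "$tmp/by-path" 1   # prints the resolved path to sdb1
```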
@@ -3,6 +3,13 @@ steps:
- script: |
  set -ex
  tar_branch=origin/$(System.PullRequest.TargetBranch)
  # Check if k8s master entrance script is changed
  k8s_master_changed=$(git diff $tar_branch..HEAD --name-only | grep files/image_config/kubernetes/kubernetes_master_entrance.sh)
Use grep -F to fix
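Without -F, grep treats the pattern as a regular expression, so the dot in .sh matches any character and an unrelated changed file could trigger the k8s build flag. A small standalone demo (the second file name is made up for illustration; changed_files stands in for the git diff pipeline):

```shell
# Stand-in for: git diff $tar_branch..HEAD --name-only
changed_files() {
    printf '%s\n' \
        "files/image_config/kubernetes/kubernetes_master_entrance.sh" \
        "docs/kubernetes_master_entrance_sh.md"
}

# As a regex, "." matches any character, so both paths match.
changed_files | grep -c  "kubernetes_master_entrance.sh"   # prints 2
# As a fixed string (-F), only the literal path matches.
changed_files | grep -cF "kubernetes_master_entrance.sh"   # prints 1
```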
sudo cp files/image_config/kubernetes/kubernetes_master_entrance.sh $FILESYSTEM_ROOT/usr/sbin/
sudo sed -i '/^exit 0/i\bash /usr/sbin/kubernetes_master_entrance.sh' $FILESYSTEM_ROOT/etc/rc.local
sudo chmod +x $FILESYSTEM_ROOT/usr/sbin/kubernetes_master_entrance.sh
Okay, I think you mean /usr/bin; let me move the file to /usr/bin.
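The sed insertion discussed in this thread can be exercised standalone against a scratch rc.local. GNU sed's i command places the new line before the matched exit 0, so the entrance script runs just before rc.local returns:

```shell
# Sketch of how the build hooks the entrance script into rc.local:
# insert the bash call immediately before the "exit 0" line.
rc_local="$(mktemp)"
printf '%s\n' '#!/bin/sh' 'echo boot tasks' 'exit 0' > "$rc_local"

sed -i '/^exit 0/i\bash /usr/sbin/kubernetes_master_entrance.sh' "$rc_local"

cat "$rc_local"
# #!/bin/sh
# echo boot tasks
# bash /usr/sbin/kubernetes_master_entrance.sh
# exit 0
```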
Why I did it
Currently, the k8s master image is generated from a separate branch which we created ourselves, not a release branch. We need to commit this k8s master related code to the master branch for a better way to build the k8s master image.
Work item tracking
Microsoft ADO (number only): 19998138
How I did it
- Install k8s dashboard docker images.
- Install Geneva mds, mdsd, and fluentd docker images and tag them as latest; tagging latest helps always create containers with the latest version.
- Install azure-storage-blob and azure-identity; these help do etcd backup and restore.
- Install kubernetes python client packages; these help read worker and container state, so we can send these metrics to Geneva.
- Remove the mdm debian package; it will be replaced with the mdm docker image.
- Add the k8s master entrance script; this script is called by the rc-local service at system startup. We have some master systemd services in the compute-move repo; when the VMM service creates a master VM, VMM copies all master service files into the VM, and the entrance script sets up all services according to those service files.
- When the entrance script content changes, the PR build will set include_kubernetes_master=y to help validate k8s master related code changes. The default value of include_kubernetes_master should always be n for the public master branch; we will generate the master image from the internal master branch.
How to verify it
Build with INCLUDE_KUBERNETES_MASTER = y
@lixiaoyuner, hi, I am trying to test this code via a custom build with "INCLUDE_KUBERNETES=y && INCLUDE_KUBERNETES_MASTER==y" in the rules file, but got a build error in the PR. Can you please help?
Some context
This PR is the same as the previous PR; the reason I closed the previous PR is that new commits would not appear in it, and I don't know why. I opened this new PR to fix that issue. There are two comments in the previous PR; I have pasted those two comments and my replies into this PR's comments.
Figured it out: I got a message from GitHub that this was caused by an ongoing incident. The team is currently working on resolving it; incident status page: https://www.githubstatus.com/incidents/pr0ptqcw6q8d
Which release branch to backport (provide reason below if selected)
Tested branch (Please provide the tested image version)
Description for the changelog
Link to config_db schema for YANG module changes
A picture of a cute animal (not mandatory but encouraged)