do not create a new cert each time akash-provider gets restarted #9
Already solved in new fork.
I think we should create the state that's needed before running helm and then have helm package it up into secrets/config, where we can mount it into the pod.
This can all be done locally, before installing via helm. Local storage works well, but it depends on node affinity, which can be cumbersome and/or problematic.
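The "generate locally, then have helm mount it" idea boils down to creating a Secret before `helm install` and mounting it read-only into the pod. A minimal sketch of the mount, assuming hypothetical names (`akash-provider-cert`, the `/root/.akash` mount path) that are not taken from the actual chart:

```yaml
# Illustrative fragment of a pod spec; secret name and mount path
# are assumptions, not the akash-provider chart's real values.
volumes:
  - name: provider-cert
    secret:
      secretName: akash-provider-cert   # created locally before `helm install`
containers:
  - name: provider
    volumeMounts:
      - name: provider-cert
        mountPath: /root/.akash
        readOnly: true
```

The Secret itself could be created with something like `kubectl create secret generic akash-provider-cert --from-file=<akash1...>.pem` after generating the cert locally.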
For instance, this is how we used to include an init script:
Since we can query the blockchain before invoking the
That's already done locally and injected upon helm install.
The already existing mechanism can be used:
Same here; just check the diff between the new values in
More ideas: we can add an Akash RPC time check, so the provider won't run until the Akash RPC node is synced.
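The sync check amounts to comparing the RPC node's latest block time against local time and refusing to start while the drift is too large. A minimal sketch in plain shell, assuming the block time has already been fetched and converted to epoch seconds (the real change uses `bc` to handle fractional-second timestamps; the function name is an assumption):

```shell
#!/bin/sh
# Sketch of the "don't start until the Akash RPC node is synced" check.
# rpc_time would come from the RPC node's status (latest block time),
# already converted to epoch seconds; here both times are parameters.

MAX_DRIFT=30  # seconds the RPC node may be behind/ahead

rpc_synced() {
  rpc_time=$1   # latest block time, epoch seconds
  now=$2        # local time, epoch seconds
  drift=$((now - rpc_time))
  if [ "$drift" -lt 0 ]; then
    drift=$((-drift))
  fi
  [ "$drift" -le "$MAX_DRIFT" ]
}
```

A boot script could then loop, e.g. `until rpc_synced "$block_time" "$(date +%s)"; do sleep 5; done`, before starting the provider.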
Akash allows multiple valid certs unless they expire or are manually revoked, so this is going to work since we don't revoke old certs. I do not really see any issues with using local volumes, nor the need to deal with node affinity. @boz let me know WDYT.
I'd like to at some point default persistent storage in this chart to enabled. This is the very simple helm-chart persistent storage which uses local-storage (not Ceph). With persistent storage and a StatefulSet we know whether the chart is installing for the first time, because we have a disk we can query for the config. This config could also be compared against the chain, if it exists, and update commands run only when they are needed. We are constantly conscious about not extending the initial install documentation at all. Enabling persistent storage in this chart means two additional steps:
1. finding a node name to bind the pod to, and
2. creating a directory on that node to hold the data.
We'll get feedback on whether this over-complicates the instructions, and if not I think that's the preferred way to go.
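The two extra steps map directly onto a local-storage PersistentVolume: the chosen node name goes into `nodeAffinity`, and the directory created on that node goes into `local.path`. A sketch, assuming illustrative names and sizes rather than the chart's real values:

```yaml
# Illustrative local-storage PersistentVolume; the node name, path,
# capacity, and PV name are assumptions, not the chart's real values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: akash-provider-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/akash-provider        # step 2: directory created on the node
  nodeAffinity:                       # step 1: pin to the chosen node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1            # the node name you picked
```

This is the node-affinity coupling mentioned above: the pod can only ever schedule onto the node that holds the directory.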
Changes:
- bump akash-provider chart to 0.153.0
- install bc
- check the Akash RPC node is not 30 seconds behind/ahead before continuing
- do not append to provider.yaml but rather create it from scratch
- figure out the provider address in case the user passes `--from=<key_name>` instead of a `--from=<akash1...>` address
- check provider existence on the blockchain before attempting to create a new one (`akash tx provider create provider.yaml ...`)
- check whether provider settings (host URI, attributes, ...) have changed before broadcasting the new ones on the blockchain (`akash tx provider update provider.yaml ...`)
- before generating and broadcasting a new provider certificate:
  - check the last provider certificate found on the blockchain is valid
  - check whether the serial number of the last provider certificate found on the blockchain matches the local one

Issues addressed:
- fixes akash-network#35
- fixes akash-network#9
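One of the steps above, resolving `--from` when it holds a key name rather than an `akash1...` address, can be sketched in shell. The bech32-prefix test is the real logic; the key lookup is stubbed here so the sketch is self-contained (the actual script would call something like `akash keys show "$name" -a`, and the function names are assumptions):

```shell
#!/bin/sh
# Sketch of resolving --from=<key_name> vs --from=<akash1...>.

looks_like_address() {
  case "$1" in
    akash1*) return 0 ;;
    *)       return 1 ;;
  esac
}

# Stand-in for `akash keys show "$1" -a`, stubbed for illustration.
lookup_key() {
  echo "akash1exampleresolved"
}

resolve_from() {
  from=$1
  if looks_like_address "$from"; then
    echo "$from"            # already an address, use it as-is
  else
    lookup_key "$from"      # key name: resolve it to an address
  fi
}
```

The same shape (query first, act only if needed) applies to the create-vs-update and certificate checks in the list above.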
I think this is now solved through the pod's lifecycle only (PR #36), which should be sufficient, as we don't expect this pod to be recreated often enough to cause any significant issue such as AKT drainage. I'm going to keep this issue open until I confirm the fix (PR #36) works as expected by only restarting the pod (via `kill cpid 1`) instead of recreating it (`kubectl delete pod`).
Have just tested this now; it appears that everything gets removed even on a simple pod restart. Evidence:
- Pod:
- Making pod:
- Pod is still the same:
- But the cert is gone now...
See if we can leverage the ConfigMap to store & restore the cert.
ConfigMaps & Secrets are for consuming only (i.e. they are always read-only); ref kubernetes/kubernetes#62099. The easiest and most straightforward way is to use a
After checking with @88plug, it looks like akash-provider is creating new certs each time it gets restarted:
based on the configmap-boot script:
https://github.com/ovrclk/helm-charts/blob/688b55b5/charts/akash-provider/templates/configmap-boot.yaml#L64
I think it should really save the
~/.akash/<akash1....>.pem
file in a local volume somewhere, so it won't attempt to recreate that cert each time it gets restarted; instead, it would first detect that it's already present.

Local volumes can be added this way:
https://kubernetes.io/docs/concepts/storage/volumes/#local
and there are more alternative ways in K8s; most of them are covered on that page.
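With the pem on a persistent volume, the guard this issue asks for is a simple existence check before generating anything. A sketch, with the generation step stubbed (the real script would run the `akash tx cert` generate/publish commands; `ensure_cert` and `generate_cert` are illustrative names, not the chart's):

```shell
#!/bin/sh
# Sketch: only generate a cert when ~/.akash/<address>.pem is not
# already present on the mounted volume.

ensure_cert() {
  pem=$1
  if [ -f "$pem" ]; then
    echo "cert already present, skipping generation"
  else
    generate_cert "$pem"
  fi
}

# Stand-in for the real generation + publish steps
# (e.g. `akash tx cert generate client ...`), stubbed for illustration.
generate_cert() {
  touch "$1"
  echo "new cert generated"
}
```

On a restart the file survives on the volume, so the first branch is taken and no new cert is broadcast.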