vmware_cluster_vsan - add support for advanced parameters #260

Closed
lupa95 opened this issue Jun 24, 2020 · 12 comments · Fixed by #289

@lupa95

lupa95 commented Jun 24, 2020

SUMMARY

When configuring vSAN clusters, it should be possible to set the advanced parameters of the vSAN service (Object Repair Timer, Thin Swap, Large Cluster Support, etc.).

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

vmware_cluster_vsan

ADDITIONAL INFORMATION

These parameters often need to be tweaked while configuring vSAN clusters.

- name: Enable vSAN and set advanced parameters
  vmware_cluster_vsan:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    datacenter_name: datacenter
    cluster_name: cluster
    enable_vsan: yes
    advanced_parameters:
        object_repair_timer: 120
        site_read_locality: enabled
        thin_swap: enabled
        large_cluster_support: enabled
        automatic_rebalance: disabled
  delegate_to: localhost
@mariolenz
Collaborator

I'd like to work on this but don't have much time at the moment. For the record: I think object_repair_timer, site_read_locality, large_cluster_support and maybe thin_swap can be controlled through VsanExtendedConfig. Haven't found anything to implement automatic_rebalance yet, though.
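
For the record, I'd expect the mapping to look roughly like this (an untested sketch; vim.vsan.ReconfigSpec and the VsanExtendedConfig field names are my reading of the vSAN Management SDK docs, so treat them as assumptions):

from pyVmomi import vim  # the vSAN SDK (vsanmgmtObjects.py) must be installed so vim is extended with the vSAN types

def build_reconfig_spec(params):
    # Sketch only: maps the proposed module parameters onto VsanExtendedConfig.
    spec = vim.vsan.ReconfigSpec(modify=True)
    spec.extendedConfig = vim.vsan.VsanExtendedConfig(
        objectRepairTimer=params.get('object_repair_timer'),  # minutes
        disableSiteReadLocality=(params.get('site_read_locality') == 'disabled'),
        enableCustomizedSwapObject=(params.get('thin_swap') == 'enabled'),
        largeScaleClusterSupport=(params.get('large_cluster_support') == 'enabled'),
    )
    return spec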

These parameters often need to be tweaked while configuring vSAN clusters.

We've been running vSAN clusters for years and never had to change these settings at all. I don't know your use cases, maybe these parameters are really important to you... but I think most people never touch them. As an ops guy, I would especially question the design decision to build vSAN clusters with > 32 hosts... that's a fairly big failure domain ;-)

Anyway, I'll try to find some time to work on this.

@lupa95
Author

lupa95 commented Jun 29, 2020

In my use case I only had to tweak object_repair_timer and thin_swap, but I thought implementing them all might make sense while someone is at it.

I stumbled across the problem that VsanExtendedConfig is part of the vSAN API, and I had to use additional modules from VMware (vsanmgmtObjects.py). Only the stuff from the vSphere Web Services API is available in pyVmomi, if I understand that correctly? I'm very new to VMware and their APIs, so I might be talking nonsense here.

Great to hear that someone wants to work on it though :-)

@mariolenz
Collaborator

In my use case I only had to tweak object_repair_timer and thin_swap, but I thought implementing them all might make sense while someone is at it.

Yes, I can imagine use cases where you want to tweak object_repair_timer and thin_swap. And I agree that, when implementing these two, it makes sense to implement the other advanced settings as well.

Great to hear that someone wants to work on it though :-)

Well... yes... wants to... I really do, it's just a question of finding the time. At the end of the day, I use these modules and fix bugs or implement features that affect us. Everything else, I do in my spare time... but I'll try. After all, sooner or later we might want to tweak these settings, too ;-)

@mariolenz
Collaborator

mariolenz commented Jul 6, 2020

Bad news: It looks like pyVmomi doesn't know about these advanced vSAN options at all, you really need to install the vSAN Management SDK for Python first. (edit: But you've stumbled across this already.) I have a really bad feeling about introducing this as a dependency... @Akasurde @goneri Or do you think differently?
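
For reference, this is roughly the bootstrap the SDK samples use (a sketch; vsanapiutils ships with the SDK download rather than PyPI, and GetVsanVcMos plus the 'vsan-cluster-config-system' key are taken from those samples, so double-check against your SDK version):

import ssl
from pyVim.connect import SmartConnect
import vsanapiutils  # shipped with the vSAN Management SDK, not installable via pip

context = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=context)

# The vSAN managed objects live behind a separate /vsanHealth endpoint;
# GetVsanVcMos hides that detail and returns them keyed by MO ID.
vc_mos = vsanapiutils.GetVsanVcMos(si._stub, context=context)
cluster_config_system = vc_mos['vsan-cluster-config-system']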

Btw: Feel free to kick VMware for not providing this SDK in an easily consumable way ;-)

Maybe I could implement this with the vSphere Automation SDK for Python but I'm still waiting for it to be published on PyPI (vmware/vsphere-automation-sdk-python#38).

@goneri
Member

goneri commented Jul 6, 2020

Hi @lupa95 and @mariolenz,

vmware_vsan_health_info already depends on the vSAN SDK (a.k.a. vSAN Management SDK for Python). The extra dependency is mentioned in the requirements of the module. We cannot properly test the module in the CI, since we cannot easily pip install the dependency.
I was also reluctant to include the module, but at the time, there was no real alternative. I don't know if the situation has changed.

So I would say I'm OK to merge the patch, BUT the extra dependency must be mentioned in the documentation, and if someone tries to run the module without the dependency, the error message should be as clear as possible.
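
Something like the usual guarded import would do (a sketch of the standard Ansible pattern, not the actual module code; names like HAS_VSAN_SDK are mine):

import traceback

from ansible.module_utils.basic import AnsibleModule, missing_required_lib

try:
    import vsanapiutils  # part of the vSAN Management SDK for Python
    HAS_VSAN_SDK = True
    VSAN_IMPORT_ERROR = None
except ImportError:
    HAS_VSAN_SDK = False
    VSAN_IMPORT_ERROR = traceback.format_exc()

def main():
    module = AnsibleModule(argument_spec={}, supports_check_mode=True)
    # Fail early with a clear message if the SDK is missing.
    if not HAS_VSAN_SDK:
        module.fail_json(msg=missing_required_lib('vSAN Management SDK for Python'),
                         exception=VSAN_IMPORT_ERROR)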

@mariolenz
Collaborator

vmware_vsan_health_info already depends on the vSAN SDK (a.k.a. vSAN Management SDK for Python).

Didn't know this, but that means I'm not introducing a new dependency. OK, then I'll give it a try.

I was also reluctant to include the module, but at the time, there was no real alternative. I don't know if the situation has changed.

To the best of my knowledge the situation hasn't changed... unfortunately :-(

@mariolenz
Collaborator

@lupa95 @goneri fyi: vmware/pyvmomi#909

@eschek87

eschek87 commented Jul 8, 2020

Hi,
maybe we can extend the feature request with the following settings:

  • claim unused disk
  • configure stretched cluster and fault domains

Regards,
Stephan

@mariolenz
Collaborator

maybe we can extend the feature request with the following settings:

  • claim unused disk

Isn't auto-claiming deprecated? I'm sure I've read something about it...

  • configure stretched cluster and fault domains

This might be a bit too much for a single PR, but I can't tell at the moment. I'm still trying to understand the vSAN Management SDK for Python and it kind of keeps fighting back... but since this afternoon, it looks like I'm winning :-)=)

I'll have a look at it and will let you know whether I think it's a good idea to implement this in one go or not.

@eschek87

eschek87 commented Jul 8, 2020

Isn't auto-claiming deprecated? I'm sure I've read something about it...

Yes, you are right, but there must be another way to do this, because the VMware web client allows doing this more or less automatically. Maybe this doc helps: https://vdc-download.vmware.com/vmwb-repository/dcr-public/424d010b-c80e-40de-b1a3-25f6e9861e6a/3b934f51-98b6-4ea1-9336-b1bac1f23403/vsan-sdk-67.pdf "Claiming and Managing Disks"

This might be a bit too much for a single PR, but I can't tell at the moment. I'm still trying to understand the vSAN Management SDK for Python and it kind of keeps fighting back... but since this afternoon, it looks like I'm winning :-)=)

I'll have a look at it and will let you know whether I think it's a good idea to implement this in one go or not.

Thanks and regards,
Stephan

@mariolenz
Collaborator

@eschek87

  • claim unused disk

The module has a parameter vsan_auto_claim_storage, isn't that what you want? Or do you want to explicitly claim a specific disk?

  • configure stretched cluster and fault domains

I'd prefer another issue / PR for this. VsanVcClusterConfigSystem doesn't know about stretched clusters, only about the "normal" vSAN stuff. Stretched cluster configuration is done through completely different API calls. I think it would be too much change in a single PR to implement this as well... but feel free to open an issue for this so we don't forget about it.

@eschek87

Hi mariolenz,

as you have written, auto-claim is deprecated and doesn't work anymore. I will open a new PR for this.

Thanks and regards,
Stephan
