Impl Dedicated Nasha #349

Merged (14 commits into ovh:master, Dec 28, 2022)
Conversation

@FlorianPerrot (Author)

Description

Partial implementation of NAS-HA (NASHA) resources and a data source, with tests and docs. A hypothetical sketch of the underlying API call follows the resource list below.

Thanks @goblain, author of the first implementation; see #44.

Type of change

Added resources:

  • ovh_dedicated_nasha_partition
  • ovh_dedicated_nasha_partition_access
  • ovh_dedicated_nasha_partition_snapshot

Added data source:

  • ovh_dedicated_nasha
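
For orientation, here is a minimal sketch, assuming the go-ovh client and illustrative struct fields (this is not code from this PR), of how one of these resources can fetch a HA-NAS partition from the /dedicated/nasha API. The endpoint layout and client call mirror the snippets quoted later in this review:

```go
// Illustrative sketch only: read one HA-NAS partition via the OVHcloud API
// using the go-ovh client. Struct fields and JSON tags are assumptions.
package main

import (
	"fmt"
	"log"

	"github.com/ovh/go-ovh/ovh"
)

type dedicatedNASHAPartition struct {
	Name     string `json:"partitionName"` // assumed field name
	Size     int    `json:"size"`          // assumed field name
	Protocol string `json:"protocol"`      // assumed field name
}

func main() {
	// Credentials come from the usual OVH_* environment variables or ovh.conf.
	client, err := ovh.NewDefaultClient()
	if err != nil {
		log.Fatal(err)
	}

	// Example names taken from the documentation added in this PR.
	serviceName := "zpool-12345"
	partitionName := "my-partition"

	endpoint := fmt.Sprintf("/dedicated/nasha/%s/partition/%s", serviceName, partitionName)

	partition := &dedicatedNASHAPartition{}
	if err := client.Get(endpoint, partition); err != nil {
		log.Fatalf("calling GET %s:\n\t %s", endpoint, err.Error())
	}
	fmt.Printf("HA-NAS partition: %+v\n", partition)
}
```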

@scraly (Collaborator) commented Dec 27, 2022

Thanks :)
On every OVHcloud page this service is called "HA-NAS" in English (or "NAS-HA" in French ;-) ).
So I added suggestions throughout the PR (code and documentation) to use this correct product name :).

[screenshot: OVHcloud product page showing the "HA-NAS" name]

@scraly (Collaborator) left a comment

Several suggestions about the name of the product :)


# ovh_dedicated_nasha (Data Source)

Use this data source to retrieve information about a dedicated nasha.

Suggested change
Use this data source to retrieve information about a dedicated nasha.
Use this data source to retrieve information about a dedicated HA-NAS.

page_title: "OVH: dedicated_nasha"
sidebar_current: "docs-ovh-datasource-dedicated-nasha"
description: |-
Get information of a dedicated nasha.

Suggested change
Get information of a dedicated nasha.
Get information of a dedicated HA-NAS.


## Argument Reference

* `service_name` - (Required) The service_name of your dedicated NASHA.

Suggested change
* `service_name` - (Required) The service_name of your dedicated NASHA.
* `service_name` - (Required) The service_name of your dedicated HA-NAS.


## Attributes Reference

`id` is set with the service_name of the dedicated nasha.

Suggested change
`id` is set with the service_name of the dedicated nasha.
`id` is set with the service_name of the dedicated HA-NAS.

In addition, the following attributes are exported:

* `service_name` - The storage service name
* `can_create_partition` - True, if partition creation is allowed on this nas HA

Suggested change
* `can_create_partition` - True, if partition creation is allowed on this nas HA
* `can_create_partition` - True, if partition creation is allowed on this HA-NAS

return fmt.Errorf("calling GET %s :\n\t %s", endpoint, err.Error())
}

fmt.Printf("NASHA partition snapshot : %+v\n", partitionSnapshotResponse)

Suggested change
fmt.Printf("NASHA partition snapshot : %+v\n", partitionSnapshotResponse)
fmt.Printf("HA-NAS partition snapshot: %+v\n", partitionSnapshotResponse)

return fmt.Errorf("calling GET %s :\n\t %s", endpoint, err.Error())
}

fmt.Printf("NASHA partition : %+v\n", partitionResponse)

Suggested change
fmt.Printf("NASHA partition : %+v\n", partitionResponse)
fmt.Printf("HA-NAS partition: %+v\n", partitionResponse)

return fmt.Errorf("calling GET %s :\n\t %s", endpoint, err.Error())
}

fmt.Printf("NASHA partition access : %+v\n", partitionAccessResponse)

Suggested change
fmt.Printf("NASHA partition access : %+v\n", partitionAccessResponse)
fmt.Printf("HA-NAS partition access: %+v\n", partitionAccessResponse)

partitionSnapshotResponse := &DedicatedNASHAPartitionSnapshot{}
err := config.OVHClient.Get(endpoint, partitionSnapshotResponse)
if err != nil {
return fmt.Errorf("calling GET %s :\n\t %s", endpoint, err.Error())

Suggested change
return fmt.Errorf("calling GET %s :\n\t %s", endpoint, err.Error())
return fmt.Errorf("calling GET %s:\n\t %s", endpoint, err.Error())


err := config.OVHClient.Get(endpoint, nil)
if err == nil {
return fmt.Errorf("NASHA Partition (%s) still exists", parititionName)

Suggested change
return fmt.Errorf("NASHA Partition (%s) still exists", parititionName)
return fmt.Errorf("HA-NAS Partition (%s) still exists", partitionName)

@FlorianPerrot (Author)

Thanks @scraly for reviewing my PR.
I can change it in the documentation and log messages. But on api.ovh.com the actual endpoint is called "/dedicated/nasha", so to keep the link between the Terraform resources and the OVH resources I kept dedicated_nasha.

@scraly (Collaborator) commented Dec 27, 2022

Yes, the names of the resource and data source are good because they match the API endpoint, but I think we need to be consistent with the product name: https://www.ovhcloud.com/en-gb/storage-solutions/nas-ha/ :)

@scraly (Collaborator) left a comment

fix parititionName to partitionName

}

serviceName := nashaPartitionAccessResource.Primary.Attributes["service_name"]
parititionName := nashaPartitionAccessResource.Primary.Attributes["name"]

Suggested change
parititionName := nashaPartitionAccessResource.Primary.Attributes["name"]
partitionName := nashaPartitionAccessResource.Primary.Attributes["name"]

parititionName := nashaPartitionAccessResource.Primary.Attributes["name"]

config := testAccProvider.Meta().(*Config)
endpoint := fmt.Sprintf("/dedicated/nasha/%s/partition/%s", serviceName, parititionName)

Suggested change
endpoint := fmt.Sprintf("/dedicated/nasha/%s/partition/%s", serviceName, parititionName)
endpoint := fmt.Sprintf("/dedicated/nasha/%s/partition/%s", serviceName, partitionName)

parititionName := nashaPartitionAccessResource.Primary.Attributes["name"]

config := testAccProvider.Meta().(*Config)
endpoint := fmt.Sprintf("/dedicated/nasha/%s/partition/%s", serviceName, parititionName)

Suggested change
endpoint := fmt.Sprintf("/dedicated/nasha/%s/partition/%s", serviceName, parititionName)
endpoint := fmt.Sprintf("/dedicated/nasha/%s/partition/%s", serviceName, partitionName)

parititionName := nashaPartitionAccessResource.Primary.Attributes["name"]

config := testAccProvider.Meta().(*Config)
endpoint := fmt.Sprintf("/dedicated/nasha/%s/partition/%s", serviceName, parititionName)

Suggested change
endpoint := fmt.Sprintf("/dedicated/nasha/%s/partition/%s", serviceName, parititionName)
endpoint := fmt.Sprintf("/dedicated/nasha/%s/partition/%s", serviceName, partitionName)

@scraly (Collaborator) commented Dec 27, 2022

Thank you,
but some suggestions are not applied yet :)
[screenshot: remaining unresolved suggestions]

@FlorianPerrot (Author)

Fixed!
Sorry for this.

@scraly (Collaborator) left a comment

I added some suggestions again :-)

Moreover, the creation of resources is working:

$ terraform apply
data.ovh_dedicated_nasha.foo: Reading...
ovh_dedicated_nasha_partition.my-partition: Refreshing state... [id=zpool-xxxxxx/my-partition]
data.ovh_dedicated_nasha.foo: Read complete after 1s [id=zpool-xxxxxx]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # ovh_dedicated_nasha_partition_access.my-partition will be created
  + resource "ovh_dedicated_nasha_partition_access" "my-partition" {
      + id             = (known after apply)
      + ip             = "123.123.123.123/32"
      + partition_name = "my-partition"
      + service_name   = "zpool-xxxxxx"
      + type           = "readwrite"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

ovh_dedicated_nasha_partition_access.my-partition: Creating...
ovh_dedicated_nasha_partition_access.my-partition: Still creating... [10s elapsed]
ovh_dedicated_nasha_partition_access.my-partition: Still creating... [20s elapsed]
ovh_dedicated_nasha_partition_access.my-partition: Still creating... [30s elapsed]
ovh_dedicated_nasha_partition_access.my-partition: Still creating... [40s elapsed]
ovh_dedicated_nasha_partition_access.my-partition: Still creating... [50s elapsed]
ovh_dedicated_nasha_partition_access.my-partition: Creation complete after 51s [id=zpool-xxxxxx/my-partition/123.123.123.123/32]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

But I have an issue when I want to "read" the partition_access:

$ terraform apply
data.ovh_dedicated_nasha.foo: Reading...
ovh_dedicated_nasha_partition_access.my-partition: Refreshing state... [id=zpool-xxxx/my-partition/123.123.123.123/32]
ovh_dedicated_nasha_partition.my-partition: Refreshing state... [id=zpool-xxxx/my-partition]
data.ovh_dedicated_nasha.foo: Read complete after 1s [id=zpool-xxxx]
╷
│ Error: cant parse access partition id: zpool-xxxx/my-partition/123.123.123.123/32
│
│   with ovh_dedicated_nasha_partition_access.my-partition,
│   on pr349.tf line 12, in resource "ovh_dedicated_nasha_partition_access" "my-partition":
│   12: resource "ovh_dedicated_nasha_partition_access" "my-partition" {
│
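
A plausible reading of this error: the composite ID `{service_name}/{partition_name}/{ip}` is split on every `/`, and the CIDR suffix in `123.123.123.123/32` adds a fourth field. The sketch below illustrates that failure mode and one possible remedy, assuming a plain string split (it is not the provider's actual code); limiting the split to three fields keeps the slash inside the IP, which is also why the documented import command percent-encodes it as `%2F`:

```go
// Illustrative sketch: parse a composite ID of the form
// {service_name}/{partition_name}/{ip} where the IP may carry a CIDR
// suffix such as "123.123.123.123/32". strings.Split would return four
// fields here; SplitN with a limit of 3 keeps the suffix attached to the IP.
package main

import (
	"fmt"
	"strings"
)

func splitAccessID(id string) (serviceName, partitionName, ip string, err error) {
	parts := strings.SplitN(id, "/", 3)
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("can't parse access partition ID: %s", id)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	service, partition, ip, err := splitAccessID("zpool-xxxx/my-partition/123.123.123.123/32")
	if err != nil {
		panic(err)
	}
	fmt.Println(service, partition, ip) // zpool-xxxx my-partition 123.123.123.123/32
}
```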

* `ip` - Access IP of HA-NAS
* `monitored` - Send an email to customer if any issue is detected
* `zpool_capacity` - percentage of HA-NAS space used in %
* `zpool_size` - the size of the HA-NAS in Go

The size is in ... bytes, in GB? :-)

resource "ovh_dedicated_nasha_partition" "foo" {
service_name = "zpool-12345"
name = "foo"

Suggested change
name = "foo"
name = "my-partition"

## Import

HA-NAS partition access can be imported using the `{service_name}/{partition_name}/{ip}`, e.g.
`$ terraform import ovh_dedicated_nasha_partition_access.foo zpool-12345/foo/123.123.123.123%2F32`

Suggested change
`$ terraform import ovh_dedicated_nasha_partition_access.foo zpool-12345/foo/123.123.123.123%2F32`
`$ terraform import ovh_dedicated_nasha_partition_access.foo zpool-12345/my-partition/123.123.123.123%2F32`

## Import

HA-NAS can be imported using the `{service_name}/{name}`, e.g.
`$ terraform import ovh_dedicated_nasha_partition.foo zpool-12345/foo`

Suggested change
`$ terraform import ovh_dedicated_nasha_partition.foo zpool-12345/foo`
`$ terraform import ovh_dedicated_nasha_partition.foo zpool-12345/my-partition`

service_name = "zpool-12345"
partition_name = "foo"
ip = "123.123.123.123/32"
protocol = "NFS"

Suggested change
protocol = "NFS"
type = "readwrite"

resource "ovh_dedicated_nasha_partition_snapshot" "foo" {
service_name = "zpool-12345"
partition_name = "foo"

Suggested change
partition_name = "foo"
partition_name = "my-partition"

## Example Usage

resource "ovh_dedicated_nasha_partition_snapshot" "foo" {

Suggested change
resource "ovh_dedicated_nasha_partition_snapshot" "foo" {
resource "ovh_dedicated_nasha_partition_snapshot" "my-partition" {


The following arguments are supported:

* `service_name` - (Required) The internal name of your HA-NAS (it has to be ordered via OVH interface)

Suggested change
* `service_name` - (Required) The internal name of your HA-NAS (it has to be ordered via OVH interface)
* `service_name` - (Required) The internal name of your HA-NAS (it has to be ordered via OVHcloud interface)


The following arguments are supported:

* `service_name` - (Required) The internal name of your HA-NAS (it has to be ordered via OVH interface)

Suggested change
* `service_name` - (Required) The internal name of your HA-NAS (it has to be ordered via OVH interface)
* `service_name` - (Required) The internal name of your HA-NAS (it has to be ordered via OVHcloud interface)

## Import

HA-NAS partition snapshot can be imported using the `{service_name}/{partition_name}/{type}`, e.g.
`$ terraform import ovh_dedicated_nasha_partition_snapshot.foo zpool-12345/foo/day-3`

Suggested change
`$ terraform import ovh_dedicated_nasha_partition_snapshot.foo zpool-12345/foo/day-3`
`$ terraform import ovh_dedicated_nasha_partition_snapshot.foo zpool-12345/my-partition/day-3`


@FlorianPerrot (Author)

I fixed this issue yesterday, here: 9940526.
Can you retry your test with this fix?

The acceptance test case works for me. If I'm not mistaken, it should also work for your test.

@scraly (Collaborator) commented Dec 28, 2022

> I fixed this issue yesterday, here: 9940526. Can you retry your test with this fix?
>
> The acceptance test case works for me. If I'm not mistaken, it should also work for your test.

After your fix, I still have this issue when reading the access:

$ terraform destroy
data.ovh_dedicated_nasha.foo: Reading...
ovh_dedicated_nasha_partition_snapshot.my-partition: Refreshing state... [id=zpool-xxxxx/my-partition/day-3]
ovh_dedicated_nasha_partition_access.my-partition: Refreshing state... [id=zpool-xxxxx/my-partition/123.123.123.123/32]
ovh_dedicated_nasha_partition.my-partition: Refreshing state... [id=zpool-xxxxx/my-partition]
data.ovh_dedicated_nasha.foo: Read complete after 0s [id=zpool-xxxxx]
╷
│ Error: can't parse access partition ID: zpool-xxxxx/my-partition/123.123.123.123/32
│
│   with ovh_dedicated_nasha_partition_access.my-partition,
│   on pr349.tf line 12, in resource "ovh_dedicated_nasha_partition_access" "my-partition":
│   12: resource "ovh_dedicated_nasha_partition_access" "my-partition" {
│

@FlorianPerrot (Author)

You should retry with a fresh context: in your case the access resource has a bad ID in state, generated before my fix.

@scraly (Collaborator) left a comment

Tests are now OK for me :-)

So please look at all the suggestions I have made and commit them.
Thank you

## Example Usage

data "ovh_dedicated_nasha" "foo" {

Suggested change
data "ovh_dedicated_nasha" "foo" {
data "ovh_dedicated_nasha" "my-nas-ha" {

## Import

HA-NAS can be imported using the `{service_name}/{name}`, e.g.
`$ terraform import ovh_dedicated_nasha_partition.foo zpool-12345/my-partition`

Suggested change
`$ terraform import ovh_dedicated_nasha_partition.foo zpool-12345/my-partition`
`$ terraform import ovh_dedicated_nasha_partition.my-partition zpool-12345/my-partition`

## Import

HA-NAS can be imported using the `{service_name}/{name}`, e.g.
`$ terraform import ovh_dedicated_nasha_partition.foo zpool-12345/my-partition`

Suggested change
`$ terraform import ovh_dedicated_nasha_partition.foo zpool-12345/my-partition`
`$ terraform import ovh_dedicated_nasha_partition.my-partition zpool-12345/my-partition`

## Import

HA-NAS partition access can be imported using the `{service_name}/{partition_name}/{ip}`, e.g.
`$ terraform import ovh_dedicated_nasha_partition_access.foo zpool-12345/my-partition/123.123.123.123%2F32`

Suggested change
`$ terraform import ovh_dedicated_nasha_partition_access.foo zpool-12345/my-partition/123.123.123.123%2F32`
`$ terraform import ovh_dedicated_nasha_partition_access.my-partition zpool-12345/my-partition/123.123.123.123%2F32`

## Import

HA-NAS partition snapshot can be imported using the `{service_name}/{partition_name}/{type}`, e.g.
`$ terraform import ovh_dedicated_nasha_partition_snapshot.foo zpool-12345/my-partition/day-3`

Suggested change
`$ terraform import ovh_dedicated_nasha_partition_snapshot.foo zpool-12345/my-partition/day-3`
`$ terraform import ovh_dedicated_nasha_partition_snapshot.my-partition zpool-12345/my-partition/day-3`

@scraly added the 0.26.0 label Dec 28, 2022
@scraly (Collaborator) commented Dec 28, 2022

Good new feature, it's OK for me :)

@scraly merged commit 89e6eb6 into ovh:master Dec 28, 2022