<h1 id="nexentastorhowtoaddvirtualipaddressestohaclusterconfiguration">NexentaStor How to Add Virtual IP Addresses to HA Cluster Configuration</h1>
<p>There are times when a single Virtual Internet Protocol Address (VIP) tied to a Service Group of your HA Cluster is not sufficient, and multiple VIPs are required. A good example is an environment with dedicated networks for different security levels, or with separate slow and fast networks. Please keep in mind that this is an advanced configuration option, and <strong>failure to configure it correctly may lead to loss of HA services</strong>. As such, please proceed with caution, and if unsure, engage Support prior to making changes. </p>
<p>It is possible to configure multiple Virtual IPs for any Service Group, but there are some things to keep in mind. </p>
<ul>
<li>Do not assign VIPs from the same subnet to physical interfaces with different bandwidth capabilities, such as 10GbE and 1GbE. This may result in asymmetric routing, with inbound packets arriving on the faster 10GbE interface and replies leaving via the slower 1GbE interface.</li>
<li>It is acceptable to create an aggregate out of available interfaces and assign a VIP to the aggregate interface; a sketch of this approach follows the list.</li>
</ul>
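<p>As an illustration of the aggregate approach, the underlying illumos <code>dladm</code> utility can combine free physical links into a single aggregation. The interface names below are examples only, and this is a sketch rather than the NMC procedure; on a production cluster, make link-layer changes through NMC or with guidance from Support. </p>
<pre><code># Illustrative sketch: combine two unused physical links into aggr0,
# then plumb the aggregate so a VIP can be assigned to it later.
dladm create-aggr -l e1000g2 -l e1000g3 aggr0
ifconfig aggr0 plumb
</code></pre>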
<p>Please complete the following steps via NMC (the Nexenta Management Console): </p>
<ol>
<li><p>Before making any changes, create a configuration checkpoint so that a known-good configuration snapshot is available to roll back to in case subsequent changes result in a malfunctioning configuration. </p>
<pre><code>nmc@clus-a-node-1:/$ setup appliance checkpoint create
</code></pre></li>
<li><p>Validate the current network configuration to determine which interface(s) are available for provisioning the new VIP. Pay close attention to the netmask setting on the interface(s). If the network interface to which the VIP will be assigned is already configured with an IP address, make sure to validate and use the same network mask when configuring the VIP. It is not desirable to have multiple interfaces on the same subnet configured with different network masks. </p>
<pre><code>nmc@clus-a-node-1:/$ show network interface
==== Interfaces ====
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 2
inet 10.10.100.100 netmask fffffc00 broadcast 10.10.103.255
ether 0:c:29:ed:4a:a8
e1000g0:1: flags=1001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 2
inet 10.10.100.101 netmask ff000000 broadcast 10.255.255.255
e1000g1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 0.0.0.0 netmask 0
ether 0:c:29:ed:4a:b2
...snipped...
</code></pre>
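<p>For reference, the hexadecimal netmasks reported above translate to dotted-decimal notation as shown below. Note that in the sample output <em>e1000g0:1</em> carries a /8 mask while <em>e1000g0</em> on the same subnet carries a /22, which is exactly the kind of inconsistency to avoid. </p>
<pre><code>fffffc00 = 255.255.252.0  (/22)
ff000000 = 255.0.0.0      (/8)
</code></pre></li>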
<li><p>We can obtain further details for the interface that will be used for the new VIP by appending the interface name to the command from the prior step. The interface must be physically connected to the correct network. In our example, we select interface <em>e1000g1</em> and validate its configuration.</p>
<pre><code>nmc@clus-a-node-1:/$ show network interface e1000g1
==== Interface: e1000g1 ====
e1000g1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 0.0.0.0 netmask 0
ether 0:c:29:ed:4a:b2
==== Interface e1000g1: statistics ====
Name Mtu Net/Dest Address Ipkts Ierrs Opkts Oerrs Collis Queue
e1000g1 1500 default 0.0.0.0 89235 0 12 0 0 0
</code></pre></li>
<li><p>In addition, it is prudent to validate the routing table on each node and confirm that ALL nodes in the cluster share the same routing table entries. Any differences should be investigated. </p>
<pre><code>nmc@clus-a-node-1:/$ netstat -nr
Routing Table: IPv4
Destination Gateway Flags Ref Use Interface
-------------------- -------------------- ----- ----- ---------- ---------
default 10.10.100.1 UG 5 2770
10.0.0.0 10.10.100.101 U 2 0 e1000g0
10.10.100.0 10.10.100.100 U 6 14291 e1000g0
127.0.0.1 127.0.0.1 UH 23 5714 lo0
...snipped...
</code></pre>
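<p>Running the same command on the remaining node(s) and comparing the output line by line makes any discrepancy easy to spot: </p>
<pre><code>nmc@clus-a-node-2:/$ netstat -nr
</code></pre></li>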
<li><p>Assuming the new VIP does not already have an entry in <em>/etc/hosts</em>, we need to add one on ALL nodes in the cluster. Consider being descriptive and using the Service Group name as part of the VIP hostname in the <em>/etc/hosts</em> file. The <code>setup appliance hosts</code> command will launch the vi editor and allow for modification of the hosts file. </p>
<pre><code>We are adding the following entry to /etc/hosts:
10.10.100.102 clus-a-pool-a-01
nmc@clus-a-node-1:/$ setup appliance hosts
</code></pre>
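<p>After the edit, the relevant portion of <em>/etc/hosts</em> on each node might look like the following sketch; the node entry shown is illustrative, based on the addresses used in this example: </p>
<pre><code>10.10.100.100  clus-a-node-1.homer.lab clus-a-node-1
10.10.100.102  clus-a-pool-a-01
</code></pre></li>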
<li><p>Next, we validate <em>/etc/netmasks</em>; for this VIP we expect to use a /22 netmask. Changes to <em>/etc/netmasks</em> should again be made on ALL nodes. The <code>setup appliance netmasks</code> command will launch the vi editor and allow for modification of the netmasks file. </p>
<pre><code>We are adding the following entry to /etc/netmasks:
10.10.100.0 255.255.252.0
nmc@clus-a-node-1:/$ setup appliance netmasks
</code></pre>
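<p>As an optional sanity check, and assuming shell access is available (for example via NMC expert mode), the name service can be queried to confirm that the new mask is picked up; <code>getent</code> querying the netmasks database is standard on illumos-based systems: </p>
<pre><code># Expected output: 10.10.100.0  255.255.252.0
getent netmasks 10.10.100.0
</code></pre></li>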
<li><p>We should now have everything in place to configure the new Virtual IP. We can make this change from any node in the cluster. We will be presented with a list of cluster groups, and need to select the group to be modified from the list. In this instance we are selecting <em>clus-1_pool_a</em>. </p>
<pre><code>nmc@clus-a-node-1:/$ setup group rsf-cluster clus-alpha-01 vips add
Service :
clus-1_pool_a
-----------------------------------------------------------------------------
Select HA Cluster service. Navigate with arrow keys (or hjkl), or
Ctrl-C to exit.
</code></pre></li>
<li><p>Next, we are presented with the VIP configuration currently in place, and a prompt to add our new VIP. In this case we are going to use hostname <strong>clus-a-pool-a-01</strong>, which we added to /etc/hosts earlier.</p>
<pre><code>Your current network configuration for service 'clus-1_pool_a':
VIP1: clus-a-pool-a
clus-a-node-1.homer.lab:e1000g0
clus-a-node-2:e1000g0
VIP2 Shared logical hostname:
-----------------------------------------------------------------------------
IPv4 address or hostname that maps to the failover IP interface on all the
appliances in the HA Cluster. This IP address or hostname is expected to be
used by services such as NFS/CIFS/iSCSI to reliably access shared storage at
all times.. Press Ctrl-C to exit.
</code></pre></li>
<li><p>At this point we are prompted with a list of interfaces available to us on the system, and as planned earlier, we select <em>e1000g1</em> from this list. Names and numbers of interfaces will vary from system to system. The prompt to select an interface for this VIP will repeat for each node in the cluster; we expect all nodes to use the same interface. </p>
<pre><code>VIP2 Shared logical hostname: clus-a-pool-a-01
VIP2 Network interface at clus-a-node-1.homer.lab:
e1000g2 e1000g3 e1000g0 e1000g1
</code></pre></li>
<li><p>Now we have to specify the netmask used for this VIP. Although /etc/netmasks is already updated, we still enter the mask here, ensuring that if the logic of the HA configuration changes in the future, we are already covered and do not have to reconfigure. We use the same mask as in /etc/netmasks. </p>
<pre><code>Verifying logical (failover) IP address '10.10.100.102' ...Success.
VIP2 Failover Netmask:
-----------------------------------------------------------------------------
Optionally, specify IP network mask for the failover interface
'10.10.100.102'. The mask includes the network part address and the subnet
part. Press Enter to use default netmask: 255.0.0.0.. Press Ctrl-C to exit.
</code></pre></li>
<li><p>We are then presented with the changes that are about to be committed, and we accept them. If we are done adding VIPs, we answer <em>Yes</em> to ‘Stop adding VIPs?’. </p>
<pre><code>The IP configuration to be used with 'clus-a-pool-a-01' is: 10.10.100.102/255.255.252.0.
Please confirm that this configuration is correct ? Yes
Stop adding VIPs? Yes
Success.
</code></pre></li>
<li><p>At this point we need to confirm that the VIP was added and, on the node where this service group is active, that the VIP is now in service. We can simply ping the VIP to validate its state, and should make sure that the VIP answers pings from ALL nodes in the cluster. </p>
<pre><code>nmc@clus-a-node-1:/$ ping clus-a-pool-a-01
clus-a-pool-a-01 is alive
</code></pre></li>
</ol>
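<p>For additional confirmation, inspect the interface carrying the VIP on the node where the service group is currently active; while the group runs there, the output should include a logical instance of <em>e1000g1</em> (for example <em>e1000g1:1</em>) configured with 10.10.100.102. The VIP should also answer pings initiated from the other node(s): </p>
<pre><code>nmc@clus-a-node-1:/$ show network interface e1000g1
nmc@clus-a-node-2:/$ ping clus-a-pool-a-01
</code></pre>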