8. IPU‑POD128 network configuration
This section describes how to set up the network configuration on the management server and IPU-M2000s for your IPU‑POD128.
8.1. Overview
This section describes how two IPU‑POD64 racks can be merged into a new IPU‑POD128. The two IPU‑POD64 racks will be referred to as POD7 and POD8 in this section of the document for illustration purposes. POD7 has the management server role for the BMC and IPU-Gateway management network.
An RNIC spine switch can be used in an IPU‑POD128 if required. With an RNIC spine switch installed in the system, all the IPU-M2000s in the IPU‑POD128 can be accessed by both the POD7 and POD8 servers in the RDMA network (data plane). Without the spine switch, only local IPU-POD servers can access their local IPU-M2000s over the RDMA network - so the POD7 server can only access the IPU-M2000s in POD7 and the POD8 server can only access the IPU-M2000s in POD8. Since the POD7 server is used as the management server, it can access the POD8 IPU-M2000s using the management network, with or without a spine switch.
You need to upgrade the IP addressing on both IPU‑POD64 racks to the IP address scheme required for IPU-PODs larger than IPU‑POD64. The default factory IPU‑POD64 setup has no rack number indication in the IP addresses; however, this is required for larger IPU-PODs. With this updated IP address scheme, multiple IPU‑POD64 racks can be connected together and can also form part of a larger IPU-POD later on without having to change their IP addresses.
The default factory scheme for IPU‑POD64 racks as shipped is 10.1.x.z:
- The z values are for the IPU-M2000s in the rack.
The updated IP address scheme required for the IPU‑POD128 uses 10.x.y.z:
- y = the logical rack number (lrack), which is normally identical to the IPU-POD number.
- The x values stay the same: x=1 for BMC, x=2 for IPU-Gateway, x=3 for management server port, x=5 for data RNIC.
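For example, with lrack number 7, IPU-M2000 number 3 moves from the factory addresses 10.1.1.3 (BMC), 10.1.2.3 (IPU-Gateway) and 10.1.5.3 (RNIC) to 10.1.7.3, 10.2.7.3 and 10.5.7.3 respectively.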
POD7 hosts a management server running the V-IPU Management Controller, 1GbE management DHCP server, as well as NTP and syslog servers for the IPU‑POD128.
POD8 has a limited management server role if no spine switch is being used to fully connect all the IPU‑POD128 servers with all the IPU-M2000s across the two racks. In this case (not fully connected) the RNIC interfaces on the IPU-M2000s are still served by their local (existing) DHCP server on each IPU‑POD64 rack, but this time with a new IP address that identifies the lrack number.
Note
If there is no spine switch you will not be able to reach the RNICs on POD8 when checking the status with rack_tool from POD7, since there is no inter-rack 100GbE connectivity.
Note
If any of your IPU‑POD64 racks have DHCP config files named vlan-14.conf, these should be changed to vlan-13.conf. You should also check the port names for the rack switches - any that are named vlan-14 should be changed to vlan-13.
8.2. Useful resources
For more details on using V-IPU, refer to the V-IPU user guide and the V-IPU administrator guide. The BMC user guide also contains relevant information.
8.3. IP addressing
The template for management network IP addresses (10.x.y.z) has been modified since logical rack (lrack) numbers are now needed in the IP address. The rack number was previously carried in the second octet (always 1 in the factory scheme), but is now moved to the third octet (y) to ease subnetting based on interface types.
Note
Since we are using 7 and 8 as the IPU‑POD64 logical rack numbers in this section, the IP addresses in the examples reflect this (10.x.7.z for example). If your IPU‑POD64 racks have different logical rack numbers then you will have to manually edit the files accordingly.
This means that new IP addresses are of the form:
10.<interface type>.<lrack#>.<IPU-M2000 number>
The IPU-M2000 BMCs have IP addresses:
10.1.<lrack#>.<IPU-M2000 number=1..16>
The IPU-M2000 IPU-Gateways have IP addresses:
10.2.<lrack#>.<IPU-M2000 number=1..16>
The IPU-M2000 RNIC ports have IP addresses:
10.5.<lrack#>.<IPU-M2000 number=1..16>
The management server ports that face the IPU-M2000 BMC, IPU-Gateway and V-IPU management subnet (V-IPU CLI from one host) should have IP addresses that match the IP subnets they will reach:
- 10.1.<lrack#>.150 (BMC network)
- 10.2.<lrack#>.150 (IPU-Gateway network)
- 10.3.<lrack#>.150 (V-IPU management subnet)
as shown in this example:
eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether f4:02:70:b9:18:3e brd ff:ff:ff:ff:ff:ff
inet 10.1.7.150/16 brd 10.1.255.255 scope global eno1
valid_lft forever preferred_lft forever
inet 10.2.7.150/16 brd 10.2.255.255 scope global eno1
valid_lft forever preferred_lft forever
inet 10.3.7.150/16 brd 10.3.255.255 scope global eno1
valid_lft forever preferred_lft forever
....
8.4. Phased merge to create IPU‑POD128 from two IPU‑POD64 racks
Note
You will need to take the two IPU‑POD64 racks out of service to carry out the merge procedures described in this section.
8.4.1. Networking pre-requisites
Make sure that the POD7 and POD8 1GbE management switches are trunked together on a site-specific switch that takes VLAN 13 uplinks from each rack’s management switch and connects VLAN13 across both IPU‑POD64 racks as one L2 broadcast domain.
Warning
These trunks must be disabled for VLAN13 traffic forwarding during the phased merge to a single IPU‑POD128 since the migration steps require the IPU‑POD64 racks to be fully isolated during the upgrade to the new IP address scheme.
If your IPU‑POD64 1GbE switches have the VLAN13 ports named VLAN-14 or VLAN14, you should correct them.
Make sure the DHCP files in both the POD7 and POD8 management servers are named vlan-13.conf not vlan-14.conf:
$ sudo find /etc/dhcp -name vlan-14.conf
/etc/dhcp/dhcpd.d/vlan-14.conf
If you do have to correct this, then any references to these files should also be fixed in /etc/dhcp/dhcpd.conf.
Turn off V-IPU services and DHCP services
To turn off the V-IPU services and DHCP services, use these commands:
sudo systemctl stop vipu-server.service
sudo systemctl stop isc-dhcp-server.service
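To confirm that both services have stopped, you can query their state (each should report inactive):
systemctl is-active vipu-server.service
systemctl is-active isc-dhcp-server.service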
8.4.2. Phase 1: Edit configuration files
A user with sudo rights needs to edit the configuration files first.
Warning
It is important that you do NOT apply the new configuration until you have carried out all the steps in this section and are ready to embark on Section 8.4.3, Phase 2: Activation of new configuration.
Step 1: For POD7 and POD8
1a) Copy rack_config.json to rack_config.json_pod128 in the same directory:
cp /home/ipuuser/.rack_tool/rack_config.json /home/ipuuser/.rack_tool/rack_config.json_pod128
You then need to rewrite the rack_config.json_pod128 config file to match the IPU‑POD128 IP addresses as shown in Section 8.7.3, POD7: rack_config.json file.
Doing this means that you will be able to run rack_tool commands from the POD7 management server for all the IPU-M2000s in both IPU‑POD64 racks (32 IPU-M2000s). Upgrading the IPU-M2000 software for all 32 IPU-M2000s can then be run as a single operation.
1b) RNIC entries in the POD7 and POD8 rack_config.json_pod128 files
System fully connected with spine switches
If your IPU‑POD128 is using spine switches then you need to keep all the RNIC interface entries (rnic_ip) for both POD7 and POD8 in the POD7 rack_config.json_pod128 file. This file will also contain all the IPU-Gateway and BMC interfaces for both POD7 and POD8. The POD8 rack_config.json_pod128 file is not required as POD7 will be used for all rack_tool operations.
System NOT fully connected with spine switches
If you are not using spine switches then you need to keep the entries for all the IPU-Gateway and BMC interfaces for both POD7 and POD8 in the POD7 rack_config.json_pod128 file, but only the POD7 RNIC interface entries (rnic_ip). You need to list the POD8 RNIC interface entries (rnic_ip) in the POD8 rack_config.json_pod128 file instead.
Note
There is no inter-rack RDMA connectivity (data plane) between the two IPU‑POD64 racks unless there is a RoCE spine switch that brings the leaf switches together. This means that, on the data plane, the POD7 server(s) cannot access the POD8 IPU-M2000s, and the POD8 server(s) cannot access the POD7 IPU-M2000s. The POD7 server(s) can reach the POD8 IPU-M2000s using the management network (control plane) for software updates.
Note
Kubernetes will not be supported on the IPU‑POD128 unless there are spine switches providing inter-rack RDMA connectivity.
Step 2: DHCP config files for POD7 and POD8
The DHCP config file vlan-13.conf contains the entries for the IPU-M2000 IPU-Gateway and BMC ports.
You need to add a copy of this file into a directory called /etc/dhcp/dhcpd.d/lrack#, where # denotes the rack (so 7 or 8 in our example). You also need to copy over the vlan-11.conf file.
2a) POD7
sudo mkdir /etc/dhcp/dhcpd.d/lrack7
sudo mkdir /etc/dhcp/dhcpd.d/lrack8
sudo cp /etc/dhcp/dhcpd.d/vlan-13.conf /etc/dhcp/dhcpd.d/lrack7
sudo cp /etc/dhcp/dhcpd.d/vlan-11.conf /etc/dhcp/dhcpd.d/lrack7
2b) POD8
sudo mkdir /etc/dhcp/dhcpd.d/lrack8
sudo cp /etc/dhcp/dhcpd.d/vlan-11.conf /etc/dhcp/dhcpd.d/lrack8
sudo scp /etc/dhcp/dhcpd.d/vlan-13.conf ipuuser@pod7:/etc/dhcp/dhcpd.d/lrack8
2c) POD7 (required for spine switches)
You will need to run the following command if you have spine switches. If you don't have spine switches it is not necessary; however, if you might add spine switches in the future, it is a good idea to run it now so that POD7 is prepared.
sudo scp /etc/dhcp/dhcpd.d/vlan-11.conf ipuuser@pod7:/etc/dhcp/dhcpd.d/lrack8
Step 3: Edit DHCP config files for POD7
You need to edit the following management network DHCP files on POD7:
/etc/dhcp/dhcpd.d/lrack7/vlan-13.conf
/etc/dhcp/dhcpd.d/lrack8/vlan-13.conf
POD7 is lrack7 and POD8 is lrack8. You can see examples of the DHCP files in Section 8.6, DHCP files.
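These lrack copies made in Step 2 will still contain the factory 10.1.x.z addresses. As a starting point you could rewrite them with sed; this is a minimal sketch that assumes every address in the files follows the factory 10.1.<interface type>.<IPU-M2000 number> form, so review the result before restarting any DHCP server:
sudo sed -i -E 's/10\.1\.([0-9])\.([0-9]+)/10.\1.7.\2/' /etc/dhcp/dhcpd.d/lrack7/vlan-13.conf
sudo sed -i -E 's/10\.1\.([0-9])\.([0-9]+)/10.\1.8.\2/' /etc/dhcp/dhcpd.d/lrack8/vlan-13.conf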
Step 4: netplan setup
There are examples of the netplan files that you are required to edit in Section 8.7, /etc/netplan files. There are also descriptions of how to edit them. You need to carry out the netplan setup changes in this step in the order they are given (4a - 4d).
4a) BMC management
Create a netplan setup for the BMC management subnet interface of the POD7 management server using the new management network address 10.1.7.150.
4b) IPU-Gateway management
Create a netplan setup for the IPU-Gateway management subnet interface of the POD7 management server using the new management network address 10.2.7.150.
4c) POD7 RNIC interface
Create a netplan setup for the POD7 management server RNIC interface using the new management network address 10.5.7.150.
4d) POD8 RNIC interface
Note
This part of the step is only required if there is no spine switch enabling full RDMA connectivity between the two IPU‑POD64 racks.
Create a netplan setup for the POD8 management server RNIC interface using the new management network address 10.5.8.150.
Step 5: Update vlan-11.conf files
1) Modify /etc/dhcp/dhcpd.d/lrack7/vlan-11.conf on POD7 using the new network addresses. For example:
# server RNIC addresses also set by the 150 server’s DHCP server
# ONLY: if rack has 4 servers
# lr7-server1mx is setup using netplan to 10.5.7.150
host lr7-server2mx { hardware ethernet 1c:36:da:4b:ea:ef; fixed-address 10.5.7.151; }
host lr7-Server3mx { hardware ethernet 0c:44:a1:20:7c:83; fixed-address 10.5.7.152; }
host lr7-Server4mx { hardware ethernet 0c:44:a1:20:80:a3; fixed-address 10.5.7.153; }
Once you have edited this file, follow 2) or 3) depending on whether you have spine switches or not.
Either:
2) Modify /etc/dhcp/dhcpd.d/lrack8/vlan-11.conf on POD7 using the new network addresses.
Note
This is only required if you have spine switches enabling full RDMA connectivity between the two IPU‑POD64 racks.
# server RNIC addresses also set by the Pod7 management server’s DHCP server
# ONLY: if rack has 4 servers
host lr8-server1mx { hardware ethernet 1c:36:da:4b:ea:ef; fixed-address 10.5.8.150; }
host lr8-server2mx { hardware ethernet 1c:36:da:4b:ea:ef; fixed-address 10.5.8.151; }
host lr8-Server3mx { hardware ethernet 0c:44:a1:20:7c:83; fixed-address 10.5.8.152; }
host lr8-Server4mx { hardware ethernet 0c:44:a1:20:80:a3; fixed-address 10.5.8.153; }
Or:
3) Modify /etc/dhcp/dhcpd.d/lrack8/vlan-11.conf on POD8 using the new network addresses.
Note
This is only required if you do NOT have spine switches enabling full RDMA connectivity between the two IPU‑POD64 racks.
# server RNIC addresses also set by the Pod8 management server’s DHCP server
# ONLY: if rack has 4 servers
# lr8-server1mx is setup using netplan to 10.5.8.150
host lr8-server2mx { hardware ethernet 1c:36:da:4b:ea:ef; fixed-address 10.5.8.151; }
host lr8-Server3mx { hardware ethernet 0c:44:a1:20:7c:83; fixed-address 10.5.8.152; }
host lr8-Server4mx { hardware ethernet 0c:44:a1:20:80:a3; fixed-address 10.5.8.153; }
8.4.3. Phase 2: Activation of new configuration
This section describes how to activate the new configuration to combine the two IPU‑POD64 racks into an IPU‑POD128. While this is ongoing users will not be able to use either of the two IPU‑POD64 racks so this phase should be carried out during a maintenance window that avoids peak user access.
Warning
It is important that you do NOT apply the new configuration in this phase until you have carried out all the steps in Section 8.4.2, Phase 1: Edit configuration files.
Step 6: Inform users of down time
You need to inform all users of the two IPU‑POD64 racks (POD7 and POD8 in this example) that they will be unavailable for use while they are switched over to form a single IPU‑POD128. This activation of the new configuration will generally take a couple of hours.
Warning
You need to follow these instructions in the order they are given to avoid problems with the configuration activation. Do NOT enable VLAN13 trunking between the POD7 and POD8 management switches at this stage.
Step 7: POD8 DHCP server
7a) The local POD8 DHCP server must not service requests from the 1GbE management interface (vlan-13.conf). You therefore need to disable vlan-13.conf by editing the current DHCP setup ipum-dhcp.conf and removing the include of the vlan-13.conf file, as shown in Section 8.6.3, POD8: /etc/dhcp/dhcpd.d/ipum-dhcp.conf.
Note
The following is only required if you do NOT have spine switches enabling full RDMA connectivity between the two IPU‑POD64 racks.
7b) Use the DHCP setup and file location /etc/dhcp/dhcpd.d/lrack8/ for vlan-11.conf to use the new IP address scheme for POD8. Then restart the DHCP server on POD8 with this command:
sudo systemctl restart isc-dhcp-server
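If the restart fails, or you want to validate the configuration first, dhcpd can parse the config without serving anything (-t tests the configuration file given with -cf):
sudo dhcpd -t -cf /etc/dhcp/dhcpd.conf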
Step 8: POD7 DHCP server
On POD7 you need to edit /etc/dhcp/dhcpd.conf and include /etc/dhcp/dhcpd.d/ipum-dhcp.conf as described in Section 8.6.1, POD7 and POD8: /etc/dhcp/dhcpd.conf. This dhcpd.conf file will then point to the new /etc/dhcp/dhcpd.d/lrack7 and /etc/dhcp/dhcpd.d/lrack8 config files.
You then need to split the vlan-13.conf file into two files, one containing IPU-Gateway ports only and the other containing BMC ports only, as sketched below.
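For example, assuming the host entries follow the lrN-ipumMbmc / lrN-ipumMgw naming shown in Section 8.6.5 (adjust the patterns to your actual host names, and repeat for lrack8):
sudo grep 'bmc' /etc/dhcp/dhcpd.d/lrack7/vlan-13.conf | sudo tee /etc/dhcp/dhcpd.d/lrack7/ipum-bmc.conf
sudo grep 'gw' /etc/dhcp/dhcpd.d/lrack7/vlan-13.conf | sudo tee /etc/dhcp/dhcpd.d/lrack7/ipum-gw.conf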
Restart the DHCP server on POD7 with this command:
sudo systemctl restart isc-dhcp-server
Step 9: Restart IPU-M2000s on POD7
Next you need to restart all the IPU-M2000s on POD7 by using the BMC controller to manually power cycle them with the following commands:
rack_tool.py power-cycle
rack_tool.py run-command -c reboot -d bmc
This will use the old /home/ipuuser/.rack_tool/rack_config.json file, so no network setup change should have been activated before this.
The IPU-M2000s will reboot and request new IP addresses, as enabled above.
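One way to confirm that the IPU-M2000s have requested and received their new addresses is to watch the DHCP server log on POD7; a sketch, assuming isc-dhcp-server logs to the systemd journal:
sudo journalctl -u isc-dhcp-server --since "15 minutes ago" | grep DHCPACK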
Step 10: netplan configuration on POD7
You need to change to the new netplan configuration for management interfaces on POD7 and then activate it.
Step 11: Update rack_config files on POD7
11a) Save the old rack_config.json file on POD7 (it is not required any more):
cp /home/ipuuser/.rack_tool/rack_config.json /home/ipuuser/.rack_tool/rack_config.json_pod64
11b) Update the rack_config.json file on POD7 to an IPU‑POD128 setup:
cp /home/ipuuser/.rack_tool/rack_config.json_pod128 /home/ipuuser/.rack_tool/rack_config.json
More details about the contents of rack_config.json_pod128 can be found in Step 1: For POD7 and POD8.
11c) Use rack_tool on POD7 to verify that the new POD7 IPU-M2000, IPU-Gateway and BMC IP addresses have been set up correctly:
rack_tool.py status
The POD8 IPU-M2000s will fail at this point as they still have their old IP addresses and there is no trunking over VLAN13 between the switches enabled yet.
Step 12: Restart IPU-M2000s on POD8
12a) Restart the IPU-M2000s on POD8 by using the BMC controller to manually power cycle them with the following commands run on POD8:
rack_tool.py power-cycle
rack_tool.py run-command -c reboot -d bmc
This will use the old /home/ipuuser/.rack_tool/rack_config.json file.
12b) Save the old rack_config.json file on POD8 (it is not required any more):
cp /home/ipuuser/.rack_tool/rack_config.json /home/ipuuser/.rack_tool/rack_config.json_pod64
12c) Update the rack_config.json file on POD8 to an IPU‑POD128 setup:
cp /home/ipuuser/.rack_tool/rack_config.json_pod128 /home/ipuuser/.rack_tool/rack_config.json
More details about the contents of rack_config.json_pod128 can be found in Step 1: For POD7 and POD8.
12d) Use rack_tool on POD8 to verify that the new POD8 IPU-M2000 IP addresses have been set up correctly:
rack_tool.py status
Step 13: Verify IPU-M2000 interface access
Note
This step is only required if the IPU‑POD128 is NOT fully connected with spine switches.
Run the following rack_tool command on POD7 to verify that there is no access to the IPU-M2000 interfaces on POD8 from POD7:
rack_tool.py status
The IPU-M2000 RNICs on POD8 will fail as they are not reachable from POD7 unless there are spine switches.
Step 14: Create V-IPU cluster on POD7
Create a new V-IPU cluster on POD7 and add all the V-IPU agents from both IPU‑POD64 racks (POD7 and POD8) to the cluster. There is one V-IPU agent per IPU-Gateway on each IPU-M2000, so there will be 32 in total. Make sure the cluster is added as a torus if the IPU-Link and GW-Link cables are connected as a loop. To do this you need to use the --cluster-topology looped and --topology torus options. The --cluster-topology argument defines the GW-Link topology (horizontal torus) and the --topology argument defines the IPU-Link topology (vertical torus).
For example:
vipu-admin create cluster cl128 --num-ilds 2 --topology torus --cluster-topology looped --agents ${ALL_IPUM_NAMES_FROM_vipu_list_agents}
Step 15: Test access to V-IPU agents
Use the V-IPU test command to test access to V-IPU agents for both POD7 and POD8. For example:
vipu-admin test cluster cl128
For more details on how to use this command refer to the V-IPU guides (V-IPU user guide and V-IPU administrator guide).
Step 16: Create partitions on POD7
Use the IPUOF_VIPU_API environment variables to create user-specific partitions on POD7 with the new IP address setup. These environment variables are:
- IPUOF_VIPU_API_HOST: The IP address of the server running the V-IPU controller. Required.
- IPUOF_VIPU_API_PORT: The port to connect to the V-IPU controller. Optional. The default is 8090.
- IPUOF_VIPU_API_PARTITION_ID: The name of the partition to use. Required.
- IPUOF_VIPU_API_GCD_ID: The ID of the GCD you want to use. Required for multi-GCD systems.
- IPUOF_VIPU_API_TIMEOUT: Set the time-out for client calls in seconds. Optional. Default 200.
More details about how to use these environment variables are given in the V-IPU user guide.
Use gc-info on the management server to check that the IPUs in the partition are visible. More details about how to use gc-info are given in the gc-info documentation.
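For example, a user session on the POD7 management server might look like this (the partition name p128 is hypothetical; use the name of the partition you created):
export IPUOF_VIPU_API_HOST=10.3.7.150
export IPUOF_VIPU_API_PARTITION_ID=p128
gc-info --list-devices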
Step 17: POD7 RNIC addresses
Run netplan apply on all POD7 servers so that they get new RNIC IP addresses. Then use gc-info to check that they have access to the partition setup and that all POD7 servers can reach all the IPU-M2000s they should have access to.
Step 18: POD8 RNIC addresses
Run netplan apply on all POD8 servers so that they get new RNIC IP addresses. Then use gc-info to check that they have access to the partition setup and that all POD8 servers can reach all the IPU-M2000s they should have access to.
Step 19: rsyslog.d
Edit the configuration in /home/ipuuser/.rack_tool/root-overlay/etc/rsyslog.d (the 99-loghost.conf file shown in Section 8.5.1) to point to the POD7 IP (10.2.7.150).
Step 20: chrony.conf
Edit /home/ipuuser/.rack_tool/root-overlay/etc/chrony/chrony.conf to point to the POD7 IP (10.2.7.150).
Step 21: Refresh overlay files on POD7
Use rack_tool to refresh the overlay files:
rack_tool.py update-root-overlay
Step 22: Check IPU-M2000s logging to POD7
You need to check whether all 32 IPU-M2000s are logging to the POD7 syslog. This will either be located in /var/log/syslog or in a specified location if you have filters in place for IPU-M2000 logs. A common syslog filter is /etc/rsyslog.d/99_ipum.conf:
# Graphcore.ai - Config Management
# This file is managed by puppet, so any manual changes will be overwritten automatically!
$template tplremote,"%timegenerated% %HOSTNAME% %fromhost-ip% %syslogtag%%msg:::drop-last-lf%\n"
$template bmclogdir,"/var/log/ipumlogs/bmclogs/%fromhost-ip%.log"
$template gwlogdir,"/var/log/ipumlogs/gwlogs/%fromhost-ip%.log"
if $fromhost-ip startswith '10.1' then ?bmclogdir;tplremote
if $fromhost-ip startswith '10.2' then ?gwlogdir;tplremote
& ~
Run this command to check if the syslog server is running on POD7’s management server (server1):
systemctl | grep syslog
If the syslog server is not running then you need to start it.
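For example, assuming the management server uses rsyslog (as in the filter example above):
sudo systemctl start rsyslog
sudo systemctl enable rsyslog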
Step 23: Check IPU-M2000s have NTP date and time
You need to check whether all 32 IPU-M2000s are taking their date and time from the NTP server on POD7.
You can check this by logging in to each IPU-Gateway with ssh as itadmin and checking the system time and date.
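For example, for the first IPU-M2000 in lrack7 (chronyc tracking reports NTP synchronisation status, assuming the chrony client is installed on the IPU-Gateway):
ssh itadmin@10.2.7.1 date
ssh itadmin@10.2.7.1 chronyc tracking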
Step 24: Run ML application
Run a machine learning application on the IPU‑POD128 to check that it is correctly using both POD7 and POD8 resources. This will verify that the merge to an IPU‑POD128 has been successful. You can find examples of ML applications to run in Graphcore’s GitHub examples repo.
8.5. IPU-M2000 setup files
8.5.1. Syslog and chrony on the IPU-Gateway
Update the chrony.conf file in the root_overlay files.
Below is a truncated example of a chrony.conf file.
(venv) ipuuser@lr23-poplar1:~/IPU_M_SW-2.3.0$ more ~/.rack_tool/root-overlay/etc/chrony/chrony.conf
# Welcome to the chrony configuration file. See chrony.conf(5) for more
# information about usable directives.
pool 10.2.7.150 iburst
Update the rsyslog.d configuration file in the root_overlay files.
(venv) ipuuser@lr23-poplar1:~/IPU_M_SW-2.3.0$ more ~/.rack_tool/root-overlay/etc/rsyslog.d/99-loghost.conf
*.* @10.2.7.150:514
Update the root_overlay files with rack_tool:
rack_tool.py update-root-overlay
8.5.2. Syslog on BMC
8.6. DHCP files
8.6.1. POD7 and POD8: /etc/dhcp/dhcpd.conf
This setup assumes that the IPU‑POD64 switches, server BMCs and PDUs are connected to the IT network (see Section 3, IPU-POD64 rack assembly), leaving the BMC, IPU-Gateway and RDMA networking to connect to the management server(s). The management server in POD7 serves DHCP for all BMC, IPU-Gateway and POD7 RDMA networking; the management server in POD8 is the DHCP server for POD8 RDMA networking.
root@gbnwp-pod013-1:/home/user1# cat /etc/dhcp/dhcpd.conf
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
authoritative;
log-facility local7;
include "/etc/dhcp/dhcpd.d/ipum-dhcp.conf";
8.6.2. POD7: /etc/dhcp/dhcpd.d/ipum-dhcp.conf
root@gbnwp-pod007-1:/home/user1# cat /etc/dhcp/dhcpd.d/ipum-dhcp.conf
shared-network "L2" {
# IPU-M BMC subnet
subnet 10.1.0.0 netmask 255.255.0.0 {
option subnet-mask 255.255.0.0;
default-lease-time 600;
max-lease-time 7200;
option ntp-servers 10.1.7.150;
}
# IPU-M GW subnet
subnet 10.2.0.0 netmask 255.255.0.0 {
option subnet-mask 255.255.0.0;
default-lease-time 600;
max-lease-time 7200;
option ntp-servers 10.2.7.150;
}
# V-IPU management subnet i.e. V-IPU CLI from one host
# to V-IPU Controller on management server
subnet 10.3.0.0 netmask 255.255.0.0 {
option subnet-mask 255.255.0.0;
default-lease-time 600;
max-lease-time 7200;
option ntp-servers 10.3.7.150;
}
}
# list all lrack numbers that are served by this host here
# lrack7
include "/etc/dhcp/dhcpd.d/lrack7/ipum-bmc.conf";
include "/etc/dhcp/dhcpd.d/lrack7/ipum-gw.conf";
include "/etc/dhcp/dhcpd.d/lrack7/poplar-mgmt.conf";
# lrack8
include "/etc/dhcp/dhcpd.d/lrack8/ipum-bmc.conf";
include "/etc/dhcp/dhcpd.d/lrack8/ipum-gw.conf";
include "/etc/dhcp/dhcpd.d/lrack8/poplar-mgmt.conf";
# IPU-M IPUoF subnet
subnet 10.5.0.0 netmask 255.255.0.0 {
option subnet-mask 255.255.0.0;
default-lease-time 600;
max-lease-time 7200;
range 10.5.7.240 10.5.7.244;
}
# list all lrack numbers that are served by this host here
include "/etc/dhcp/dhcpd.d/lrack7/ipum-rnic.conf";
8.6.3. POD8: /etc/dhcp/dhcpd.d/ipum-dhcp.conf
root@gbnwp-pod8-1:/home/hhoeg# cat /etc/dhcp/dhcpd.d/ipum-dhcp.conf
shared-network "L2" {
# IPU-M BMC subnet
# V-IPU management subnet i.e. V-IPU CLI from one host
# to V-IPU Controller on management server
subnet 10.3.0.0 netmask 255.255.0.0 {
option subnet-mask 255.255.0.0;
default-lease-time 600;
max-lease-time 7200;
option ntp-servers 10.3.8.150;
}
}
# list all lrack numbers that are served by this host here
# lrack8
include "/etc/dhcp/dhcpd.d/lrack8/poplar-mgmt.conf";
# IPU-M IPUoF subnet
subnet 10.5.0.0 netmask 255.255.0.0 {
option subnet-mask 255.255.0.0;
default-lease-time 600;
max-lease-time 7200;
range 10.5.8.240 10.5.8.244;
}
# list all lrack numbers that are served by this host here
include "/etc/dhcp/dhcpd.d/lrack8/ipum-rnic.conf";
8.6.4. POD7 and POD8: /etc/dhcp/dhcpd.d/ files
root@gbnwp-pod013-1:/etc/dhcp/dhcpd.d# ll
total 16
drwxrwx--- 4 root dhcpd 164 Mar 1 09:57 ./
dr-xrwxr-x 6 root dhcpd 4096 Feb 18 16:46 ../
-rw-r--r-- 1 root root 1449 Mar 1 09:55 ipum-dhcp.conf
drwxr-xr-x 2 root root 93 Mar 11 15:28 lrack7/
drwxr-xr-x 2 root root 93 Mar 1 09:57 lrack8/
8.6.5. POD7: /etc/dhcp/dhcpd.d/lrack7 files
ipum-bmc.conf
root@gbnwp-pod7-1:/etc/dhcp/dhcpd.d/lrack7# cat ipum-bmc.conf
host lr7-ipum1bmc { hardware ethernet 70:69:79:20:24:D0; fixed-address 10.1.7.1; }
host lr7-ipum2bmc { hardware ethernet 70:69:79:20:14:48; fixed-address 10.1.7.2; }
host lr7-ipum3bmc { hardware ethernet 70:69:79:20:12:BC; fixed-address 10.1.7.3; }
host lr7-ipum4bmc { hardware ethernet 70:69:79:20:13:2C; fixed-address 10.1.7.4; }
host lr7-ipum5bmc { hardware ethernet 70:69:79:20:13:04; fixed-address 10.1.7.5; }
host lr7-ipum6bmc { hardware ethernet 70:69:79:20:14:68; fixed-address 10.1.7.6; }
host lr7-ipum7bmc { hardware ethernet 70:69:79:20:14:D4; fixed-address 10.1.7.7; }
host lr7-ipum8bmc { hardware ethernet 70:69:79:20:14:BC; fixed-address 10.1.7.8; }
host lr7-ipum9bmc { hardware ethernet 70:69:79:20:14:38; fixed-address 10.1.7.9; }
host lr7-ipum10bmc { hardware ethernet 70:69:79:20:14:94; fixed-address 10.1.7.10; }
host lr7-ipum11bmc { hardware ethernet 70:69:79:20:13:F4; fixed-address 10.1.7.11; }
host lr7-ipum12bmc { hardware ethernet 70:69:79:20:12:74; fixed-address 10.1.7.12; }
host lr7-ipum13bmc { hardware ethernet 70:69:79:20:12:6C; fixed-address 10.1.7.13; }
host lr7-ipum14bmc { hardware ethernet 70:69:79:20:14:84; fixed-address 10.1.7.14; }
host lr7-ipum15bmc { hardware ethernet 70:69:79:20:16:BC; fixed-address 10.1.7.15; }
host lr7-ipum16bmc { hardware ethernet 70:69:79:20:17:A4; fixed-address 10.1.7.16; }
ipum-gw.conf
root@gbnwp-pod7-1:/etc/dhcp/dhcpd.d/lrack7# cat ipum-gw.conf
host lr7-ipum1gw { hardware ethernet 70:69:79:20:24:D1; fixed-address 10.2.7.1; }
host lr7-ipum2gw { hardware ethernet 70:69:79:20:14:49; fixed-address 10.2.7.2; }
host lr7-ipum3gw { hardware ethernet 70:69:79:20:12:BD; fixed-address 10.2.7.3; }
host lr7-ipum4gw { hardware ethernet 70:69:79:20:13:2D; fixed-address 10.2.7.4; }
host lr7-ipum5gw { hardware ethernet 70:69:79:20:13:05; fixed-address 10.2.7.5; }
host lr7-ipum6gw { hardware ethernet 70:69:79:20:14:69; fixed-address 10.2.7.6; }
host lr7-ipum7gw { hardware ethernet 70:69:79:20:14:D5; fixed-address 10.2.7.7; }
host lr7-ipum8gw { hardware ethernet 70:69:79:20:14:BD; fixed-address 10.2.7.8; }
host lr7-ipum9gw { hardware ethernet 70:69:79:20:14:39; fixed-address 10.2.7.9; }
host lr7-ipum10gw { hardware ethernet 70:69:79:20:14:95; fixed-address 10.2.7.10; }
host lr7-ipum11gw { hardware ethernet 70:69:79:20:13:F5; fixed-address 10.2.7.11; }
host lr7-ipum12gw { hardware ethernet 70:69:79:20:12:75; fixed-address 10.2.7.12; }
host lr7-ipum13gw { hardware ethernet 70:69:79:20:12:6D; fixed-address 10.2.7.13; }
host lr7-ipum14gw { hardware ethernet 70:69:79:20:14:85; fixed-address 10.2.7.14; }
host lr7-ipum15gw { hardware ethernet 70:69:79:20:16:BD; fixed-address 10.2.7.15; }
host lr7-ipum16gw { hardware ethernet 70:69:79:20:17:A5; fixed-address 10.2.7.16; }
ipum-rnic.conf
root@gbnwp-pod7-1:/etc/dhcp/dhcpd.d/lrack7# cat ipum-rnic.conf
host lr7-ipum1mx { hardware ethernet 0c:42:a1:78:90:35; fixed-address 10.5.7.1; }
host lr7-ipum2mx { hardware ethernet 0c:42:a1:78:90:d5; fixed-address 10.5.7.2; }
host lr7-ipum3mx { hardware ethernet 0c:42:a1:78:7e:55; fixed-address 10.5.7.3; }
host lr7-ipum4mx { hardware ethernet 0c:42:a1:78:82:65; fixed-address 10.5.7.4; }
host lr7-ipum5mx { hardware ethernet 0c:42:a1:84:ec:3d; fixed-address 10.5.7.5; }
host lr7-ipum6mx { hardware ethernet 0c:42:a1:78:95:65; fixed-address 10.5.7.6; }
host lr7-ipum7mx { hardware ethernet 0c:42:a1:78:96:c5; fixed-address 10.5.7.7; }
host lr7-ipum8mx { hardware ethernet 0c:42:a1:84:f5:85; fixed-address 10.5.7.8; }
host lr7-ipum9mx { hardware ethernet 0c:42:a1:84:ea:85; fixed-address 10.5.7.9; }
host lr7-ipum10mx { hardware ethernet 0c:42:a1:78:7e:65; fixed-address 10.5.7.10; }
host lr7-ipum11mx { hardware ethernet 0c:42:a1:78:8a:0d; fixed-address 10.5.7.11; }
host lr7-ipum12mx { hardware ethernet 0c:42:a1:78:85:55; fixed-address 10.5.7.12; }
host lr7-ipum13mx { hardware ethernet 0c:42:a1:78:89:f5; fixed-address 10.5.7.13; }
host lr7-ipum14mx { hardware ethernet 0c:42:a1:78:8a:4d; fixed-address 10.5.7.14; }
host lr7-ipum15mx { hardware ethernet 0c:42:a1:84:e7:bd; fixed-address 10.5.7.15; }
host lr7-ipum16mx { hardware ethernet 0c:42:a1:78:87:25; fixed-address 10.5.7.16; }
8.6.6. POD7: /etc/dhcp/dhcpd.d/lrack8 files
ipum-bmc.conf
root@gbnwp-pod7-1:/etc/dhcp/dhcpd.d/lrack8# cat ipum-bmc.conf
host lr8-ipum1bmc { hardware ethernet 70:69:79:20:24:D0; fixed-address 10.1.8.1; }
host lr8-ipum2bmc { hardware ethernet 70:69:79:20:14:48; fixed-address 10.1.8.2; }
host lr8-ipum3bmc { hardware ethernet 70:69:79:20:12:BC; fixed-address 10.1.8.3; }
host lr8-ipum4bmc { hardware ethernet 70:69:79:20:13:2C; fixed-address 10.1.8.4; }
host lr8-ipum5bmc { hardware ethernet 70:69:79:20:13:04; fixed-address 10.1.8.5; }
host lr8-ipum6bmc { hardware ethernet 70:69:79:20:14:68; fixed-address 10.1.8.6; }
host lr8-ipum7bmc { hardware ethernet 70:69:79:20:14:D4; fixed-address 10.1.8.7; }
host lr8-ipum8bmc { hardware ethernet 70:69:79:20:14:BC; fixed-address 10.1.8.8; }
host lr8-ipum9bmc { hardware ethernet 70:69:79:20:14:38; fixed-address 10.1.8.9; }
host lr8-ipum10bmc { hardware ethernet 70:69:79:20:14:94; fixed-address 10.1.8.10; }
host lr8-ipum11bmc { hardware ethernet 70:69:79:20:13:F4; fixed-address 10.1.8.11; }
host lr8-ipum12bmc { hardware ethernet 70:69:79:20:12:74; fixed-address 10.1.8.12; }
host lr8-ipum13bmc { hardware ethernet 70:69:79:20:12:6C; fixed-address 10.1.8.13; }
host lr8-ipum14bmc { hardware ethernet 70:69:79:20:14:84; fixed-address 10.1.8.14; }
host lr8-ipum15bmc { hardware ethernet 70:69:79:20:16:BC; fixed-address 10.1.8.15; }
host lr8-ipum16bmc { hardware ethernet 70:69:79:20:17:A4; fixed-address 10.1.8.16; }
ipum-gw.conf
root@gbnwp-pod7-1:/etc/dhcp/dhcpd.d/lrack8# cat ipum-gw.conf
host lr8-ipum1gw { hardware ethernet 70:69:79:20:24:D1; fixed-address 10.2.8.1; }
host lr8-ipum2gw { hardware ethernet 70:69:79:20:14:49; fixed-address 10.2.8.2; }
host lr8-ipum3gw { hardware ethernet 70:69:79:20:12:BD; fixed-address 10.2.8.3; }
host lr8-ipum4gw { hardware ethernet 70:69:79:20:13:2D; fixed-address 10.2.8.4; }
host lr8-ipum5gw { hardware ethernet 70:69:79:20:13:05; fixed-address 10.2.8.5; }
host lr8-ipum6gw { hardware ethernet 70:69:79:20:14:69; fixed-address 10.2.8.6; }
host lr8-ipum7gw { hardware ethernet 70:69:79:20:14:D5; fixed-address 10.2.8.7; }
host lr8-ipum8gw { hardware ethernet 70:69:79:20:14:BD; fixed-address 10.2.8.8; }
host lr8-ipum9gw { hardware ethernet 70:69:79:20:14:39; fixed-address 10.2.8.9; }
host lr8-ipum10gw { hardware ethernet 70:69:79:20:14:95; fixed-address 10.2.8.10; }
host lr8-ipum11gw { hardware ethernet 70:69:79:20:13:F5; fixed-address 10.2.8.11; }
host lr8-ipum12gw { hardware ethernet 70:69:79:20:12:75; fixed-address 10.2.8.12; }
host lr8-ipum13gw { hardware ethernet 70:69:79:20:12:6D; fixed-address 10.2.8.13; }
host lr8-ipum14gw { hardware ethernet 70:69:79:20:14:85; fixed-address 10.2.8.14; }
host lr8-ipum15gw { hardware ethernet 70:69:79:20:16:BD; fixed-address 10.2.8.15; }
host lr8-ipum16gw { hardware ethernet 70:69:79:20:17:A5; fixed-address 10.2.8.16; }
8.6.7. POD8: /etc/dhcp/dhcpd.d/lrack8 files
Note
This setup is ONLY required on POD8 if there is no spine switch. In this situation you will also need a DHCP server to run on POD8.
ipum-rnic.conf
root@gbnwp-pod8-1:/etc/dhcp/dhcpd.d/lrack8# cat ipum-rnic.conf
host lr8-ipum1mx { hardware ethernet 0c:42:a1:78:90:35; fixed-address 10.5.8.1; }
host lr8-ipum2mx { hardware ethernet 0c:42:a1:78:90:d5; fixed-address 10.5.8.2; }
host lr8-ipum3mx { hardware ethernet 0c:42:a1:78:7e:55; fixed-address 10.5.8.3; }
host lr8-ipum4mx { hardware ethernet 0c:42:a1:78:82:65; fixed-address 10.5.8.4; }
host lr8-ipum5mx { hardware ethernet 0c:42:a1:84:ec:3d; fixed-address 10.5.8.5; }
host lr8-ipum6mx { hardware ethernet 0c:42:a1:78:95:65; fixed-address 10.5.8.6; }
host lr8-ipum7mx { hardware ethernet 0c:42:a1:78:96:c5; fixed-address 10.5.8.7; }
host lr8-ipum8mx { hardware ethernet 0c:42:a1:84:f5:85; fixed-address 10.5.8.8; }
host lr8-ipum9mx { hardware ethernet 0c:42:a1:84:ea:85; fixed-address 10.5.8.9; }
host lr8-ipum10mx { hardware ethernet 0c:42:a1:78:7e:65; fixed-address 10.5.8.10; }
host lr8-ipum11mx { hardware ethernet 0c:42:a1:78:8a:0d; fixed-address 10.5.8.11; }
host lr8-ipum12mx { hardware ethernet 0c:42:a1:78:85:55; fixed-address 10.5.8.12; }
host lr8-ipum13mx { hardware ethernet 0c:42:a1:78:89:f5; fixed-address 10.5.8.13; }
host lr8-ipum14mx { hardware ethernet 0c:42:a1:78:8a:4d; fixed-address 10.5.8.14; }
host lr8-ipum15mx { hardware ethernet 0c:42:a1:84:e7:bd; fixed-address 10.5.8.15; }
host lr8-ipum16mx { hardware ethernet 0c:42:a1:78:87:25; fixed-address 10.5.8.16; }
8.7. /etc/netplan files
8.7.1. 1GbE management interface on POD7 server
The 1GbE management interface is required to have the setup described in this section. The eno1 interface has addresses on three IP subnets, used for communicating with the BMC ports, the IPU-Gateway ports, and between servers (for example, V-IPU CLI to V-IPU controller on the management server, and Poplar instance to V-IPU controller).
3: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether bc:97:e1:46:00:b6 brd ff:ff:ff:ff:ff:ff
inet 10.1.7.150/16 brd 10.1.255.255 scope global eno1
valid_lft forever preferred_lft forever
inet 10.2.7.150/16 brd 10.2.255.255 scope global eno1
valid_lft forever preferred_lft forever
inet 10.3.7.150/16 brd 10.3.255.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::be97:e1ff:fe46:b6/64 scope link
valid_lft forever preferred_lft forever
8.7.2. RNIC interfaces on the servers
The IPU‑POD128 contains two IPU‑POD64 racks, and each of the IPU‑POD64 racks can contain up to four servers. Each of these servers has an RNIC interface which needs to be configured.
You need to set up the RNIC interfaces as described in this section. If there are multiple servers on POD7 (up to four), the management server (POD7 server 1) will control the remaining servers (POD7 servers 2, 3 and 4) with DHCP. POD7 server 1 will be running the V-IPU server and also a DHCP server to control the IP addressing for the POD7 IPU-M2000s and servers (POD7 servers 2, 3 and 4). Server 1 on POD8 only runs a DHCP server for the POD8 IPU-M2000s and servers (POD8 servers 2, 3 and 4).
7: enp161s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 0c:42:a1:1e:79:27 brd ff:ff:ff:ff:ff:ff
inet 10.5.7.150/16 brd 10.5.255.255 scope global enp161s0f1
valid_lft forever preferred_lft forever
inet6 fe80::e42:a1ff:fe1e:7927/64 scope link
valid_lft forever preferred_lft forever
This can be achieved by setting up the /etc/netplan/01-netcfg.yaml file as follows:
network:
version: 2
renderer: networkd
ethernets:
eno1:
addresses:
- 10.1.7.150/16
- 10.2.7.150/16
- 10.3.7.150/16
eno2:
dhcp4: yes
enp161s0f0:
dhcp4: yes
enp161s0f1:
addresses:
- 10.5.7.150/16
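After editing the file, you can check and apply it with netplan (netplan try reverts automatically if the new configuration is not confirmed, which guards against locking yourself out):
sudo netplan try
sudo netplan apply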
If there are more servers in the POD7 rack (up to four), each will require an /etc/netplan/01-netcfg.yaml file with the enp161s0f1 interface edited accordingly. For example, server 2 in the rack would have /etc/netplan/01-netcfg.yaml as follows:
network:
version: 2
renderer: networkd
ethernets:
eno2:
dhcp4: yes
enp161s0f0:
dhcp4: yes
enp161s0f1:
addresses:
- 10.5.7.151/16
Server 3 would have enp161s0f1 as 10.5.7.152/16 and server 4 as 10.5.7.153/16.
8.7.3. POD7: rack_config.json file
This file on POD7 is required for software upgrades and connectivity checks.
With a spine switch installed, this file will contain entries for all 32 IPU-M2000s in the IPU‑POD128, as shown in the code example below. If there is no spine switch, this file will only contain entries for the POD7 IPU-M2000s, and a similar rack_config.json file on POD8, containing entries for the POD8 IPU-M2000s, will be required for RNIC connectivity checks on the POD8 IPU-M2000s.
root@gbnwp-pod7-1:/etc/dhcp/dhcpd.d/lrack13# cat /home/ipuuser/.rack_tool/rack_config.json
{
"gw_root_overlay": "/home/ipuuser/.rack_tool/root-overlay",
"global_credentials": {
"bmc_username": "root",
"bmc_passwd": "0penBmc",
"gw_username": "itadmin",
"gw_passwd": "ChangeMeFdh5P"
},
"machines": [
{
"name": "lr7_ipum1",
"bmc_ip": "10.1.7.1",
"gw_ip": "10.2.7.1",
"rnic_ip": "10.5.7.1"
},
{
"name": "lr7_ipum2",
"bmc_ip": "10.1.7.2",
"gw_ip": "10.2.7.2",
"rnic_ip": "10.5.7.2"
},
{
"name": "lr7_ipum3",
"bmc_ip": "10.1.7.3",
"gw_ip": "10.2.7.3",
"rnic_ip": "10.5.7.3"
},
{
"name": "lr7_ipum4",
"bmc_ip": "10.1.7.4",
"gw_ip": "10.2.7.4",
"rnic_ip": "10.5.7.4"
},
{
"name": "lr7_ipum5",
"bmc_ip": "10.1.7.5",
"gw_ip": "10.2.7.5",
"rnic_ip": "10.5.7.5"
},
{
"name": "lr7_ipum6",
"bmc_ip": "10.1.7.6",
"gw_ip": "10.2.7.6",
"rnic_ip": "10.5.7.6"
},
{
"name": "lr7_ipum7",
"bmc_ip": "10.1.7.7",
"gw_ip": "10.2.7.7",
"rnic_ip": "10.5.7.7"
},
{
"name": "lr7_ipum8",
"bmc_ip": "10.1.7.8",
"gw_ip": "10.2.7.8",
"rnic_ip": "10.5.7.8"
},
{
"name": "lr7_ipum9",
"bmc_ip": "10.1.7.9",
"gw_ip": "10.2.7.9",
"rnic_ip": "10.5.7.9"
},
{
"name": "lr7_ipum10",
"bmc_ip": "10.1.7.10",
"gw_ip": "10.2.7.10",
"rnic_ip": "10.5.7.10"
},
{
"name": "lr7_ipum11",
"bmc_ip": "10.1.7.11",
"gw_ip": "10.2.7.11",
"rnic_ip": "10.5.7.11"
},
{
"name": "lr7_ipum12",
"bmc_ip": "10.1.7.12",
"gw_ip": "10.2.7.12",
"rnic_ip": "10.5.7.12"
},
{
"name": "lr7_ipum13",
"bmc_ip": "10.1.7.13",
"gw_ip": "10.2.7.13",
"rnic_ip": "10.5.7.13"
},
{
"name": "lr7_ipum14",
"bmc_ip": "10.1.7.14",
"gw_ip": "10.2.7.14",
"rnic_ip": "10.5.7.14"
},
{
"name": "lr7_ipum15",
"bmc_ip": "10.1.7.15",
"gw_ip": "10.2.7.15",
"rnic_ip": "10.5.7.15"
},
{
"name": "lr7_ipum16",
"bmc_ip": "10.1.7.16",
"gw_ip": "10.2.7.16",
"rnic_ip": "10.5.7.16"
},
{
"name": "lr8_ipum1",
"bmc_ip": "10.1.8.1",
"gw_ip": "10.2.8.1",
"rnic_ip": "10.5.8.1"
},
{
"name": "lr8_ipum2",
"bmc_ip": "10.1.8.2",
"gw_ip": "10.2.8.2",
"rnic_ip": "10.5.8.2"
},
{
"name": "lr8_ipum3",
"bmc_ip": "10.1.8.3",
"gw_ip": "10.2.8.3",
"rnic_ip": "10.5.8.3"
},
{
"name": "lr8_ipum4",
"bmc_ip": "10.1.8.4",
"gw_ip": "10.2.8.4",
"rnic_ip": "10.5.8.4"
},
{
"name": "lr8_ipum5",
"bmc_ip": "10.1.8.5",
"gw_ip": "10.2.8.5",
"rnic_ip": "10.5.8.5"
},
{
"name": "lr8_ipum6",
"bmc_ip": "10.1.8.6",
"gw_ip": "10.2.8.6",
"rnic_ip": "10.5.8.6"
},
{
"name": "lr8_ipum7",
"bmc_ip": "10.1.8.7",
"gw_ip": "10.2.8.7",
"rnic_ip": "10.5.8.7"
},
{
"name": "lr8_ipum8",
"bmc_ip": "10.1.8.8",
"gw_ip": "10.2.8.8",
"rnic_ip": "10.5.8.8"
},
{
"name": "lr8_ipum9",
"bmc_ip": "10.1.8.9",
"gw_ip": "10.2.8.9",
"rnic_ip": "10.5.8.9"
},
{
"name": "lr8_ipum10",
"bmc_ip": "10.1.8.10",
"gw_ip": "10.2.8.10",
"rnic_ip": "10.5.8.10"
},
{
"name": "lr8_ipum11",
"bmc_ip": "10.1.8.11",
"gw_ip": "10.2.8.11",
"rnic_ip": "10.5.8.11"
},
{
"name": "lr8_ipum12",
"bmc_ip": "10.1.8.12",
"gw_ip": "10.2.8.12",
"rnic_ip": "10.5.8.12"
},
{
"name": "lr8_ipum13",
"bmc_ip": "10.1.8.13",
"gw_ip": "10.2.8.13",
"rnic_ip": "10.5.8.13"
},
{
"name": "lr8_ipum14",
"bmc_ip": "10.1.8.14",
"gw_ip": "10.2.8.14",
"rnic_ip": "10.5.8.14"
},
{
"name": "lr8_ipum15",
"bmc_ip": "10.1.8.15",
"gw_ip": "10.2.8.15",
"rnic_ip": "10.5.8.15"
},
{
"name": "lr8_ipum16",
"bmc_ip": "10.1.8.16",
"gw_ip": "10.2.8.16",
"rnic_ip": "10.5.8.16"
}
]
}