Failing Fibre Channel paths manually

Note in progress. Not to be believed!

Currently, if there is a fabric failure, the db servers notice the failure at the kernel level and reboot. The objective is to be able to run switchdisable on one of the FC switches without the server/cluster noticing. This would allow firmware updates on the FC switches and would also give protection against a failure in one of the fabrics.

It seems that if paths are failed manually through the multipathd interactive console (multipathd -k), the server survives without a reboot. So if the paths through one fabric are failed one by one, the server should not notice the port going offline afterwards, as there will be no active paths left through the failed port.
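
A minimal sketch of failing and restoring one path by hand (the path name sdc is hypothetical; take the real names from multipath -ll):

# fail one path so multipathd stops sending I/O down it
multipathd -k"fail path sdc"
# confirm the path is now marked failed
multipathd -k"show paths"
# bring it back once the switch work is done
multipathd -k"reinstate path sdc"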

Testing is being done on db05.

1. Get the WWNs on the requested fabric via SNMP from the required switch, as sketched below.
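
A rough sketch, assuming the switch exposes the FibreAlliance FCMGMT-MIB over SNMP v2c with community public (fcswitch1 is a placeholder hostname, and the OID is from memory, so verify both against the actual switch):

# walk connUnitPortWwn in the connUnitPortTable to list the port WWNs
snmpwalk -v2c -c public fcswitch1 1.3.6.1.3.94.1.10.1.10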

2. Use this to decide which PCI adapter we are on (host1 or host2); match the WWNs from step 1 against the hosts as shown below the listing.

ls -l /sys/class/fc_host/host*/device
lrwxrwxrwx 1 root root 0 Apr 30 10:04 /sys/class/fc_host/host1/device -> ../../../devices/pci0000:00/0000:00:07.0/0000:06:00.0/host1
lrwxrwxrwx 1 root root 0 Apr 30 10:04 /sys/class/fc_host/host2/device -> ../../../devices/pci0000:00/0000:00:07.0/0000:06:00.1/host2
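
To match host1/host2 against the WWNs seen on the switch, each host's WWPN can be read straight from sysfs:

# print the WWPN of each local FC host for comparison with the
# switch-side WWN list from step 1
cat /sys/class/fc_host/host*/port_name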

3. fdisk -l | grep -i cciss | (pipeline unfinished; presumably to separate the local cciss disks from the FC LUNs).
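
Whatever the unfinished pipeline was meant to filter, the FC LUNs and their per-host paths can be listed directly:

# show each multipath device and its paths; the H:C:T:L field on each
# path line (e.g. 1:0:0:1) ties the sdX device back to host1 or host2
multipath -ll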

 