Look up the Partition ID, adapter ID and VIO servers for the LPAR in the HMC (under the VIO virtual adapter list).
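If you prefer the HMC command line, the same details can be pulled with lssyscfg and lshwres; a sketch, where MS01 and lpar_name are placeholders for your managed system and LPAR:
$ lssyscfg -r lpar -m MS01 --filter "lpar_names=lpar_name" -F name,lpar_id,state
$ lshwres -r virtualio --rsubtype scsi -m MS01 --level lpar -F lpar_name,slot_num,remote_lpar_name,remote_slot_num
The second command lists every virtual SCSI adapter with its slot and the remote (VIOS) partition and slot it connects to.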
Log on to the LPAR and shut down the OS
# shutdown -p now
Shut down the LPAR in the HMC.
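This can also be done from the HMC command line with chsysstate; a sketch (MS01 and lpar_name are placeholders, and --immed skips the delayed shutdown):
$ chsysstate -r lpar -m MS01 -o shutdown -n lpar_name --immed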
Log on to both VIO servers and get a list of vhosts
$ lsdev -virtual | grep vhost | awk '{print $1}' | while read v
> do
> lsmap -vadapter $v | grep $v
> done
Match the Partition ID recorded from the HMC against the hex Client Partition ID in the list (partition 10, for example, appears as 0x0000000a), and the Adapter ID against the Cnn number in the Physloc, to determine the vhost ID.
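On VIOS levels whose lsmap supports -field and -fmt, the three values needed for the match can be printed in one pass instead of the loop above; a sketch:
$ lsmap -all -field svsa physloc clientid -fmt :
Each output line is then vhost:physloc:clientid, ready for the match.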
List the vhost's mappings to make sure it's the right one
$ lsmap -vadapter $vhost
Remove each of the hdisk mappings from the vhost on both VIO servers.
$ rmvdev -vtd $vdiskname
$ rmdev -dev $hdisk
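Where a vhost carries many mappings, the VTDs can be looped over rather than removed one by one; a sketch (it removes only the virtual target devices, so the backing hdisks still need rmdev as above):
$ lsmap -vadapter $vhost -field vtd -fmt : | tr ':' '\n' | while read vtd
> do
> rmvdev -vtd $vtd
> done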
When all the mappings are removed, remove the vhost itself
$ rmdev -dev $vhost
On the NIM server, list the vdisks and remove the mapping from the SVC to the VIO servers for each vdisk using the relevant script in /opt/support/svc
# ssh admin@SVC2 svcinfo lsvdisk | grep lpar_name
# /opt/support/svc/rmmaptoviosB22D.ksh $vdisk
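If the LPAR has several vdisks, the listing and the unmap can be chained; a sketch, assuming the script takes the vdisk name as its only argument (with -delim :, field 2 of the lsvdisk output is the name):
# ssh admin@SVC2 svcinfo lsvdisk -delim : | grep lpar_name | cut -d: -f2 | while read vdisk
> do
> /opt/support/svc/rmmaptoviosB22D.ksh $vdisk
> done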
On both VIO servers, count the disks before and after cfgdev to check that none reappear
$ lspv | wc -l
309
$ cfgdev
$ lspv | wc -l
309
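To make the check self-verifying, capture the two counts and compare them; a sketch, assuming the restricted padmin shell allows variable assignment and command substitution:
$ before=$(lspv | wc -l)
$ cfgdev
$ after=$(lspv | wc -l)
$ if [ "$before" -ne "$after" ]; then echo "WARNING: disk count changed from $before to $after"; fi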
Then, on the NIM server, delete the vdisks
# ssh admin@SVC2 svctask rmvdisk $vdisk
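Before deleting, it is worth confirming that no host mappings remain, since rmvdisk will refuse to delete a mapped vdisk unless forced:
# ssh admin@SVC2 svcinfo lsvdiskhostmap $vdisk
No output means the vdisk is fully unmapped and rmvdisk will run without -force.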
In the HMC, DLPAR the virtual adapter out of both VIO servers and make the same change in their profiles. Then delete the LPAR from the HMC. Run cfgdev on the VIO servers to make sure nothing comes back.
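The same steps from the HMC command line, as a sketch; MS01, VIOS1 and slot 20 are placeholders, the chhwres needs repeating for the second VIO server and its slot, and the profile edits still have to be made separately:
$ chhwres -r virtualio -m MS01 -o r -p VIOS1 -s 20
$ rmsyscfg -r lpar -m MS01 -n lpar_name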
Make sure to remove the LPAR from /etc/exports and from the mksysb and nmon lists on the NIM server, and remove its association and schedule from TSM. The files to check:
/etc/exports
/export/nim/nmon/scripts/nmon_collect_client_list.txt
/opt/support/mksysb/mksysb$ID/mksysb$ID_to_nim_run.txt
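A sketch of that cleanup; lpar_name is a placeholder, and the TSM domain and schedule names (STANDARD and DAILY_INCR here) will differ per environment:
# grep -n lpar_name /etc/exports /export/nim/nmon/scripts/nmon_collect_client_list.txt
# dsmadmc -id=admin -password=xxxxx "query association STANDARD"
# dsmadmc -id=admin -password=xxxxx "delete association STANDARD DAILY_INCR lpar_name"
The grep only locates the entries; edit each file by hand to remove them.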