Instructions for creating a LUN on the SVC and creating a filesystem from it on the target server.
On the appropriate NIM server, to see mdisk group sizes and free space, run:-
bash-3.00# ssh admin@SVC2 svcinfo lsmdiskgrp     (SVC2 for wm, svc3 for laindon)
id name            status mdisk_count vdisk_count capacity  extent_size free_capacity
0  M_4821_1673_5   online 10          180         16733.4GB 64          1320.9GB
1  M_4821_1952_5   online 10          160         19522.5GB 64          15.8GB
2  M_8321_1_5_QUOR online 1           0           704.0MB   64          704.0MB
3  M_8321_909_5_0  online 12          95          10908.0GB 64          287.7GB
4  M_8321_779_5_0  online 12          89          9346.7GB  64          476.4GB
5  M_8321_779_5_1  online 12          94          9348.0GB  64          765.8GB
6  M_4821_1673_5_1 online 4           38          6693.5GB  64          1123.5GB
7  M_8321_909_5_1  online 12          83          10908.0GB 64          1121.6GB
8  M_4821_1952_5_1 online 4           37          7809.0GB  64          2233.6GB
To see which vdisks are already allocated to an existing server, run:-
bash-3.00# ssh admin@SVC3 svcinfo lsvdisk | grep cstpjq
id  name           IO_grp_id IO_grp_name status mdisk_grp_id mdisk_grp_name capacity type    FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count fast_write_state
200 cstpjq_rootvg  0         io_grp0     online 7            M_48_1952_5    60.00GB  striped                             60050768019101CC600000000000010C 0            1          not_empty
201 cstpjq_rootvgm 1         io_grp1     online 6            M_48_1673_5    60.00GB  striped                             60050768019101CC600000000000010D 0            1          not_empty
208 cstpjq_pagevg2 0         io_grp0     online 7            M_48_1952_5    32.00GB  striped                             60050768019101CC6000000000000116 0            1          empty
In this case the disks are on a DS4800; if it were Production, the disks would be on a DS8300.
To select which iogrp to use, run the following:-
bash-3.00# ssh admin@SVC3 svcinfo lsiogrp
id name            node_count vdisk_count host_count
0  io_grp0         2          362         70
1  io_grp1         2          377         70
2  io_grp2         2          175         70
3  io_grp3         0          0           70
4  recovery_io_grp 0          0           0
Each IO group corresponds to a pair of SVC nodes (there are three IO groups: 0, 1 and 2 - try to use all of them evenly). iogrp 2 has 175 vdisks, the lowest count, so use this iogrp to maintain a 'balance' across SVC nodes.
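The "lowest vdisk_count wins" rule can be sketched as a small text-processing helper. This is only an illustration: the sample output below is the lsiogrp table above pasted into a variable; on the NIM server you would pipe `ssh admin@SVC3 svcinfo lsiogrp` into the awk instead.

```shell
# Sample lsiogrp output (copied from the table above).
lsiogrp_output='id name node_count vdisk_count host_count
0 io_grp0 2 362 70
1 io_grp1 2 377 70
2 io_grp2 2 175 70
3 io_grp3 0 0 70
4 recovery_io_grp 0 0 0'

# Only consider io_grp0-2 (io_grp3 has no nodes), then sort
# numerically by vdisk_count (field 4) and keep the iogrp id.
best=$(printf '%s\n' "$lsiogrp_output" \
  | awk '$2 ~ /^io_grp[0-2]$/ {print $4, $1}' \
  | sort -n | head -1 | awk '{print $2}')
echo "$best"    # iogrp 2, the least loaded
```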
To select which disk to use, do the following:-
bash-3.00# ssh admin@SVC3 svcinfo lsmdiskgrp
id name          status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
0  M_8331_1582_5 online 24          201         37.1TB   64          7.8TB         29.28TB          29.28TB       29.28TB       78             0
1  M_8331_1844_5 online 24          206         43.2TB   64          14.2TB        29.04TB          29.04TB       29.04TB       67             0
2  M_53S_1396_5  online 12          16          16.4TB   64          14.4TB        1.97TB           1.97TB        1.97TB        12             0
3  M_53S_1629_5  online 12          11          19.1TB   64          14.6TB        4.49TB           4.49TB        4.49TB        23             0
4  M_53S_1861_10 online 8           20          14.5TB   64          9.7TB         4.82TB           4.82TB        4.82TB        33             0
5  M_53S_1629_10 online 8           15          12.7TB   64          8.2TB         4.49TB           4.49TB        4.49TB        35             0
6  M_48_1673_5   online 14          216         22.9TB   64          4.1TB         18.81TB          18.81TB       18.81TB       82             0
7  M_48_1952_5   online 14          229         26.7TB   64          8.7TB         18.01TB          18.01TB       18.01TB       67             0
Looking at the free_capacity column (scroll right) for ids 6 & 7, no. 7 has 8.7TB compared to 4.1TB on no. 6 - so we will use no. 7.
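The same pick-the-freest comparison can be sketched in shell. Purely illustrative: the two DS4800 rows from the table above are pasted in as sample text, and the TB suffix is stripped so the free capacities (field 8) sort numerically.

```shell
# The two candidate mdisk groups (rows 6 and 7 from the lsmdiskgrp output).
rows='6 M_48_1673_5 online 14 216 22.9TB 64 4.1TB
7 M_48_1952_5 online 14 229 26.7TB 64 8.7TB'

# Strip "TB" from free_capacity, sort descending, keep the group name.
best_grp=$(printf '%s\n' "$rows" \
  | awk '{gsub(/TB/, "", $8); print $8, $2}' \
  | sort -rn | head -1 | awk '{print $2}')
echo "$best_grp"    # the group with the most free capacity
```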
Create the vdisk. On b0045nim, run:-
bash-3.00# ssh admin@SVC2 svctask mkvdisk -mdiskgrp M_8321_909_5_1 -iogrp 0 -unit gb -size 38 -name cdwpdb_cdwmetp
Virtual Disk, id [770], successfully created
In order for the mapping script to work, the server's IP address must be present in /etc/hosts as ${servername}-gbe. Now is a good time to add it if it is not already there.
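A quick pre-flight check for that entry can be sketched as below. The server name and the hosts content here are made-up examples; in practice you would grep /etc/hosts itself rather than a sample string.

```shell
# Sample /etc/hosts content (illustrative addresses and names only).
hosts='10.84.1.18  unixtest1
10.84.1.19  unixtest1-gbe'
server=unixtest1

# The mapping script expects a ${servername}-gbe entry to exist.
if printf '%s\n' "$hosts" | grep -q "${server}-gbe"; then
  result=present
else
  result=missing    # add the -gbe entry before running mapvclient.ksh
fi
echo "$result"
```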
On the appropriate NIM server, cd to /opt/support/svc and run ./mapvclient.ksh. The parameters for this are:-
bash-3.00# ./mapvclient.ksh
Usage : Format should be mapvclient.ksh LPARname vdiskname
bash-3.00#
An example dialogue is below, 1st disk, rootvg:-
bash-3.00# ./mapvclient.ksh unixtest1 unitst1_rootvg
unixtest1 exists on 1 frame(s) attached to hmc1
Server-9119-595-1-SN83724B2
Is this the frame that you want to map to ? Y/N
y
Mapping vdisk unitst1_rootvg to unixtest1 (vhost7) on VIOS servers B17D(Server-9119-595-1-SN83724B2)
do you wish to continue ? Y/N
y
/opt/support/svc/utilities/vmapdisk.ksh unitst1_rootvg vhost7 B17D
Mapping Vdisk unitst1_rootvg on svc3 to B0017-VIOS1SD
Running cfgdev on VIOS server B0017-VIOS1SD
Mapping Vdisk unitst1_rootvg (hdisk267) to Vhost vhost7 on B0017-VIOS1SD
unitst1_rootvg Available
Mapping Vdisk unitst1_rootvg on svc3 to B0017-VIOS2SD
Running cfgdev on VIOS server B0017-VIOS2SD
Mapping Vdisk unitst1_rootvg (hdisk267) to Vhost vhost7 on B0017-VIOS2SD
unitst1_rootvg Available
bash-3.00#
2nd disk, rootvgm:-
bash-3.00# time ./mapvclient.ksh unixtest1 unitst1_rootvgm
unixtest1 exists on 1 frame(s) attached to hmc1
Server-9119-595-1-SN83724B2
Is this the frame that you want to map to ? Y/N
y
Mapping vdisk unitst1_rootvgm to unixtest1 (vhost7) on VIOS servers B17D(Server-9119-595-1-SN83724B2)
do you wish to continue ? Y/N
y
/opt/support/svc/utilities/vmapdisk.ksh unitst1_rootvgm vhost7 B17D
Mapping Vdisk unitst1_rootvgm on svc3 to B0017-VIOS1SD
Running cfgdev on VIOS server B0017-VIOS1SD
Mapping Vdisk unitst1_rootvgm (hdisk268) to Vhost vhost7 on B0017-VIOS1SD
unitst1_rootvgm Available
Mapping Vdisk unitst1_rootvgm on svc3 to B0017-VIOS2SD
Running cfgdev on VIOS server B0017-VIOS2SD
Mapping Vdisk unitst1_rootvgm (hdisk268) to Vhost vhost7 on B0017-VIOS2SD
unitst1_rootvgm Available

real 1m44.806s
user 0m1.027s
sys  0m0.179s
(If the vdisk is built too small, the vdisk can be expanded with:-
ssh admin@svc2 svctask expandvdisksize -size 20 -unit gb cstpro_bpo
However, the volume group on AIX needs to be varied off while the vdisk is expanded. After expansion, vary it back on; varyonvg will report the grown disk and tell you to run chvg to activate the new space:-
bash-3.00# bootinfo -s <aix-hdisknumber>
bash-3.00# varyonvg cstpro_bpovg
0516-1434 varyonvg: Following physical volumes appear to be grown in size.
        Run chvg command to activate the new space.
        hdisk10
bash-3.00# chvg -g <volumegroup> )
Log in to the client with ssh and, on the client partition, check that the hostname and partition number are correct:-
bash-3.00# uname -a
AIX cdwpdb 3 5 00C627FE4C00
bash-3.00# lparstat -i
Node Name        : cdwpdb
Partition Name   : cdwpdb
Partition Number : 42
--edited--
Check the current disks on the client partition, run cfgmgr to discover the new one, then recheck to see the new hdisk number:-
bash-3.00# lspv
hdisk0          00c627fed98be3c3          rootvg          active
hdisk1          00c627fed98be41e          rootvg          active
hdisk2          00c627fed9f4c048          cdwporaexevg    active
hdisk3          00c627feda0024ce          cdwporaflvg     active
hdisk4          00c6830ff8aa92d9          cdwporavg       active
bash-3.00# cfgmgr
bash-3.00# lspv
hdisk0          00c627fed98be3c3          rootvg          active
hdisk1          00c627fed98be41e          rootvg          active
hdisk2          00c627fed9f4c048          cdwporaexevg    active
hdisk3          00c627feda0024ce          cdwporaflvg     active
hdisk4          00c6830ff8aa92d9          cdwporavg       active
hdisk5          none                      None
cd to /opt/support/
Change disk path priority:-
bash-3.00# ./lspriority.ksh
hdisk0 vscsi0=2 vscsi1=1
hdisk1 vscsi0=2 vscsi1=1
hdisk2 vscsi0=2 vscsi1=1
hdisk3 vscsi0=2 vscsi1=1
hdisk4 vscsi0=2 vscsi1=1
hdisk5 vscsi0=1 vscsi1=1
Run ./setpriority.ksh without options to see what it will do.
bash-3.00# ./setpriority.ksh 1
This will produce errors for the disks that are already mounted, but it will change the priority for the new disk and show the new priority table:-
path Changed
path Changed
hdisk5 changed
path Changed
path Changed
hdisk0 vscsi0=2 vscsi1=1
hdisk1 vscsi0=2 vscsi1=1
hdisk2 vscsi0=2 vscsi1=1
hdisk3 vscsi0=2 vscsi1=1
hdisk4 vscsi0=2 vscsi1=1
hdisk5 vscsi0=2 vscsi1=1
Create the volume group:-
bash-3.00# mkvg -y cdwpmetvg hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
cdwpmetvg
If you are cloning a machine with a new name, there may be old entries in /etc/filesystems which will interfere with creating the new filesystems. If you see an error such as:-
crfs: 0506-909 /opt/app/dqxi file system already exists.
Check /etc/filesystems and delete the conflicting entries.
cd to /opt/support/crfs/, create /opt/support/crfs/crmetfs and edit to suit. Use lsvg to see the PP size and use this to calculate how many PPs are needed to create a filesystem of the required size:-
bash-3.00# lsvg <volumegroupname>
VOLUME GROUP:       gisextpvg                VG IDENTIFIER:  00c724b200004c000000011e20e7b2ec
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      839 (107392 megabytes)
MAX LVs:            256                      FREE PPs:       734 (93952 megabytes)
LVs:                5                        USED PPs:       105 (13440 megabytes)
OPEN LVs:           5                        QUORUM:         2
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32512                    MAX PPs per PV: 1016
MAX PVs:            32                       LTG size (Dynamic): 256 kilobyte(s)
AUTO SYNC:          no                       HOT SPARE:      no
BB POLICY:          relocatable
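The PP calculation is just the required size divided by the PP size, rounded up. A minimal sketch, assuming the 128MB PP SIZE shown in the lsvg output above and a hypothetical 2GB target filesystem:

```shell
# PP SIZE reported by lsvg (example value: 128 megabytes).
pp_size_mb=128
# Desired filesystem size: 2GB expressed in MB (example value).
wanted_mb=2048

# Round up so a size that is not an exact PP multiple still fits.
pps=$(( (wanted_mb + pp_size_mb - 1) / pp_size_mb ))
echo "$pps"    # this is the PP count to pass to mklv
```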
bash-3.00# cat /opt/support/crfs/crmetfs
#Make log file, 1pp big (always 1pp)
mklv -t jfs2log -y cdwpmetplog cdwpmetvg 1
logform /dev/cdwpmetplog <<!
y
!
#create logical volume (cdwpmetplv) inside volume group (cdwpmetvg) 16pp big
mklv -t jfs2 -e x -y cdwpmetplv cdwpmetvg 16
#create file system inside logical volume and specify mount point
crfs -v jfs2 -d cdwpmetplv -m /oracle/cdwpmetp -Ayes -prw -aagblksize=4096
#Mount filesystem created
mount /oracle/cdwpmetp
#Start all over again for other filesystems
mklv -t jfs2 -e x -y cdwpmetpredoalv cdwpmetvg 16
crfs -v jfs2 -d cdwpmetpredoalv -m /oracle/cdwpmetp/redoa -Ayes -prw -aoptions=cio -aagblksize=512
mount /oracle/cdwpmetp/redoa
mklv -t jfs2 -e x -y cdwpmetpredoblv cdwpmetvg 16
crfs -v jfs2 -d cdwpmetpredoblv -m /oracle/cdwpmetp/redob -Ayes -prw -aoptions=cio -aagblksize=512
mount /oracle/cdwpmetp/redob
mklv -t jfs2 -e x -y cdwpmetparchlv cdwpmetvg 80
crfs -v jfs2 -d cdwpmetparchlv -m /oracle/cdwpmetp/archlogs -Ayes -prw -aagblksize=4096
mount /oracle/cdwpmetp/archlogs
mklv -t jfs2 -e x -y cdwpmetpdata1lv cdwpmetvg 478
crfs -v jfs2 -d cdwpmetpdata1lv -m /oracle/cdwpmetp/db01 -Ayes -prw -aoptions=cio -aagblksize=4096
mount /oracle/cdwpmetp/db01
#eof
Run this script and then check again with:-
bash-3.00# lsvg <volumegroupname>
to see how many PPs are used and free.
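Before running the script it is also worth totalling the PP counts it allocates (1 for the log plus 16, 16, 16, 80 and 478 in the example crmetfs above) and checking that against FREE PPs from lsvg. The MB conversion below assumes a 128MB PP size purely for illustration; check the real PP SIZE with lsvg, since it varies per volume group.

```shell
# PP counts from the example crmetfs script: log LV plus five data LVs.
total_pps=$(( 1 + 16 + 16 + 16 + 80 + 478 ))
# Convert to MB using an assumed 128MB PP size (check lsvg for the real one).
total_mb=$(( total_pps * 128 ))
echo "$total_pps PPs = $total_mb MB"    # must be <= FREE PPs in the VG
```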
Check with df -g to see if the filesystems have been created at the correct size.
bash-3.00# df -g
.
. edited
.
/dev/gisextporalv        1.00      1.00    1%        8     1% /oracle/gisextp
/dev/gisextpredoalv      1.00      1.00    1%        4     1% /oracle/gisextp/redoa
/dev/gisextpredoblv      1.00      1.00    1%        4     1% /oracle/gisextp/redob
/dev/gisextparchlv      10.00     10.00    1%        4     1% /oracle/gisextp/archlogs
/dev/gisextpdata1lv     91.75     91.74    1%        4     1% /oracle/gisextp/db01
^                        ^                                    ^
|                        |                                    |
Device                   Filesystem size in GB                Mount point
Change ownership of the new filesystems if required:-
bash-3.00# cd /oracle
bash-3.00# ls -l                                  <--- check filesystem ownership
bash-3.00# chown -R oracle:dba /oracle/cdwpmetp
Create a new IP alias if required. Add it to /etc/rc.net so it survives restarts, and also run it immediately.
bash-3.00# vi /etc/rc.net (remove any entries not required to avoid duplicate ip's)
Add (example) to last line:-
/usr/sbin/ifconfig en1 inet 10.84.1.19 netmask 255.255.240.0 alias 1>/dev/null 2>&1
& run from command line:-
bash-3.00# /usr/sbin/ifconfig en1 inet 10.84.1.19 netmask 255.255.240.0 alias 1>/dev/null 2>&1
Finished! Complete paperwork.