Migrating Data To A New Larger Disk

This document is a guide to migrating data from one disk to another without the need for downtime, which is particularly useful when you want to move to a larger disk. Note that there may be some I/O performance degradation while the migratepv runs, so make sure the customer has been advised of this before agreeing to undertake the work.

On The NIM Server

Find the client's existing disks on the NIM server by querying the SVC:

bash-3.00# ssh admin@SVC2 svcinfo lsvdisk |grep hsgd
719              hsgdapp_spp       1              io_grp1           online         11             M_8321_593_5      10.00GB        striped  60050768019201D9B000000000000408 0              1              not_empty
720              hsgdapp_app       0              io_grp0           online         12             M_8321_692_5      20.00GB        striped  60050768019201D9B000000000000407 0              1              not_empty
721              hsgdapp_ora_sw    1              io_grp1           online         11             M_8321_593_5      20.00GB        striped  60050768019201D9B000000000000406 0              1              not_empty
744              hsgdapp_rootvg    0              io_grp0           online         12             M_8321_692_5      60.00GB        striped  60050768019201D9B000000000000403 0              1              not_empty
748              hsgdapp_ora       0              io_grp0           online         12             M_8321_692_5      1.00GB         striped  60050768019201D9B000000000000405 0              1              not_empty
771              hsgdapp_dbridge   1              io_grp1           online         11             M_8321_593_5      5.00GB         striped  60050768019201D9B0000000000003DE 0              1              not_empty
773              hsgdapp_rootvgm   1              io_grp1           online         11             M_8321_593_5      60.00GB        striped  60050768019201D9B000000000000404 0              1              not_empty

Create a script to make the new, larger (30GB) vdisk:

bash-3.00# cat /tmp/create.hsgdevap_orasw.ksh
ssh admin@SVC2 svctask mkvdisk -mdiskgrp M_8321_593_5 -iogrp 1 -unit gb -size 30 -name hsgdapp_orasw
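
The cat above only displays the script's contents; run it to actually create the vdisk on the SVC (a sketch of the invocation, assuming the script is executed with ksh):

bash-3.00# ksh /tmp/create.hsgdevap_orasw.ksh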

Navigate to the SVC scripts directory and run the map disk script:

bash-3.00# cd /opt/support/svc
bash-3.00# ./mapvclient.ksh hsgdevapp hsgdapp_orasw
hsgdevapp exists on 1 frame(s) attached to hmc2
Server-9119-595-2-SN83627FE
Is this the frame that you want to map to ? Y/N
y
Mapping vdisk hsgdapp_orasw to hsgdevapp (vhost1) on VIOS servers B22D(Server-9119-595-2-SN83627FE)
do you wish to continue ? Y/N
y
/opt/support/svc/utilities/vmapdisk.ksh hsgdapp_orasw vhost1 B22D
Mapping Vdisk hsgdapp_orasw on svc2 to B0022-VIOS1SD
Running cfgdev on VIOS server B0022-VIOS1SD
Mapping Vdisk hsgdapp_orasw (hdisk80) to Vhost vhost1 on B0022-VIOS1SD
hsgdapp_orasw Available
Mapping Vdisk hsgdapp_orasw on svc2 to B0022-VIOS2SD
Running cfgdev on VIOS server B0022-VIOS2SD
Mapping Vdisk hsgdapp_orasw (hdisk80) to Vhost vhost1 on B0022-VIOS2SD
hsgdapp_orasw Available
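
If you want to verify the mapping from the VIOS side (an optional check, not part of the original session), lsmap on either VIOS server shows the vdisks attached to the vhost; note the VIOS padmin prompt differs from the bash prompt used elsewhere:

$ lsmap -vadapter vhost1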

Check that the new disk has been created on the SVC:

bash-3.00# ssh admin@SVC2 svcinfo lsvdisk |grep hsgd
567              hsgdapp_orasw     1              io_grp1           online         11             M_8321_593_5      30.00GB        striped  60050768019201D9B000000000000841 0              1              empty
719              hsgdapp_spp       1              io_grp1           online         11             M_8321_593_5      10.00GB        striped  60050768019201D9B000000000000408 0              1              not_empty
720              hsgdapp_app       0              io_grp0           online         12             M_8321_692_5      20.00GB        striped  60050768019201D9B000000000000407 0              1              not_empty
721              hsgdapp_ora_sw    1              io_grp1           online         11             M_8321_593_5      20.00GB        striped  60050768019201D9B000000000000406 0              1              not_empty
744              hsgdapp_rootvg    0              io_grp0           online         12             M_8321_692_5      60.00GB        striped  60050768019201D9B000000000000403 0              1              not_empty
748              hsgdapp_ora       0              io_grp0           online         12             M_8321_692_5      1.00GB         striped  60050768019201D9B000000000000405 0              1              not_empty
771              hsgdapp_dbridge   1              io_grp1           online         11             M_8321_593_5      5.00GB         striped  60050768019201D9B0000000000003DE 0              1              not_empty
773              hsgdapp_rootvgm   1              io_grp1           online         11             M_8321_593_5      60.00GB        striped  60050768019201D9B000000000000404 0              1              not_empty

On The Client Server

Check the current disks:

bash-3.00# lspv
hdisk0          00c627feb7b746a8                    rootvg          active
hdisk1          00c627feb7b746f9                    rootvg          active
hdisk2          00c627feb7cd4f3d                    hsgdapporavg    active
hdisk3          00c627feb7cf1504                    hsgdapporaswvg  active
hdisk4          00c627feb7d13c89                    hsgdappappvg    active
hdisk5          00c627feb7d2cb6a                    hsgdappsppvg    active
hdisk6          00c6830f520ebcab                    hsgdappdbrvg    active

Run the configuration manager to discover the new disk mapped from the NIM server, then check lspv again; the new disk appears without a PVID:

bash-3.00# cfgmgr
bash-3.00# lspv
hdisk0          00c627feb7b746a8                    rootvg          active
hdisk1          00c627feb7b746f9                    rootvg          active
hdisk2          00c627feb7cd4f3d                    hsgdapporavg    active
hdisk3          00c627feb7cf1504                    hsgdapporaswvg  active
hdisk4          00c627feb7d13c89                    hsgdappappvg    active
hdisk5          00c627feb7d2cb6a                    hsgdappsppvg    active
hdisk6          00c6830f520ebcab                    hsgdappdbrvg    active
hdisk7          none                                None
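
Before going further it is worth confirming that hdisk7 is the expected 30GB LUN (an extra check, not part of the original session); bootinfo reports the size in MB, so a 30GB disk should report 30720:

bash-3.00# bootinfo -s hdisk7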

List the vscsi path priorities:

bash-3.00# /opt/support/lspriority.ksh
hdisk0 vscsi0=2 vscsi1=1
hdisk1 vscsi0=2 vscsi1=1
hdisk2 vscsi0=2 vscsi1=1
hdisk3 vscsi0=2 vscsi1=1
hdisk4 vscsi0=2 vscsi1=1
hdisk5 vscsi0=2 vscsi1=1
hdisk6 vscsi0=2 vscsi1=1
hdisk7 vscsi0=1 vscsi1=1

Set the correct priority on the new disk so it matches the existing disks, as per the output above:

bash-3.00# /opt/support/setpriority.ksh 1
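
lspriority.ksh and setpriority.ksh are site-specific helper scripts. If they are not available, the same check and change can be made with the standard AIX MPIO commands (a sketch, assuming the vscsi paths shown above):

bash-3.00# lspath -AHE -l hdisk7 -p vscsi0        # show the path's attributes, including priority
bash-3.00# chpath -l hdisk7 -p vscsi0 -a priority=2        # match the layout of the existing disks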

Now extend the volume group onto the new disk:

bash-3.00# extendvg hsgdapporaswvg hdisk7 &
[1] 643308
bash-3.00# 0516-1254 extendvg: Changing the PVID in the ODM.

[1]+  Done                    extendvg hsgdapporaswvg hdisk7

View the changes with lspv; hdisk7 now has a PVID and belongs to hsgdapporaswvg:

bash-3.00# lspv
hdisk0          00c627feb7b746a8                    rootvg          active
hdisk1          00c627feb7b746f9                    rootvg          active
hdisk2          00c627feb7cd4f3d                    hsgdapporavg    active
hdisk3          00c627feb7cf1504                    hsgdapporaswvg  active
hdisk4          00c627feb7d13c89                    hsgdappappvg    active
hdisk5          00c627feb7d2cb6a                    hsgdappsppvg    active
hdisk6          00c6830f520ebcab                    hsgdappdbrvg    active
hdisk7          00c627fef30c44b6                    hsgdapporaswvg  active

Migrate the data to the new disk. This is the step where the server may incur I/O performance degradation on the disk:

bash-3.00# nohup migratepv hdisk3 hdisk7 &
[1] 442440
Sending nohup output to nohup.out.
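
While the migratepv runs, a rough progress indicator (an extra check, not part of the original session) is to watch the used physical partitions drain off the source disk:

bash-3.00# lspv hdisk3 | grep "USED PPs"        # rerun periodically; the count falls as PPs move to hdisk7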

Tailing the nohup.out file will not show progress; instead, use jobs to see whether the migratepv is still running:

bash-3.00# jobs
[1]+  Running                 nohup migratepv hdisk3 hdisk7 &
bash-3.00#
[1]+  Done                    nohup migratepv hdisk3 hdisk7
Once the migratepv has completed, remove the old disk from the volume group:

bash-3.00# reducevg hsgdapporaswvg hdisk3

Next, try to grow the filesystem. Here it fails because the logical volume's maximum allocation (MAX LPs) is still capped at 638:

bash-3.00# chfs -a size=+10G /oracle/ora_sw
0516-787 extendlv: Maximum allocation for logical volume hsgdapporaswlv
      is 638.
bash-3.00# df -g | grep sw
/dev/hsgdapporaswlv     19.94      4.52   78%   187481    14% /oracle/ora_sw

List the volume groups and inspect the logical volume to confirm the limit:

bash-3.00# lsvg
rootvg
hsgdapporavg
hsgdapporaswvg
hsgdappappvg
hsgdappsppvg
hsgdappdbrvg
bash-3.00# lsvg -l hsgdapporaswvg
hsgdapporaswvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
hsgdaorswjfslog     jfs2log    1       1       1    open/syncd    N/A
hsgdapporaswlv      jfs2       638     638     1    open/syncd    /oracle/ora_sw

bash-3.00# lslv hsgdapporaswlv
LOGICAL VOLUME:     hsgdapporaswlv         VOLUME GROUP:   hsgdapporaswvg
LV IDENTIFIER:      00c627fe00004c000000011cb7cf152a.2 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs2                   WRITE VERIFY:   off
MAX LPs:            638                    PP SIZE:        32 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                638                    PPs:            638
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       maximum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    32
MOUNT POINT:        /oracle/ora_sw         LABEL:          /oracle/ora_sw
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO
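
The lslv output above shows a MAX LPs of 638 with a 32MB PP size, i.e. about 20GB; the grown 30GB filesystem needs roughly 960 LPs, which a quick calculation confirms (a worked check, not part of the original session):

bash-3.00# echo $(( 30 * 1024 / 32 ))
960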

Raise the logical volume's maximum allocation so it can grow past 638 LPs (1024 gives headroom for the new 30GB size):

bash-3.00# chlv -x 1024 hsgdapporaswlv

Now the filesystem grows successfully (the reported size, 62783488, is in 512-byte blocks, roughly 30GB):

bash-3.00# chfs -a size=+10G /oracle/ora_sw
Filesystem size changed to 62783488
bash-3.00# df -g | grep sw
/dev/hsgdapporaswlv     29.94     14.51   52%   187494     6% /oracle/ora_sw

Remove the old disk device from the client:

bash-3.00# rmdev -dl hdisk3
hdisk3 deleted

The lspv output below confirms that the old disk has been removed:

bash-3.00# lspv
hdisk0          00c627feb7b746a8                    rootvg          active
hdisk1          00c627feb7b746f9                    rootvg          active
hdisk2          00c627feb7cd4f3d                    hsgdapporavg    active
hdisk4          00c627feb7d13c89                    hsgdappappvg    active
hdisk5          00c627feb7d2cb6a                    hsgdappsppvg    active
hdisk6          00c6830f520ebcab                    hsgdappdbrvg    active
hdisk7          00c627fef30c44b6                    hsgdapporaswvg  active

On The NIM Server - Removing The Old Disk

Run the unmap disk script to remove the old vdisk from the client:

bash-3.00# ./unmapvclient.ksh hsgdevapp hsgdapp_ora_sw
hsgdevapp exists on 1 frame(s) attached to hmc3
Server-9119-595-2-SN83627FE
Is this the frame that you want to Unmap from ? Y/N
y
Unmapping vdisk hsgdapp_ora_sw from hsgdevapp (vhost1) on VIOS servers B22D(Server-9119-595-2-SN83627FE)
do you wish to continue ? Y/N
y
/opt/support/svc/utilities/vunmapdisk.ksh hsgdapp_ora_sw vhost1 B22D
Unmapping Vdisk hsgdapp_ora_sw from Vhost vhost1 on B0022-VIOS1SD
Deleting hdisk4 (hsgdapp_ora_sw) from VIOS Server B0022-VIOS1SD
hdisk4 deleted
Unmapping Vdisk hsgdapp_ora_sw on svc2 from B0022-VIOS1SD
Running cfgdev on VIOS server B0022-VIOS1SD
Unmapping Vdisk hsgdapp_ora_sw from Vhost vhost1 on B0022-VIOS2SD
Deleting hdisk4 (hsgdapp_ora_sw) from VIOS Server B0022-VIOS2SD
hdisk4 deleted
Unmapping Vdisk hsgdapp_ora_sw on svc2 from B0022-VIOS2SD
Running cfgdev on VIOS server B0022-VIOS2SD
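
The old vdisk is now unmapped from both VIOS servers. To complete the cleanup it can be deleted on the SVC (a sketch, not part of the original session; rmvdisk is destructive, so double-check the vdisk name first):

bash-3.00# ssh admin@SVC2 svctask rmvdisk hsgdapp_ora_sw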

Extending A Volume Group Past The PP Limit

If the new disk needs more physical partitions than the volume group allows per physical volume, extendvg fails, as in this example from another migration (bssprodvg); the volume group's t-factor must be raised first with chvg:

bash-3.00# extendvg bssprodvg hdisk11

0516-1254 extendvg: Changing the PVID in the ODM.
0516-1162 extendvg: Warning, The Physical Partition Size of 128 requires the
        creation of 1200 partitions for hdisk11.  The limitation for volume group
        bssprodvg is 1016 physical partitions per physical volume.  Use chvg command
        with -t option to attempt to change the maximum Physical Partitions per
        Physical volume for this volume group.
0516-792 extendvg: Unable to extend volume group.
bash-3.00# chvg -t 2 bssprodvg
0516-1164 chvg: Volume group bssprodvg changed.  With given characteristics bssprodvg
        can include upto 16 physical volumes with 2032 physical partitions each.
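
The t-factor trades maximum PVs for maximum PPs per PV, so the figures in the message follow directly (a quick worked check, not output from the original session):

bash-3.00# echo "$(( 1016 * 2 )) PPs per PV, $(( 32 / 2 )) PVs max"
2032 PPs per PV, 16 PVs max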

Check that the volume group now allows 2032 PPs per physical volume:

bash-3.00# lsvg bssprodvg
VOLUME GROUP:       bssprodvg                VG IDENTIFIER:  00c627fe00004c00000001189e020ae6
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      2158 (276224 megabytes)
MAX LVs:            256                      FREE PPs:       1199 (153472 megabytes)
LVs:                6                        USED PPs:       959 (122752 megabytes)
OPEN LVs:           6                        QUORUM:         2 (Enabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     2032                     MAX PVs:        16
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable

Now the extendvg succeeds and the data can be migrated as before:

bash-3.00# extendvg bssprodvg hdisk11
bash-3.00# nohup migratepv hdisk10 hdisk11 &

As before, you can use jobs to check whether the migratepv is still running (tailing nohup.out will not show progress). Once it has completed, remove the old disk from the volume group:

bash-3.00# reducevg bssprodvg hdisk10

You will then be left with the new, larger disk in the volume group. To finish, remove the old disk completely from the client, the VIO servers, and the SVC.
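
A sketch of that final cleanup for this example (the client and vdisk names are placeholders; rmvdisk permanently destroys the vdisk, so double-check it is the old, unmapped one):

bash-3.00# rmdev -dl hdisk10                                # on the client: delete the old disk device
bash-3.00# ./unmapvclient.ksh <client> <old_vdisk>          # on the NIM server: unmap from the VIOS pair, as shown earlier
bash-3.00# ssh admin@SVC2 svctask rmvdisk <old_vdisk>       # on the SVC: delete the vdisk itself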
