====== Linux SAN disk with LVM based filesystem ======

This assumes that a vdisk has been created and presented to the host, and that SAN zoning is in place to allow the host to see the storage.

In this run-through, the ''multipath.conf'' file had the multipath definition commented out before the ''multipath'' command was run, which created the ''mpath0'' device. After adding a multipath alias line, the name changes to the defined alias, ''vdisk001''.
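
The alias definition in ''/etc/multipath.conf'' looks something like this sketch, using the WWID reported by ''multipath'' below (check the exact stanza syntax against your multipath-tools version):

  multipaths {
      multipath {
          wwid   36001438005dedc000000300000370000
          alias  vdisk001
      }
  }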

  [root@test01 ~]# multipath
  remove: mpath0 (dup of vdisk001)
  create: vdisk001 (36001438005dedc000000300000370000)
  [size=1 GB][features="0"][hwhandler="0"]
  \_ round-robin 0 [prio=1]
   \_ 0:0:0:1 sda 8:0   [active][ready]
  \_ round-robin 0 [prio=1]
   \_ 0:0:1:1 sdb 8:16  [active][ready]
  
  ...edited...

''fdisk'' shows the eight underlying sdX devices (four paths over each SAN fabric) and the new ''/dev/dm-1'':-

  [root@test01 ~]# fdisk -l
  
  Disk /dev/sda: 1073 MB, 1073741824 bytes
  34 heads, 61 sectors/track, 1011 cylinders
  Units = cylinders of 2074 * 512 = 1061888 bytes
  
  Disk /dev/sda doesn't contain a valid partition table
  
  ...edited...
  
  Disk /dev/dm-1: 1073 MB, 1073741824 bytes
  34 heads, 61 sectors/track, 1011 cylinders
  Units = cylinders of 2074 * 512 = 1061888 bytes
  
  Disk /dev/dm-1 doesn't contain a valid partition table
  
  [root@test01 ~]# ls -l /dev/mapper/vdisk001
  brw-rw----  1 root disk 253, 1 Dec  3 14:26 /dev/mapper/vdisk001


===== LVM =====


==== Physical Volume creation ====

  [root@test01 ~]# pvcreate /dev/mapper/vdisk001
    Physical volume "/dev/mapper/vdisk001" successfully created


==== Volume Group creation ====

  [root@test01 ~]# vgcreate testvg /dev/mapper/vdisk001
    Volume group "testvg" successfully created
  
  [root@test01 ~]# vgdisplay
    --- Volume group ---
    VG Name               testvg
    System ID
    Format                lvm2
    Metadata Areas        1
    Metadata Sequence No  1
    VG Access             read/write
    VG Status             resizable
    MAX LV                0
    Cur LV                0
    Open LV               0
    Max PV                0
    Cur PV                1
    Act PV                1
    VG Size               1020.00 MB
    PE Size               4.00 MB
    Total PE              255
    Alloc PE / Size       0 / 0
    Free  PE / Size       255 / 1020.00 MB
    VG UUID               VyUGva-mSgX-2pCC-ZuBI-YdKs-MFQ6-EiNt6c


==== Logical Volume creation ====

  [root@test01 ~]# lvcreate --size 500M --name test01lv testvg
    Logical volume "test01lv" created
  
  [root@test01 ~]# lvcreate --size 500M --name test02lv testvg
    Logical volume "test02lv" created
  
  [root@test01 ~]# lvdisplay
    --- Logical volume ---
    LV Name                /dev/testvg/test01lv
    VG Name                testvg
    LV UUID                PffBiq-8zhP-VABE-ga6O-Asnk-Cpig-vhHO0S
    LV Write Access        read/write
    LV Status              available
    # open                 0
    LV Size                500.00 MB
    Current LE             125
    Segments               1
    Allocation             inherit
    Read ahead sectors     auto
    - currently set to     256
    Block device           253:0
  
    --- Logical volume ---
    LV Name                /dev/testvg/test02lv
    VG Name                testvg
    LV UUID                h2LG5y-w4XB-VHez-BC3Q-pMcf-kQVc-UHV4vH
    LV Write Access        read/write
    LV Status              available
    # open                 0
    LV Size                500.00 MB
    Current LE             125
    Segments               1
    Allocation             inherit
    Read ahead sectors     auto
    - currently set to     256
    Block device           253:2


==== Make Filesystems ====

  [root@test01 ~]# mkfs /dev/testvg/test01lv
  mke2fs 1.35 (28-Feb-2004)
  Filesystem label=
  OS type: Linux
  Block size=1024 (log=0)
  Fragment size=1024 (log=0)
  128016 inodes, 512000 blocks
  25600 blocks (5.00%) reserved for the super user
  First data block=1
  Maximum filesystem blocks=67633152
  63 block groups
  8192 blocks per group, 8192 fragments per group
  2032 inodes per group
  Superblock backups stored on blocks:
          8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
  
  Writing inode tables: done
  Writing superblocks and filesystem accounting information: done
  
  This filesystem will be automatically checked every 24 mounts or
  180 days, whichever comes first.  Use tune2fs -c or -i to override.
  
  [root@test01 ~]# mkfs /dev/testvg/test02lv
  mke2fs 1.35 (28-Feb-2004)
  Filesystem label=
  OS type: Linux
  Block size=1024 (log=0)
  Fragment size=1024 (log=0)
  128016 inodes, 512000 blocks
  25600 blocks (5.00%) reserved for the super user
  First data block=1
  Maximum filesystem blocks=67633152
  63 block groups
  8192 blocks per group, 8192 fragments per group
  2032 inodes per group
  Superblock backups stored on blocks:
          8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
  
  Writing inode tables: done
  Writing superblocks and filesystem accounting information: done
  
  This filesystem will be automatically checked every 32 mounts or
  180 days, whichever comes first.  Use tune2fs -c or -i to override.
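
Plain ''mkfs'' defaults to ext2 here (hence the ''type ext2'' visible in the ''mount'' output later). To build a journalled ext3 filesystem instead, something like:

  [root@test01 ~]# mkfs -t ext3 /dev/testvg/test01lv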


==== Mount filesystems ====

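The mount points must already exist; if they don't, create them first:

  [root@test01 ~]# mkdir -p /mnt/test01lv /mnt/test02lv
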
  [root@test01 ~]# mount /dev/testvg/test01lv /mnt/test01lv
  [root@test01 ~]# mount /dev/testvg/test02lv /mnt/test02lv
  
  [root@test01 ~]# df -h
  Filesystem                    Size  Used Avail Use% Mounted on
  /dev/cciss/c0d0p5              27G   18G  8.0G  69% /
  /dev/cciss/c0d0p1             145M   21M  116M  16% /boot
  none                          7.9G     0  7.9G   0% /dev/shm
  /dev/cciss/c0d0p3             2.9G  1.7G  1.2G  59% /var
  /dev/mapper/testvg-test01lv   485M  2.3M  457M   1% /mnt/test01lv
  /dev/mapper/testvg-test02lv   485M  2.3M  457M   1% /mnt/test02lv
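
To make these mounts persist across reboots, add them to ''/etc/fstab''. The entries below are a sketch (mount options and fsck ordering are assumptions; adjust to local policy):

  /dev/testvg/test01lv  /mnt/test01lv  ext2  defaults  1 2
  /dev/testvg/test02lv  /mnt/test02lv  ext2  defaults  1 2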


===== Moving LVM filesystems =====

This assumes that you have an LVM PV with one or more VGs and LVs built on top of it. You want to move a volume group from one PV to another, for instance to migrate from a smaller disk to a larger one and then grow the VG and LV.

Two PVs are present, one of which contains testvg.

  [root@test01 ~]# pvdisplay
    --- Physical volume ---
    PV Name               /dev/dm-0
    VG Name               testvg
    PV Size               2.00 GB / not usable 4.00 MB
    Allocatable           yes
    PE Size (KByte)       4096
    Total PE              511
    Free PE               383
    Allocated PE          128
    PV UUID               2sora7-Utg0-8VIl-kCdM-jdcj-WND9-DqxYn3
  
    "/dev/dm-1" is a new physical volume of "2.00 GB"
    --- NEW Physical volume ---
    PV Name               /dev/dm-1
    VG Name
    PV Size               2.00 GB
    Allocatable           NO
    PE Size (KByte)       0
    Total PE              0
    Free PE               0
    Allocated PE          0
    PV UUID               Ts6Uwd-UYam-RSXq-bsdH-3N2F-7s3N-FfOrDF

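''pvmove'' requires the source and destination PVs to be in the same volume group, so the new disk has to be added with ''vgextend'' first. That step isn't in the captured output, but it would be along these lines:

  [root@test01 ~]# vgextend testvg /dev/mapper/vdisk002
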
  [root@test01 ~]# pvmove /dev/mapper/vdisk001 /dev/mapper/vdisk002
    /dev/mapper/vdisk001: Moved: 79.7%
    /dev/mapper/vdisk001: Moved: 100.0%

Remove the original PV from the VG; note that this is done with the filesystem still mounted and active. ''pvdisplay'' then shows testvg on the second PV.

  [root@test01 ~]# vgreduce testvg /dev/dm-0
    Removed "/dev/dm-0" from volume group "testvg"
  [root@test01 ~]# pvdisplay
    --- Physical volume ---
    PV Name               /dev/dm-1
    VG Name               testvg
    PV Size               2.00 GB / not usable 4.00 MB
  ... edited ...
    "/dev/dm-0" is a new physical volume of "2.00 GB"
    --- NEW Physical volume ---
    PV Name               /dev/dm-0
    VG Name
    PV Size               2.00 GB
  ... edited ...
  
  [root@test01 ~]# pvremove /dev/dm-0
    Labels on physical volume "/dev/dm-0" successfully wiped

All gone from PV /dev/dm-0.
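
The point of the exercise was to land on a larger disk and then grow things. With testvg now on the bigger PV, growing an LV and its filesystem would look roughly like this sketch (the +500M figure is illustrative; ext2 of this vintage is resized offline, and ''resize2fs'' wants a forced fsck first):

  [root@test01 ~]# lvextend --size +500M /dev/testvg/test01lv
  [root@test01 ~]# umount /mnt/test01lv
  [root@test01 ~]# e2fsck -f /dev/testvg/test01lv
  [root@test01 ~]# resize2fs /dev/testvg/test01lv
  [root@test01 ~]# mount /dev/testvg/test01lv /mnt/test01lv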

===== Removing LVM =====

This removes the LVM volumes from the server; it does not cover removing the vdisk from your storage array. If you want to move the LUN from one server and remount it on another, DO NOT FOLLOW THESE INSTRUCTIONS! See the export/import sketch below instead.
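
For that move case, the usual route is LVM's ''vgexport''/''vgimport'' pair; a rough sketch (''newhost'' is illustrative):

  [root@test01 ~]# umount /mnt/test01lv
  [root@test01 ~]# umount /mnt/test02lv
  [root@test01 ~]# vgchange -an testvg
  [root@test01 ~]# vgexport testvg
  ... unpresent the LUN, present it to the new host, then on that host ...
  [root@newhost ~]# vgimport testvg
  [root@newhost ~]# vgchange -ay testvg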

  [root@test01 ~]# mount
  ...edited...
  /dev/mapper/testvg-test01lv on /mnt/test01lv type ext2 (rw)
  /dev/mapper/testvg-test02lv on /mnt/test02lv type ext2 (rw)
  
  [root@test01 ~]# umount /mnt/test01lv
  [root@test01 ~]# umount /mnt/test02lv
  [root@test01 ~]# lvdisplay
    --- Logical volume ---
    LV Name                /dev/testvg/test01lv
    VG Name                testvg
    LV UUID                PffBiq-8zhP-VABE-ga6O-Asnk-Cpig-vhHO0S
    LV Write Access        read/write
    LV Status              available
    # open                 1
    LV Size                500.00 MB
    Current LE             125
    Segments               1
    Allocation             inherit
    Read ahead sectors     auto
    - currently set to     256
    Block device           253:0
  
    --- Logical volume ---
    LV Name                /dev/testvg/test02lv
    VG Name                testvg
    LV UUID                h2LG5y-w4XB-VHez-BC3Q-pMcf-kQVc-UHV4vH
    LV Write Access        read/write
    LV Status              available
    # open                 0
    LV Size                500.00 MB
    Current LE             125
    Segments               1
    Allocation             inherit
    Read ahead sectors     auto
    - currently set to     256
    Block device           253:2

The next step will DESTROY your data on test01lv:-

  [root@test01 ~]# lvremove /dev/testvg/test01lv
  Do you really want to remove active logical volume "test01lv"? [y/n]: y
    Logical volume "test01lv" successfully removed
  
  [root@test01 ~]# lvremove /dev/testvg/test02lv
  Do you really want to remove active logical volume "test02lv"? [y/n]: y
    Logical volume "test02lv" successfully removed
  
  [root@test01 ~]# vgdisplay
    --- Volume group ---
    VG Name               testvg
    System ID
    Format                lvm2
    Metadata Areas        1
    Metadata Sequence No  5
    VG Access             read/write
    VG Status             resizable
    MAX LV                0
    Cur LV                0
    Open LV               0
    Max PV                0
    Cur PV                1
    Act PV                1
    VG Size               1020.00 MB
    PE Size               4.00 MB
    Total PE              255
    Alloc PE / Size       0 / 0
    Free  PE / Size       255 / 1020.00 MB
    VG UUID               VyUGva-mSgX-2pCC-ZuBI-YdKs-MFQ6-EiNt6c
  
  [root@test01 ~]# vgremove testvg
    Volume group "testvg" successfully removed
  
  [root@test01 ~]# pvdisplay
    "/dev/dm-1" is a new physical volume of "1.00 GB"
    --- NEW Physical volume ---
    PV Name               /dev/dm-1
    VG Name
    PV Size               1.00 GB
    Allocatable           NO
    PE Size (KByte)       0
    Total PE              0
    Free PE               0
    Allocated PE          0
    PV UUID               hJ2jFO-Bn7t-oa73-pqDI-Hetc-X6MY-9G9C5v
  
  [root@test01 ~]# pvremove /dev/dm-1
    Labels on physical volume "/dev/dm-1" successfully wiped

Edit ''/etc/multipath.conf'' and remove any lines relating to the previously configured multipath disks, then restart multipathd with ''service multipathd restart''. This should cause the SAN multipath devices to disappear. On RHEL 4.x, a restart seems the safest option.
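
The stale map can usually also be flushed by name before the restart; a sketch:

  [root@test01 ~]# multipath -f vdisk001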
  
