"Nova volume-attach" - Views: 1,463 · Hits: 1,465 - Type: Public

First, attach the volume to the instance :-

[[email protected] ~(keystone_andrew)]$ cinder list
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  |  Display Name  | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
| 3c6f4fcd-e6c0-4806-acdf-272de397d047 | available |  test-volume   |  1   |  glusterfs  |  false   |                                      |
| 3cb7e4c5-9fa1-4d9f-a35d-20f37c6cc7fd |   in-use  | UbuntuTLVG0514 |  5   |  glusterfs  |   true   | 36bc640d-8a21-44ff-8c05-f78dcb2aeb54 |
| 48a9e8e7-8299-42f2-a791-60fafd38f4b1 |   in-use  | UbuntuTLVG0512 |  5   |  glusterfs  |   true   | b75147f9-db8f-4135-9945-5d9106c0cf11 |
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
Here 36bc640d-8a21-44ff-8c05-f78dcb2aeb54 is the ID of the running instance, and 3c6f4fcd-e6c0-4806-acdf-272de397d047 is the ID of the available volume to be attached.
[[email protected] ~(keystone_andrew)]$ nova volume-attach 36bc640d-8a21-44ff-8c05-f78dcb2aeb54 3c6f4fcd-e6c0-4806-acdf-272de397d047 auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | 36bc640d-8a21-44ff-8c05-f78dcb2aeb54 |
| id       | 3c6f4fcd-e6c0-4806-acdf-272de397d047 |
| volumeId | 3c6f4fcd-e6c0-4806-acdf-272de397d047 |
+----------+--------------------------------------+
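Note that the attach operation is asynchronous: `nova volume-attach` returns immediately, and the volume only switches from `available` to `in-use` once Cinder finishes the attach. A minimal polling sketch is shown below; the `wait_for_status` and `volume_status` helpers are illustrative assumptions, not part of the clients, and the `awk` field position assumes the usual `cinder show` table layout.

```shell
# wait_for_status: poll a status-reporting command until it prints the
# desired value, giving up after 30 attempts (~1 minute). Sketch only.
wait_for_status() {
  local want="$1"; shift
  local tries=0
  until [ "$("$@")" = "$want" ]; do
    tries=$((tries + 1))
    [ "$tries" -ge 30 ] && return 1
    sleep 2
  done
}

# Hypothetical wrapper: extract the status field from `cinder show`
# (assumes the standard two-column table output).
volume_status() { cinder show "$1" | awk '/ status /{print $4}'; }

# Usage against the cloud above:
# wait_for_status in-use volume_status 3c6f4fcd-e6c0-4806-acdf-272de397d047
```

Once the volume reports `in-use`, the device (here `/dev/vdb`) is visible inside the guest.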
Second, SSH to the instance and run the following inside it :-

[email protected]:~# pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created
[email protected]:~# vgcreate test_vg /dev/vdb
  Volume group "test_vg" successfully created
[email protected]:~# vgdisplay
  --- Volume group ---
  VG Name               test_vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1020.00 MiB
  PE Size               4.00 MiB
  Total PE              255
  Alloc PE / Size       0 / 0   
  Free  PE / Size       255 / 1020.00 MiB
  VG UUID               H5qrhM-MHOC-bRpQ-xO0m-M6fz-j7ZM-IwVeUh
   
[email protected]:~# lvcreate -L 128M -n lvolume01 test_vg
  Logical volume "lvolume01" created
[email protected]:~# lvscan
  ACTIVE            '/dev/test_vg/lvolume01' [128.00 MiB] inherit
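If lvolume01 fills up later, it can be grown online. The `grow_lv` wrapper below is an illustrative sketch only; it assumes an ext4 filesystem on the logical volume and free extents remaining in test_vg.

```shell
# grow_lv: extend a logical volume by the given amount, then resize the
# ext4 filesystem on it to match. ext4 can be grown while mounted.
grow_lv() {
  local lv="$1" delta="$2"
  lvextend -L "+$delta" "$lv" && resize2fs "$lv"
}

# Example (as root, inside the instance):
# grow_lv /dev/test_vg/lvolume01 64M
```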

At this point the volume group and a logical volume within it have been created.
Now you can create a filesystem on the logical volume and mount it :-
[email protected]:~# mkfs.ext4 /dev/test_vg/lvolume01
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
32768 inodes, 131072 blocks
6553 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
16 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done 

[email protected]:~# mkdir /mnt/volume
[email protected]:~# mount /dev/test_vg/lvolume01 /mnt/volume
[email protected]:~# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/vda1                      4.9G  791M  3.9G  17% /
none                           4.0K     0  4.0K   0% /sys/fs/cgroup
udev                           997M   12K  997M   1% /dev
tmpfs                          201M  344K  200M   1% /run
none                           5.0M     0  5.0M   0% /run/lock
none                          1002M     0 1002M   0% /run/shm
none                           100M     0  100M   0% /run/user
/dev/mapper/test_vg-lvolume01  120M  1.6M  110M   2% /mnt/volume
[email protected]:~# echo "hello" > /mnt/volume/test
[email protected]:~# cat  /mnt/volume/test
hello
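The mount above does not survive a reboot. To make it persistent, you could add an entry like the following to /etc/fstab inside the instance (the `defaults` options and fsck pass number 2 are typical choices, adjust to taste):

```
/dev/test_vg/lvolume01  /mnt/volume  ext4  defaults  0  2
```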