Monday, December 23, 2013

Ceph and OpenStack in a Nutshell



Ceph Filesystem (CephFS) :: Step by Step Configuration


CephFS 

Ceph Filesystem (CephFS) is a POSIX-compliant file system that uses the Ceph storage cluster to store its data. It is the only Ceph component that is not yet considered production ready; I would call it ready for pre-production.


Internals (image courtesy of http://ceph.com/docs/master/cephfs/)

Requirement of CephFS


  • You need a running Ceph cluster with at least one MDS node; an MDS is required for CephFS to work (a quick sanity check is shown after this list).
  • If you don't have an MDS, configure one:
    • # ceph-deploy mds create <MDS-NODE-ADDRESS>
Note: If you are running short of hardware or want to save hardware, you can run the MDS service on an existing monitor node; the MDS service does not need many resources.
  • A Ceph client to mount CephFS
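
A quick sanity check before you start (a sketch; the exact output will differ for your cluster):

# ceph -s              ## overall cluster health
# ceph mds stat        ## should report at least one MDS as up:active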

Configuring CephFS
  • Install Ceph on the client node:
[root@storage0101-ib ceph]# ceph-deploy install na_fedora19
[ceph_deploy.cli][INFO  ] Invoked (1.3.2): /usr/bin/ceph-deploy install na_fedora19
[ceph_deploy.install][DEBUG ] Installing stable version emperor on cluster ceph hosts na_csc_fedora19
[ceph_deploy.install][DEBUG ] Detecting platform for host na_fedora19 ...
[na_csc_fedora19][DEBUG ] connected to host: na_csc_fedora19
[na_csc_fedora19][DEBUG ] detect platform information from remote host
[na_csc_fedora19][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Fedora 19 Schrödinger’s Cat
[na_csc_fedora19][INFO  ] installing ceph on na_fedora19
[na_csc_fedora19][INFO  ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[na_csc_fedora19][INFO  ] Running command: rpm -Uvh --replacepkgs --force --quiet http://ceph.com/rpm-emperor/fc19/noarch/ceph-release-1-0.fc19.noarch.rpm
[na_csc_fedora19][DEBUG ] ########################################
[na_csc_fedora19][DEBUG ] Updating / installing...
[na_csc_fedora19][DEBUG ] ########################################
[na_csc_fedora19][INFO  ] Running command: yum -y -q install ceph

[na_csc_fedora19][ERROR ] Warning: RPMDB altered outside of yum.
[na_csc_fedora19][DEBUG ] No Presto metadata available for Ceph
[na_csc_fedora19][INFO  ] Running command: ceph --version
[na_csc_fedora19][DEBUG ] ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
[root@storage0101-ib ceph]#
  • Create a new pool for CephFS
# rados mkpool cephfs
  • Create a new keyring (client.cephfs) for CephFS:
# ceph auth get-or-create client.cephfs mon 'allow r' osd 'allow rwx pool=cephfs' -o /etc/ceph/client.cephfs.keyring
  • Extract the secret key from the keyring (the resulting file contains only the base64 key; see the example after this list):
# ceph-authtool -p -n client.cephfs /etc/ceph/client.cephfs.keyring > /etc/ceph/client.cephfs
  • Copy the secret file to the client node under /etc/ceph. This allows the filesystem to be mounted when cephx authentication is enabled:
# scp client.cephfs na_fedora19:/etc/ceph
client.cephfs                                                                100%   41     0.0KB/s   00:00
  • List all the keys on the Ceph cluster:
# ceph auth list                                               
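
For reference, the secret file used by the mount commands below holds nothing but the base64 key extracted from the keyring (the key shown here is a made-up example; yours will differ):

# cat /etc/ceph/client.cephfs
AQD9o0Fa6ly+KBAAoKSGXVrA5P4/Yu3pM9KX6A==      ## hypothetical key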


Option-1 : Mount CephFS with Kernel Driver


  • On the client machine, add a mount entry to /etc/fstab. Provide the IP address of your Ceph monitor node and the path to the secret file that we created above:
192.168.200.101:6789:/ /cephfs ceph name=cephfs,secretfile=/etc/ceph/client.cephfs,noatime 0 2    
  • Mount the CephFS mount point. You might see a "mount: error writing /etc/mtab: Invalid argument" error, but you can ignore it and check df -h (an equivalent one-off mount command is shown after the output below):
[root@na_fedora19 ceph]# mount /cephfs
mount: error writing /etc/mtab: Invalid argument

[root@na_fedora19 ceph]#
[root@na_fedora19 ceph]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/vda1               7.8G  2.1G  5.4G  28% /
devtmpfs                3.9G     0  3.9G   0% /dev
tmpfs                   3.9G     0  3.9G   0% /dev/shm
tmpfs                   3.9G  288K  3.9G   1% /run
tmpfs                   3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                   3.9G  2.6M  3.9G   1% /tmp
192.168.200.101:6789:/  419T  8.5T  411T   3% /cephfs
[root@na_fedora19 ceph]#
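
If you prefer not to edit /etc/fstab, the equivalent one-off mount is (a sketch, assuming the same monitor address and secret file as above):

# mount -t ceph 192.168.200.101:6789:/ /cephfs -o name=cephfs,secretfile=/etc/ceph/client.cephfs,noatime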

Option-2 : Mounting CephFS as FUSE
  • Copy the Ceph configuration file (ceph.conf) from the monitor node to the client node and make sure it has 644 permissions:
# scp ceph.conf na_fedora19:/etc/ceph
# chmod 644 ceph.conf
  • Copy the secret file from the monitor node to the client node under /etc/ceph. This allows the filesystem to be mounted when cephx authentication is enabled (we have already done this earlier):
# scp client.cephfs na_fedora19:/etc/ceph
client.cephfs                                                                100%   41     0.0KB/s   00:00
  • Make sure the "ceph-fuse" package is installed on the client machine:
# rpm -qa | grep -i ceph-fuse
ceph-fuse-0.72.2-0.fc19.x86_64 
  • To mount the Ceph Filesystem as FUSE, use the ceph-fuse command:
[root@na_fedora19 ceph]# ceph-fuse -m 192.168.100.101:6789  /cephfs
ceph-fuse[3256]: starting ceph client
ceph-fuse[3256]: starting fuse
[root@na_csc_fedora19 ceph]#

[root@na_fedora19 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       7.8G  2.1G  5.4G  28% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  292K  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           3.9G  2.6M  3.9G   1% /tmp
ceph-fuse       419T  8.5T  411T   3% /cephfs
[root@na_fedora19 ceph]#
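
To unmount a ceph-fuse mount later, the standard FUSE unmount command works (assuming the fuse utilities are installed):

# fusermount -u /cephfs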



Thursday, December 5, 2013

Ceph + OpenStack :: Part-5



OpenStack Instance boot from Ceph Volume

  • For a list of images to choose from when creating a bootable volume:
[root@rdo /(keystone_admin)]# nova image-list
+--------------------------------------+-----------------------------+--------+--------+
| ID                                   | Name                        | Status | Server |
+--------------------------------------+-----------------------------+--------+--------+
| f61edc8d-c9a1-4ff4-b4fc-c8128bd1a10b | Ubuntu 12.04 cloudimg amd64 | ACTIVE |        |
| fcc07414-bbb3-4473-a8df-523664c8c9df | ceph-glance-image           | ACTIVE |        |
| be62a5bf-879f-4d1f-846c-fdef960224ff | precise-cloudimg.raw        | ACTIVE |        |
| 3c2db0ad-8d1e-400d-ba13-a506448f2a8e | precise-server-cloudimg     | ACTIVE |        |
+--------------------------------------+-----------------------------+--------+--------+
[root@rdo /(keystone_admin)]#
  • To create a bootable volume from an image, include the image ID in the command. Until the volume finishes building, its bootable state remains false.
[root@rdo qemu(keystone_admin)]# cinder create --image-id be62a5bf-879f-4d1f-846c-fdef960224ff --display-name my-boot-vol 10
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-12-05T13:34:38.296723      |
| display_description |                 None                 |
|     display_name    |             my-boot-vol              |
|          id         | 5fca6e1b-b494-4773-9c78-63f72703bfdf |
|       image_id      | be62a5bf-879f-4d1f-846c-fdef960224ff |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@rdo qemu(keystone_admin)]#
[root@rdo qemu(keystone_admin)]# cinder list
+--------------------------------------+-------------+--------------+------+--------------+----------+--------------------------------------+
|                  ID                  |    Status   | Display Name | Size | Volume Type  | Bootable |             Attached to              |
+--------------------------------------+-------------+--------------+------+--------------+----------+--------------------------------------+
| 0e2bfced-be6a-44ec-a3ca-22c771c66cdc |    in-use   |  nova-vol_1  |  2   |     None     |  false   | 9d3c327f-1893-40ff-8a82-16fad9ce6d91 |
| 10cc0855-652a-4a9b-baa1-80bc86dc12ac |  available  |  ceph-vol1   |  5   | ceph-storage |  false   |                                      |
| 5fca6e1b-b494-4773-9c78-63f72703bfdf | downloading | my-boot-vol  |  10  |     None     |  false   |                                      |
+--------------------------------------+-------------+--------------+------+--------------+----------+--------------------------------------+

  • Wait a few minutes until the bootable state turns to true, then copy the value of the ID field for your volume (you can also poll just this volume, as shown after the listing below).
[root@rdo qemu(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+--------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type  | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+--------------+----------+--------------------------------------+
| 0e2bfced-be6a-44ec-a3ca-22c771c66cdc |   in-use  |  nova-vol_1  |  2   |     None     |  false   | 9d3c327f-1893-40ff-8a82-16fad9ce6d91 |
| 10cc0855-652a-4a9b-baa1-80bc86dc12ac | available |  ceph-vol1   |  5   | ceph-storage |  false   |                                      |
| 5fca6e1b-b494-4773-9c78-63f72703bfdf | available | my-boot-vol  |  10  |     None     |   true   |                                      |
+--------------------------------------+-----------+--------------+------+--------------+----------+--------------------------------------+
[root@rdo qemu(keystone_admin)]#
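
Instead of repeatedly listing all volumes, you can poll the single volume (a quick check using the ID from above; not part of the original output):

# cinder show 5fca6e1b-b494-4773-9c78-63f72703bfdf | grep -i bootable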
  • Create a nova instance that will boot from the Ceph volume:
[root@rdo qemu(keystone_admin)]# nova boot --flavor 2 --image be62a5bf-879f-4d1f-846c-fdef960224ff --block_device_mapping vda=5fca6e1b-b494-4773-9c78-63f72703bfdf::0 --security_groups=default --nic net-id=4fe5909e-02db-4517-89f2-1278248fa26c  myInstanceFromVolume
+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | precise-cloudimg.raw                 |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000001e                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | f24a0b29-9f1e-444b-b895-c3c694f2f1bc |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | 99f8019ba2694d78a680a5de46aa1afd     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2013-12-05T13:47:34Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | myInstanceFromVolume                 |
| adminPass                            | qt34izQiLkG3                         |
| tenant_id                            | 0dafe42cfde242ddbb67b681f59bdb00     |
| created                              | 2013-12-05T13:47:34Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+--------------------------------------+--------------------------------------+
[root@rdo qemu(keystone_admin)]#
[root@rdo qemu(keystone_admin)]#
[root@rdo qemu(keystone_admin)]#
[root@rdo qemu(keystone_admin)]# nova list
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
| ID                                   | Name                 | Status  | Task State | Power State | Networks            |
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
| 0043a8be-60d1-43ed-ba43-1ccd0bba7559 | instance2            | SHUTOFF | None       | Shutdown    | public=172.24.4.228 |
| f24a0b29-9f1e-444b-b895-c3c694f2f1bc | myInstanceFromVolume | BUILD   | spawning   | NOSTATE     | private=10.0.0.3    |
| 9d3c327f-1893-40ff-8a82-16fad9ce6d91 | small-ubuntu         | ACTIVE  | None       | Running     | public=172.24.4.230 |
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
[root@rdo qemu(keystone_admin)]#
  • Within a few minutes the instance becomes ACTIVE and starts running:
[root@rdo qemu(keystone_admin)]# nova list
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
| ID                                   | Name                 | Status  | Task State | Power State | Networks            |
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
| 0043a8be-60d1-43ed-ba43-1ccd0bba7559 | instance2            | SHUTOFF | None       | Shutdown    | public=172.24.4.228 |
| f24a0b29-9f1e-444b-b895-c3c694f2f1bc | myInstanceFromVolume | ACTIVE  | None       | Running     | private=10.0.0.3    |
| 9d3c327f-1893-40ff-8a82-16fad9ce6d91 | small-ubuntu         | ACTIVE  | None       | Running     | public=172.24.4.230 |
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
[root@rdo qemu(keystone_admin)]#
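
To double-check that the instance really booted from the volume, you can inspect its attached-volumes field (a quick check, using the instance ID from above):

# nova show f24a0b29-9f1e-444b-b895-c3c694f2f1bc | grep volumes_attached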

OpenStack Instance boot from Ceph Volume :: Troubleshooting


  • While booting from a volume, I encountered errors after creating the nova instance; the instance was not able to boot from the volume.
[root@rdo nova(keystone_admin)]# nova boot --flavor 2 --image be62a5bf-879f-4d1f-846c-fdef960224ff --block_device_mapping vda=dd315dda-b22a-4cf8-8b77-7c2b2f163155:::0 --security_groups=default --nic net-id=4fe5909e-02db-4517-89f2-1278248fa26c  myInstanceFromVolume
+--------------------------------------+----------------------------------------------------+
| Property                             | Value                                              |
+--------------------------------------+----------------------------------------------------+
| OS-EXT-STS:task_state                | scheduling                                         |
| image                                | precise-cloudimg.raw                               |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000001d                                  |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | f324e9b8-ec3a-4174-8b97-bf78dba62932               |
| security_groups                      | [{u'name': u'default'}]                            |
| user_id                              | 99f8019ba2694d78a680a5de46aa1afd                   |
| OS-DCF:diskConfig                    | MANUAL                                             |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
| status                               | BUILD                                              |
| updated                              | 2013-12-05T12:42:22Z                               |
| hostId                               |                                                    |
| OS-EXT-SRV-ATTR:host                 | None                                               |
| OS-SRV-USG:terminated_at             | None                                               |
| key_name                             | None                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                               |
| name                                 | myInstanceFromVolume                               |
| adminPass                            | eish5pu56CiE                                       |
| tenant_id                            | 0dafe42cfde242ddbb67b681f59bdb00                   |
| created                              | 2013-12-05T12:42:21Z                               |
| os-extended-volumes:volumes_attached | [{u'id': u'dd315dda-b22a-4cf8-8b77-7c2b2f163155'}] |
| metadata                             | {}                                                 |
+--------------------------------------+----------------------------------------------------+
[root@rdo nova(keystone_admin)]#
[root@rdo nova(keystone_admin)]#
[root@rdo nova(keystone_admin)]#
[root@rdo nova(keystone_admin)]# nova list
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
| ID                                   | Name                 | Status  | Task State | Power State | Networks            |
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
| 0043a8be-60d1-43ed-ba43-1ccd0bba7559 | instance2            | SHUTOFF | None       | Shutdown    | public=172.24.4.228 |
| f324e9b8-ec3a-4174-8b97-bf78dba62932 | myInstanceFromVolume | ERROR   | None       | NOSTATE     | private=10.0.0.3    |
| 9d3c327f-1893-40ff-8a82-16fad9ce6d91 | small-ubuntu         | ACTIVE  | None       | Running     | public=172.24.4.230 |
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
[root@rdo nova(keystone_admin)]#
  • Checking the logs in /var/log/libvirt/qemu/instance-0000001d.log shows:
qemu-kvm: -drive file=rbd:ceph-volumes/volume-dd315dda-b22a-4cf8-8b77-7c2b2f163155:id=volumes:key=AQC804xS8HzFJxAAD/zzQ8LMzq9wDLq/5a472g==:auth_supported=cephx\;none:mon_host=192.168.1.31\:6789\;192.168.1.33\:6789\;192.168.1.38\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=dd315dda-b22a-4cf8-8b77-7c2b2f163155,cache=none: could not open disk image rbd:ceph-volumes/volume-dd315dda-b22a-4cf8-8b77-7c2b2f163155:id=volumes:key=AQC804xS8HzFJxAAD/zzQ8LMzq9wDLq/5a472g==:auth_supported=cephx\;none:mon_host=192.168.1.31\:6789\;192.168.1.33\:6789\;192.168.1.38\:6789: No such file or directory
2013-12-05 12:42:29.544+0000: shutting down
  • Run the qemu-img -h command to check the supported formats. Here the rbd format is missing from the list, so the installed qemu build has no RBD support:
Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed vhdx parallels nbd blkdebug host_cdrom host_floppy host_device file gluster gluster gluster gluster
  • Check the installed qemu version
[root@rdo qemu(keystone_admin)]# rpm -qa | grep -i qemu
qemu-img-0.12.1.2-2.415.el6_5.3.x86_64
qemu-guest-agent-0.12.1.2-2.415.el6_5.3.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
qemu-kvm-0.12.1.2-2.415.el6_5.3.x86_64
qemu-kvm-tools-0.12.1.2-2.415.el6_5.3.x86_64
[root@rdo qemu(keystone_admin)]#
  • Refer to the previous post (Part-1) for installing the correct version of qemu. After that, your nova instance should boot from the volume.


[root@rdo qemu(keystone_admin)]# rpm -qa | grep -i qemu
qemu-img-0.12.1.2-2.355.el6.2.cuttlefish.async.x86_64
qemu-guest-agent-0.12.1.2-2.355.el6.2.cuttlefish.async.x86_64
qemu-kvm-0.12.1.2-2.355.el6.2.cuttlefish.async.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
qemu-kvm-tools-0.12.1.2-2.355.el6.2.cuttlefish.async.x86_64
[root@rdo qemu(keystone_admin)]#

[root@rdo /(keystone_admin)]# qemu-img -h | grep -i rbd
Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed parallels nbd blkdebug host_cdrom host_floppy host_device file rbd
[root@rdo /(keystone_admin)]#


[root@rdo qemu(keystone_admin)]# nova list
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
| ID                                   | Name                 | Status  | Task State | Power State | Networks            |
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
| 0043a8be-60d1-43ed-ba43-1ccd0bba7559 | instance2            | SHUTOFF | None       | Shutdown    | public=172.24.4.228 |
| f24a0b29-9f1e-444b-b895-c3c694f2f1bc | myInstanceFromVolume | ACTIVE  | None       | Running     | private=10.0.0.3    |
| 9d3c327f-1893-40ff-8a82-16fad9ce6d91 | small-ubuntu         | ACTIVE  | None       | Running     | public=172.24.4.230 |
+--------------------------------------+----------------------+---------+------------+-------------+---------------------+
[root@rdo qemu(keystone_admin)]#

Ceph + OpenStack :: Part-4


Testing OpenStack Glance + RBD

  • To allow Glance to store images on a Ceph RBD pool, edit /etc/glance/glance-api.conf (a note on the keyring reference mentioned in the snippet follows it):
default_store = rbd
# ============ RBD Store Options =============================

# Ceph configuration file path
# If using cephx authentication, this file should
# include a reference to the right keyring
# in a client. section
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# RADOS user to authenticate as (only applicable if using cephx)
rbd_store_user = images         ## This is the ceph user that we have created above in this document

# RADOS pool in which images are stored
rbd_store_pool = ceph-images   ## This is the ceph pool for images that we have created above in this document

# Images will be chunked into objects of this size (in megabytes).
# For best performance, this should be a power of two
rbd_store_chunk_size = 8
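
The config comment above mentions referencing the right keyring from a client section in ceph.conf. A minimal sketch of that stanza (assuming the keyring is placed in /etc/glance, as done later in this post) could look like:

[client.images]
    keyring = /etc/glance/ceph.client.images.keyring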
  • Check ceph auth to make sure a key is present for the client.images user. It should be there, since we created it earlier in this document:
[root@rdo ceph(keystone_admin)]# ceph auth list
installed auth entries:

mds.ceph-mon1
   key: AQAxp35ScNUxOBAAfAXc+J5F3/v7jUrpztVRBQ==
   caps: [mds] allow
   caps: [mon] allow profile mds
   caps: [osd] allow rwx
osd.0
   key: AQCOvWpSsKN4JBAA015Uf53JjGCJS4cgzhxGFg==
   caps: [mon] allow profile osd
   caps: [osd] allow *
osd.1
   key: AQCn+mtSULePJxAACKvSkIqF39f5MaFiwsVR6Q==
   caps: [mon] allow profile osd
   caps: [osd] allow *
osd.10
   key: AQCjNIZSOF7AFxAA3vwLvgaB3PI+WAZPt2eIlQ==
   caps: [mon] allow profile osd
   caps: [osd] allow *
osd.2
   key: AQDHBmxSwKTZBxAAyWlQGj8H48sdPGl4PzlFbQ==
   caps: [mon] allow profile osd
   caps: [osd] allow *
osd.3
   key: AQBv/WtSwH5gOBAAHrSWblzq/n/qPbaurBMC2g==
   caps: [mon] allow profile osd
   caps: [osd] allow *
osd.4
   key: AQCiE2xSgDLQMRAAjWotlPtyqaSgpll1P6NTfw==
   caps: [mon] allow profile osd
   caps: [osd] allow *
osd.5
   key: AQCrFGxSOEnjMRAAnrqLcMR8UHu3rTTTQ5DHjw==
   caps: [mon] allow profile osd
   caps: [osd] allow *
osd.6
   key: AQAXFmxSUAmsJxAA83qr0mZ3sGLQbi+C59LXgw==
   caps: [mon] allow profile osd
   caps: [osd] allow *
osd.7
   key: AQBpFmxSOCZFNBAAONPg5I3QnB3Wd/pr7rSkEg==
   caps: [mon] allow profile osd
   caps: [osd] allow *
osd.8
   key: AQC7M4ZSSP9dMhAAh4HQ0uvKFs9yHiQrobXzUA==
   caps: [mon] allow profile osd
   caps: [osd] allow *
osd.9
   key: AQBmNIZSkAIjMRAA3FFGaMhGiPCmYmQ9REisRQ==
   caps: [mon] allow profile osd
   caps: [osd] allow *
client.admin
   key: AQBSt2pS4M5cCBAAUd4jWA1vxJT+y5C9X6juzg==
   caps: [mds] allow
   caps: [mon] allow *
   caps: [osd] allow *
client.bootstrap-mds
   key: AQBSt2pS8IirKxAAQ27MWZ4pEEBuNhCDrj/FRw==
   caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
   key: AQBSt2pSYLXVGRAAYs0R8gXKSEct6ApEy4h6dQ==
   caps: [mon] allow profile bootstrap-osd
client.images
   key: AQDS04xSEJEYABAA8Kl9eEqIr3Y8pyz+tPRpvQ==
   caps: [mon] allow r
   caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=ceph-images
client.volumes
   key: AQC804xS8HzFJxAAD/zzQ8LMzq9wDLq/5a472g==
   caps: [mon] allow r
   caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=ceph-volumes, allow rx pool=ceph-images

[root@rdo ceph(keystone_admin)]#
  • Copy the keyring file to the glance directory, then restart the Glance services. This is the same keyring that we generated earlier in this document:
cp /etc/ceph/ceph.client.images.keyring /etc/glance
chown glance:glance /etc/glance/ceph.client.images.keyring

 #  service openstack-glance-api restart
 #  service openstack-glance-registry restart
 #  service openstack-glance-scrubber restart
  • Before creating a new glance image on the Ceph pool, check the pool's content (in my case it is empty, as it should be, since this is the first time we are using this pool):
[root@rdo init.d(keystone_admin)]# rbd -p ceph-images ls
rbd: pool ceph-images doesn't contain rbd images
[root@rdo init.d(keystone_admin)]#
  • Download a new image, or use one you already have:
[root@rdo var(keystone_admin)]# wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
[root@rdo var(keystone_admin)]# glance add name="ceph-glance-image" is_public=True disk_format=qcow2 container_format=ovf architecture=x86_64 <  ubuntu-12.04.3-desktop-amd64.iso
Added new image with ID: fcc07414-bbb3-4473-a8df-523664c8c9df
[root@rdo var(keystone_admin)]# glance index
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
fcc07414-bbb3-4473-a8df-523664c8c9df ceph-glance-image              qcow2                ovf                       742391808
3c2db0ad-8d1e-400d-ba13-a506448f2a8e precise-server-cloudimg        qcow2                ovf                       254738432
f61edc8d-c9a1-4ff4-b4fc-c8128bd1a10b Ubuntu 12.04 cloudimg amd64    qcow2                ovf                       254738432
[root@rdo var(keystone_admin)]#
  • Now check your Ceph pool; it will contain the image (the glance image ID matches the pool object name, and the image size matches the pool usage):
[root@rdo var(keystone_admin)]# rbd -p ceph-images ls
fcc07414-bbb3-4473-a8df-523664c8c9df
[root@rdo var(keystone_admin)]#

[root@rdo var(keystone_admin)]# du ubuntu-12.04.3-desktop-amd64.iso
724996   ubuntu-12.04.3-desktop-amd64.iso
[root@rdo var(keystone_admin)]#

[root@rdo var(keystone_admin)]# rados df
pool name       category                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
ceph-images     -                     724993           92            0            0           0           63           50           98       724993
ceph-volumes    -                          1            9            0            0           0          284          212           72            8
data            -                  141557761        34563            0            0           0        71843    131424295        71384    146013188
metadata        -                       9667           23            0            0           0           72        19346          851        10102
rbd             -                          1            1            0            0           0         2117        21883          305       226753
  total used       287309244        34688
  total avail     6222206348
  total space     6509515592
  • The numbers add up: with rbd_store_chunk_size = 8, the ~742 MB (~708 MiB) image is split into roughly 90 data objects of 8 MB each plus a few metadata objects, which lines up with the 92 objects and ~724993 KB reported above. Glance will now use Ceph to store and retrieve images.

Please follow Ceph + OpenStack :: Part-5 for the next step in the installation


Ceph + OpenStack :: Part-3



Testing OpenStack Cinder + RBD

  • Creating a cinder volume backed by the Ceph backend:
[root@rdo /]#
[root@rdo /]# cinder create --display-name cinder-ceph-vol1 --display-description "first cinder volume on ceph backend" 10
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-11-27T19:35:39.481075      |
| display_description | first cinder volume on ceph backend  |
|     display_name    |           cinder-ceph-vol1           |
|          id         | 10cc0855-652a-4a9b-baa1-80bc86dc12ac |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@rdo /]#
[root@rdo /]#
[root@rdo /(keystone_admin)]# cinder list
+--------------------------------------+-----------+------------------+------+--------------+----------+-------------+
|                  ID                  |   Status  |   Display Name   | Size | Volume Type  | Bootable | Attached to |
+--------------------------------------+-----------+------------------+------+--------------+----------+-------------+
| 10cc0855-652a-4a9b-baa1-80bc86dc12ac | available | cinder-ceph-vol1 |  5   | ceph-storage |  false   |             |
| 9671edaa-62c8-4f98-a36c-d6e59612141b | available | boot_from_volume |  20  |     None     |  false   |             |
+--------------------------------------+-----------+------------------+------+--------------+----------+-------------+
[root@rdo /(keystone_admin)]#
[root@rdo /]#
[root@rdo /]# rados lspools
data
metadata
rbd
ceph-images
ceph-volumes
[root@rdo /]#
[root@rdo /]#
[root@rdo /]# rbd -p ceph-volumes ls
volume-10cc0855-652a-4a9b-baa1-80bc86dc12ac
[root@rdo /]#
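
You can also inspect the RBD image backing the new volume (a quick check; not part of the original output):

# rbd -p ceph-volumes info volume-10cc0855-652a-4a9b-baa1-80bc86dc12ac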

  • Attaching the cinder volume to an instance:
[root@rdo /(keystone_admin)]# nova list
+--------------------------------------+------------------+---------+--------------+-------------+---------------------+
| ID                                   | Name             | Status  | Task State   | Power State | Networks            |
+--------------------------------------+------------------+---------+--------------+-------------+---------------------+
| 0043a8be-60d1-43ed-ba43-1ccd0bba7559 | instance2        | SHUTOFF | None         | Shutdown    | public=172.24.4.228 |
| 9d3c327f-1893-40ff-8a82-16fad9ce6d91 | small-ubuntu     | ACTIVE  | None         | Running     | public=172.24.4.230 |
| 10d1c49f-9fbc-455f-b72d-f731338b2dd5 | small-ubuntu-pwd | ACTIVE  | powering-off | Shutdown    | public=172.24.4.231 |
+--------------------------------------+------------------+---------+--------------+-------------+---------------------+
[root@rdo /(keystone_admin)]#

[root@rdo /(keystone_admin)]# nova show 9d3c327f-1893-40ff-8a82-16fad9ce6d91
+--------------------------------------+--------------------------------------------------------------------+
| Property                             | Value                                                              |
+--------------------------------------+--------------------------------------------------------------------+
| status                               | ACTIVE                                                             |
| updated                              | 2013-12-03T15:58:31Z                                               |
| OS-EXT-STS:task_state                | None                                                               |
| OS-EXT-SRV-ATTR:host                 | rdo                                                                |
| key_name                             | RDO-admin                                                          |
| image                                | Ubuntu 12.04 cloudimg amd64 (f61edc8d-c9a1-4ff4-b4fc-c8128bd1a10b) |
| hostId                               | 4a74aa79a23a084f73f49a4fedba7447c132ab45c4701ed7fbbb2286           |
| OS-EXT-STS:vm_state                  | active                                                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000018                                                  |
| public network                       | 172.24.4.230                                                       |
| OS-SRV-USG:launched_at               | 2013-12-03T08:55:46.000000                                         |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | rdo                                                                |
| flavor                               | m1.small (2)                                                       |
| id                                   | 9d3c327f-1893-40ff-8a82-16fad9ce6d91                               |
| security_groups                      | [{u'name': u'default'}]                                            |
| OS-SRV-USG:terminated_at             | None                                                               |
| user_id                              | 99f8019ba2694d78a680a5de46aa1afd                                   |
| name                                 | small-ubuntu                                                       |
| created                              | 2013-12-03T08:55:39Z                                               |
| tenant_id                            | 0dafe42cfde242ddbb67b681f59bdb00                                   |
| OS-DCF:diskConfig                    | MANUAL                                                             |
| metadata                             | {}                                                                 |
| os-extended-volumes:volumes_attached | []                                                                 |
| accessIPv4                           |                                                                    |
| accessIPv6                           |                                                                    |
| progress                             | 0                                                                  |
| OS-EXT-STS:power_state               | 1                                                                  |
| OS-EXT-AZ:availability_zone          | nova                                                               |
| config_drive                         |                                                                    |
+--------------------------------------+--------------------------------------------------------------------+
[root@rdo /(keystone_admin)]#

[root@rdo /(keystone_admin)]# virsh list
 Id    Name                           State
----------------------------------------------------
 2     instance-00000018              running

[root@rdo /(keystone_admin)]# cat disk.xml
<disk type='network'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='ceph-volumes/volume-10cc0855-652a-4a9b-baa1-80bc86dc12ac'>
    <host name='192.168.1.38' port='6789'/>
    <host name='192.168.1.31' port='6789'/>
    <host name='192.168.1.33' port='6789'/>
  </source>
  <target dev='vdf' bus='virtio'/>
  <auth username='volumes'>
    <secret type='ceph' uuid='801a42ec-aec1-3ea8-d869-823c2de56b83'/>
  </auth>
</disk>
[root@rdo /(keystone_admin)]#
  • Things you should know about this file
    • source name=<ceph_pool_name/volume_name> ## the Ceph pool we created above and the cinder volume inside it
    • host name=<Your_Monitor_nodes>
    • auth username=<ceph_user_with_rights_to_the_pools_used_with_OpenStack> ## we created two Ceph users, client.volumes and client.images, with access to the pools used by OpenStack
    • secret uuid=<uuid_returned_by_virsh_secret-define> ## refer to the secret we generated above; if you did not note it down, you can list it as shown below.
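
If you did not record the UUID when you defined the libvirt secret, you can list the secrets known to libvirt (assuming the secret was defined on this host):

# virsh secret-list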
  • Attaching the disk device to the instance:
[root@rdo /(keystone_admin)]# virsh attach-device instance-00000018 disk.xml
Device attached successfully
[root@rdo /(keystone_admin)]#
  • Now the Ceph volume is attached to your OpenStack instance; you can use it as a regular block disk.

Making integration more seamless

  • To allow OpenStack to create and attach Ceph volumes using the nova/cinder CLI as well as the Horizon dashboard, add the following values to /etc/nova/nova.conf:
rbd_user=volumes
rbd_secret_uuid=801a42ec-aec1-3ea8-d869-823c2de56b83 
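
For the new values to take effect you will typically need to restart the compute and volume services (a reminder, not from the original post; service names vary by distribution):

# service openstack-nova-compute restart
# service openstack-cinder-volume restart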
  • After updating nova.conf, try creating a volume from the nova CLI and attaching it to an instance:
[root@rdo nova(keystone_admin)]#
[root@rdo nova(keystone_admin)]# nova volume-create --display_name=nova-vol_1 2
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|        status       |               creating               |
|     display_name    |              nova-vol_1              |
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-12-04T14:13:07.265831      |
| display_description |                 None                 |
|     volume_type     |                 None                 |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|         size        |                  2                   |
|          id         | 0e2bfced-be6a-44ec-a3ca-22c771c66cdc |
|       metadata      |                  {}                  |
+---------------------+--------------------------------------+
[root@rdo nova(keystone_admin)]#

[root@rdo nova(keystone_admin)]# nova volume-list
+--------------------------------------+-----------+------------------+------+--------------+-------------+
| ID                                   | Status    | Display Name     | Size | Volume Type  | Attached to |
+--------------------------------------+-----------+------------------+------+--------------+-------------+
| 0e2bfced-be6a-44ec-a3ca-22c771c66cdc | available | nova-vol_1       | 2    | None         |             |
| 9671edaa-62c8-4f98-a36c-d6e59612141b | available | boot_from_volume | 20   | None         |             |
| 10cc0855-652a-4a9b-baa1-80bc86dc12ac | available | ceph-vol1        | 5    | ceph-storage |             |
+--------------------------------------+-----------+------------------+------+--------------+-------------+
[root@rdo nova(keystone_admin)]#

[root@rdo nova(keystone_admin)]# nova list
+--------------------------------------+------------------+---------+--------------+-------------+---------------------+
| ID                                   | Name             | Status  | Task State   | Power State | Networks            |
+--------------------------------------+------------------+---------+--------------+-------------+---------------------+
| 0043a8be-60d1-43ed-ba43-1ccd0bba7559 | instance2        | SHUTOFF | None         | Shutdown    | public=172.24.4.228 |
| 9d3c327f-1893-40ff-8a82-16fad9ce6d91 | small-ubuntu     | ACTIVE  | None         | Running     | public=172.24.4.230 |
| 10d1c49f-9fbc-455f-b72d-f731338b2dd5 | small-ubuntu-pwd | ACTIVE  | powering-off | Shutdown    | public=172.24.4.231 |
+--------------------------------------+------------------+---------+--------------+-------------+---------------------+
[root@rdo nova(keystone_admin)]#

[root@rdo nova(keystone_admin)]# nova volume-attach 9d3c327f-1893-40ff-8a82-16fad9ce6d91 0e2bfced-be6a-44ec-a3ca-22c771c66cdc /dev/vdi
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdi                             |
| serverId | 9d3c327f-1893-40ff-8a82-16fad9ce6d91 |
| id       | 0e2bfced-be6a-44ec-a3ca-22c771c66cdc |
| volumeId | 0e2bfced-be6a-44ec-a3ca-22c771c66cdc |
+----------+--------------------------------------+
[root@rdo nova(keystone_admin)]#
[root@rdo nova(keystone_admin)]#
[root@rdo nova(keystone_admin)]# nova volume-list
+--------------------------------------+-----------+------------------+------+--------------+--------------------------------------+
| ID                                   | Status    | Display Name     | Size | Volume Type  | Attached to                          |
+--------------------------------------+-----------+------------------+------+--------------+--------------------------------------+
| 0e2bfced-be6a-44ec-a3ca-22c771c66cdc | in-use    | nova-vol_1       | 2    | None         | 9d3c327f-1893-40ff-8a82-16fad9ce6d91 |
| 9671edaa-62c8-4f98-a36c-d6e59612141b | available | boot_from_volume | 20   | None         |                                      |
| 10cc0855-652a-4a9b-baa1-80bc86dc12ac | available | ceph-vol1        | 5    | ceph-storage |                                      |
+--------------------------------------+-----------+------------------+------+--------------+--------------------------------------+
[root@rdo nova(keystone_admin)]#


Please follow Ceph + OpenStack :: Part-4 for the next step in the installation



Ceph + OpenStack :: Part-2


Configuring OpenStack

Two parts of OpenStack integrate with Ceph's block devices:

  • Images: OpenStack Glance manages images for VMs.
  • Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to attach volumes to running VMs. OpenStack manages volumes using Cinder services.
    • Create pools for volumes and images:
ceph osd pool create volumes 128
ceph osd pool create images 128
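
You can confirm that the pools were created (a quick check; the listing will also include any pools that already existed):

# rados lspools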
  • Configure OpenStack Ceph Client - The nodes running glance-api and cinder-volume act as Ceph clients. Each requires the ceph.conf file:
[root@ceph-mon1 ceph]# scp ceph.conf openstack:/etc/ceph
  • Installing the Ceph client packages on the OpenStack node
    • First, install the Python bindings for librbd:
yum install python-ceph
    • Install ceph
[root@ceph-mon1 ceph]# ceph-deploy install openstack
  • Set up Ceph client authentication for both pools, along with keyrings
    • Create new users for Nova/Cinder and for Glance:
ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' 
    • Add these keyrings to the glance-api and cinder-volume nodes:
ceph auth get-or-create client.images | ssh openstack tee /etc/ceph/ceph.client.images.keyring
ssh openstack chown glance:glance /etc/ceph/ceph.client.images.keyring
ceph auth get-or-create client.volumes | ssh openstack tee /etc/ceph/ceph.client.volumes.keyring
ssh openstack chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring
    • Hosts running nova-compute do not need the keyring. Instead, they store the secret key in libvirt. To create the libvirt secret you will need the key from client.volumes:
ceph auth get-key client.volumes | ssh openstack tee client.volumes.key
    • On the compute nodes, add the secret key to libvirt. First, create a secret.xml file:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
EOF
    • Define the secret in libvirt from the secret.xml file you just created, and make a note of the UUID in the output:
# virsh secret-define --file secret.xml 
    • Set the libvirt secret value using the key saved above:
# virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key) && rm client.volumes.key secret.xml
  • Configure OpenStack Glance to use Ceph
    • Glance can use multiple back ends to store images. To use Ceph block devices by default, edit /etc/glance/glance-api.conf and add:
default_store=rbd
rbd_store_user=images
rbd_store_pool=images
    • If you want to enable copy-on-write cloning of images into volumes, also add:
show_image_direct_url=True
  • Configure OpenStack Cinder to use Ceph
    • OpenStack requires a driver to interact with Ceph block devices, and you must specify the pool name for the block device. On your OpenStack node, edit /etc/cinder/cinder.conf by adding:
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
glance_api_version=2
  • If you're using cephx authentication, also configure the user and the UUID of the secret you added to libvirt earlier:
rbd_user=volumes
rbd_secret_uuid={uuid of secret}
  • Restart the OpenStack services:
service glance-api restart
service nova-compute restart
service cinder-volume restart
  • Once OpenStack is up and running, you should be able to create a volume with OpenStack on a Ceph block device.
  • NOTE: Make sure the /etc/ceph/ceph.conf file has sufficient permissions to be read by the cinder and glance users (see the example after this note).
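
A minimal way to do that (assuming the default location) is to make the file world-readable:

# chmod 644 /etc/ceph/ceph.conf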

Please follow Ceph + OpenStack :: Part-3 for the next step in the installation


Ceph + OpenStack :: Part-1


Ceph & OpenStack Integration

We can use Ceph block devices with OpenStack through libvirt, which configures the QEMU interface to librbd. To use Ceph block devices with OpenStack, we must first install QEMU, libvirt, and OpenStack. (We will not cover the OpenStack installation in this document; you can use your existing OpenStack infrastructure.) The following diagram explains the OpenStack/Ceph technology stack.
OpenStack/Ceph technology stack

Installing QEMU


qemu-img version 0.12.1 does not have RBD support, so we need to install qemu packages that do (the async builds from the ceph-extras repository). On the OpenStack node, create three YUM repo files: ceph-extras.repo, ceph-extras-noarch.repo, and ceph-extras-source.repo:
[ceph-extras]
name=Ceph Extra Packages and Backports $basearch
baseurl=http://ceph.com/packages/ceph-extras/rpm/centos6/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-extras-noarch]
name=Ceph Extra Packages and Backports noarch
baseurl=http://ceph.com/packages/ceph-extras/rpm/centos6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-extras-source]
name=Ceph Extra Packages and Backports Sources
baseurl=http://ceph.com/packages/ceph-extras/rpm/centos6/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
# yum update
# yum remove qemu-img
# yum --disablerepo=* --enablerepo=ceph-extras install -y qemu-img
# yum --disablerepo=* --enablerepo=ceph-extras install -y qemu-kvm
# yum --disablerepo=* --enablerepo=ceph-extras install -y qemu-guest-agent
# yum --disablerepo=* --enablerepo=ceph-extras install -y qemu-kvm-tools

--> Check that creating a QEMU image with the rbd format now works:

[root@rdo yum.repos.d]# qemu-img create -f rbd rbd:data/foo 10G
Formatting 'rbd:data/foo', fmt=rbd size=10737418240 cluster_size=0
[root@rdo yum.repos.d]#

[root@rdo yum.repos.d]# qemu-img info -f rbd rbd:data/foo
image: rbd:data/foo
file format: rbd
virtual size: 10G (10737418240 bytes)
disk size: unavailable
cluster_size: 4194304
[root@rdo yum.repos.d]#

Installing LIBVIRT


To use libvirt with Ceph, we must have a running Ceph storage cluster and must have installed and configured QEMU as described above:
yum install libvirt
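
After installing, it can help to restart libvirtd and confirm the version you ended up with (a quick check; the service name assumes the EL6 setup used here):

# service libvirtd restart
# virsh version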

Please follow Ceph + OpenStack :: Part-2 for the next step in the installation