Volume Federation

RBD

When using RBD (Ceph) as the driver for Cinder, volumes are stored in a Ceph pool. Access to the pool is granted through a CephX authentication key, and this key must be shared among all Nova compute nodes.

Please follow Ceph’s documentation on setting up OpenStack with RBD.
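The following is a minimal verification sketch, assuming the Cinder pool is named volumes, the CephX client is named cinder, and the python3-rados and python3-rbd bindings are installed; it checks that a compute node can reach the pool with the shared key:

    # Verify that this compute node can reach the Ceph pool with the
    # shared CephX key. The pool name ("volumes") and client name
    # ("cinder") are assumptions and should match your Cinder RBD setup.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='cinder')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('volumes')  # the Cinder RBD pool
        try:
            # Listing images proves both network reachability and that the
            # CephX key grants access to the pool.
            print(rbd.RBD().list(ioctx))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()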

The libvirt secret must match the credentials used to access the pool in each deployment. For example, if you have two OpenStack deployments that access the RBD pool through libvirt secrets with UUIDs bab2fea0-a834-48ad-8b27-6b7ce6aeab13 and dfb36a63-484c-48b2-8a47-4de8915dacfb, each mapping to a CephX key, then the compute nodes in both deployments should have both of these secrets configured.
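As a sketch of what this means in practice, the following registers both secrets on a compute node through virsh. The UUIDs are the ones from the example above; the key values and the client.cinder usage name are placeholders and would come from the respective Ceph clusters:

    # Register both libvirt secrets so volumes from either deployment can
    # be attached on this compute node. The key values below are
    # placeholders; real keys would come from "ceph auth get-key" on each
    # cluster.
    import subprocess
    import tempfile

    SECRETS = {
        'bab2fea0-a834-48ad-8b27-6b7ce6aeab13': 'AQ...key-from-first-cluster==',
        'dfb36a63-484c-48b2-8a47-4de8915dacfb': 'AQ...key-from-second-cluster==',
    }

    SECRET_XML = """<secret ephemeral='no' private='no'>
      <uuid>{uuid}</uuid>
      <usage type='ceph'>
        <name>client.cinder secret {uuid}</name>
      </usage>
    </secret>
    """

    for uuid, key in SECRETS.items():
        with tempfile.NamedTemporaryFile('w', suffix='.xml') as xml:
            xml.write(SECRET_XML.format(uuid=uuid))
            xml.flush()
            subprocess.run(['virsh', 'secret-define', '--file', xml.name],
                           check=True)
        subprocess.run(['virsh', 'secret-set-value', '--secret', uuid,
                        '--base64', key], check=True)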

Note

Due to bug 1635008, prior to the Ocata release it is not possible to both use RBD for ephemeral storage and federate volumes stored in RBD. This is because Nova trusts the RBD connection information it uses for ephemeral disks over the connection information coming from Cinder, which is the correct source.

LVM

The LVM driver uses an iSCSI server to share Cinder volumes with Nova compute nodes. The credentials for accessing each volume are randomly generated by Cinder and passed to Nova during volume attachment. No special setup is needed in this scenario, other than ensuring that compute nodes can reach the iSCSI server over the network.
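For illustration, the connection information that the LVM/iSCSI driver hands to Nova at attach time looks roughly like the dictionary below; every value shown is made up, and the CHAP credentials are the randomly generated ones mentioned above:

    # Illustrative shape of the iSCSI connection info Cinder returns for
    # an LVM volume attachment; all values are made up. Nova logs in to
    # the target with the CHAP credentials, so the only deployment
    # requirement is that the compute node can reach target_portal.
    connection_info = {
        'driver_volume_type': 'iscsi',
        'data': {
            'target_portal': '192.0.2.10:3260',
            'target_iqn': 'iqn.2010-10.org.openstack:volume-<volume-uuid>',
            'target_lun': 1,
            'auth_method': 'CHAP',
            'auth_username': 'randomly-generated-by-cinder',
            'auth_password': 'randomly-generated-by-cinder',
        },
    }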