Xen hyperconvergence the CEPH way

Xen virtualization with CEPH storage, XCP-ng + RBDSR

While the world is busy containerizing everything with Docker and pushing further with Kubernetes and Swarm, a case can still be made for 'traditional', or thick, virtualization.

In the landscape of virtualization, VMware might claim the top spot, but Xen is a strong contender, and oVirt (which I wrote about in my last article) is also a mature technology. There are a few more, of course: Hyper-V, VirtualBox, etc.

Following a vastly unpopular move by Citrix a couple of months ago to make XenServer less functional in its free version, I started experimenting with oVirt/CEPH/iSCSI and got things mostly working without OpenStack. The goal was software-defined shared storage (CEPH) so that VMs could move around (live migration) without any unnecessary complexity (OpenStack), and without using Gluster (conveniently integrated with oVirt, but I wanted CEPH).

Since then, Olivier Lambert, the driving force behind Xen Orchestra, took the matter into his own hands and, with the help of some people, ran a successful fundraising campaign to produce an entirely open-source, full-featured rebuild of Citrix's XenServer, named XCP-ng, which stands for Xen Cloud Platform New Generation (a throwback to an earlier XCP project). In only a couple of months, Olivier and his team have managed to produce quality builds that have proved very popular, especially given the features announced and delivered.

I have been testing XCP-ng since then, moving VMs around, and it has been a very pleasant experience. The forums are very alive and Olivier (the other one) himself is quite active in following up on issues. It has been a few months now and I have had to do very little: my XCP-ng test servers 'just work'. I was going to test CEPH with iSCSI as a backend for XCP-ng, but I was still battling the requirements: two MDS servers, two iSCSI gateways, a bunch of specific non-mainstream packages for the iSCSI gateway CLI, etc. Although I got it working with plain iSCSI, the new CEPH/iSCSI mechanism still needed some work on my part.

And then recently, I was told about an interesting project that makes RBD devices (CEPH's RADOS Block Devices) available to Xen directly as an SR (Storage Repository) using specifically designed plug-ins. Started about a year ago, RBDSR comes with a very interesting set of features to create, delete and perform many other base operations on VM storage for Xen, directly as an SR backed by a CEPH RBD storage pool.

It achieves this by making every Xen host a client member of the CEPH cluster, giving it access to the RBD pools, then using some cleverly designed plug-ins to translate SR operations into RBD operations.
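To get a feel for what that translation means in practice, here is a minimal sketch using CEPH's python-rbd bindings, the same kind of library the plug-ins build on. It is not the plug-ins' actual code, and the pool and image names are made up for the example, but it shows how an SR-level 'create a virtual disk' request boils down to creating an RBD image in the shared pool:

    import rados
    import rbd

    # Connect to the cluster as a client, which is exactly what each Xen host becomes.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # Open the pool backing the SR (the pool name here is an assumption for the example).
    ioctx = cluster.open_ioctx("RBD_XenStorage")

    # Creating a VDI amounts to creating an RBD image of the requested size.
    rbd.RBD().create(ioctx, "vdi-example", 8 * 1024 ** 3)  # 8 GiB

    ioctx.close()
    cluster.shutdown()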

I had tried a similar approach on my oVirt nodes, making them client members of my CEPH cluster. I first had to install packages that conflicted with Gluster, and although it worked, it was not clean. The main issue was that oVirt saw the storage as local only. The beauty of RBDSR is that it presents the storage as shared in the eyes of Xen.

So RBDSR plugs the Xen server straight into an RBD pool. That means no MDS, no iSCSI gateway, no iSCSI at all, even. It sounds too good to be true, but it actually works.

The installation directions given by RBDSR are quite simple. It only took a few minutes to make my CEPH cluster provide storage for my Xen servers. I proceeded to test live migration and was quite pleased with the results.
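For illustration, once the hosts are CEPH clients, attaching the pool is essentially one xe sr-create call. The sketch below drives it from Python; the SR type string and the device-config keys vary between RBDSR versions, so treat them as assumptions and check the project's README for the exact syntax:

    import subprocess

    # Assumption: RBDSR registers an SR type named "rbd" and reads its options
    # from device-config; the keys below are illustrative, not authoritative.
    subprocess.check_call([
        "xe", "sr-create",
        "name-label=CEPH RBD storage",
        "type=rbd",
        "shared=true",
        "device-config:rbd-mode=kernel",
    ])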

Looking at the Python code of the plug-ins, we quickly realise that there is much more to them than a simple storage link. Cloning, snapshots, resizing and other features are there, but sadly there is little documentation about them on the GitHub page.
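For the curious, the RBD operations those features map onto are easy to demonstrate with the same python-rbd bindings. The following is a hedged illustration of a snapshot-then-clone sequence, not the plug-ins' actual logic, and again the pool and image names are invented:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("RBD_XenStorage")  # pool name assumed for the example

    # Snapshot an existing VDI image and protect the snapshot so it can be cloned.
    image = rbd.Image(ioctx, "vdi-example")
    image.create_snap("base")
    image.protect_snap("base")
    image.close()

    # A VDI clone then becomes a copy-on-write RBD clone of that protected snapshot.
    rbd.RBD().clone(ioctx, "vdi-example", "base", ioctx, "vdi-example-clone")

    ioctx.close()
    cluster.shutdown()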

In the course of only a couple of months, we have seen a new player emerge in the virtualization world, and XCP-ng looks quite promising, especially if it gets to the point of plain yum install and yum update as planned. Coupled with the RBDSR plugin, which I hope will see further development and improvement, we can build a very cost-effective, highly available 3-node Xen cluster with CEPH shared storage. The CEPH components can even be virtualized, and the management console self-hosted, bringing it all together in a neat hyperconverged setup. I will continue experimenting with this technology, but it is already on an exciting path.

As a side note, Olivier L. and the Xen Orchestra people are putting the finishing touches on another turn-key hyperconvergence product for Xen, bringing a shared SR that is self-hosted on the Xen servers as well, similar to my Xen/CEPH/RBDSR setup. XOSAN is, ironically, GlusterFS-based. The GlusterFS-versus-CEPH question for that specific hyperconvergence case has already been addressed by Olivier L., though that discussion seems to have taken place before RBDSR was made available.
