
Varnish Cache has a history of relying on the operating system kernel for its performance. I've been keeping an eye on two very interesting components in the Linux kernel that were built exactly for this scenario: bcache and dm-cache. Both of these components use solid state drives to cache hard disks, which gives us the possibility of adding a secondary caching layer to our setup. So, we add one terabyte of flash storage to our 20 terabyte server, and suddenly we have quite a bit of cache.

Adding flash storage for caching gives us two distinct advantages. One is that the SSD can be used for read caching; it should be able to provide a reasonable hit rate if there are any patterns in the traffic. The second advantage is that it can provide a writeback cache for the HDDs. Sudden bursts of traffic will create significant IO load, and if the writeback cache works as it should, the caching layer will push the writes to the SSD and let them trickle down onto the HDD when the HDD has IO capacity to spare.

The test machines are Haswell servers with E5 Xeon CPUs, solid state drives and hard drives, running Debian Jessie (testing). Bcache was a bit of a bother to set up, as nobody has packaged the user space tools yet. There is a PPA, however, so getting packages installed is simple. You might need to rebuild the package, depending on your platform.

Setting up bcache starts with two partitions: one will be the HDD partition you want to cache and the other is the SSD partition. Format the backing device, then initialize the cache and write down its UUID, which you need in order to attach the cache to the backing device. You probably want to enable writeback caching. The setup is persisted through some udev magic, so you can simply mount the filesystem and start using it.
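The following is a minimal sketch of that procedure rather than the exact commands from my setup; the device names (/dev/sdb1 for the HDD partition, /dev/sdc1 for the SSD partition) and the /dev/bcache0 node are assumptions and will differ on your system:

    # Format the backing (HDD) device and the cache (SSD) device.
    # make-bcache -C prints a "Set UUID"; write it down for the attach step.
    make-bcache -B /dev/sdb1
    make-bcache -C /dev/sdc1

    # udev normally registers the devices automatically; if not, do it by hand:
    echo /dev/sdb1 > /sys/fs/bcache/register
    echo /dev/sdc1 > /sys/fs/bcache/register

    # Attach the cache set to the backing device using the UUID noted above.
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach

    # Switch from the default writethrough mode to writeback caching.
    echo writeback > /sys/block/bcache0/bcache/cache_mode

    # Create a filesystem on the combined device and mount it.
    mkfs.ext4 /dev/bcache0
    mount /dev/bcache0 /mnt

Once the cache set is attached, all reads and writes go through /dev/bcache0 and the kernel decides what ends up on the SSD.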

The other implementation I've looked at is dm-cache. It is basically the same as bcache, but happens in the device mapper in the Linux kernel. Setting up the device mapper by hand is somewhat of a bother, so I found it a lot easier to use LVM to do this. Recent versions of LVM support dm-cache, so I opted to use LVM to set it up.
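A minimal sketch of what the LVM-based setup can look like, assuming a reasonably recent LVM with cache support; the device names, the volume group name vg0, the LV names and the sizes are all placeholders:

    # Put both the HDD partition and the SSD partition into one volume group.
    pvcreate /dev/sdb1 /dev/sdc1
    vgcreate vg0 /dev/sdb1 /dev/sdc1

    # Create the origin (data) LV on the HDD.
    lvcreate -n data -L 900G vg0 /dev/sdb1

    # Create a cache pool on the SSD; writeback gives the write caching
    # behaviour described above (the default is writethrough).
    lvcreate --type cache-pool --cachemode writeback -n cachepool -L 100G vg0 /dev/sdc1

    # Attach the cache pool to the origin LV.
    lvconvert --type cache --cachepool vg0/cachepool vg0/data

    # Create a filesystem on the cached LV and mount it as usual.
    mkfs.ext4 /dev/vg0/data
    mount /dev/vg0/data /mnt

Running lvs -a vg0 afterwards shows the cache pool and its internal sub-LVs alongside the origin LV, which is a quick way to confirm the cache is in place.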
