Playing with bcache: NVMe Write-Back Caching over HDD
Over the weekend I decided to revisit bcache, Linux’s block-layer caching system, to try to squeeze more performance out of a large spinning disk by caching it with a small SSD. The concept is simple: attach a fast SSD (in this case, a 256 GB NVMe drive) in front of a slower HDD, and let bcache absorb and smooth out writes using the SSD as a write-back cache.
Unfortunately, the results were not what I expected. Instead of a big performance bump, I discovered that my NVMe drive—despite being a Samsung MZVLQ256HBJD—is shockingly slow for sustained writes, barely better than the HDD I was trying to accelerate.
Setup Steps
Here’s the setup I used:
- Wipe the drives
Make sure there’s no lingering filesystem or bcache metadata:
sudo wipefs -a /dev/sda
sudo wipefs -a /dev/nvme0n1p4
- Create backing device on the HDD
sudo make-bcache --wipe-bcache --bdev /dev/sda
- Create cache device on the NVMe SSD
sudo make-bcache --cache --wipe-bcache /dev/nvme0n1p4
- Attach the cache to the backing device
Get the cache set UUID and attach it to the backing device:
# Find the cache set UUID
bcache-super-show /dev/nvme0n1p4 | grep cset.uuid
# Attach (replace <UUID> with the value from above)
echo <UUID> | sudo tee /sys/block/bcache0/bcache/attach
- Enable write-back mode
echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode
- Format and mount
sudo mkfs.ext4 /dev/bcache0
sudo mount /dev/bcache0 /mnt
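Before benchmarking, it’s worth sanity-checking that the cache is attached and actually running in write-back mode. A minimal check via sysfs, assuming the device came up as bcache0:

cat /sys/block/bcache0/bcache/state        # "clean" or "dirty"; "no cache" means the attach failed
cat /sys/block/bcache0/bcache/cache_mode   # the active mode is shown in brackets, e.g. [writeback]
cat /sys/block/bcache0/bcache/dirty_data   # dirty data still waiting to be flushed to the HDD

One caveat: by default bcache detects sequential streams and bypasses the cache for anything above a cutoff (4 MB by default), so a large sequential benchmark may never touch the SSD at all. To force everything through the cache while testing:

echo 0 | sudo tee /sys/block/bcache0/bcache/sequential_cutoff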
Benchmarking Reality
To get a baseline, I ran a basic fio test directly on a file on my root filesystem, which resides on the NVMe drive:
fio --name=test --filename=/home/osan/testfile --size=10G --bs=1M --rw=write \
    --ioengine=libaio --iodepth=32 --direct=0
The result: ~194 MiB/s write bandwidth. And that’s not a typo.
With clat latencies in the hundreds of milliseconds and 99th percentile latencies >2 seconds, the drive showed behavior much closer to a spinning disk than what you’d expect from an NVMe device. Even enabling write-back caching in bcache didn’t improve performance much: it simply matched the NVMe’s raw write speed, which isn’t saying much.
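Note that --direct=0 routes writes through the page cache, so some of that bandwidth is kernel buffering rather than the device itself. A direct-I/O variant of the same run (a sketch, reusing the same test file path) isolates the drive:

fio --name=direct-test --filename=/home/osan/testfile --size=10G --bs=1M --rw=write \
    --ioengine=libaio --iodepth=32 --direct=1 --end_fsync=1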
The disk negotiates 8 GT/s x4 on PCIe Gen3, so the bus isn’t the bottleneck. The drive is simply bad at sustained writes: probably DRAM-less and aggressively optimized for bursty client workloads.
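If you want to verify the negotiated link yourself, lspci reports it in the LnkSta field. A quick check (the 03:00.0 address below is a placeholder; substitute your controller’s actual address):

lspci | grep -i "non-volatile"            # find the NVMe controller's PCI address
sudo lspci -vv -s 03:00.0 | grep LnkSta   # e.g. "Speed 8GT/s, Width x4"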
Takeaways
If you’re planning to use an NVMe SSD as a bcache write-back device, don’t assume all NVMe drives are fast. Many low-end OEM drives—especially DRAM-less models like the MZVLQ256HBJD—fall off a performance cliff under sustained write load.
In my case, bcache didn’t offer a performance boost because the cache layer was just as slow as the backing device. In retrospect, I would’ve been better off checking the SSD’s sustained write behavior first using something like:
sudo nvme smart-log /dev/nvme0n1
Or running long-duration writes and monitoring latency and thermal throttling.
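Something along these lines would have flagged the problem early (a sketch; the runtime and file path are arbitrary). Sustain writes with fio in one terminal while polling the drive’s temperature and warning counters in another:

# Terminal 1: time-based sustained write, long enough to exhaust any SLC write cache
fio --name=soak --filename=/home/osan/testfile --size=10G --bs=1M --rw=write \
    --ioengine=libaio --iodepth=32 --direct=1 --time_based --runtime=600

# Terminal 2: poll temperature and thermal-warning fields every 5 seconds
watch -n 5 'sudo nvme smart-log /dev/nvme0n1 | grep -Ei "temperature|warning"'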
There’s still value in bcache, especially if you have a genuinely fast SSD or enterprise-grade NVMe to use as the cache. But with a sluggish consumer NVMe like mine, bcache turns into a no-op at best and a source of extra latency at worst.