Sparse files

January 21, 2020
Filed under: Development, Linux 

A really long time ago I started writing up something about sparse files, to show how they can be used for automatically growing file systems. Unfortunately I never got very far, but it's one of those things I'm still really happy to play around with, so I might as well try to finish this entry.

A sparse file is basically an empty file with a declared size, but with no blocks assigned to it up front. You can end up with a huge file that uses only a small part of the hard drive, because real disk space is not taken up until it is actually written to. This can be really useful for creating loopback disk images, for example. Below is an example of how I create an empty image file which takes up 0 bytes of disk space, but is 512 megabytes in size according to the file system.

oan@work7:~$ dd if=/dev/zero of=file.img bs=1 count=0 seek=512M
0+0 records in
0+0 records out
0 bytes (0 B) copied, 9.0712e-05 s, 0.0 kB/s

Here we actually create the file. Notice the seek=512M: this makes dd jump 512 megabytes forward before writing, and since count=0 nothing is actually written, so the whole file is one big hole.
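If dd feels roundabout, the same sparse file can be created with other standard tools. A minimal sketch, assuming GNU coreutils (the file name file2.img is just an example, not from the session above):

```shell
# Create a 512 MB sparse file without dd (GNU coreutils assumed)
truncate -s 512M file2.img

# Apparent size vs. blocks actually allocated on disk
stat -c 'apparent=%s bytes, allocated=%b blocks' file2.img
```

Both approaches end with the same result: a file whose apparent size is 512 MB but which allocates no blocks yet.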

oan@work7:~$ du -shx file.img
0 file.img

And here you see the amount of disk space actually used by the file at this point.
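The difference between the apparent size and the allocated size is easy to see side by side. A quick check on the image we just created:

```shell
ls -lh file.img    # apparent size: 512M
du -h file.img     # allocated space: typically 0 at this point
ls -s file.img     # first column: allocated blocks
```

ls reports the size recorded in the inode, while du counts the blocks actually backed by disk, which is why the two disagree for sparse files.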

oan@work7:~$ mkfs.btrfs file.img
SMALL VOLUME: forcing mixed metadata/data groups
btrfs-progs v4.0
See for more information.
Turning ON incompat feature 'mixed-bg': mixed data and metadata block groups
Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
Turning ON incompat feature 'skinny-metadata': reduced-size metadata extent refs
Created a data/metadata chunk of size 8388608
ERROR: device scan failed 'file.img' - Operation not permitted
fs created label (null) on file.img
nodesize 4096 leafsize 4096 sectorsize 4096 size 512.00MiB

At this point we create a btrfs file system inside the sparse file we previously created. (The "device scan failed" error is likely just because the command was run without root privileges; the file system itself was created fine.)

oan@work7:~$ du -shx file.img
4.1M file.img

And now you can see that the file system metadata we just created uses 4.1 megabytes of actual disk space. The space available inside the file system should be approximately 512 MB minus that metadata overhead.

oan@work7:~$ sudo mount -t btrfs -o loop file.img file
oan@work7:~$ sudo dd if=/dev/urandom of=./file/lalala.rand bs=1K count=4K
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.344203 s, 12.2 MB/s

We mount the file system through a loop device and create a small 4 MB file of random data inside it.

oan@work7:~$ du -shx file.img
8.1M file.img

And here you can see how the actual diskspace used has grown to 8.1M in size.
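The same grow-on-write behaviour can be demonstrated without any file system at all, just by writing into the hole. A small sketch (the file name demo.img is made up for the example):

```shell
# Create a 64 MB sparse file: apparent size 64M, nothing allocated
dd if=/dev/zero of=demo.img bs=1 count=0 seek=64M
du -h demo.img    # typically 0 at this point

# Overwrite the first 4 MB; conv=notrunc keeps the apparent size at 64M
dd if=/dev/urandom of=demo.img bs=1M count=4 conv=notrunc
du -h demo.img    # now roughly 4M -- only the written part is allocated
```

Only the range that has actually been written to consumes disk space; the rest of the file remains a hole.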

This is a really interesting way of using your disk space optimally, for example when running Yocto builds locally. I hope this can be useful for others as well.