Sparse files

January 21, 2020
Filed under: Development, Linux 

A really long time ago I was writing up something about sparse files to show how they can be used for automatically growing file systems. Unfortunately I never got very far but figured it’s one of those things I’m still really happy to play around with, so I might as well try to finish this entry.

A sparse file is basically a file with a declared size but no blocks assigned to it up front, so you can end up with a huge file that only uses a small part of the hard drive; real disk space is only taken up as the file is actually written to. This can be really useful for creating loopback disk images, for example. Below is an example of how I create an empty image file which takes up 0 bytes of disk space, but is 512 megabytes in size according to the filesystem.

oan@work7:~$ dd if=/dev/zero of=file.img bs=1 count=0 seek=512M
0+0 records in
0+0 records out
0 bytes (0 B) copied, 9.0712e-05 s, 0.0 kB/s

Here we actually create the file; notice the seek=512M. With bs=1 and count=0, dd seeks 512 megabytes forward and writes nothing at all, which leaves a 512 MB file consisting entirely of a hole.
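
As a side note (not part of the original session), the same sparse file can be created without dd; a minimal sketch using GNU coreutils:

# Create (or extend) a sparse 512 MiB file without writing any data blocks
truncate -s 512M file.img

# For comparison, fallocate actually reserves the blocks up front, so the result is not sparse
fallocate -l 512M notsparse.img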

oan@work7:~$ du -shx file.img
0 file.img

And here you see the amount of disk space actually used by the file at this point.
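
Another quick way (not from the original post) to see the difference between the apparent size and what is actually allocated is ls with the -s flag:

# -l shows the apparent size (512M), -s shows the allocated blocks (0 for a fresh sparse file)
ls -lsh file.img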

oan@work7:~$ mkfs.btrfs file.img
SMALL VOLUME: forcing mixed metadata/data groups
btrfs-progs v4.0
See for more information.
Turning ON incompat feature 'mixed-bg': mixed data and metadata block groups
Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
Turning ON incompat feature 'skinny-metadata': reduced-size metadata extent refs
Created a data/metadata chunk of size 8388608
ERROR: device scan failed 'file.img' - Operation not permitted
fs created label (null) on file.img
nodesize 4096 leafsize 4096 sectorsize 4096 size 512.00MiB

At this point we create a btrfs filesystem inside the sparse file we previously created.

oan@work7:~$ du -shx file.img
4.1M file.img

And now you can see that the filesystem metadata we created uses 4.1 megabytes of actual disk space. The filesystem space available should be approximately 512M – 4.1M.

oan@work7:~$ sudo mount -t btrfs -o loop file.img file
oan@work7:~$ sudo dd if=/dev/urandom of=./file/lalala.rand bs=1K count=4K
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.344203 s, 12.2 MB/s

We mount the filesystem using a loopback device and create a small 4 MB file of random data on it.

oan@work7:~$ du -shx file.img
8.1M file.img

And here you can see how the actual diskspace used has grown to 8.1M in size.

This is a really interesting way of utilizing your disk space optimally, for example when running Yocto builds locally. I hope this can be useful for others as well.

Gitea and a lightweight CI

Recently I've been frustrated at work while using Gitlab and Jenkins for various reasons: some of the integrations are really fragile due to plugins we use, both Jenkins and Gitlab are incredibly bloated and use insane amounts of resources, and they are simply not reasonable choices for a private setup. I also recently replaced my server at home (basically a machine that does almost everything I want at home), going from a 32-bit Atom to an Intel i5 machine with 16 gigs of RAM, which means I have totally different resources to work with. For example, 32-bit i386 CPUs are not supported out of the box by Docker, and the old CPU was quite overloaded. With the new box I've been able to play around a little with my setup.

I've previously used just a basic, manual git setup at home, with approximately 110-ish repositories in it. I've been playing around with gitea on the new server and am very pleased with it, even though it took a bit of work to get used to. In all honesty, I've only done fairly basic work with it so far. The only really complex thing I've done was to move my existing git repositories into the gitea environment by scripting a bit against the gitea web-based API.
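
The migration script itself isn't shown here, but the idea was roughly the sketch below: create each repository through the API and push the existing history into it. The host name, user, paths and token are placeholders, and the exact API fields may differ between gitea versions.

#!/bin/sh
# Rough sketch: import every bare repository under /srv/git into gitea.
# GITEA_TOKEN is an API token created under user settings -> applications.
for repo in /srv/git/*.git; do
    name=$(basename "$repo" .git)

    # Create an empty repository through the REST API
    curl -s -X POST "https://gitea.example.com/api/v1/user/repos" \
        -H "Authorization: token $GITEA_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"name\": \"$name\", \"private\": true}"

    # Push the existing history, branches and tags into it
    git -C "$repo" push --mirror "ssh://git@gitea.example.com/youruser/$name.git"
done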

Regarding gitea, so far I’ve noticed the following things:

  • Very slim by comparison to other options, currently uses around 60 MB of RAM and not a lot of CPU from what I’ve seen. Especially considering what you get.
  • UI is fairly similar to gitlab/github.
  • Setup was very simple except for the database connection (I wound up just using sqlite3, I believe; I was lazy and don't expect more than a few users).
  • The pull request workflow is really nice.
  • Issue tracking seems to work fairly well.
  • Docker setup with volumes is very easy (a minimal example follows this list).
  • Seems to have the essentials in plugins etc that I need.
  • API seems very nice, I’ve only used it for the migration so far though.
  • The only bad part I’ve seen so far is that the administration panel might be a bit spartan at times, but I don’t really mind.
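
To give an idea of what I mean by the docker setup being easy, something along these lines is basically all it takes (ports and the host path are examples, not my exact configuration):

# Run gitea in docker with all state kept in a single host directory
docker run -d --name gitea \
    -p 3000:3000 -p 2222:22 \
    -v /srv/gitea:/data \
    gitea/gitea:latest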

Regarding the CI side, my first impressions are:

  • I absolutely love the yaml file format so far.
  • UI is incredibly clean, on the verge of too clean.
  • Integration with gitea was super simple once I actually got it working (documentation was not 100% accurate I think).
  • Simple to get started with if you have a sane build pattern.
  • Nice integration with gitea, and you get marks on build statuses etc. It would be interesting to find out whether you can also block a merge based on build results.
  • I've managed to make a simple build of a playground project of mine by adding a Dockerfile which is built into an image when the build starts; a later stage of the build then builds my project inside the docker image we just built (see the sketch after this list).
  • First time using docker-compose so it was a bit of a hassle understanding this, but it was fun ;). Not always obvious where some configuration should be placed etc.
  • Yaml file format definitely not enough for the type of pipelines we do professionally though :(.
  • Pleasantly surprised you can actually add and remove build slaves to the platform.
  • Also pleasantly surprised by how to do parallel build steps. Syntax is super simple.
  • I really lack some form of artifact storage, or at least a plugin for something that is not either cloud based or incredibly enterprisey (artifactory). Actually, I've had issues just finding a good lightweight open source artifact storage so far…
  • I also lack some form of nice presentation of various build artifacts, code coverage or unit test results etc.

In all, I'm pleasantly surprised by how simple this was to set up and configure. It was a fun trip and I'll continue using it at home for now.

As a side note, for the stuff I have on github I do like to use Travis; it also has a nice syntax and is a nice solution.

DevOps World Nice 2018

Long time again, I seem to be bad at updating this site at times.

I've recently moved into DevOps by coincidence. I feel like this is what I've done for ages, I just never put a name on it. Also, this time it's proper DevOps I'd say: a team of people maintaining build environments, CI/CD environments, gitlab, and so forth. It's been a long time since I did proper server maintenance however, and it's kind of interesting to see what has happened in the meantime. New tools, new possibilities, containerization, monitoring, virtualization, cloud, and so forth.

Because of said circumstances, I went to the DevOps World conference (previously Jenkins World, I guess) in Nice and was pleasantly surprised by some of the development being done on Jenkins and what they are trying to do. I'm a bit cautious about some of the changes though, and about what to expect of the future. One of my main issues from work is that a lot of our clients are very sensitive about where data is stored, to the point where they set up private clouds to keep stuff in house and in many cases block internet access almost completely. A lot of the changes in Jenkins and from CloudBees are about moving stuff into the public clouds from private installations, i.e. something that we will not be able to do, even though my company is all for working as openly as possible and driving our clients to open up what they are doing as well. There is obviously stuff you don't want to open up as a corporation, but a lot of it is not really specific to you, and the help you can receive from collaborating with your competition is actually quite tremendous, driving down costs for everyone.

So, all that said, I'm super happy to have been to the conference and to have heard a lot of new stuff. My main takeaways:

Jenkins X seems really interesting. It's basically a stripped-down Jenkins master running in Kubernetes or a similar cloud/swarm environment, but you spin up a new master for every build, so the single point of failure is removed, and the Kubernetes ecosystem handles most of the extraneous systems such as web-hooks, etc. The bad part is that you lose the Jenkins UI; Jenkins X is completely command-line based. CloudBees has some proprietary UIs for Jenkins X however…

The Configuration as Code plugin seems really nice, as I'd really like to be able to take my Jenkins master and easily create staging environments to test new changes. I will most definitely be experimenting with this.
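
I haven't actually set this up yet, so take this with a grain of salt, but as I understand it the plugin reads a YAML file pointed out by an environment variable; a tiny example of the kind of configuration I want to try:

# Write a deliberately tiny Configuration as Code file and point Jenkins at it
mkdir -p /var/jenkins_home/casc
cat > /var/jenkins_home/casc/jenkins.yaml <<'EOF'
jenkins:
  systemMessage: "Staging copy of the production master"
  numExecutors: 2
EOF
export CASC_JENKINS_CONFIG=/var/jenkins_home/casc/jenkins.yaml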

There were talks about breaking up your Jenkins master and basically dividing and conquering: do different things on different servers, and don't create a Jenkinstein with almost all the plugins in a single instance to cater to all types of jobs.

Jenkins seems to be moving a lot into the public cloud, which unfortunately is bad for our industry as already mentioned. However, I'm really intrigued by the scaling possibilities that were shown using GCP and AWS, for example.

Also, a lot of other good talks.

All this said, I've been experimenting with alternative CI and git hosting solutions, which I find rather interesting, and I'll write more about that in the near future… 😉

SSL certificate updated slightly late

Due to being away for the duration of last week, I missed the renewal of the SSL certificate. This should be fixed now as I've moved to letsencrypt, using certbot for autorenewal. I hope this solves the problem for the future.
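
For reference, the certbot side of this is essentially one command, plus letting its cron job or systemd timer handle the renewals; roughly (the domain is of course a placeholder):

# Obtain and install a certificate; the apache plugin edits the vhost configuration
sudo certbot --apache -d example.com

# Renewal is handled automatically by a cron job/systemd timer; this just tests that it works
sudo certbot renew --dry-run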

As a side note, I was attending DevOps World in Nice last week. A very interesting week; I'll try to write up some of the topics I found interesting. Also, I've been experimenting with “other” solutions than Jenkins + Gitlab + Atlassian tools as of late; I found some rather nice setups using gitea that I'll try to write about as well.

Iptables-tutorial work

It's been a while since I looked at the iptables-tutorial, to say the least. The last real commits were 9 years ago and the last proper release was closer to 13 years ago…

Over the last two or three days I kind of picked it up again, mostly for fun. Towards the end of my maintenance of the project I kind of burnt myself out on the whole topic; I just did not want to do the whole thing anymore. I've grown and changed as a person since then, and I don't have the same spare time, for one.

My first order of business: the build system was always a mess, and I started cleaning out stuff that shouldn't be there. A bunch of old scripts have been removed; I managed to remove the dependency on the fabpdf backend for jade, and the eps_to_png script along with it, so almost all of the scripts are gone. The Spanish and Portuguese builds were similarly cleaned up. Finally, a Travis file was added to get automated builds running on Travis, and this actually works now!

I'm getting close to making a 1.2.3 release, just to get something new out there. The actual content has barely changed to be honest, maybe a few words at most, but it feels like something that would be nice to get out.

The task of getting this documentation up to par is a tremendous effort to be honest, and I'd be really interested in getting help from anyone who reads this. If you feel like contributing, contact me, check out the code on github, add bugs/tasks for things you find that are wrong, or provide pull requests. I would be thrilled to have other people working on this as well, so it becomes more vibrant again and doesn't stagnate as it has done over time.

Retropie table writeup

May 27, 2016
Filed under: Development, Hardware, Linux, Personal 

A co-worker of mine asked me to do a writeup of my retropie table build and yes, I guess I should. I rarely write anything about what I do anymore 🙂.

So, I have kids, and I thought it would be fun for them to play some of the old stuff I used to play, and I found an arcade table build I had seen online quite interesting, so I set out to do something similar. I wound up going to Ikea with the kids and told them to choose colors; we wound up with a pink IKEA Lack table… ;).

I also have a bunch of old screens at home; I picked a very heavy 17″ monitor and removed all the casings etc, greatly reducing the weight. My choice was slightly bad though, for two reasons: the connectors stand at a 90 degree angle to the screen, so it wound up not fitting inside the table and is pointing out on the underside as it is now; secondly, the viewing angles are so-so, so I wound up rotating the screen 180 degrees, as the viewing angles were much better from that direction and /boot/config.txt on Raspbian systems has options to hardware-rotate the output. I also bought a set of 2 joysticks + 20 buttons with built-in backlight and a Xin-Mo based USB controller from ebay, a small speaker from a local shop, power adapters I had at home, and a connector for power input to the table. I decided to pick up a powered USB hub as well, to fit so it was reachable from the outside. I also had a sheet of plexiglass lying around for many years which wound up useful, and a USB wifi dongle that I reused for connectivity.

We started out by measuring and sawing the holes needed and then removing the innards that were in the way. I used some paint masking tape to protect the table; more on this later. This work was very easy to do with a dremel with a circular saw add-on. How the underside looked wasn't so important, but I tried to make the cuts decent looking at least.

Once this was done, I test-fitted and probed a bit to work out how to get the raspberry pi, monitor and power system fitted inside the table, before moving on to sawing the plexiglass sheet to the same size as the table, drilling and countersinking the screw holes, and temporarily fitting the plexiglass while drilling the holes for the joysticks + buttons. Some sanding and fixing of edges followed. I then removed the plexiglass sheet and drilled and countersunk holes for screwing the joysticks to the topside of the table, underneath the plexiglass.

Moving on, I screwed in the joysticks and fitted the power adapters for the monitor, USB hub and raspberry pi. I made 6 foam inserts to rest the monitor on and glued them in place inside the table, wired up the monitor and put it in. Removing the paint masking tape, I realized that I had used some shitty tape with much too hard-tacking adhesive, meaning that I managed to pull away a bunch of the foil/tape (the “paint” on the table; it's not painted, but rather foiled with a layer of colored plastic). When I realized this I started rethinking the paint scheme I had already planned and decided to do some modifications to hide the errors and possibly heighten the feeling of the table.

I did the paintjob, made some simple paint masks etc and airbrushed the table with black borders, softish blue and red and green colors further softened with a few drops of white.

After this had dried for a few days I put the plexiglass on, screwed in all the buttons, joystick heads etc, installed the raspberry pi plus the other final electronics, and tested the system. This is when I realized the problem with the screen viewing angle, so I had to back everything up: remove the buttons, joysticks, plexiglass sheet, and monitor. In doing so I wound up lifting some of the paint I had used on the table (the paint was sticking really badly to the surface). This led me to question the surface of the plexiglass, and I figured I'd polish it. I originally made the bad choice of trying an old plastic modelling technique on the plexiglass, washing it with a layer of future floor polish. This looked absolutely horrible on such a big surface, so I wound up spending 1.5-2 hours removing the stuff again and then using some proper polishing compounds on both sides, making the plexiglass sheet incredibly nice looking (in my humble opinion).

I repainted the parts where the paint had been removed, took out the screen, rotated it 180 degrees, and re-fitted all the power adapters etc so they wouldn't be in the way of the monitor; I also had to saw a second hole for the DVI connector… I then reinstalled the plexiglass, buttons, joysticks, etc… and now I have a much nicer viewing angle on the monitor and a nicer looking plexiglass sheet, but a paint job that is not as nice anymore. Shit happens. Oh, I also pulled a cable through from the screen to the front panel so I can turn the monitor on and off from one of the buttons. The USB hub was glued into a hole made in the skirting, so it sticks out underneath the table with two accessible USB ports.

After everything was fitted and tested to work, I started to look at the backside and what could be done about it… I had to make a raised area to increase the depth of the table, as the buttons I got won't fit properly otherwise. I used a 1 cm floor skirting around the hole, then took the sheet from the monitor hole, sawed it into two pieces which fit over the hole, and drilled a lot of holes in it to at least create some ventilation into the table.

At this point I've installed some games, used it for a bit, and let the kids play around, and I'm absolutely happy with it. The software side I barely needed to touch; it worked more or less out of the box. I had to make a usb quirks hack to split the controller into two halves and rotate the screen in config.txt, and that's it; otherwise just follow the installation howtos. Retropie was a really happy surprise, I wasn't expecting things to be that smooth to install. I do wish the Amiga emulator was better integrated; it would be nice to be able to do the same thing as with the NES images, just drop them in and they work… but I understand that each game needs its own “fixes” to get up and running… I will have a look and see if it's possible to improve the situation somehow, at least so I can start games with just the joysticks and buttons.
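
For anyone attempting the same build, the two tweaks mentioned above looked roughly like this (from memory, so double-check the vendor/product IDs for your particular controller):

# /boot/cmdline.txt (appended to the single line): split the Xin-Mo USB controller into two joysticks
usbhid.quirks=0x16c0:0x05e1:0x040

# /boot/config.txt: hardware-rotate the display output 180 degrees
display_rotate=2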

Replacing network switches can be a pain sometimes

March 25, 2016
Filed under: Communications, Personal 

One of those days when replacing 2 routers being used as switches should be so simple but turns out so complicated. I completely forgot I'm using one of them as a “firewall” between a DMZ and the rest of the network. I figured I'd just switch the IP on the Comhem router and it would work, for now at least, but no, it refused to because the guest network is hardwired in the router to the same range as my internal network, and hence impossible to use… I wound up reconfiguring everything on my network to a different IP range, spending a couple of hours doing so, and once I was done, I switched the IP on the Comhem router and rebooted it… and got a 10/8 IP address on the Internet-side interface… Wtf? I called tech support and apparently this has been the case for at least 3 years: sometimes you get a private network IP on the Internet side, sometimes not, and there is nothing you or they can do about it. I went home and rebooted the router, and got an 83/8 IP instead this time. So… something I planned to take 10 minutes wound up taking 5-6 hours… Hurray… I just wish we could start being sane, and also that certain network providers would start implementing IPv6.

FOSDEM 2016 is over

I went to FOSDEM 2016 this year with 8 other colleagues of mine and had a really really good time. A lot of good speeches and stuff to talk about and I feel very motivated for some new projects. Some of the stuff going on right now is incredibly exciting, especially with regards to containerization etc which is something I have a lot of personal and work related interest in. I will be looking into more details in that for the future…

What I did miss was a more “general networking” track with low level stuff like iptables, netfilter, iproute, wireshark, snort, etc. I’m just not sure if this is the right conference for that though. Gathering my thoughts and working out some of the project details in the upcoming week if I get time.

Using AWS EC2 instances for large builds

I experimented a few years ago with using EC2 spot instances (virtual servers that run on otherwise unused capacity and are therefore cheap). It was fairly successful: I was able to run large calculations that should have taken weeks in a matter of days.

Since I started at my current job I've been building increasingly complex Yocto images which keep growing in size; at this point most images I build can take up to 6-7 hours on my laptop. That is an i7-4558U 2.8GHz CPU and 8 gigs of RAM, so it's not bad really, just not enough for these types of builds.

So again I started experimenting, and I am really happy and impressed. So far all experiments are for open source projects, so nothing covered by non-disclosure agreements or corporate restrictions; I'd like to go further, but that isn't up to me really. I've set up an AMI on EC2 which I can instantiate and have up and running in 2-3 minutes, and then mount a 100 gig EBS volume where I store the sources and build data.

The same build that generally takes up to 6 hours on my laptop takes approximately 30-40 minutes on an EC2 c4.4xlarge machine (16 cores and 32 gigs ram).

My key findings so far are (a rough example of the workflow follows the list):

  1. Keep an AMI with all the build tools necessary/or script the setup.
  2. Keep an EBS volume with the long term stored data, gits etc for building and mount somewhere suitable in the AMI.
  3. Keep a micro instance (these are free/very cheap) around for mounting the EBS volume when you just want to check stuff out, mess around etc but not make builds.
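
To give an idea of the workflow, spinning up a build machine from the AMI and attaching the EBS volume looks roughly like this with the AWS CLI (the AMI ID, volume ID, instance ID and IP are placeholders):

# Start a c4.4xlarge build machine from the prepared AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type c4.4xlarge --key-name build-key

# Attach the 100 gig EBS volume holding sources and build data, then mount it on the instance
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/xvdf
ssh ubuntu@<instance-ip> 'sudo mkdir -p /build && sudo mount /dev/xvdf /build'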

Qt5.5 QtPositioning PositionSource unable to load due to missing symbols

December 31, 2015
Filed under: Development, Linux, Ubuntu 

I've slowly been working on a “Magic Mirror” for a long time now, trying to pick up some new technologies and make something fun. I've been working with Qt on and off for the last 10 years or so, but only peripherally, looking on and awing at what they do. QML is absolutely awesome in my humble opinion.

A few weeks ago I started using Qt 5.5 and ran into some issues executing the Magic Mirror in qmlscene when I picked the project up again. It was fairly easy to reproduce, but it seems to only affect the Qt binaries I installed from the official installer. I've had verification that the code works with packages built from source, and I'm trying to verify this on my own as well right now (building as we speak).

This is the sample code not working with qmlscene:

import QtPositioning 5.5
import QtQuick 2.0

Rectangle {
    id: root

    // Even this minimal PositionSource declaration is enough to trigger the plugin load failure
    PositionSource {
        id: positionSource
    }
}
The bug is reported here:
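
Not part of the original write-up, but a convenient way to see exactly which symbols fail to resolve is to run qmlscene with Qt's plugin debugging enabled:

# Prints every plugin load attempt, including the dlopen error with the missing symbol
QT_DEBUG_PLUGINS=1 qmlscene main.qml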
