Sparse files
A really long time ago I started writing up something about sparse files to show how they can be used for automatically growing file systems. Unfortunately I never got very far, but it's one of those things I'm still really happy to play around with, so I might as well try to finish this entry.
A sparse file is basically a file with a declared size but with no blocks assigned to it, so you can wind up with a huge file that only uses a small part of the hard drive: real disk space is not taken up until the space is actually used. This can be really useful for creating loopback disk images, for example. Below is an example of how I create an empty image file which takes up 0 bytes of disk space, but is 512 megabytes in size according to the file system.
oan@work7:~$ dd if=/dev/zero of=file.img bs=1 count=0 seek=512M
0+0 records in
0+0 records out
0 bytes (0 B) copied, 9.0712e-05 s, 0.0 kB/s
Here we actually create the file; notice the seek=512M. This makes dd seek 512 megabytes forward without writing anything (count=0), so the file size is set but no blocks are allocated.
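For what it's worth, coreutils' truncate can create the same sparse file in a single step; the following should be equivalent to the dd invocation above:
truncate -s 512M file.img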
oan@work7:~$ du -shx file.img
0 file.img
And here you see the amount of disk space actually used by the file at this point.
oan@work7:~$ mkfs.btrfs file.img
SMALL VOLUME: forcing mixed metadata/data groups
btrfs-progs v4.0
See http://btrfs.wiki.kernel.org for more information.
Turning ON incompat feature 'mixed-bg': mixed data and metadata block groups
Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
Turning ON incompat feature 'skinny-metadata': reduced-size metadata extent refs
Created a data/metadata chunk of size 8388608
ERROR: device scan failed 'file.img' - Operation not permitted
fs created label (null) on file.img
nodesize 4096 leafsize 4096 sectorsize 4096 size 512.00MiB
At this point we create a btrfs filesystem inside the sparse file we previously created.
oan@work7:~$ du -shx file.img
4.1M file.img
And now you can see that the file system metadata we created uses 4.1 megabytes of actual disk space. The space actually available inside the file system should be approximately 512M minus those 4.1M.
oan@work7:~$ sudo mount -t btrfs -o loop file.img file
oan@work7:~$ sudo dd if=/dev/urandom of=file/lalala.rand bs=1K count=4K
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.344203 s, 12.2 MB/s
We mount the file system on the directory file using a loopback device and create a small 4M file inside it.
oan@work7:~$ du -shx file.img
8.1M file.img
And here you can see how the actual diskspace used has grown to 8.1M in size.
This is a really interesting way of utilizing your disk space efficiently, for example when running Yocto builds locally. I hope this can be useful for others as well.
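To tie this back to the original idea of automatically growing file systems, here is a minimal sketch of how the image could be grown later while mounted. The loop device name is an assumption; the real one can be found with losetup -j file.img.
truncate -s +512M file.img              # grow the backing file; still sparse, no extra disk space used yet
sudo losetup -c /dev/loop0              # tell the loop device about the new capacity (device name assumed)
sudo btrfs filesystem resize max file   # grow the mounted btrfs file system to fill the new space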
DevOps World Nice 2018
Filed under: Configuration Management, Development, Linux
It's been a long time again; I seem to be bad at updating this site at times.
I've recently moved into DevOps, somewhat by coincidence. I feel like this is what I've done for ages, I just never put a name on it really. Also, this time it's proper DevOps I'd say: a team of people maintaining build environments, CI/CD environments, GitLab, and so forth. It's been a long time since I did proper server maintenance however, and it's kind of interesting to see what has happened in the meantime. New tools, new possibilities, containerization, monitoring, virtualization, cloud, and so forth.
Because of said circumstances, I went to the DevOps World conference (previously Jenkins World, I guess) in Nice and was pleasantly surprised by some of the development done on Jenkins and what they are trying to do. I'm a bit cautious about some of the changes though, and about what I expect of the future. One of my main issues from work is that a lot of our clients are very sensitive about where data is stored, to the point where they set up private clouds to keep things in house and in many cases block internet access almost completely. A lot of the changes in Jenkins and from CloudBees are about moving things into the public clouds from private installations, i.e. something that we will not be able to do, even though my company is all for working as openly as possible and driving our clients to open up what they are doing as well. There is obviously material you don't want to open up as a corporation, but a lot of it is not really specific to you, and the help you can get from collaborating with your competition is actually quite tremendous, which can drive down costs, among other things.
So, all that said: I'm super happy to have been to the conference and to have heard a lot of new stuff. My main takeaways:
Jenkins X seems really interesting. It's basically a stripped-down Jenkins master running in Kubernetes/the cloud, but you spin up a new master for every build, so the single point of failure is removed, and the Kubernetes ecosystem handles most of the extraneous systems such as web-hooks, etc. The bad part is that you lose the Jenkins UI; Jenkins X is completely command-line based. CloudBees has some proprietary UIs for Jenkins X however…
The Configuration as Code plugin seems really nice, as I'd really like to be able to take my Jenkins master and easily create staging environments for testing new changes. I will most definitely be experimenting with this.
Talks about breaking down and splitting up your Jenkins master: basically divide and conquer. Do different things on different servers, and don't create a Jenkinstein with almost all the plugins in a single instance to cater to all types of jobs, etc.
Jenkins seems to be moving a lot into the public cloud, which unfortunately is bad for our industry as already mentioned. However, I'm really intrigued by the scaling possibilities that were shown using GCP and AWS, for example.
Also, a lot of other good talks.
All this said, I've been experimenting with alternative CI and git hosting solutions, which I find rather interesting, and I'll write more about that in the near future…. 😉
Iptables-tutorial work
Filed under: Development, Frozentux.net, Iptables, Linux, Netfilter
It's been a while since I looked at the iptables-tutorial, to say the least. The last real commits were 9 years ago and the last proper release is closer to 13 years ago….
The last two or three days I kind of picked it up again, mostly for fun. Towards the end of my maintenance of the project I kind of burnt myself out on the whole topic; I just did not want to do the whole thing anymore. I've grown and changed as a person since then, and for one thing, I don't have the same spare time.
My first order of business: the build system was always a mess, and I started cleaning out stuff that shouldn't be there. A bunch of old scripts have been removed; I managed to remove the dependency on the fabpdf backend for jade, and with that the eps_to_png script, and almost all of the changes.sh scripts were removed. The Spanish and Portuguese builds were similarly cleaned up. Finally, a Travis file was added to get automated builds running on Travis, and this actually works now!
I'm getting close to making a 1.2.3 release imho, just to get something new out there. The actual content has barely changed to be honest, maybe a few words at most, but it feels like something that would be nice to get out there.
The task of getting this documentation up to par is a tremendous effort to be honest, and I'd be really interested in getting help from anyone who reads this. If you feel like contributing: contact me, check out the code on GitHub, file bugs/tasks on anything you find that is wrong, or provide pull requests. I would be thrilled to have other people working on this as well, so it becomes more vibrant again and doesn't stagnate as it has done over time.
Retropie table writeup
A co-worker of mine asked me to do a writeup of my retropie table build, and yes, I guess I should. I rarely write anything about what I do anymore 🙂.
So, I've got kids, and I thought it would be fun for them to play some of the old stuff I used to play. I found https://www.raspberrypi.org/blog/raspberry-pi-ikea-arcade-table-make-yourself/ quite interesting and set out to do something similar. I wound up going to IKEA with the kids and told them to choose colors; we wound up with a pink IKEA Lack table … ;).
I also had a bunch of old screens at home; I picked a very heavy 17″ monitor and removed all the casings etc, greatly reducing the weight. My choice was slightly bad though, for two reasons. First, the connectors stand at a 90 degree angle to the screen, so it wound up not fitting inside the table and is pointing out on the underside as it is now… Second, the viewing angles are so-so; I wound up rotating the screen 180 degrees, as the viewing angles were much better from that direction and /boot/config.txt on Raspbian systems has options to hardware-rotate the output. I also bought a set of 2 joysticks + 20 buttons with built-in backlight and a Xin-Mo based USB controller from eBay, a small speaker from a local shop, power adapters I had at home, and a connector for power input to the table. I decided to pick up a powered USB hub as well, fitted so it was reachable from the outside. I also had a sheet of plexiglass that had been lying around for many years which wound up useful, plus a USB wifi dongle I reused for connectivity.
We started out by measuring and sawing up the holes needed and then removing the innards that were in the way. I used some paint masking tape to protect the table; more on this later. This work was very easy to do with a Dremel with a circular saw attachment. How the underside looked wasn't so important, but I tried to make the cuts decent looking at least.
Once this was done, I test-fitted and probed a bit on how to get the Raspberry Pi, monitor and power system fitted inside the table, before moving on to sawing the plexiglass sheet to the same size as the table, drilling and countersinking the screw holes, and temporarily fitting the plexiglass while drilling the holes for the joysticks + buttons. Some sanding and fixing of edges followed. I removed the plexiglass sheet and drilled and countersunk holes for screwing the joysticks to the topside of the table underneath the plexiglass.
Moving on, I screwed in the joysticks and fitted the power adapters for the monitor, USB hub and Raspberry Pi inside the table. I made 6 foam inserts to rest the monitor on and glued them in place inside the table, wired up the monitor and put the monitor in. Removing the paint masking tape, I realized I had used some shitty tape with much too hard-tacking adhesive, meaning that I managed to pull away a bunch of the foil (the "paint" on the table; it's not painted, but rather covered with a layer of colored plastic foil). When I realized this I started rethinking the paint scheme I had already planned, and decided to do some modifications to hide the errors and possibly heighten the feeling of the table.
I did the paint job, made some simple paint masks etc, and airbrushed the table with black borders and softish blue, red and green colors, further softened with a few drops of white.
After this had dried for a few days I put the plexiglass on, screwed in all the buttons, joystick heads etc, installed the Raspberry Pi + the other final electronics, and tested the system. This is when I realized the problem with the screen viewing angle, so I had to back everything up: remove the buttons, joysticks, plexiglass sheet, and monitor. In the process I wound up lifting some of the paint I had used on the table (the paint was really sticking badly to the surface). This led me to question the surface of the plexiglass, so I figured I'd polish it. I originally made the bad choice of trying an old plastic modelling technique on the plexiglass: washing it with a layer of Future floor polish. This looked absolutely horrible on such a big surface, so I wound up spending 1.5-2 hours removing the stuff again and then using some proper polishing compounds on both sides, making the plexiglass sheet incredibly nice looking (in my humble opinion). I repainted the parts where the paint had been removed, took out the screen, rotated it 180 degrees, and re-fitted all the power adapters etc so they wouldn't be in the way of the monitor; I also had to saw up a second hole for the DVI connector… I then reinstalled the plexiglass, buttons, joysticks, etc… and now I have a much nicer viewing angle on the monitor and a nicer looking plexiglass sheet, but a paint job that's not as nice anymore. Shit happens. Oh, I also pulled a cable through from the screen to the front panel so I can turn the monitor on and off from one of the buttons. The USB hub was glued into a hole made in the skirting, so it sticks out underneath the table with two accessible USB ports.
After everything was fitted and tested to work, I started to look at what could be done about the backside… I had to make a raised area to increase the depth of the table, as the buttons I got wouldn't fit properly otherwise. I used a 1 cm floor skirting around the hole, took the sheet cut out for the monitor hole, sawed it into two pieces which fit over the hole, and drilled a lot of holes in it to at least create some air ventilation into the table.
At this point I've installed some games, used it for a bit, and let the kids play around, and I'm absolutely happy with it. On the software side I barely needed to do anything; it worked more or less out of the box. I had to apply a USB quirks hack to split the controller into two halves, and I had to rotate the screen in config.txt, and that's it; then I just followed the installation howtos (see the sketch below for the two tweaks). Retropie was a really happy surprise; I wasn't expecting things to be that smooth to install. I do wish the Amiga emulator was better integrated. It would be nice to be able to do the same thing as with the NES images, just drop them in and have them work… but I understand that each game needs its own "fixes" to get up and running… I will have a look and see if it's possible to improve the situation somehow, at least so I can start games with just the joysticks and buttons.
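For reference, the two tweaks look roughly like this. The vendor/product ID below is the one commonly reported for Xin-Mo dual controllers, so treat it as an assumption and verify yours with lsusb; 0x040 is the HID_QUIRK_MULTI_INPUT flag that splits the device into two inputs.
# /boot/config.txt: hardware-rotate the output 180 degrees
display_rotate=2
# /boot/cmdline.txt (appended to the single kernel command line): split the controller into two joysticks
usbhid.quirks=0x16c0:0x05e1:0x040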
FOSDEM 2016 is over
Filed under: Communications, Debian, Development, Frozentux.net, General, Linux, Personal, Uncategorized
I went to FOSDEM 2016 this year with 8 colleagues of mine and had a really, really good time. A lot of good talks and stuff to discuss, and I feel very motivated for some new projects. Some of the stuff going on right now is incredibly exciting, especially with regard to containerization etc, which is something I have a lot of personal and work-related interest in. I will be looking into that in more detail in the future…
What I did miss was a more "general networking" track with low-level stuff like iptables, netfilter, iproute, wireshark, snort, etc. I'm just not sure if this is the right conference for that, though. I'll gather my thoughts and work out some of the project details in the upcoming week if I get time.
Using AWS EC2 instances for large builds
Filed under: Configuration Management, Debian, Development, General, Hardware, Linux, Ubuntu
I experimented a few years ago with using EC2 spot instances (virtual servers on the internet, sold cheaply out of unused server capacity). It was fairly successful; I was able to run large calculations that should have taken weeks in a matter of days.
Since I started at my current job I've been building increasingly complex Yocto images which keep growing in size; at this point most images I build can take up to 6-7 hours to build on my laptop. That's with an i7-4558U 2.8GHz CPU and 8 GB of RAM, so the machine is not bad really, just not enough for these types of builds.
Again I started experimenting, and I am really happy and impressed. So far all experiments are for open source projects etc, so nothing covered by non-disclosure agreements or corporate restrictions; I'd like to use it for those too, but this isn't really up to me. I've set up an AMI on EC2 which I can instantiate and have up and running in 2-3 minutes, and then I mount a 100 GB EBS volume where I store the sources and build data.
The same build that generally takes up to 6 hours on my laptop takes approximately 30-40 minutes on an EC2 c4.4xlarge machine (16 cores and 32 GB of RAM).
My key findings so far are:
- Keep an AMI with all the build tools necessary, or script the setup.
- Keep an EBS volume with the long-term stored data, gits etc for building, and mount it somewhere suitable in the AMI (see the sketch after this list).
- Keep a micro instance (these are free/very cheap) around for mounting the EBS volume when you just want to check stuff out or mess around, but not make builds.
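A minimal sketch of the workflow using the AWS CLI; the AMI, volume and instance IDs plus the key name and device paths are hypothetical placeholders, and a spot request could be used instead of run-instances for lower cost:
# Launch a build machine from the prepared AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type c4.4xlarge --key-name build-key
# Attach the persistent EBS volume holding sources and build data
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf
# On the instance: mount the volume and build from there
sudo mount /dev/xvdf /mnt/build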
Qt5.5 QtPositioning PositionSource unable to load due to missing symbols
I've slowly been working on a "Magic Mirror" for a long time now, trying to pick up some new technologies and make something fun. I've been working with Qt on and off for the last 10 years or so, but only peripherally, watching and awing over what they do. QML is absolutely awesome, in my humble opinion.
A few weeks ago I started using Qt 5.5 and, now that I had picked the project up again, ran into some issues executing the Magic Mirror in qmlscene. It was fairly easy to reproduce, but it seems to only affect the Qt binaries I've installed via the installer downloaded from qt.io. I've had verification that the code works with packages built from source, and I'm trying to verify this on my own right now (building as we speak).
This is the sample code that does not work with qmlscene:
import QtPositioning 5.5
import QtQuick 2.0

Rectangle {
    id: root

    PositionSource {
        id: positionSource
    }
}
Bug is reported to qt.io here: https://bugreports.qt.io/browse/QTBUG-50227
Systemd oneshot, ExecStop and RemainAfterExit
I wanted to create a systemd service file today that just ran 3 small commands in sequence on ExecStart, and another set of reverse commands on ExecStop. My initial idea was to use bash syntax with ; between the commands (I keep forgetting that systemd is not bash…). The service file was set to Type=oneshot, which meant the ; was actually interpreted correctly, but all the ExecStop commands were also run directly when running systemctl start service.
So I read up a little on systemd and ExecStart; this is what the http://www.freedesktop.org/software/systemd/man/systemd.service.html#ExecStart= page has to say:
When Type is not oneshot, only one command may and must be given. When Type=oneshot is used, zero or more commands may be specified. This can be specified by providing multiple command lines in the same directive, or alternatively, this directive may be specified more than once with the same effect.
So… that means the ; syntax will only work with oneshot, apparently. Also, oneshot means that ExecStop runs directly after ExecStart has finished. Reading the documentation further seems to indicate that RemainAfterExit=yes will make systemd consider the service still active after it exits, so it will only try to execute the commands the first time you run systemctl start, but not the second time. I don't think this fixes ExecStop being run on start however, but I'm not sure.
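For reference, this is a minimal sketch of the kind of unit file discussed here, with hypothetical command paths. With Type=oneshot and RemainAfterExit=yes the unit is considered active after ExecStart completes, and the ExecStop commands should then only run on systemctl stop:
[Unit]
Description=Example oneshot service with paired setup/teardown commands

[Service]
Type=oneshot
RemainAfterExit=yes
# With Type=oneshot, multiple commands may be given as repeated ExecStart= lines
ExecStart=/usr/local/bin/setup-step-1
ExecStart=/usr/local/bin/setup-step-2
ExecStart=/usr/local/bin/setup-step-3
# Reverse commands, run when the service is stopped
ExecStop=/usr/local/bin/teardown-step-3
ExecStop=/usr/local/bin/teardown-step-2
ExecStop=/usr/local/bin/teardown-step-1

[Install]
WantedBy=multi-user.target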
Iptables-tutorial and ipsysctl-tutorial on github
Filed under: Configuration Management, Frozentux.net, Ipsysctl, Iptables, Linux, Netfilter
I guess I should have done this a long, long time ago: both the iptables-tutorial and the ipsysctl-tutorial source code are now available on GitHub. Many years ago I had an idea of putting the version control system out there for use, but I never realized it for various reasons. Both these documents are old by now, but the basic information is still valid and will likely remain so for a long time to come.
I apologize for the version history; I moved from CVS, which was what I used back in those days, to SVN in a rather rude way without keeping the history.
I invite anyone and everyone to do edits if they wish to and send me pull requests to fix the issues they find, or to add the documentation they’d like to add.
The iptables tutorial is available at:
https://github.com/frznlogic/iptables-tutorial
The ipsysctl tutorial is available at:
https://github.com/frznlogic/ipsysctl-tutorial
Project build speed importance
Filed under: Configuration Management, Development, Linux
I began writing this several years ago but never published it for various reasons. I think some of the thoughts are still really interesting. I joined a company named Pelagicore some months ago, and we do some rather large builds at this company, similar in build time to the project I refer to in this text, but with some major differences. This build is based on Yocto, and it actually works really well once you get through the first build (currently a from-scratch dev image build takes up to 4 hours, but after that it rebuilds only changed software and recipes as necessary). The from-scratch build is heavy admittedly, but we are working on some ways to improve that as well, such as sstate caches, icecc distributed builds, and so forth (see the sketch below). I think with modifications we could get those 4 hours down to under an hour, which would be really sweet. Anyway, I thought I'd post this text as is, even though it's not finished, just because, and since I re-read it right now 😉.
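As an illustration of the sstate idea, the relevant local.conf additions could look roughly like this; the paths and mirror URL are hypothetical, and PATH is a literal placeholder that bitbake expands itself:
# Share the sstate cache and source downloads between builds and machines
SSTATE_DIR = "/srv/yocto/sstate-cache"
DL_DIR = "/srv/yocto/downloads"
# Fetch prebuilt sstate objects from a shared mirror
SSTATE_MIRRORS = "file://.* http://sstate.example.com/sstate-cache/PATH"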
Original text
Again, I'm struck by the importance of a proper build process in projects. No matter how small or big the project is, the build must be kept fast and working from the get-go to the end, which means its end of life. I am currently working in a large project with a huge code base where the build system is, in my opinion, next to completely collapsed. Build time is in excess of 1-4 hours, and the automated test suites take from 3 hours to 7 hours to complete, depending on how thorough a test suite you choose. A gigantic heap of time is spent just…. waiting… waiting…. waiting. Making a simple one-line edit and then compiling to see if it even compiles can take up to 4 hours. I've previously been a bit spoiled with good and simple build systems, only occasionally running into really crappy ones; for some reason, a lot of scientific open source software seems to be stuck in this category.
This time, I think I hit pay dirt on "how not to do it". Instead of focusing on the bad parts, I will try to focus on the things to do and keep in mind. Nobody seems to like a grumpy person anyway, which I really am sometimes.
1. Keep the build system simple and manageable. Try to maintain the build system in a logical fashion and in a single language/system (scons/python, Makefiles, etc).
2. Expandable (new directories, files, etc)
3. Scalable (multiple CPUs, machines, threads, etc)
4. Try to use as few frontends as possible (a single top-level makefile, for example, with targets depending on what you want to do), and keep people informed of changes. A wiki with history via RSS is _perfect_ for this. This doesn't mean you can create a script to build something and then delete it 2 months down the road because it is no longer needed; that causes a lot of stress for developers who only just had a chance to find the script.
5. The programmers are (most likely) your customers.