Gitea and Drone.io

Recently I’ve been frustrated at work while using Gitlab and Jenkins for various reasons: some of the integrations are really fragile due to plugins we use, and both Jenkins and Gitlab are incredibly bloated, use insane amounts of resources, and are simply not reasonable choices for a private setup. I also recently replaced my server at home (basically a machine that does almost everything I want at home), going from a 32-bit Atom to an Intel i5 machine with 16 gigs of RAM, which means I have totally different resources to work with. For example, 32-bit i386 CPUs are not supported out of the box by docker, and the old CPU was quite overloaded. With the new box I’ve been able to play around a little with my setup.

I’ve previously used just a basic, manual git setup at home, with approximately 110 repositories in it. With the new server I’ve been playing around with gitea and drone.io and am very pleased with them, even though it took a bit of work to get used to. In all honesty, I’ve only done fairly basic work with it so far. The most complex thing I’ve done was to move my existing git repositories into the gitea environment with a bit of scripting against the gitea web-based API, roughly as sketched below.
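The script itself isn’t important, but a minimal sketch of the approach could look like the following (the gitea URL, token variable and /srv/git path are placeholders, not my actual setup):

import os
import subprocess
import requests

GITEA_URL = "https://git.example.com"      # placeholder for your gitea instance
TOKEN = os.environ["GITEA_TOKEN"]          # API token generated in the gitea UI
HEADERS = {"Authorization": f"token {TOKEN}"}

def migrate(repo_dir, name):
    """Create an empty repository via the gitea API and mirror-push the old repo into it."""
    resp = requests.post(
        f"{GITEA_URL}/api/v1/user/repos",
        headers=HEADERS,
        json={"name": name, "private": True},
    )
    resp.raise_for_status()
    clone_url = resp.json()["clone_url"]
    # Push all branches and tags in one go (over https here; ssh works just as well).
    subprocess.run(["git", "-C", repo_dir, "push", "--mirror", clone_url], check=True)

# Walk the old bare repositories (placeholder path) and migrate them one by one.
for entry in sorted(os.listdir("/srv/git")):
    if entry.endswith(".git"):
        migrate(os.path.join("/srv/git", entry), entry[:-4])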

Regarding gitea, so far I’ve noticed the following things:

  • Very slim compared to the other options: it currently uses around 60 MB of RAM and not a lot of CPU from what I’ve seen, which is impressive considering what you get.
  • UI is fairly similar to gitlab/github.
  • Setup was very simple except for the database connection (I wound up just using sqlite3 I believe; I was lazy and also don’t expect more than a very few users).
  • Pull request support is really nice.
  • Issue tracking seems to work fairly well.
  • Docker setup with volumes is very easy.
  • Seems to have the essentials in plugins etc that I need.
  • API seems very nice, I’ve only used it for the migration so far though.
  • The only bad part I’ve seen so far is that the administration panel might be a bit spartan at times, but I don’t really mind.

Regarding drone.io, my first impressions are:

  • I absolutely love the yaml file format so far.
  • UI is incredibly clean, on the verge of too clean.
  • Integration with gitea was super simple once I actually got it working (documentation was not 100% accurate I think).
  • Simple to get started with if you have a sane build pattern.
  • Nice integration with gitea: you get build status marks on commits and pull requests. It would be interesting to find out whether a merge can be blocked based on build results as well.
  • I’ve managed to set up a simple build of a playground project: the pipeline first builds a Dockerfile into an image when the build starts, and the rest of the build then compiles my project inside the image that was just built.
  • This was my first time using docker-compose, so it was a bit of a hassle to understand, but it was fun ;). It’s not always obvious where some configuration should be placed.
  • The yaml file format is definitely not enough for the type of pipelines we do professionally, though :(.
  • Pleasantly surprised that you can actually add and remove build slaves on the drone.io platform.
  • Also pleasantly surprised by how parallel build steps are done; the syntax is super simple.
  • I really miss some form of artifact storage, or at least a plugin for something that is not either cloud based or incredibly enterprisey (artifactory). Actually, I’ve had trouble just finding a good lightweight open source artifact store so far…
  • I also miss some form of nice presentation of various build artifacts, code coverage, unit test results, etc.

All in all, I’m pleasantly surprised by how simple this was to set up and configure. It was a fun trip and I’ll continue using it at home for now.

As a sidenote, for the stuff I have on github I like to use travis; it also has a nice syntax and is a pleasant solution.

DevOps World Nice 2018

It’s been a long time again; I seem to be bad at updating this site at times.

I’ve recently moved into DevOps, more or less by coincidence. I feel like this is what I’ve done for ages, I just never put a name on it. Also, this time it’s proper DevOps I’d say: a team of people maintaining build environments, CI/CD environments, gitlab, and so forth. It’s been a long time since I did proper server maintenance, however, and it’s interesting to see what has happened since: new tools, new possibilities, containerization, monitoring, virtualization, cloud, and so forth.

Because of said circumstances, I went to the DevOps World conference (previously Jenkins World, I guess) in Nice and was pleasantly surprised by some of the development being done on Jenkins and what they are trying to do. I’m a bit cautious about some of the changes, though, and about what to expect of the future. One of my main issues from work is that a lot of our clients are very sensitive about where data is stored, to the point where they set up private clouds to keep stuff in house and in many cases block internet access almost completely. A lot of the changes in Jenkins and from CloudBees are about moving things from private installations into the public clouds, i.e. something we will not be able to do, even though my company is all for working as openly as possible and driving our clients to open up what they are doing as well. There is obviously stuff you don’t want to open up as a corporation, but a lot of it is not really specific to you, and the help you can get from collaborating with your competition is actually quite tremendous, hence driving down costs.

So, all that said, I’m super happy to have been to the conference and to have heard a lot of new stuff. My main takeaways:

Jenkins X seems really interesting. It’s basically a stripped-down Jenkins master running in kubernetes or a similar cloud cluster, but a new master is spun up for every build, so the single point of failure is removed, and the kubernetes ecosystem handles most of the extraneous systems such as web-hooks, etc. The bad part is that you lose the Jenkins UI; Jenkins X is completely command-line based. CloudBees has some proprietary UIs for Jenkins X, however…

The Configuration as Code plugin seems really nice, as I’d really like to be able to easily take my Jenkins master and create staging environments to test new changes. I will most definitely be experimenting with this.

There were also talks about breaking up and splitting your Jenkins master: basically divide and conquer. Do different things on different servers, and don’t create a Jenkinstein with almost all the plugins in a single instance trying to cater to all types of jobs.

Jenkins seems to be moving a lot into the public cloud, which unfortunately is bad for our industry, as already mentioned. However, I’m really intrigued by the scaling possibilities that were shown using GCP and AWS, for example.

There were also a lot of other good talks.

All this said, I’ve been experimenting with alternative CI and git hosting solutions, which I find rather interesting, and I’ll write more about that in the near future… 😉

SSL certificate updated slightly late

Due to being away for the duration of last week, I missed the renewal of the SSL certificate. This should be fixed now, as I’ve moved to letsencrypt with certbot for auto-renewal, which should keep the problem from recurring.

As a sidenote, I attended DevOps World in Nice last week. It was a very interesting week; I’ll try to write up some of the topics I found interesting. I’ve also been experimenting with “other” solutions than jenkins + gitlab + atlassian tools as of late, and I found some rather nice setups using gitea and drone.io that I’ll try to write about as well.

Using AWS EC2 instances for large builds

I experimented a few years ago with using EC2 spot instances (virtual servers on the internet, running on otherwise unused capacity). It was fairly successful: I was able to run large calculations that should have taken weeks in a matter of days.

Since I started at my current job I’ve been building increasingly complex yocto images which keep growing in size; at this point most images I build can take up to 6-7 hours on my laptop. That machine has an i7-4558U 2.8 GHz CPU and 8 gigs of RAM, so it’s not bad really, just not enough for these types of builds.

So I started experimenting again, and I am really happy and impressed. So far all experiments are for open source projects, so nothing covered by non-disclosure agreements or corporate restrictions; I’d like to go further, but that isn’t up to me. I’ve set up an AMI on EC2 which I can instantiate and have up and running in 2-3 minutes, and then I mount a 100 gig EBS volume where I store the sources and build data.

The same build that generally takes up to 6 hours on my laptop takes approximately 30-40 minutes on an EC2 c4.4xlarge machine (16 cores and 32 gigs of RAM).
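Spinning up the build machine is simple enough to script as well; below is a minimal boto3 sketch of the idea (the region, AMI id, key pair and volume id are placeholders, and this is a sketch of the workflow rather than my exact setup):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")    # placeholder region

# Request a one-time spot instance from the prepared build AMI (placeholder id).
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c4.4xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="build-key",                               # placeholder key pair
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
instance_id = resp["Instances"][0]["InstanceId"]

# Wait for the instance to come up, then attach the EBS volume holding sources and build data.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
ec2.attach_volume(
    VolumeId="vol-0123456789abcdef0",                  # placeholder volume id
    InstanceId=instance_id,
    Device="/dev/sdf",
)
print(instance_id, "is running; ssh in, mount the volume and kick off the build")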

My key findings so far are:

  1. Keep an AMI with all the build tools necessary/or script the setup.
  2. Keep an EBS volume with the long-term data (git repositories etc.) needed for building, and mount it somewhere suitable in the AMI.
  3. Keep a micro instance (these are free/very cheap) around for mounting the EBS volume when you just want to check stuff out, mess around etc but not make builds.

Iptables-tutorial and ipsysctl-tutorial on github

I guess I should have done this a long, long time ago. Both the iptables-tutorial and the ipsysctl-tutorial source code are now available on github. Many years ago I had the idea of putting the version control system out there for public use, but I never realized it for various reasons. Both of these documents are old by now, but the basic information is still valid and will likely remain so for a long time to come.

I apologize for the version history: I moved from CVS (which is what I used back in those days) to SVN in a rather rude way, without keeping the history.

I invite anyone and everyone to do edits if they wish to and send me pull requests to fix the issues they find, or to add the documentation they’d like to add.

The iptables tutorial is available at:
https://github.com/frznlogic/iptables-tutorial

The ipsysctl tutorial is available at:
https://github.com/frznlogic/ipsysctl-tutorial

Project build speed importance

I began writing this several years ago but never published it for various reasons. I think some of the thoughts are still really interesting. I joined a company named Pelagicore some months ago, and we do some rather large builds there, similar in build time to the project I refer to in this text, but with some major differences. Our build is based on yocto and it actually works really well once you get through the first build (currently a scratch dev image build takes up to 4 hours, but after that only changed software and recipes are rebuilt as necessary). The scratch build is admittedly heavy, but we are working on ways to improve that as well, such as sstate caches, icecc distributed builds, and so forth. I think with those modifications we could get the 4 hours down to under an hour, which would be really sweet. Anyway, I thought I’d post the text as is, even though it’s not finished, simply because I happened to re-read it just now 😉.

Original text


Again, I’m struck by the importance of a proper build process in projects. No matter how small or big the project is, the build must be kept fast and working from the get-go to the end, meaning the end of the project’s life cycle. I am currently working in a large project with a huge code base where the build system is, in my opinion, close to complete collapse. Build time is in excess of 1-4 hours, and the automated test-suites take from 3 to 7 hours to complete, depending on how thorough a test-suite you choose. A gigantic heap of time is spent just… waiting… waiting… waiting. Making a simple one-line edit and then compiling to see if it even compiles can take up to 4 hours. I’ve previously been a bit spoiled with good and simple build systems, only occasionally running into really crappy ones; for some reason, a lot of scientific open source software seems to be stuck in that category.

This time, I think I hit pay-dirt on “how not to do it”. Instead of focusing on the bad parts, I will try to focus on the things to do and to keep in mind. Nobody seems to like a grumpy person anyways, which I really am sometimes.

1. Keep the build system simple and manageable. Try to maintain it in a logical fashion and in a single language/system (scons/python, Makefiles, etc.); a minimal sketch follows after this list.
2. Keep it expandable (new directories, files, etc).
3. Keep it scalable (multiple CPUs, machines, threads, etc).
4. Try to use as few frontends as possible (a single top makefile, for example, with targets depending on what you want to do), and keep people informed of changes. A wiki with history via RSS is _perfect_ for this. It also means you shouldn’t create a script to build something and then delete it 2 months down the road because it is no longer needed; that causes a lot of stress for developers who have only just discovered the script.
5. The programmers are (most likely) your customers.
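To illustrate points 1, 3 and 4, here is a minimal SConstruct sketch of a single-language, single-frontend build; the source files are hypothetical and it is only meant to show the shape of the idea, not a real project’s build:

# SConstruct -- one top-level frontend, one language (SCons/Python), scaled to the machine.
import multiprocessing

# Use all cores by default; developers can still override with -j on the command line.
SetOption('num_jobs', multiprocessing.cpu_count())

env = Environment(CCFLAGS=['-O2', '-Wall'])

# Every target hangs off this single file (hypothetical sources).
core = env.StaticLibrary('core', Glob('src/*.c'))
env.Program('app', ['main.c'], LIBS=[core])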

FMEA of Door and conclusions

October 13, 2011
If Cavemen had done an FMEA of the door when it was invented, this would possibly be the outcome.
Item: Open door
Potential failure mode: Person behind door gets door in face
Potential effects of failure: Broken neck; Death; Bleeding nose
Severity rating: 5-6
Potential causes: Two persons trying to pass through the same door; Door placed to open into corridor; Door placed to open onto esplanade; Badly placed painting on back of door
Occurrence rating: 4

Item: Open door
Potential failure mode: Hidden vicious monster on other side
Potential effects of failure: Death; Loss of limb; Loss of head
Severity rating: 10
Potential causes: Door opens into wilderness; Glass not yet invented, so other side of door cannot be inspected
Occurrence rating: 4

Item: Close door
Potential failure mode: Limbs get stuck between door and doorway
Potential effects of failure: Loss of limb; Broken fingers/toes; Broken nails; Bloodshed
Severity rating: 7
Potential causes: Person is angry and slams door shut; Annoying salesman sticks foot into door; Forgot person behind you
Occurrence rating: 5

Item: Run through door
Potential failure mode: Other side opens into a chasm, onto a multistory building, or into the ocean
Potential effects of failure: Falling off high cliff; Death; Drowning; Bludgeoning; Cuts and bruises
Severity rating: 10
Potential causes: Earth activity causes an earthquake and a resulting chasm on the other side of the door; Balcony fell down; Global warming caused the ocean to rise; Person was in a rush, ran through the door and fell off
Occurrence rating: 2

Conclusion

If the cavemen had been engineers, they would never have invented the door, or at least never used it.

Bugs, bugs and more bugs

December 12, 2010

Lately, I’ve come to realize more and more that bug handling in open source, and specifically in Ubuntu, has dramatically declined in efficiency. For years I’ve been extremely satisfied with using Linux because it has been practically bug free; there simply have not been any serious bugs that I’ve run into. In the last few weeks, however, I’ve run into several more or less serious bugs in Ubuntu, which got me looking at how the bug handling is done.

First off, a few weeks ago I ran into a bug with Ubuntu 10.10 Ubiquity (the Live CD installer) where I accidentally marked my old /home drive as ext4 when it was ext3 (but without asking to reformat it). The installer happily complied and set it up as ext4, but once the system came back up, the hard drive was completely wiped. No warning, no nothing. I started looking around and after a while found several reports on the same matter on launchpad, for example this and this.

This led me to take a look at Ubiquity’s other bugs in launchpad, and it’s not very promising. The main installer of Ubuntu 10.10 has 1528 open bugs as of this writing, of which 846 are new and 35 are marked High importance; the bugs I found (which, dare I say, seem Critical to me) are still not marked with any importance at all. Only 12 bugs are marked as having a patch.

Fine, maybe this is not the poster child of open source. However, the last few days I’ve been severely annoyed by a misbehaving password popup. I enter the password and hit enter (or hit the Authenticate button); the password field disappears, but the rest of the dialog stays up and nothing in it works. The only thing you can do is kill it with the x button. When you do, you get authenticated…

Since I’m not sure exactly how the authentication is performed in Ubuntu for the update manager etc., I decided to check the update-manager package for Ubuntu on Launchpad. What do I see, if not another package with a gigantic mass of bugs filed but no one dealing with them: 1017 open bugs, 520 of them new and 15 marked as High importance. The bug I’ve been having has been reported all over the net, but no one seems to be dealing with it and it isn’t really reported in launchpad. Some computers have it, some don’t. It’s nowhere near a critical bug, or even a high-importance one, but it’s annoying nonetheless; it looks extremely crude and gives off a fairly unstable impression.

All this being said, I am wondering how bug handling is done, and how it should be managed in “aggregate” projects such as Debian and Ubuntu. I think the idea of having upstream bug trackers for each package in the project is really nice, but maybe we are spreading ourselves too thin by having several bug trackers for each minor project? Also, how do we as “normal” users know which package is the cause of an error? I am not so sure it is really update-manager that is at fault in this case; it might just as well be something else entirely behind all that dbus stuff. That is, what is the point of me filing bug reports if I’m not sure they wind up in the right place, or are looked after at all?

Unit testing and stubbing singletons

May 4, 2010

I got a bit curious about stubbing singletons for testing during the weekend as well. We often find ourselves needing to test large codebases at work, and in my current project we do complete end-to-end signal flow tests, but people are finally realizing that this will simply not do. For this reason, we’re doing a lot of work to try to split the entire project up into manageable chunks. One of the main problems has been the incessant use of singletons. A simple halfway step towards doing full interfaces is to make all public member functions virtual and then create a stub class of the singleton that saves the message (or whatever is passed in) into a variable which can be read and checked from the actual unit test.

A sample of the general idea below:

#include <iostream>

class A
{
public:
    static A *instance()
    {
        std::cout << "A::instance()" << std::endl;
        if (!s_instance)
            s_instance = new A;
        return s_instance;
    };

    A()
    {
        std::cout << "A::A()" << std::endl;
    };
    // Virtual makes the difference
    virtual void send(int i)
    {
        std::cout << "A::send()" << std::endl;
        // Blah blah, send i or something
    };
    static A *s_instance;
private:
};

class stub_A: public A
{
public:
    static stub_A *instance()
    {
        std::cout << "stub_A::instance()" << std::endl;
        if (!s_instance)
        {
            s_instance = new stub_A;
            A::s_instance = s_instance;
        }
        return s_instance;
    };

    stub_A()
    {
        std::cout << "stub_A::stub_A()" << std::endl;
    };

    void send(int i)
    {
        std::cout << "stub_A::send()" << std::endl;
        y = i;
    };

    int getMessage()
    {
        return y;
    };
private:
    int y = 0;    // last message captured by send()
    static stub_A *s_instance;
};

A *A::s_instance = 0;
stub_A *stub_A::s_instance = 0;


int main(int argc, char **argv)
{
    // Install the stub first; stub_A::instance() also points A::s_instance at the stub.
    stub_A::instance()->send(5);
    std::cout << "stub_A::instance()->getMessage() == " <<
        stub_A::instance()->getMessage() << std::endl;   // prints 5

    // Production code still calls A::instance(), but that now returns the stub,
    // and since send() is virtual the message is captured by the stub instead.
    A::instance()->send(7);
    std::cout << "stub_A::instance()->getMessage() == " <<
        stub_A::instance()->getMessage() << std::endl;   // prints 7
}

Unproductive productivity

For a while I’ve been stuck in slow-speed mode again, not really doing great work, just being average. It feels weird. I don’t really get much done, but on the other hand I’ve had a great deal of time to test some “new” technologies; well, new as in only 10-15 years old, I guess :-). I’ll get back to that later. Also, I’ve begun a new contract at “a big company”.

This is my first time at a really giant hunk of a company; the biggest I had seen before was circa 500 people in all, and in all honesty its bureaucracy moved slower than this place does. This BigCompany is quite interesting to me. It started off with almost 4 weeks of introductions, courses, and so forth. They have a dedicated TEAM of CMs; that alone is just… wow :-P. I’ve only just been brought up to speed and started working a little before this weekend, so I might be a bit premature, but I like it so far. The weird part is that things happen, but not the way I’m used to. I’m used to 13+ hour days and frantic coding/hacking to make things happen; everyone here keeps to their 8-hour days, only working overtime on very special occasions, yet slowly things get done, new functionality gets added, and so forth.

Another thing that kind of amazes me, and worries me to some extent, is the kind of planning that is done. I’m used to small-scale projects with work packages or task-based development, where no work package should ever take more than 4-5 days to implement. This place uses a work-package structure where each package takes up to 6-7 weeks for 6-10 people to implement. We’ll see how it works out; at least their “stand-up meetings” work :-).

All that being said, I have had the time to write quite a bit of python, which is a first; I’ve looked into the d-bus architecture, which is also a first; and I’ve looked into Bluetooth and how to use it, with some test applications running that fetch services and graphically display info about all the units they find. The complexity of Bluetooth is rather saddening imho; it’s a horrible protocol stack to work with in some senses, even though I was really impressed by how much python does for you.

I had been unused to the whole concept of python before this, and just a tad sceptical, mainly because of all the version matching you always wind up having to do to make anything work properly (try getting scons, trac and wamp, and some more tools working on a win32 machine some day for some fun).

Anyway, I always figured there had to be an upside, and there really is: python is hack-friendly 🙂. In less than 3-4 hours I went from writing my first simple hello world to a from-scratch, class-based graphical (tkinter) interface implementing some very fundamental bluetooth commands. In my world, that’s not bad at all ;).
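A minimal sketch of that kind of class-based tkinter + bluetooth application could look like this (assuming the PyBluez module; the names are made up for illustration and this is not the original code):

import tkinter as tk
import bluetooth   # PyBluez, assumed installed

class BtScanner(tk.Frame):
    """Tiny class-based GUI that lists nearby discoverable bluetooth devices."""

    def __init__(self, master=None):
        super().__init__(master)
        self.pack(fill="both", expand=True)
        self.listbox = tk.Listbox(self, width=50)
        self.listbox.pack(fill="both", expand=True)
        tk.Button(self, text="Scan", command=self.scan).pack()

    def scan(self):
        self.listbox.delete(0, tk.END)
        # Inquiry scan; returns (address, name) pairs for devices in range.
        for addr, name in bluetooth.discover_devices(duration=8, lookup_names=True):
            self.listbox.insert(tk.END, "%s  %s" % (addr, name))

if __name__ == "__main__":
    root = tk.Tk()
    root.title("Bluetooth scanner")
    BtScanner(root)
    root.mainloop()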

I’ve also had time to learn a lot of new tools at work. I’ll comment on those some other day, as I haven’t seen many other comments on some of them (some are, imho, very expensive crap in nice wrapping, while some are completely awesome). As a sidenote, I simply adore the systems we are working on: four Xeons with 4 cores each and 64 gigs of RAM.

I’ll get back later :-).
