Mat Honan hack “cloud isn’t so bad” comment

August 10, 2012 by · Leave a Comment
Filed under: Communications, General, Uncategorized 

Just read this by Ron Miller, and while I don’t mind the cloud (I use several cloud services extensively), I do think he’s completely missing the point. Yes, the cloud is as vulnerable as any other machine that I or someone else has set up. However, the daisy-chaining of accounts, and the hacker’s ability to wipe the entire storage of the phone, the tablet and the laptop, wouldn’t have been an issue with old-school IT. The main point of Mat Honan’s original article was that he felt stupid for not doing proper backups and for having set up remote wipe. Having a backup on Dropbox, Ubuntu One or some other place in the cloud just isn’t safe enough, imho; the way Dropbox and friends work, whoever controls the account can always wipe out the files in question permanently. A lot of cloud functions add liabilities that we didn’t have before. This hack and the loss of his personal data would never have happened in 2000: his phone and laptop wouldn’t have had a remote wipe function so easily accessible to anyone, and the tablet was barely “invented” yet.
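
The kind of backup that would have survived this attack is one that sits completely outside the account chain, for example a plain copy on an external disk that is unplugged between runs, out of reach of both a remote wipe and a hijacked cloud account. A minimal sketch of that idea, assuming rsync and a disk mounted at /media/backupdisk (the path and layout are just placeholders):

# Mirror the home directory to an offline external disk; keep changed/deleted
# files in a dated directory so a bad sync doesn't silently destroy history.
rsync -a --delete --backup --backup-dir=/media/backupdisk/old-$(date +%F) \
    /home/oan/ /media/backupdisk/current/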

 

 

Playing with LinuxMCE and thoughts

July 29, 2012 by · 17 Comments
Filed under: Hardware, Linux 

Long time no writing, for various reasons. I’ve played around with LinuxMCE for a few days, as it looked like a really sweet solution for home automation. I previously had an HTPC running Mythbuntu as the main server for my home network, then switched to a Guruplug to save electricity, and have now finally settled on an Atom D525 platform to use as a personal NAS: it draws very little power (17-ish watts), I can hook up a lot of harddrives to it, and it’s i386, meaning a lot of stuff that was unavailable for the Guruplug starts working again (the management tools for my UPS, for example).

Anyway, I also moved to a house in the last few weeks and decided to start fiddling a bit with home automation and to give one of the dedicated distributions a shot. I picked LinuxMCE because it supports such an enormous number of features. Unfortunately, after two days of trials I’m severely disappointed by all the bugs and the things that just won’t work with the default settings. For example, it took me 1.5 days to get a basic Web Orbiter (a remote control via web interface) to work. It then took me another 4-5 hours to realize that LinuxMCE absolutely will not run properly with LVM. I have always used LVM for storing files; my current volume group consists of 2.5TB spread over 3 disks. The only solution I’ve been able to come up with is to get another set of disks, add those to LinuxMCE, copy all the files over, and then be left with 3 disks that I don’t really need at the moment. On top of this, just getting the graphics output to work was… let’s just say, not easy, and it will only work with a very limited set of hardware. Finally, the Android qOrbiter is of extreme alpha quality (it crashes more than I can describe; it’s completely useless on 3 different devices).
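
For reference, the migration I’m describing would look roughly like this: check what the volume group actually spans, then copy everything onto a plain filesystem on new disks that LinuxMCE accepts. A rough sketch, run as root, with device names and mount points as placeholders:

# Show the current LVM layout: physical volumes, volume groups, logical volumes.
pvs
vgs
lvs

# Assuming the new non-LVM disk is /dev/sdd1 and the LVM data is mounted at /srv/storage:
mkfs.ext4 /dev/sdd1
mkdir -p /mnt/newdisk
mount /dev/sdd1 /mnt/newdisk
rsync -a --progress /srv/storage/ /mnt/newdisk/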

Overall, I think LinuxMCE could be a really badass solution, but it isn’t, because of all the bugs and the lousy documentation. The feeling I get is that the system is developed and installed by a limited group of people who know about the issues and know how to work around them. I.e., the claim that “similar” products cost XXXXX USD is offset by the fact that this system requires a lot of system-specific knowledge, which will take weeks (if not months) to acquire, or requires you to get a consultant to do the installation for you.

After a few days of trying to set up a Core/Hybrid properly with a few remotes and a Media Director, I decided to reuse an old harddrive in the machine that was meant to be the Media Director and set up MythTV on it, and to turn the “supposed to be LinuxMCE Hybrid” into a Debian server, replacing the Guruplug for network control and basic operations. I’ll have to figure something out for remote-controlling music; I’m not sure what yet.

I hate being this negative about LinuxMCE; I really wish this system had worked a bit better for me, because it ticks a _lot_ of the boxes I’m interested in. I’m guessing the main issue is that I stumbled into this thinking I’d be able to preserve and reuse existing hardware, harddrives and so on, which I’m apparently not able to, and that obviously caused a lot of headaches. If you are still interested in trying it out, make sure to start from scratch, and do not expect to reuse old tablets and other existing gear.

FMEA of Door and conclusions

October 13, 2011 by · Leave a Comment
Filed under: Development, General, Management 
If Cavemen had done an FMEA of the door when it was invented, this would possibly be the outcome.
Item: Open door
Potential failure mode: Person behind door gets door in face
Potential effects of failure: Broken neck; death; bleeding nose
Severity rating: 5-6
Potential causes: Two persons trying to pass the same door; door placed to open into corridor; door placed to open onto esplanade; badly placed painting on back of door
Occurrence rating: 4

Item: Open door
Potential failure mode: Hidden vicious monster on other side
Potential effects of failure: Death; loss of limb; loss of head
Severity rating: 10
Potential causes: Door opens into wilderness; glass not yet invented, so other side of door can not be inspected
Occurrence rating: 4

Item: Close door
Potential failure mode: Limbs get stuck between door and doorway
Potential effects of failure: Loss of limb; broken fingers/toes; broken nails; bloodshed
Severity rating: 7
Potential causes: Person is angry and slams door shut; annoying salesman sticks foot into door; forgot person behind you
Occurrence rating: 5

Item: Run through door
Potential failure mode: Other side opens into a chasm, onto a multistory building, or into the ocean
Potential effects of failure: Falling off high cliff; death; drowning; bludgeoning; cuts and bruises
Severity rating: 10
Potential causes: Earth activity causes earthquake and resulting chasm on other side of door; balcony fell down; global warming caused ocean to rise; person was in a rush, ran through the door and fell off
Occurrence rating: 2

Conclusion

If the cavemen had been engineers, they would never have invented the door, or at least never used it.

Finding the Zone.

February 10, 2011 by · 1 Comment
Filed under: Development, General, Personal 

Several years ago, at one of my first jobs, right smack at the end of the dot-com boom over here, we had a dartboard. What good is a dartboard, you ask? I don’t know; we also had a pinball machine, but I never quite liked it as much as the dartboard. I’d take short breaks from answering phone calls and e-mail, or from problematic pieces of text, to throw a few darts. It gave me one thing above all: I learnt to shut out everything else and find my Zone, you know, the one where you’re at your most efficient, solving problems, firing off e-mail and answering calls in a furious flurry.

I focus on the bulls-eye of the dartboard and relax all the muscles in my head, letting my scalp feel like a smashed egg running down from the skull. It feels like my head gets lighter and all pressure on my brain dissipates. Nothing exists but the bulls-eye. My ears shut off, all focus is on the bulls-eye, and nothing distracts my eyes. At this point I’m there, ready to hit the bulls-eye. It’s simple, it’s fast, and it works. Under extreme pressure and stress it sometimes takes closing my eyes to get the process started, but I almost always reach the “Zone” within a minute or so.

After the few darts, I’d put my headphones on and listen to music, working my hardest to shut everything else out and maintain the Zone for as long as possible. Reaching it takes concentration, and that concentration is easily lost. Getting back on your train of thought on a complex problem after losing it is hard work, often taking 15-30 minutes. I absolutely hate having that elusive thread of clarity vanish into thin air because someone broke the Zone, or losing an almost-ready-to-implement solution to a particularly bothersome problem.

My wife knitted me a pair of special fingerless gloves, ending just before the first knuckle of my ring finger. I use them while coding or otherwise writing at the computer to keep my hands warm, just to be able to maintain the Zone for that bit longer without having to bother about cold hands. Cozy, to say the least, but people look at me funny for using them, so I don’t often wear them at work. I also like to wall myself in, as I’m easily distracted by movements and foreign objects. Open office space is complete devastation for these trips into the Zone: people talk, walk around and gesticulate, strange things happen on other people’s screens, and so forth. I barely ever manage to stay in the Zone for more than 45 minutes or so. On that note, open office planning must be among the most stupid things ever invented: short-term savings at a huge cost in productivity.

So, that is how I reach my Zone. Do you have a Zone, and how do you reach it? How do you keep it from collapsing?

Bugs, bugs and more bugs

December 12, 2010 by · Leave a Comment
Filed under: Development, Linux, Projects, Ubuntu 

Lately, I’ve come to realize more and more that bug handling in open source, and specifically in Ubuntu, has declined dramatically in efficiency. For years I’ve been extremely satisfied with Linux because, for me, it has been virtually bug free; I simply haven’t run into any serious bugs. In the last few weeks, however, I’ve hit several more or less serious bugs in Ubuntu, which got me looking at how the bug handling is done.

First off, a few weeks ago I ran into a bug with the Ubuntu 10.10 Ubiquity (Live CD) installer, where I accidentally marked my old /home drive as ext4 when it was in fact ext3 (but did not ask for it to be reformatted). The installer complied happily and set it up as ext4, but once the system came back up, the drive was completely wiped. No warning, no nothing. I started looking around and after a while found several reports on the same matter on Launchpad, for example this and this.
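
As a precaution for next time: before pointing an installer at an existing partition, it’s cheap to ask the system what is actually on it. A small sketch, with /dev/sda3 as a placeholder for the /home partition:

# Ask blkid what filesystem is really on the partition before the installer touches it.
sudo blkid /dev/sda3

# A second opinion from file, reading the superblock directly:
sudo file -s /dev/sda3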

This led me to take a look at Ubiquity’s other bugs on Launchpad, and it’s not very promising. As of this writing, the main installer of Ubuntu 10.10 has 1528 open bugs, of which 846 are New and 35 are marked High importance, and the bugs I found (which, dare I say, seem Critical to me) are still not marked with any importance at all. Only 12 bugs are marked as having a patch.

Fine, maybe this is not the poster child of open source. However, the last few days I’ve been severely annoyed by the authentication password popup misbehaving. I enter the password and hit Enter (or click the Authenticate button), the password field disappears, but the rest of the dialog stays up and nothing in it works. The only thing you can do is kill it with the X button, and when you do, you get authenticated…

Since I’m not sure exactly how the authentication is performed in Ubuntu for the update manager and friends, I decided to check the update-manager package on Launchpad. What do I see if not another package with a gigantic mass of bugs filed and no one dealing with them: 1017 open bugs, 520 of them New and 15 marked High importance. The bug I’ve been having has been reported all over the net, but no one seems to be dealing with it and it isn’t really reported on Launchpad. Some computers have it, some don’t. It’s nowhere near a critical bug, or even a high-importance one, but it’s annoying nonetheless, it looks extremely crude, and it leaves a fairly unstable impression.

All this being said, I am wondering how bug handling is done, and how it should be managed, in “aggregate” projects such as Debian and Ubuntu. I think the idea of having a bug tracker per package within the project is really nice, but maybe we are spreading ourselves too thin with several bug trackers for each minor project? Also, how do we as “normal” users know which package is actually responsible for an error? I am not at all sure it is really update-manager that is at fault in this case; it might just as well be something completely different behind all that D-Bus machinery. I.e., what is the point of me filing bug reports if I’m not sure they end up in the right place, or are looked at at all?
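
One partial answer to the “which package do I even file against” question is apport’s ubuntu-bug tool, which collects the relevant package and system information and files the report in the right place on Launchpad. Roughly like this (the package name is just the one I suspect, and the PID is a placeholder):

# File a bug against the package I suspect; apport attaches versions, logs, etc.
ubuntu-bug update-manager

# If I read the documentation right, it also accepts the PID of a misbehaving
# process instead of a package name (12345 is a placeholder):
ubuntu-bug 12345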

Ubuntu 10.10 r8192se_pci driver on the Toshiba Satellite T130-17E (Realtek RTL8191SEvB)

December 10, 2010 by · 4 Comments
Filed under: Development, Linux, Phone, Ubuntu, Windows 

I’ve been home for a few days with a really bad back, and the only thing I’ve been able to do is watch TV and do some minor work on the laptop. I’ve been running the Windows 7 it was delivered with for a few months to get a feel for it, and I must say I was pleasantly surprised for the most part. It hasn’t crashed more than once or twice in three months, and it is fairly snappy (except that boot times seem to get worse and worse; at this point it takes 2-3 minutes to boot). Anyway, for some reason I get a bit sad inside (and bored) every time I boot Windows. There is just “something” about the feel, the look, or… I can’t really put my finger on it, but I can’t stand it. How the windows open and close, perhaps. I just don’t know.

So, yesterday I wanted to test the Android SDK, re-realized just how much of a bitch it is to install stuff on Windows, and finally got around to installing Ubuntu 10.10 on it (I already run 10.10 on the desktop and the media PC), removing the extra backup partition laptops get delivered with these days. Side note: isn’t selling a laptop with a 500 GB harddrive, splitting it in half and using one half for “backups” a bit like selling a candle with a flammable fire extinguisher? I digress.

So, the installation went almost flawlessly. The wireless card was identified and saw networks, but was unable to connect to any of them. I got through the installation using a trusty old cable, and after it was done I started fiddling about and reading on the net, but found no one who had solved this particular combination, or at least no one who had written about it.

The main problem seems to be the hardware WEP encoding/decoding, which can be turned off using the hwwep parameter of the r8192se_pci module. On Ubuntu 10.10, remove the module and then reload it with the parameter set (as root):

rmmod r8192se_pci
modprobe r8192se_pci hwwep=0

If your network works now, you can make the setting permanent via modprobe.d: edit (or create) /etc/modprobe.d/realtek.conf and add the following line:

options r8192se_pci hwwep=0
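
To double-check that the option is actually picked up after a reboot, something along these lines should work (assuming the driver exposes its parameters in the usual places):

# List the parameters the module accepts; hwwep should be among them.
modinfo -p r8192se_pci

# After the module is loaded, check the value the kernel is using (if exported in sysfs):
cat /sys/module/r8192se_pci/parameters/hwwep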

I hope this has been of some help!

Guruplug arrived, sounds like a jet

October 8, 2010 by · 3 Comments
Filed under: Development, Hardware, Linux 

The first few paragraphs here are rather harsh on the Guruplug I received, and yes, GlobalScale deserves some really bad critique for how this has been handled, but please read the final parts to get the full picture. In all honesty, sometimes I think I should just change my name to “Grumpy Fart”.

I finally received the Guruplug Server Plus last week, three months after ordering it. My first reaction is a big WTF at this machine. Apparently they’ve had big trouble with overheating in the Guruplug, so badly that a lot of units died from it. Someone over at GlobalScale Technologies then had the absolutely idiotic idea to put a fan into the machine. And it’s not just any fan: it’s a 2 cm maglev fan running at 3000-4000 rpm with no power management whatsoever, hooked up directly to the power source inside the Guruplug, so the rpm can never be varied without a hardware hack. The fan is also horribly placed, as is evident from several pictures on the net (and from opening the machine, voiding the warranty): 80-90% of the back of the fan is covered by a metal plate (the two gigabit ethernet interfaces), and when the machine is closed up as delivered, a big plastic plate (the power supply cover) sits in front of the fan with less than 2 mm of clearance. All this means that it is incredibly noisy (easily 30-40 dB, way louder than my Core 2 Quad machine with 8 GB of RAM and 4 fans in it during bootup, before the Nvidia graphics card has gone into power management) and has close to no effect at all.

I ordered this machine when there was no fan, and there was no talk of a fan or any mail acknowledging the design change, so I was heavily inclined to send the thing back for a refund. But I realized that, with my usual luck, it would take several weeks to find a new machine matching my needs and another few months before receiving it. So I wound up simply ripping out the fan, voiding the warranty. So far so good, and nothing bad has happened; the plug is running at approximately the same temperature as before, only slightly higher, and I hope there will be no ill effects.

The second WTF came when I finally got the machine started up: the installation. First off, it is a really nice Debian install. I was prepared to start straight off with the JTAG interface, flashing images and reinstalling, but instead there was a perfectly working machine there, straight away. Nice, I thought. Then came the WTF moment: the machine booted up and set the IP address of the uap0 interface (the wifi access-point interface) to 192.168.1.1. My plan is to use it hooked up to a wired network via gigabit, so I went straight at it: added the eth0 interface to /etc/network/interfaces with a static IP, deleted the routes and addresses for uap0, and it worked… for an hour or so. Then it stopped. After a reboot it still wasn’t working, and I noticed uap0 was back at its old address 192.168.1.1 with the routes restored, yet there was nothing in the bootup scripts. I finally found that they have /etc/init.d/rc.local pointing to a /root/init_setup.sh, which in turn does a heap of stuff, including setting up network parameters, overriding the normal configuration in a nonstandard way.
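
For reference, what I added was a standard Debian static stanza along these lines (run as root; the addresses are placeholders, adjust to your own network):

# Append a static configuration for the wired interface to /etc/network/interfaces:
cat >> /etc/network/interfaces <<'EOF'
auto eth0
iface eth0 inet static
    address 192.168.0.2
    netmask 255.255.255.0
    gateway 192.168.0.1
EOF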

Personally, I find this type of “hacked” environment despicable. For a private system in your home, fine, do whatever you wish, but when it is a public server at a company, or something you sell to customers, you end up with a product whose setup other admins and/or customers cannot trust. Luckily, the whole system at least consists of installed .deb packages. Still, init_setup.sh, together with the fact that there is no way to recover from a simple screw-up of the network setup without the JTAG interface, is a major drawback imho.

So… that is the “bad” part of my experience so far. At this point I was rather underwhelmed, but I have since been pleasantly surprised by the performance of the little bugger, and except for the startup scripts it is rather nicely installed from what I have seen so far. It would be interesting to gather up a complete list of the “hacks” they have performed to get it all together. The machine can currently do exactly what I set out to do, after less than two days of configuration, fixing and fiddling about. I didn’t manage to get the “auto connect” of my USB disk to work, but mount commands do work, and I haven’t really looked into how it’s supposed to work (it might be that you need to reboot the machine). My idea for the machine is to have it running 24/7, replacing the other machine currently doing this work, but at a lower power consumption. So far it is running the following services for me:

  • DHCP
  • Dynamic DNS
  • Samba and NFS filesharing
  • Ssh network login server
  • USB Disk/”NAS” function
  • Bittorrent (transmission-daemon on the plug, with transmission-remote on all the other machines; see the sketch right after this list)
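
As promised above, this is roughly how the other machines talk to the transmission-daemon on the plug; the hostname, port and credentials are placeholders, and RPC access has to be enabled in the daemon’s settings first:

# Add a torrent to the daemon on the plug and list the current downloads.
transmission-remote guruplug:9091 --auth user:password --add http://example.com/some.torrent
transmission-remote guruplug:9091 --auth user:password --list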

Additionally, I hope to use it for the following:

  • Printer server
  • Zeroconf/avahi
  • Firewall (possible use as a portable firewall?)
  • Temperature sensors etc via GPIO?
  • Bluetooth, still haven’t figured out what to do with this…

As you can see, the machine is very capable, all at a very low power consumption of <5W, compared to the roughly 170W of my old computer that used to do all this. At current power costs, I expect to have made back the money I spent on the Guruplug in less than 9 months. Let’s just keep our fingers crossed that the machine won’t die of heat before then 🙂 .
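
For the curious, a back-of-the-envelope version of that payback estimate; the energy price and the plug price below are assumptions, so plug in your own numbers:

# Rough payback estimate; the two prices are assumed values, not actual figures.
WATTS_SAVED=165        # ~170W old machine minus ~5W Guruplug
PRICE_PER_KWH=0.15     # assumed energy price per kWh
PLUG_COST=130          # assumed total price paid for the plug

KWH_PER_YEAR=$(echo "scale=1; $WATTS_SAVED*24*365/1000" | bc)            # ~1445 kWh/year
SAVED_PER_MONTH=$(echo "scale=2; $KWH_PER_YEAR*$PRICE_PER_KWH/12" | bc)  # ~18/month
echo "scale=1; $PLUG_COST/$SAVED_PER_MONTH" | bc                         # ~7 months to break even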

Conclusion

In conclusion, I’m afraid I cannot really recommend this product, not unless you’re either ready to do some serious hacking, or you plan to run it in a garage or some such place. A wardrobe or closet is simply not enough; as delivered, it is just too loud. The idea and the thought behind the machine are great, I just wish the execution were as good.

Shaving some time

May 24, 2010 by · 3 Comments
Filed under: Development, Linux 

I was talking to a colleague the other day and got into a discussion about some C++ coding habits, such as whether to use division or bit shifting; almost all modern compilers can optimize this, so it’s basically just a matter of what looks best (there is a quick way to check this yourself at the end of the post). At the same time, we got into a minor discussion about references versus pointers in C++. I made a small test and found some rather amusing results, which are quite obvious once you think about them, but still very scary considering how common the pointer construct is:

// Compile using g++ -lrt -o lala lala.cpp
#include <iostream>
#include <time.h>

int* lala_ptr()
{
    int *y = new int;
    *y = 5;
    return y;
}

int& lala_ref()
{
    int x =5;
    int &y = x;
    return y;
}

timespec diff(timespec start, timespec end)
{
    timespec temp;
    if ((end.tv_nsec-start.tv_nsec)<0) {
        temp.tv_sec = end.tv_sec-start.tv_sec-1;
        temp.tv_nsec = 1000000000+end.tv_nsec-start.tv_nsec;
    } else {
        temp.tv_sec = end.tv_sec-start.tv_sec;
        temp.tv_nsec = end.tv_nsec-start.tv_nsec;
    }
    return temp;
}

int main(int argc, char **argv)
{
    timespec time1, time2, time3;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time1);
    for (unsigned int i=0;i<(unsigned int)-1;i++)
    {
        int &z = lala_ref();
    }
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time2);
    for (unsigned int i=0;i<(unsigned int)-1;i++)
    {
        int *z = lala_ptr();
        delete z;
    }
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time3);

    std::cout << "Reference diff(" << time1.tv_sec << ":" << 
                time1.tv_nsec << ", " << 
                time2.tv_sec << ":" << 
                time2.tv_nsec << ") = " << 
                diff(time1, time2).tv_sec << ":" << 
                diff(time1, time2).tv_nsec << std::endl;

    std::cout << "Pointer diff(" << time2.tv_sec << ":" << 
                time2.tv_nsec << ", " << 
                time3.tv_sec << ":" << 
                time3.tv_nsec << ") = " <<
                diff(time2, time3).tv_sec << ":" << 
                diff(time2, time3).tv_nsec << std::endl;
}

Below is a sample of the output generated by the test code above:

oan@laptop4:~/Projects/test$ ./testRefVsPointer 
Reference diff(0:3869272, 25:234466470) = 25:230597198
Pointer diff(25:234466470, 299:547382527) = 274:312916057

So, the question for you all, can you figure out what's wrong? 😉
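
As a footnote on the division-versus-shift question at the top of the post, here is a quick way to convince yourself that the compiler handles it; a sketch assuming g++ on x86-64, with a throwaway file name:

# Generate assembly for a tiny function and look for the shift instruction.
cat > div_test.cpp <<'EOF'
unsigned int half(unsigned int x) { return x / 2; }
EOF
g++ -O2 -S -o - div_test.cpp | grep -A4 half
# With -O2 on x86-64, the division by 2 typically shows up as a 'shr' instruction.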

Screenmovie 1.1 released

A quick note that Screenmovie 1.1 has been released. It’s still very crude, but this version adds sound recording and the ability to turn it on and off. Postprocessing is not supported yet, but should be there in the next version.

Features:

  • Record video
  • Record sound
  • Configure file format
  • Configure video codec + settings (5-6 codecs chosen for now)
  • Configure audio codec + settings (2 codecs for now)

I still have some problems, but I just found some information that should at least make a few of them better.

Todo:

  • Fix some high cpu usage problems
  • Add global keybindings
  • Postprocessing encoding
  • Clean up and add codec-specific settings (as required)

Unit testing and stubbing singletons

May 4, 2010 by · Leave a Comment
Filed under: Development, Projects 

I got a bit curious about stubbing singletons for testing over the weekend as well. At work we often need to test large codebases, and in my current project we do complete end-to-end signal-flow tests, but people are finally realizing that this simply will not do. For this reason, we’re doing a lot of work to split the entire project up into manageable chunks, and one of the main problems has been the incessant use of singletons. A simple halfway step, short of introducing full interfaces, is to make all public member functions virtual and then create a stub subclass of the singleton that saves the message (or whatever is passed on) into a variable which can be grabbed and inspected from the actual unit test.

A sample of the general idea below:

#include <iostream>

class A
{
public:
    static A *instance()
    {
        std::cout << "A::instance()" << std::endl;
        if (!s_instance)
            s_instance = new A;
        return s_instance;
    };

    A()
    {
        std::cout << "A::A()" << std::endl;
    };
    // Virtual makes the difference
    virtual void send(int i)
    {
        std::cout << "A::send()" << std::endl;
        // Blah blah, send i or something
    };
    static A *s_instance;
private:
};

class stub_A: public A
{
public:
    // Creating the stub also plants it as A's singleton instance, so production
    // code calling A::instance() transparently gets the stub from then on.
    static stub_A *instance()
    {
        std::cout << "stub_A::instance()" << std::endl;
        if (!s_instance)
        {
            s_instance = new stub_A;
            A::s_instance = s_instance;
        }
        return s_instance;
    };

    stub_A()
    {
        std::cout << "stub_A::stub_A()" << std::endl;
    };

    void send(int i)
    {
        std::cout << "stub_A::send()" << std::endl;
        y = i;
    };

    int getMessage()
    {
        return y;
    };
private:
    int y;
    static stub_A *s_instance;
};

A *A::s_instance = 0;
stub_A *stub_A::s_instance = 0;


int main(int argc, char **argv)
{
    stub_A::instance()->send(5);
    std::cout << "stub_A::instance()->getMessage() == " << 
        stub_A::instance()->getMessage() << std::endl;

    // Even a call going through the production A::instance() lands in the stub:
    A::instance()->send(7);
    std::cout << "stub_A::instance()->getMessage() == " << 
        stub_A::instance()->getMessage() << std::endl;
}
