Guruplug arrived, sounds like a jet

October 8, 2010 by · 3 Comments
Filed under: Development, Hardware, Linux 

The first few paragraphs here are rather harsh towards the Guruplug that I received, and yes, GlobalScale deserves some really harsh criticism for how this has been handled, but please read the final parts to get the full picture. In all honesty, sometimes I think I should just change my name to “Grumpy Fart”.

I finally received the Guruplug Server Plus last week, which I ordered 3 months ago. My first reaction is a big WTF at this machine. Apparently they’ve had big trouble with overheating in the Guruplug, so bad that a lot of units died from it. Someone over at GlobalScale Technologies then had the absolutely idiotic idea to put a fan into the machine. And it’s not just any fan: it’s a 2 cm maglev fan running at 3-4000 rpm, with no power management whatsoever, hooked up directly to the power source inside the Guruplug, so it will never be possible to vary the rpm without a hardware hack. The fan is also horribly placed, as is evident from several pictures on the net (and from opening the machine, voiding the warranty) — it sits with 80-90% of its back covered by a metal plate (the two gigabit ethernet interfaces), and when the machine is closed up as delivered, a big plastic plate (the power source cover) covers the front of the fan with less than 2 mm of clearance. All this means it is incredibly noisy (easily 30-40 dB, way louder than my Core 2 Quad machine with 8 GB of RAM and 4 fans in it during bootup, before the nvidia graphics card has gone into power management) and has close to no cooling effect at all.

I bought this machine when there was no fan, and there wasn’t even talk of a fan or any mail acknowledging this design change, so I was heavily inclined to send the thing back for a refund. But I realized that with my normal luck it would take several weeks to find a new machine matching my needs, and then another few months before receiving it. Because of this, I wound up simply ripping out the fan and voiding the warranty. So far so good, and nothing bad has happened. The plug is running at approximately the same temperature as before, perhaps slightly higher, and I hope there will be no ill effects.

The second WTF when I finally got the machine started up was the installation. First off, it is a really nice Debian install. I was prepared to start straight off with the JTAG interface, installing images, reinstalling, etc., but there was a perfectly working machine there, straight out of the box. Nice, I thought. Then came the WTF moment: the machine booted up and set the IP address of the uap0 interface (the wifi access point interface) to 192.168.1.1. My plan is to use it hooked up to a wired network via gigabit, so I went straight at it: added the eth0 interface to /etc/network/interfaces with a static IP, deleted the routes and IPs for uap0, and it worked… for an hour or so, then it stopped working. Reboot, still not working, and I noticed that uap0 was back at its old address 192.168.1.1 and the routes were back, but there was nothing in the bootup scripts. I finally found out that they have an /etc/init.d/rc.local file pointing to /root/init_setup.sh, which in turn does a heap of stuff, including setting up network parameters, overriding the normal configuration in a non-standard way.
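
For reference, a static eth0 setup in /etc/network/interfaces is just the standard Debian stanza, something like this (the addresses below are made-up examples, not my actual network):

auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.254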

Personally, I find this type of “hacked” environment despicable. For a private system in your home, fine, do whatever you wish, but when it is a public server at a company, or something that you sell to customers, you wind up with a product whose setup other admins and/or customers cannot trust. Luckily, the whole system is at least built from installed .deb packages. The init_setup.sh, together with the fact that there is no way of recovering from a simple screw-up of the network setup without the JTAG interface, is a major drawback imho.

So… that is the “bad” part of my experience so far. At this point I was rather underwhelmed, but I have been pleasantly surprised by the performance of the little bugger, and except for the startup scripts it is rather nicely installed from what I have seen so far. It would be interesting to gather a complete list of the “hacks” they have performed to get it all together. The machine is currently able to do exactly what I set out to do, after less than 2 days of configuring/fixing/fiddling about. I didn’t manage to get the “auto connect” of my USB disk to work, but mount commands work, and I haven’t really looked at how it’s supposed to work (it might be that you need to reboot the machine, etc.). My idea for the machine is to set it up as a 24/7 machine, replacing the machine currently doing this work, but at a lower power consumption. So far, the machine is running the following services for me:

  • DHCP
  • Dynamic DNS
  • Samba and NFS filesharing
  • SSH network login server
  • USB Disk/”NAS” function
  • Bittorrent (transmission-daemon with transmission-remote on all other machines)

Additionally I hope to use it for the following as well:

  • Printer server
  • Zeroconf/avahi
  • Firewall (possible use as a portable firewall?)
  • Temperature sensors etc via GPIO?
  • Bluetooth, still haven’t figured out what to do with this…

As you can see, the machine is very capable, and all at a very low power consumption of <5W, compared to the approximately 170W my old computer draws doing all this. The difference of roughly 165W around the clock works out to about 1,400 kWh per year, so at current power costs I expect to have recouped the money I spent on the Guruplug within less than 9 months. Let’s just keep fingers crossed that the machine won’t die of heat before that 🙂 .

Conclusion

In conclusion, I am afraid I cannot really recommend this product, not unless you’re either ready to do some serious hacking, or you plan to run it in a garage or some such place. A wardrobe or closet is simply not enough; it is too loud as it is. The idea and the thought behind the machine are great, I just wish the execution was as good.

Shaving some time

May 24, 2010 by · 3 Comments
Filed under: Development, Linux 

I was talking to a colleague the other day and got into a discussion regarding some ways of coding C++, such as whether to use division or bit shifting — almost all modern compilers can optimize this, so it’s basically just a matter of what looks best (see the small sketch at the end of this post). At the same time, we got into a minor discussion regarding references and pointers in C++. I made a small test and found some rather amusing results, which are quite obvious once you think about it, but still very scary considering how common it is to use the pointer construct:

// Compile using g++ -lrt -o lala lala.cpp
#include <iostream>
#include <time.h>

int* lala_ptr()
{
    int *y = new int;
    *y = 5;
    return y;
}

int& lala_ref()
{
    int x =5;
    int &y = x;
    return y;
}

timespec diff(timespec start, timespec end)
{
    timespec temp;
    if ((end.tv_nsec-start.tv_nsec)<0) {
        temp.tv_sec = end.tv_sec-start.tv_sec-1;
        temp.tv_nsec = 1000000000+end.tv_nsec-start.tv_nsec;
    } else {
        temp.tv_sec = end.tv_sec-start.tv_sec;
        temp.tv_nsec = end.tv_nsec-start.tv_nsec;
    }
    return temp;
}

int main(int argc, char **argv)
{
    timespec time1, time2, time3;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time1);
    for (unsigned int i=0;i<(unsigned int)-1;i++)
    {
        int &z = lala_ref();
    }
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time2);
    for (unsigned int i=0;i<(unsigned int)-1;i++)
    {
        int *z = lala_ptr();
        delete z;
    }
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time3);

    std::cout << "Reference diff(" << time1.tv_sec << ":" << 
                time1.tv_nsec << ", " << 
                time2.tv_sec << ":" << 
                time2.tv_nsec << ") = " << 
                diff(time1, time2).tv_sec << ":" << 
                diff(time1, time2).tv_nsec << std::endl;

    std::cout << "Pointer diff(" << time2.tv_sec << ":" << 
                time2.tv_nsec << ", " << 
                time3.tv_sec << ":" << 
                time3.tv_nsec << ") = " <<
                diff(time2, time3).tv_sec << ":" << 
                diff(time2, time3).tv_nsec << std::endl;
}

Below is a sample of the output generated by the test code above:

oan@laptop4:~/Projects/test$ ./testRefVsPointer 
Reference diff(0:3869272, 25:234466470) = 25:230597198
Pointer diff(25:234466470, 299:547382527) = 274:312916057

So, the question for you all: can you figure out what's wrong? 😉
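
As for the division versus bit shifting part of the discussion, here is a minimal sketch of how to check it yourself (the file and function names are made up for the example): compile with g++ -O2 -S shift.cpp and compare the assembly generated for the two functions. For unsigned integers they come out as the same single shift instruction; for signed integers the division needs a small extra adjustment for negative values, so they are not quite equivalent there.

// shift.cpp -- compile with: g++ -O2 -S shift.cpp and inspect shift.s
unsigned int div_by_eight(unsigned int x)
{
    return x / 8;    // the optimizer turns this into a right shift by 3
}

unsigned int shift_by_three(unsigned int x)
{
    return x >> 3;   // the "hand optimized" version
}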

Screenmovie 1.1 released

A quick note that Screenmovie 1.1 has been released. It’s still very crude, but it adds sound recording and the ability to turn it on/off. Post-processing is not supported yet, but should be there in the next version.

Features:

  • Record video
  • Record sound
  • Configure file format
  • Configure video codec + settings (5-6 codecs chosen for now)
  • Configure audio codec + settings (2 codecs for now)

I still have some problems, but I just found some info on how to make at least some of them better.

Todo:

  • Fix some high cpu usage problems
  • Add global keybindings
  • Postprocessing encoding
  • Clean up and add some values specific for codecs (as required).

Python multiprocessing

May 2, 2010 by · Leave a Comment
Filed under: Development, Linux 

Sometimes I have a good time writing small but easy-to-understand test programs to research specific behaviours in one language or another. Sometimes the programs grow a bit unwieldy and not so easy to understand, but all’s well that ends well. During the last year or so I’ve grown more and more interested in Python programming and find it very enjoyable. There are some strange constructs that can be hard to wrap your head around, and sometimes I run into some very weird problems, but it’s not a big deal (imho).

My latest forays have been into multiprocessing and how it behaves in Python. It is one of the features I had a hard time wrapping my head around, since there is some syntactical weirdness that could use some addressing, or at least takes a bit of time to get used to:

  • Multiprocessing requires all objects to be pickled and sent over to the running process through a pipe. This requires all objects to be picklable, including instance methods.
  • The default pickling implementation in Python can’t handle instance methods, hence some modifications need to be done.
  • The correct parameters must be passed to the callbacks and functions via apply_async. Failing to do so causes very strange errors to be reported.
  • Correct behaviour might be hard to predict since values are calculated at different times. This is especially true if your code has side effects.

The small but rather interesting test code below explores some of the aspects mentioned above. Of special interest imho are the timing differences; they clearly show what you are getting yourself into when doing multiprocessing.

#!/usr/bin/python
import multiprocessing

def _pickle_method(method):
    func_name = method.im_func.__name__
    obj = method.im_self
    cls = method.im_class
    return _unpickle_method, (func_name, obj, cls)

def _unpickle_method(func_name, obj, cls):
    for cls in cls.mro():
        try:
            func = cls.__dict__[func_name]
        except KeyError:
            pass
        else:
            break
    return func.__get__(obj, cls)

import copy_reg
import types
copy_reg.pickle(types.MethodType, 
    _pickle_method, 
    _unpickle_method)

class A:
    def __init__(self):
        print "A::__init__()"
        self.weird = "weird"

class B(object):
    def doAsync(self, lala):
        print "B::doAsync()"
        return lala**lala

    def callBack(self, result):
        print "B::callBack()"
        self.a.weird="wherio"
        print result

    def __init__(self, myA):
        print "B::__init__()"
        self.a = myA

def callback(result):
    print "callback result: " + str(result)

def func(x):
    print "func"
    return x**x

if __name__ == '__main__':

    pool = multiprocessing.Pool(2)
    a = A()
    b = B(a)
    print a.weird
    print "Starting"
    result1 = pool.apply_async(func, [4], callback=callback)
    result2 = pool.apply_async(b.doAsync, 
        [8], 
        callback=b.callBack)
    print a.weird
    print "result1: " + str(result1.get())
    print "result2: " + str(result2.get())
    print a.weird
    print "End"

The above code resulted in the following two runs, and if you look closely, the timing problems show up rather clearly. Things simply don’t always happen in the order you expect when parallelizing an application:

oan@laptop4:~$ ./multiprocessingtest.py
A::__init__()
B::__init__()
weird
Starting
weird
func
B::doAsync()
callback result: 256
B::callBack()
16777216result1: 256

result2: 16777216
wherio
End
oan@laptop4:~$ ./multiprocessingtest.py
A::__init__()
B::__init__()
weird
Starting
weird
func
B::doAsync()
callback result: 256
B::callBack()
16777216
result1: 256
result2: 16777216
wherio
End

One more warning is in order. A job that is sent off via the multiprocessing.Pool runs on a pickled copy of your objects, which can take some getting used to: any changes the job makes to an object inside the worker process have not taken place in the object that the callback (and the rest of the parent process) sees.

Regarding web browsers and User Interfaces

March 28, 2010 by · Leave a Comment
Filed under: Development, Frozentux.net, Linux, Windows 

I had a discussion this morning with my wife regarding her new computer (a Toshiba laptop). A few days back I was contemplating the possibility of getting rid of the Windows 7 license it came with, using the EULA disagreement argument, and getting my money back, but after much consideration, and realizing that her school (Chalmers University) pretty much requires her to run Windows 7 and Office 2007 (all applications used are Windows based, with Unix/Linux alternatives only irregularly available, and her lab professors more or less frown on OpenOffice.org vs MS Office formatting problems), we decided to keep it. How Chalmers went from being a very open university to a complete clusterfuck of a Microsoft bootcamp is another story, and something I’d love to write more about at some point, but not now.

The computer has finally arrived, and my wife has had a little time to play around with Windows 7. I’ve personally not used Windows since Win XP and refuse to lay hands on it unless “forced” to at work. Her main gripes were with Internet Explorer 8, and I couldn’t help laughing very loudly at her assessment of it. It incessantly keeps asking questions and giving hints and ideas like “How do you like this?”, “Did you know that IE plugin blah …?”, “We can help you do …?”, “Do you have herpes?”, to the point where the only sane comparison she could make was to a very horny and needy drunk boy at a club. It took her less than one battery charge to switch back over to Firefox. My guess is that they simply made the user interface a bit … too helpful, and tried to be too reactive to “possible needs in this context”.

So, on this topic, I thought it could be slightly interesting to look at this month’s statistics for this site, split between different operating systems and web browsers.

Operating systems

I can’t really say that this site is representative of the world in general, but I think it is interesting to see the share of Windows users. I’m guessing this is a sign that many of the people visiting are reading about iptables and using Linux for servers and firewalls, but aren’t using it as a desktop system (yet?).

Windows 59.1 %
Linux 31.3 %
Macintosh 6.4 %
Unknown 2.7 %

Web browser

Personally, I find this part the most astounding. That IE has dropped so low is simply not something I had expected, and how big a market share Firefox has… incredible.

Firefox 60.7 %
MS Internet Explorer 16.4 %
Safari 11.4 %
Mozilla 4.5 %
Opera 4.1 %
Unknown 1.2 %

I will try to do some more work on this in the future; it could be interesting to see how this changes over time. I know the reliability of my own source isn’t excellent and the error margin is probably through the roof, but it should be rather fun to dig into.

Screenmovie released

February 19, 2010 by · Leave a Comment
Filed under: Development, Linux 

My first ever application in Python has been released: Screenmovie, as I chose to call it. It’s a very simple application for recording your desktop or windows on your desktop. Download it, unpack it somewhere in your filesystem, and start it. It’s very simple to use: it uses PyGTK to create a taskbar icon in your window manager. Click the icon once, and you get a cross-hair mouse pointer. Click the window you want to record, and the application starts recording straight off. Click the icon again to stop recording. To record the entire desktop, click the background somewhere. This application only works with Linux and other systems using X11 window management, and requires xwininfo, ffmpeg and Python 2.6 + PyGTK.

If you have any comments or thoughts, feel free to mail me.

Secondly, I’ve come quite far along with the pyEveApi, a rather large project I began a few weeks back. The project is perfect imho: it gives me enough problems to solve and also introduces me to a lot of functionality in Python that I would have a hard time finding out about otherwise. I’m not close to a release yet, as I only support 4 APIs so far and just restarted quite a lot of the work by rewriting the caching functionality. The Long caching style is fairly well implemented atm and works, and I’m currently working on the Modified Long style, using the WalletTransactions API to test with. Either way, I’m hoping to have something releasable sometime soon.

Working with Python

February 2, 2010 by · 2 Comments
Filed under: Development, Linux 

A while back I started pondering Python as a language, and mostly the fact that I haven’t really tried any of the newer and more modern languages around, not to any great lengths at least. So, about a year or two ago I started fiddling with it, mostly just trying different scripts, writing some small hacks, and so forth. First I wrote a Bluetooth scanner for BlueZ, then one using the D-Bus interface, and so forth. The ease with which this came together was completely amazing tbh.

The last few weeks I have started writing some larger applications with Python, to try and learn it a bit better. First of all, I’ve had big problems with all the desktop recording tools I’ve found so far — they either don’t support GL/direct rendering, or they simply crash, or the colors are all flunked out, and so forth. So, the simple idea was to use the tools I know work — ffmpeg and xwininfo — and make an easy-to-use interface for them: a taskbar icon. Click it once and you get a cursor to choose which window to record, click the window and recording starts (the icon changes to a recording button), and click the icon again to stop it. There is a right-click menu for preferences and for stopping the recorder as well. It works, and it’s very very simple :-).
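
Under the hood there is no magic; it is essentially what you could do by hand in a terminal. A minimal sketch of the idea (the geometry, offsets and file name are made-up examples):

xwininfo
# click the window you want to record, note the size and position in the output, then:
ffmpeg -f x11grab -r 25 -s 1024x768 -i :0.0+100,200 recording.avi

Screenmovie essentially wraps this: it reads the window geometry from xwininfo and builds the ffmpeg command line for you, hidden behind the taskbar icon.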

The second set of tools I decided to make is a set of APIs for the Eve API. This is also nothing new, but it was a good way to get a handle on how Python works, and how the Eve API works. On top of this I have written some very simple apps that download all market orders and wallet transactions in game and display them nicely in a console. It has built-in support for all caching styles, downloading, login, local disk cache, and so forth, but so far it has very limited support for the different APIs.

My main observation so far has been that I like Python for its hackability and ease of use. Stuff comes naturally once you get used to it, but the dynamic typing and hackability also come at a price. First of all, it is very easy to create scenarios where the code will crash due to bad types that are only caught at runtime, and the hackability also means you need to be very wary of your architecture. Hacking away is almost never a good thing if you are preparing to undertake any larger project imho; the risk of writing bad code is huge, and testing bad code… well, it’s just worse 😉 . That said, I do like the language, it’s completely awesome, and as someone else said not so long ago, you can get around the problems, you just have to test the code that much harder for it to be reliable. Personally, I wouldn’t use Python for a critical system/application, but for my day to day use it’s not a problem, I can survive a crash every once in a while. Also, after getting used to the PyDev plugin for Eclipse, I just love the entire environment even more :-).

Final thoughts on the embedded Linux seminar

December 12, 2009 by · Leave a Comment
Filed under: Development, Linux, Personal 

The embedded Linux seminar was held last week, and in general I feel that it went pretty well, in line with my expectations. It’s been a long time since I held any real presentations, so during the first two presentations in Gothenburg I was very nervous and lost myself a few times along the way. The two presentations in Stockholm and Oslo went surprisingly well however, and I don’t think I made any huge errors from a pure presentation point of view.

Licensing

Now, that said, we did run into a few snags, and after pondering what we can learn from the whole presentation/seminar, there are a few points I’d like to raise, both for the attendees and for anyone else who might be interested. Being an “engineer”, I like to consider what went wrong, etc. Most points are of minor interest, but one of the absolute major points that we really didn’t get across properly is licensing issues with open source — or rather, that licensing issues are manageable. However, this was not the main area that we (or at least I) were there to talk about. Our main error was simply that we forgot to communicate properly with each other and correlate what we were saying. Also, this seminar wasn’t really about licensing issues; Nohau has an entire seminar/course on that topic alone, and you could easily fill an entire university course on open source licensing.

The main point I tried to get across was that, yes, you need to be wary about licenses and you need to look at what is required of you, but that’s no different from any closed source license, and you should be putting policies as well as processes in place to handle it, and knowledge of how to handle licenses must be disseminated throughout the project.

So, to address some of the main licensing questions we received:

  • No, you will not have to give away your code if you link the code properly.
  • You will have to set up proper procedures to handle any third party sources.
  • You will have to create processes for everyone to follow to get any third party sources “accepted”.
  • You will have to adhere to third party licenses, if you don’t, be prepared to be forced to and receive some bad publicity for it. (A lot of companies/people do get away with it, but is it worth risking it?)

Reliability

The second large question we got was: when would you use Linux, and when wouldn’t you, in a life or death situation? Simply put, I wouldn’t put it in a system where a person or persons would die if the process/hardware/appliance crashes, but that’s me. Generally speaking, I would make the life supporting/critical system run on separate hardware, keep all the critical stuff in that context/hardware, and then have a second piece which communicates with the critical hardware and does the higher end “stuff” that might be interesting (communication with central systems, user interfaces, settings, etc). This way, the critical stuff can be kept simple and reliable (in my experience, reliability is a function of complexity: the more complex, the higher the failure rate).

In most projects this has to be decided on a case by case basis, and most of the time due diligence must be applied to the laws and standards of each area. What is possible and advisable to do in house and home automation is not the same as in airplanes or medical systems, for pretty obvious reasons.

Presentation depth/breadth/focus

Finally, a minor point: I got some criticism for being too shallow, not going enough into depth. Well, I could have stayed on discussing tool-chains and how to make one for hours, or I could have talked entirely about the Linux internal boot order and why it works the way it does, but that wasn’t the goal of the seminar. That stuff could be studied to death; in the end you’re better off “getting” the top-down structure of an embedded Linux project and then experimenting on your own, rather than having everything served in a forum that cannot do justice to everyone’s requirements. Next time, however, I will try to maintain a deeper focus on fewer topics, or get more time to speak in.

Anyway, I think it was fun and a huge experience. I hope most people visiting found the seminar interesting and got something out of it.

Using Ubuntu as Media Server for Xbox360

November 21, 2009 by · Leave a Comment
Filed under: Personal, Ubuntu, Windows 

I have had an Xbox 360 for about a year, and I just noticed it has a way of connecting to a PC and using the PC as a media server. Unfortunately it requires a Windows Media Center installation to work, or so it claims at least. This is probably not news to anyone, but it was very easy to get Ubuntu (or any other Linux distro for that matter) to serve media for the Xbox 360. The Xbox 360 uses UPnP to get media from the Windows Media Center PC. To make any recent Ubuntu able to serve UPnP suitable for the Xbox, do the following:

  1. sudo apt-get install ushare
  2. sudo dpkg-reconfigure ushare
  3. sudo vim /etc/ushare.conf
    1. Make sure all the settings are correct (share name, interface and media directory; see the example below).
  4. sudo vim /etc/default/ushare
    1. Make sure it contains USHARE_OPTIONS=”–xbox”.
  5. sudo /etc/init.d/ushare restart
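
For reference, the relevant pieces look roughly like the sketch below. The key names are from memory and may differ between ushare versions, and the share name and path are made-up examples, so treat it as a sketch rather than a definitive configuration:

# /etc/ushare.conf (excerpt, example values)
USHARE_NAME=mediaserver
USHARE_IFACE=eth0
USHARE_DIR=/home/media/videos

# /etc/default/ushare
USHARE_OPTIONS="--xbox"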

You should now be able to find the PC by searching for it from the Xbox interface (the name you set in ushare.conf should show up in the list of found PCs). Now that that’s said, I should point out that the Xbox 360 has a really shitty availability of audio and video codecs, and I don’t know if it’s possible to resolve this problem. There are hints on the ushare website about something called a UPnP Media Adaptor, which should be able to convert to the proper file formats as necessary, but ushare itself does not have that ability. Of course, that would put a shitload of CPU load on the file server as well, which sounds less good in my opinion.

My personal opinion so far: the Xbox 360 media center is really simple to use, but the available codecs, flexibility and scalability are catastrophically bad in comparison to my Mythbuntu installation (still running 9.04 though). The Mythbuntu installation is a bit heavy on the configuration, but much more flexible, handles almost all codecs I’ve run into without a hitch, and is very scalable.

embedded Linux seminar 25-27/11 2009

October 27, 2009 by · Leave a Comment
Filed under: General, Linux, Personal 

On the 25-27th of November I will be on tour with nohau.se and hold an embedded Linux seminar. Entrance is free but requires registration; see the embedded Linux seminar webpage for the schedule.

I will specifically do the Development using Embedded Linux track, which will be 50 minutes long. The presentation is still fairly crude and rough around the edges, but some of the bullet points I’m going to talk about are:

  • When to use Linux/ When NOT to use Linux
  • Pitfalls of open source vs closed source and vice versa
  • Hardware vs Development time cost decisions
  • Choosing the right hardware
  • Choosing the right software
  • Security.

I hope to see some of you at the seminars!
