Final thoughts on the embedded Linux seminar

December 12, 2009
Filed under: Development, Linux, Personal 

The embedded Linux seminar was held last week, and in general I feel it went pretty well and in line with my expectations. It's been a long time since I gave any real presentations, so during the first two presentations in Gothenburg I was very nervous and lost my thread a few times along the way. The two presentations in Stockholm and Oslo went surprisingly well, however, and I don't think I made any huge errors from a pure presentation point of view.

Licensing

Now, that said, we did run into a few snags, and after pondering what we can learn from the whole presentation/seminar, there are a few points I'd like to raise both for the attendees and anyone else who might be interested. Being an "engineer", I like to consider what went wrong, etc. Most points are of minor interest, but one of the absolute major points that we really didn't get across properly was licensing issues with open source, or rather, that licensing issues are manageable. However, this was not the main area that we (or at least I) were there to talk about. Our main error was simply that we forgot to communicate properly with each other and correlate what we were saying. Also, this seminar wasn't really about licensing issues; Nohau has an entire seminar/course on that topic alone, and you could easily fill an entire university course on open source licensing.

The main point I tried to get across was that, yes, you need to be wary about licenses and you need to look at what is required of you, but that is no different from any closed source license either. You should be putting policies as well as processes in place to handle it, and knowledge of how to handle licenses must be disseminated throughout the project.

So, to address some of the main licensing questions we received:

  • No, you will not have to give away your code if you link the code properly.
  • You will have to set up proper procedures to handle any third party sources.
  • You will have to create processes for everyone to follow to get any third party sources “accepted”.
  • You will have to adhere to third party licenses; if you don't, be prepared to be forced to, and to receive some bad publicity for it. (A lot of companies/people do get away with it, but is it worth the risk?)

Reliability

The second large question we got was: when would you use Linux, and when wouldn't you, in a life or death situation? Simply put, I wouldn't put it in a system where a person or persons would die if the process/hardware/appliance crashes, but that's me. Generally speaking, I would run the life supporting/critical system on separate hardware and keep all the critical functionality in that context, and then have a second piece which communicates with the critical hardware and does the higher end "stuff" that might be interesting (communicating with central systems, user interfaces, settings, etc). This way, the critical parts can be kept simple and reliable (in my experience, reliability is a function of complexity: the more complex, the higher the failure rate).

In most projects, this has to be decided on a case by case basis, and due diligence must usually be done against the laws and standards of each area. What is possible and advisable in house and home automation is not the same as in airplanes or medical systems, for pretty obvious reasons.

Presentation depth/breadth/focus

Finally, a minor point: I got some criticism for being too shallow and not going deep enough. Well, I could have spent hours discussing tool-chains and how to build one, or talked entirely about the internal Linux boot order and why it works the way it does, but that wasn't the goal of the seminar. That stuff could be studied to death; in the end you're better off "getting" the top-down structure of an embedded Linux project and then experimenting on your own, rather than having everything served in a forum that cannot do justice to everyone's requirements. Next time, however, I will try to maintain a deeper focus on fewer topics, or get more time to speak in.

Anyway, I think it was fun and a huge experience. I hope most people who attended found the seminar interesting and got something out of it.

Added papers section

October 22, 2009
Filed under: Development, Frozentux.net 

The papers section has been added under the Documents section of the site. It is currently rather empty, with only two semi-old papers added and two more coming within a few weeks. This is one of the reasons I switched over to WordPress: it makes updates and additions of new material much easier. The first paper is about lessons learned from an embedded Linux project with a Freescale MCF5329 running uClinux. The second paper is an old but cancelled IETF RFC draft I wrote almost six years ago to enable connection state transfers between multiple firewalls and/or routers for failover, high availability and load balancing purposes. It was put on ice for various reasons, but I thought it might have some value to anyone interested in doing something similar. If there is any interest in carrying on the work, I'm open to questions and suggestions.

Example transforming videos

September 13, 2009
Filed under: Phone, Video 

I recently did some editing on videos I copied to and from my cell phone, and realized that some of it might be rather esoteric and hard to find good examples for. So I'm basically just going to post some minor tips and tricks that I picked up, and some very simple commands to use with mencoder, ffmpeg and kino.

I used mencoder and ffmpeg for some of the basic edits, like rotating videos and so on. After the basic video snippets were done, I threw them into kino to make the final cut, and then recoded the video into a distributable format (a 10 minute video in the DV format used by kino is about 2.1 GB of data, while a 10 minute DivX of the same video is about 170 MB).

# Rotate video 90 degrees
oan@laptop4:~$ mencoder -o lala.avi -vf-add rotate=1 V170709_12.54.AVI -oac copy -ovc lavc

# Postprocessing filters, ac = high quality
oan@laptop4:~/Desktop$ mencoder -o lala.avi -vf pp=ac V170709_12.54-recode.AVI -oac copy -ovc lavc

# Transcode video so it works on the cellphone (KC910), this "works for me"(tm)
oan@laptop4:~/Videos$ mencoder -o lala.avi -oac copy -ovc lavc -lavcopts vcodec=mpeg4 alice-final.avi

# Create a black 2 second frame (25 fps, 50 frames), I used this as a filler between
# two movies. There’s probably easier ways of doing this, but it “works for me”(tm)
oan@laptop4:~/Pictures/2009-08-01$ ffmpeg -r 25 -loop_input -i black.jpg -vcodec mjpeg -vframes 50 -y -an test.avi

Finally I put the videos together in kino in the order I wanted, with black frames in between and effects fading from the videos into black, and so forth, making for smooth transitions.
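
The final recode of the kino DV export down to a distributable AVI can also be done with mencoder. A minimal sketch (file names and bitrate are made up, tune to taste):

# Recode the DV export from kino to an mpeg4 AVI with mp3 audio (roughly the
# 2.1 GB to 170 MB shrink mentioned above); file names and bitrate are examples
oan@laptop4:~/Videos$ mencoder final-cut.dv -o final.avi -ovc lavc -lavcopts vcodec=mpeg4:vbitrate=1200 -oac mp3lame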

LG KC910 woes

July 24, 2009
Filed under: Communications, Personal, Phone 

As I've already said in earlier posts, I fairly recently got myself an LG KC910 Renoir phone. The phone has been the cause of a lot of woes and problems so far, and I'm afraid I must say I regret not getting a proper Android phone or an iPhone from the beginning. As it is, I'm stuck with this phone for another year or more until the subscription runs out, or until I get some other phone on my own tab.

The one really great part that I love about the KC910 so far is the absolutely wonderful camera: it has an 8 MP sensor that takes rather splendid snaps for a cellphone camera. The video recording function and the video/music playback are also rather nice.

For the really, really bad part, well, look at the rest. The three main reasons I got this phone were the ability to get some web surfing done "on the go", a good calendar that could be synced against my work and private calendars, and the ability to use the phone directly for connecting to the internet instead of some dongle. The first two are completely botched on the KC910. The web browser lacks a lot of functionality, practically making most of the internet unusable on the phone, and it is also horribly slow, taking tens of seconds to render complex web pages after they have loaded. You might say this problem should be easily solvable by downloading and installing another browser such as Opera; sadly, the install process hangs halfway through on the Renoir, and I have yet to find another browser that installs at all, which brings up a fourth point (applications/third party market).

The second problem was the calendar, which is unfortunately totally borked. For basic calendar tasks it works fairly well, but very soon you realize it doesn't work very well at all. It lacks good support for recurring activities; the sync applications have a bad habit of screwing up timezones, moving entries around based on timezone and on occasion deleting entries entirely; and worst of all, LG has chosen to go all in on their PC Suite set of applications, which essentially bars you from using anything but officially supported Microsoft Windows XP/Vista and Outlook. This goes for pretty much all functionality in the phone. Getting it to work with Thunderbird? Well, good luck. This is one of the reasons I had to work very hard to get Funambol set up at home to sync the phone and Thunderbird (I cannot use a third party server, as some calendar entries may in the worst case contain sensitive data). This setup turned out to be semi-decent at best, and in the end I wound up reverting to just using my computer calendars.

My third problem has been internet connections. The only supported way of connecting to the internet is (again) via their PC Suite crapware. All other phone manufacturers support Bluetooth DUN or serial port connections without a problem, but not LG. It only halfway supports DUN connections: I get a connection the first time that doesn't work, then get disconnected, and after that it takes two or more days until I can connect again, only to be disconnected again. Exactly the same thing happened in Linux, Vista and XP, with and without PC Suite, over USB and Bluetooth, following both 3's and LG's support instructions. In the end, after 3-4 weeks of messing with this, I wound up getting a Huawei E180 HSPA USB stick: 30 seconds to unpack it, plug it in and click two buttons in Ubuntu, and I was connected to the Internet.

My final annoyance is the lack of a third party aftermarket of any kind; I'm talking about anything like the iPhone/Android app store. A smartphone without serious aftermarket support is pretty much as dumb as any old "dumb phone" ever was. LG has some Eclipse based SDKs available for download, but they only work on Windows, which has stopped me from trying them out at all so far, as I quit using Windows completely half a year back. Anyway, the big problem is that there is no coordinated effort to provide a decent app store, or anything like one, for applications for this phone. As always, the phone manufacturers completely fail to understand this part: in this day and age, you need to create officially endorsed systems for managing, getting and paying for applications. Whom the problem should fall upon is a hard question, but just dumping it on someone else's porch is not sufficient, especially if you want to make phones that try to emulate the success of "the big one"; you need to at least try to understand what made it big. It wasn't a good camera or a nice looking (but slow) GUI. It's the ability to be adapted to my requirements and to perform my required tasks. You cannot predict it all (my needs are not your needs), so adapt to standards (make things plug and play with others) and make every effort you can to create a third party aftermarket that works (signed downloads, payments, etc).

As a verdict: if you're looking for a smartphone, iPhone or Android, don't get an LG.

Syncing strategies

May 3, 2009
Filed under: Communications, Linux, Phone, Windows 

Another problem has (mostly) been solved for me, it seems. I've had quite a lot of problems over the last few months with calendars, email and contacts being out of sync between workplaces and my private computers/cellphones. The background is that I've taken on a new contract and hence am relocated to another workplace. My employer has a stupid (ok, maybe not so stupid, but annoying me nonetheless) policy of not allowing any e-mail to internal addresses to be forwarded or fetched from external networks. At the same time, my contracting has put a heavy load on the calendar, and all of a sudden I understand everyone's problems with syncing e-mail/contacts/calendars... it really is a must.

Anyway, in short, I started out with five calendars (workplace1, workplace2, home1, home2 (laptop) and cellphone) needing sync, and using Microsoft Exchange's weirdo protocols was not an option (I'm not using Windows or Outlook at home anymore). This has since been extended to syncing contacts and my two instances of Thunderbird (not yet finished). So, in short:

  1. Workplace1 = Windows Vista with bluetooth
  2. Workplace2 = Microsoft Exchange server with limited access.
  3. Home1 = Ubuntu with thunderbird
  4. Home2 = Old Laptop, Windows XP with thunderbird, will likely migrate to Linux soon as well since I barely use it anymore due to the OS on it.
  5. Cellphone = LG KC910 with bluetooth and wifi.

First off, finding a sync strategy wasn't easy. First, decide where your "central repository" is, or rather which device will be your main one. My current solution relies on the cellphone (LG KC910) being the central repository, since it's the only gadget common to all locations. The connection at workplace1 is directly over Bluetooth to the KC910 using the LG sync application. The application is absolutely horrible, but it does its job (barely). Unfortunately, LG relies on a proprietary Bluetooth protocol for syncing, so I have yet to find any decent replacement application.

My big problem was finding a working solution at home, and I think I finally found it in Funambol (https://www.forge.funambol.org/DomainHome.html), which is a SyncML server. Basically, I have a server on my local network running Funambol; when I get home, I connect to the local wifi and sync with Funambol (see http://www.mobyko.com/phoneinfo/lg/renoirkc910Info.do, a bit down the page, for instructions). The Funambol server then acts as the "central repository" when I'm at home, containing all calendars etc. The Thunderbird sessions on Home1 and Home2 use the Funambol addon (https://addons.mozilla.org/en-US/thunderbird/addon/8616) to sync with the Funambol server.

WARNING! So far I don't trust Funambol to run on the public internet; for one thing, it seems to send passwords in cleartext, as well as the data itself. I'd love to figure out a way to get it all encrypted using SSL/HTTPS, but I'm a complete newbie to Tomcat (the base platform for Funambol) as well as Java. As far as possible, try to use a closed/encrypted network for this unless you get HTTPS running, imho.
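
For anyone who does want to try the SSL route, the standard Tomcat recipe is roughly the following. I have not verified it against Funambol myself, so treat it as a sketch (the keystore path is made up):

# Generate a self-signed certificate in a keystore for Tomcat (path is an example)
keytool -genkey -alias tomcat -keyalg RSA -keystore /opt/funambol/ssl-keystore

# Then enable the HTTPS connector in Tomcat's conf/server.xml, point its
# keystoreFile/keystorePass attributes at the keystore above, and sync against
# https://yourserver:8443/... instead of plain http.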

A second note on Funambol is that I had some really funny timezone problems when setting it up. All devices run the correct timezones, but for some reason my calendar entries wound up two hours in the future at home. I fixed it by setting all timezones in Funambol manually for all devices, and then disabling the timezone handling in Funambol... don't ask me why that fixed it, I hate working with timezones 😉.

All that said, I really think SyncML was a big saviour for me in the end, but I had a hard time finding a single word written about it or anyone really recommending it. Bluetooth just needs... well, better support, and everyone needs to agree on standards. Every company seems to be running around doing their own thing, which means Linux has very good basic Bluetooth support, but none of the higher layer stuff, since that is badly implemented or proprietary.

Off-work robot fun

February 26, 2009
Filed under: Communications, Robots 

As of late, I've been having loads of fun with an old robot of mine, the Robby RP5. My biggest complaint has always been that it has a horrible 8-bit processor with "some kind of" Basic interpreter/compiler that I never quite figured out, because it is so boring and... well, let's face it, you will never be able to do anything "wow" in a language that is more or less assembler, with 4 KB of flash and 256 bytes of RAM of which only some 60 bytes are actually available.

We've also been having some fun with ZigBee modules at work lately, and I figured out a way to have fun with my old Robby again. Robby has a serial port, so I connect one ZigBee module to that, and on the other end I have a ZigBee module connected to my computer via USB. On the Robby processor I run a very simple program that speaks a protocol over the ZigBee connection and "implements" the commands sent in packets. There are three packet types: TrackData, SensorData and RequestData. A TrackData packet sent from the computer sets the speed of both tracks individually. RequestData is sent from the computer to Robby and contains a request for a packet back; the request can be for either TrackData or SensorData. SensorData contains data from all supported sensors (currently only the IR range sensors).

My first demonstration program on the computer reads a joystick and simply transforms the joystick input and sends it to the robot. Pushing button 0 requests SensorData and button 1 TrackData.

Right now, I'm looking at porting my robot drivers into the Player/Stage project, which I've been looking at heavily as of late and which seems damn cool. I've been testing some of the example robots in the Stage simulator, and if I port my setup into that project, I should be able to use the available robot "behavioural modules" straight on my robot, and/or test my new modules in a simulator before actually running them in the real world. In all honesty, I think Player/Stage is the best thing I've found since sliced bread; it simply opens up for sooo much fun 🙂. Combine this with a couple of ZigBee modules and you can build very simple and cheap robots that are extremely powerful: a 60 USD robot chassis, a 5 USD processor, 10 USD of junk, 30 USD for two ZigBee modules, add some sensors, and you've got as much as you can ask for. Robby, for example, is around 110 USD (probably much less by now), and a pair of ZigBee modules is 30 USD.

And yes, I will open this up once I feel that I'm closer to finished :-).

Build components

November 23, 2008
Filed under: Configuration Management, Development 

After a weekend of work, I finally have a build component structure that I'm semi-pleased with, for C and C++ projects using Subversion. It most likely works for any other lower level programming language as well.

First off, structure. Each component is its own BTT (Branches, Tags, Trunk) root, residing in a Project_Modules directory in Subversion. Each component contains an inc, src, test and stubs directory. The rationale for the BTT root is that with a separate BTT root for each component, we can raise the version of each individual component without having to raise it for the entire project.

The Project directory resides on the same level as Project_Modules and is empty, containing only the Subversion svn:externals property pointing to the trunks of the components in Project_Modules. The rationale for this is to have a simple place from which to check out the entire project. It's a bit dangerous when working with branches, and requires a little extra care so that one doesn't write into the trunk by mistake. One option is to block everyone but a specific user from writing to the trunks and have that CM person do all the branching/merging; it is time consuming, however.

It looks something like this:

  • Project_Modules
    • Component1
      • inc
      • src
      • test
      • stubs
    • Component2
      • inc
      • src
      • test
      • stubs
  • Project

The inc directory is the public interface of the component towards the other components. The src directory contains the actual code of the component. Test contains unit tests (personally, I create a new directory for each new unit test file). Stubs contains the stubs of the component itself, i.e. Component1/stubs contains stubs for the functions in Component1. The rationale is that 95% of the time we want to stub another component in the same way; instead of keeping stubs of a component in 10 different components, we keep them in one place.
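
For what it's worth, the whole skeleton can be created directly in the repository with svn mkdir. A sketch, assuming Subversion 1.5+ for --parents and with a made-up repository URL:

# Create the BTT root and directory layout for one component (URL is hypothetical)
svn mkdir --parents -m "Create Component1 skeleton" \
  http://svn.example.com/repo/Project_Modules/Component1/Branches \
  http://svn.example.com/repo/Project_Modules/Component1/Tags \
  http://svn.example.com/repo/Project_Modules/Component1/Trunk/inc \
  http://svn.example.com/repo/Project_Modules/Component1/Trunk/src \
  http://svn.example.com/repo/Project_Modules/Component1/Trunk/test \
  http://svn.example.com/repo/Project_Modules/Component1/Trunk/stubs \
  http://svn.example.com/repo/Project/Trunk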

“New” subversion structure using svn:externals

The boss and I devised a new structure for the project during the last few weeks, and it has been slowly refining itself in our heads until yesterday, when we finally implemented it. I think we made a rather refined and complex structure, but now that it is in place physically, and once we get the general idea into the developers' heads (including mine), I think it will prove very powerful.

That being said, I don't think this is a new structure; I just think people are very quiet about how they use Subversion, and that's a problem. Newcomers make the same old errors over and over again. So, let's try to explain it all.

Most projects use a single BTT root, where BTT stands for Branches, Tags and Trunk. That is, they start a project, put the BTT directly in the root, and then create the project structure inside that. For example:

  • project-root/
    • Branches
    • Tags
    • Trunk
      • admin
      • src
      • out
      • test

This is a good basic structure for very small projects, containing perhaps ten-ish files, or where the actual implementation is perfectly homogeneous and has no need for separate versioning. Every time we want to make a release, we cheap copy the content to Tags as a new tag (called perhaps /Tags/Milestone1-RC1). We now have a release that we can provide to people.
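
The cheap copy itself is a single command. A sketch with a made-up repository URL:

# Tag the current trunk as Milestone1-RC1 (a cheap copy, URL is hypothetical)
svn copy -m "Tag Milestone1-RC1" \
  http://svn.example.com/repo/project-root/Trunk \
  http://svn.example.com/repo/project-root/Tags/Milestone1-RC1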

The problem comes if the project isn't that homogeneous. For example, let's say you are developing a calculator. It has two objects, a numpad and a display. What if you want to make a new version of just the display? You have to make a completely new version of everything, including the numpad.
Or how about branching just a small part of the project? Say I want to use a branch for the numpad and the trunk for the display. I'd then have to make a cheap copy of the entire tree. Admittedly, it doesn't cost too much.

Our “new” structure deals with this on a different level. Basically, the idea is to have multiple BTT roots, and then use svn:externals to connect the correct tags to create
1) a complete releasable project and
2) a complete workarea project.

For the calculator example, you get the following structure:

  • calculator/
    • Calculator_Modules/
      • Display/
        • Branches/
        • Tags/
        • Trunk/
      • Numpad/
        • Branches/
        • Tags/
        • Trunk/
    • Calculator/
      • Branches/
      • Tags/
      • Trunk/

As you can see, it looks much more complex, and it is, but the possibilities are infinitely better.

The Calculator/Trunk/ directory has an svn:externals property linking in Calculator_Modules/Display/Trunk as Display and Calculator_Modules/Numpad/Trunk as Numpad. Externals work by linking external resources into the current directory structure, so basically I get the trunks into my Calculator trunk, properly renamed, without them actually being there in the repository. This also works on "real externals", by the way, such as linking in a specific version of a library from some repository on the Internet.

To create a Calculator/Tags/MS1, we could either pin the externals to the correct Subversion revision with -rXX, or create svn:externals pointing to the correct Display and Numpad tags rather than their trunks. This way we can say that "Calculator 1.0 contains Display 2.0 and Numpad 2.1", not "Calculator 1.0 contains Display revision 439 and Numpad revision 587", or even worse "Calculator 1.0 is revision 587", which completely lacks granularity.
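
For illustration, setting the property on a working copy of the Calculator trunk might look roughly like this (the repository URL is made up; note that Subversion 1.5 changed the externals syntax slightly, this is the older dir-then-URL form):

# In a checked-out working copy of Calculator/Trunk
svn propset svn:externals \
"Display http://svn.example.com/calculator/Calculator_Modules/Display/Trunk
Numpad http://svn.example.com/calculator/Calculator_Modules/Numpad/Trunk" .
svn commit -m "Link Display and Numpad trunks into Calculator"

# For a release tag, point the externals at tags instead of trunks, e.g.:
#   Display http://svn.example.com/calculator/Calculator_Modules/Display/Tags/2.0
#   Numpad  http://svn.example.com/calculator/Calculator_Modules/Numpad/Tags/2.1

After an svn update, the Display and Numpad directories show up inside the Calculator working copy.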

I’m not completely sure it’s perfect, and others have probably already tested it, but I think it will be pretty sweet :-).

Switching GNU toolchains in eclipse — the easy way

October 13, 2008
Filed under: Development 

This is a method I found for switching to a different toolchain in Eclipse. Basically, my goal was to get Eclipse to use a secondary toolchain. As a first step, I tried making Eclipse use a cross-compiling GNU toolchain.

Doing this the "right" way seems to entail writing Eclipse plugins and lots of XML configuration. Not having the luxury of infinite time and money (yet wanting a managed build), but rather being result driven, I finally managed to solve it partially with a hack.

My secondary goal is now to make the build work with a completely non-GNU toolchain (the Renesas SH compiler); I will try to hack that together in the upcoming days and will get back to this topic once I know whether it works or not. In the meantime, this is how I got a project to compile with a "non-standard" toolchain and the "standard" Cygwin toolchain at the same time:

  1. Install the toolchain somewhere.
  2. Set up your PATH with the toolchain's /bin/ directory in it.
    1. Start -> Control Panel -> System.
    2. Enter the Advanced tab.
    3. Click Environment Variables.
    4. Find the PATH variable in System variables.
    5. Edit and add ;c:\path\to\your\toolchain at the end of it.
    6. OK
    7. OK
  3. Create a new Build Configuration (Target for example).
    1. Right click the project in the Project Explorer.
    2. Build Configurations -> Manage…
    3. New…
    4. Write name “Target” and a description.
    5. Copy settings from another target that might contain decent default values.
    6. Ok
    7. Ok
  4. Edit the Settings for the “Target” Build Configuration.
    1. Right click the project in the Project Explorer.
    2. Choose Properties.
    3. Go to C/C++ Build -> Settings.
    4. Choose the right Configuration at the top (Target in our case).
    5. For each heading in Tool Settings (GCC Assembler, Cygwin C Compiler, Cygwin C Linker)
      1. Click the heading.
      2. Edit the Command field (for example, instead of using gcc, you might be using arm-elf-gcc or m68k-uclinux-gcc).
      3. Apply.
    6. OK.
  5. Try to build the new “Target” Build.

Hint: A problem I ran into was that my new toolchain couldn't understand/interpret Unix style paths properly, and it didn't hint very openly about it; I just got "missing header files" all over the build. Turning on verbose output (the -v flag) under C/C++ Build -> Settings, Cygwin C Compiler -> Miscellaneous, gave me an error like "ignoring nonexistent directory", and it turned out that instead of using "../inc/" I had to use "..\inc".
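
As a quick sanity check (the arm-elf- prefix here is just an example), it can also be worth verifying from a shell that the cross compiler is actually picked up from the PATH before fiddling with the Eclipse settings:

# Verify that the cross compiler is found on the PATH and runs
which arm-elf-gcc
arm-elf-gcc --version

# Verbose compilation also prints exactly which include directories are searched
arm-elf-gcc -v -c test.c -o test.o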

Moving interface from Enterprise Architect to C include file

October 1, 2008
Filed under: Development, Editors 

After writing this, I realized that the output is almost 100% the same as what I got from exporting the original class as a C header from Enterprise Architect, but I'm still putting it out there as it's a nice regex. Cut and paste each function line from EA into the header file. The format will be a bit screwed up, for example:

functionname(varname: vartype, varname2: vartype2): void

Begin by moving the trailing return type to the beginning:

:%s/^\(.*)\): \(\w\+\)$/\2 \1;/g

Then get all varnames and vartypes into their correct positions:

:%s/\(\w\+\): \(\w\+\)/\2 \1/g

All function declarations should now be fixed.
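
To illustrate with the example line above: after the first substitution it becomes

void functionname(varname: vartype, varname2: vartype2);

and after the second it ends up as a normal C declaration:

void functionname(vartype varname, vartype2 varname2);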
