My mobile journey

If you are a mobile developer and haven’t read Chet Haase’s latest book “Androids: The Team That Built the Android Operating System”, go read it now! It’s a delightful read and a wonderful trip into Android’s history!

The team that created Android didn’t just create a new mobile OS; they also created a new career path for software engineers like me. I’ve been coding for Android since November 2008, which means almost 13 years! The projects that I’ve worked on have been very… diverse.

When the team announced their Android device, I wasn’t living in the US yet, and it wasn’t for sale in the Netherlands. I was intrigued by what it was: a smartphone, or really a tiny portable computer? Fortunately, for work at that time, I was going to be in Austin, TX in November 2008 for SuperComputing 2008 to demo a multi-touch device. After the conference was over, I went to a T-Mobile store and tried to purchase a G1. Back then they didn’t like it when you tried to buy the device without service, but with a bit of patience they eventually sold me one.

The first app that I wrote wasn’t going to be “Hello World”; nah, I needed a bigger challenge, so I tried porting a Chip 8 emulator that I had written for Windows some time earlier. When I ported it to Java using Eclipse (yep, before Android Studio was a thing), the first game I tried was Space Invaders. I was so excited that the code ran and that I was able to play it with the hardware keyboard!

Running Space Invaders (via emulation) on my G1 

At that time I was still active on the EFnet and Freenode chat servers, and I thought it would be cool to have an IRC client to check in every now and then from my phone. That’s when I started to work on fIRC, which was released just before Christmas of 2008. I never expected so many people would download and use it. By default it would drop users into #android-chat on Freenode, which became very popular and often had hilarious chatter going on.

The old version started with great ratings and a high number of downloads… until it didn’t work so well on newer Android devices, as I had a lot of layout things hardcoded for landscape use 🤦‍♂️

Funny enough, fIRC was also the reason I got in touch with San Mehat, one of the Android engineers who used to idle in the chatroom too (I think most Android engineers were on #android on Freenode back then). As fIRC was gaining traction, at some point there was so much talk in different languages going on that it became hard to follow any conversation. At one point he yelled “WILL YOU MORONS SPEAK ENGLISH?”, and that’s how a new IRC topic was born.


In the spring of 2009, I was still doing research work on multi-touch tech and was invited to come to the Interactive Displays Conference in San Jose. As I had never been to the area before (and it was my first trip to the US by myself), I asked in #android-chat for ideas on things to do in the SF area (as I knew a bunch of them were SF locals). San saw the message and said that we should meet up! We actually did end up meeting in person… (what was I thinking? meeting up with a stranger from the internet??). San is a great guy; even though I was just starting out as a software developer, we had a great chat and geeked out a bit over the G1. I still remember he invited me to come to the Google Mountain View campus, but unfortunately I was there for only about a week and the schedule didn’t allow for it. I still regret that.

Late 2011 I started freelancing via Epic Windmill, and one of my friends (Seth Sandler) asked me if I could help port one of his successful iOS apps to Android. I was up for the challenge, and it was the first time I was exposed to the complicated side of Android development. NodeBeat was written in C++ using openFrameworks. Porting meant that I had to create a native Android UI as well as get the ‘cross-platform’ common code to work via JNI (some tech details here). Fortunately things worked out well, and we released NodeBeat in October of 2011.

While Android was gaining traction, BlackBerry was trying to convince Android developers to make BB10 apps too. Initially I ported over my IRC chat app for a free device, and later on I wrote an openFrameworks add-on (ofxQNX) that allowed apps built with it to run on the BlackBerry PlayBook (tablet) and BB10 devices. It also helped me port NodeBeat to BB10, which got featured on the BlackBerry blog (yay!)!
Sometime in 2012, because of my work on NodeBeat, I was also introduced to Dan Comerchero, who needed some help with an Android port of Quiztones, an EQ ear-training app. I don’t remember it being too hard, other than the issue that the audio files had to be uncompressed WAVs or there would somehow be playback lag. I guess back then decoding audio from Ogg/MP3 wasn’t as smooth.
The biggest challenge was yet to come, though… via Twitter in late 2012, Terry Cavanagh was looking for experienced Android developers to port his game Super Hexagon to Android. Initially I was contracted to do the Android port, but later I also worked on the BB10 version and the Ouya game console port.
I remember that I was pretty much done early January 2013 after a few weeks of intense coding. While we (Terry and I) were reviewing the game, Terry noticed that something was off on the Nexus 7 (it ran fine on my Nexus One). The game seemed less responsive on that particular device, which matters a lot for an intense game like Super Hexagon. Unsure what the cause could be, I reached out to Romain Guy on IRC (I wasn’t sure if he was going to respond, but I’m glad he did!). I think I asked him because I had seen some of his low-level graphics work. He put me in touch with Jeff Brown, who fortunately figured out the cause: in openFrameworks the physics and render loops were out of sync, and he recommended we use the Choreographer to make the game run properly on Android Jelly Bean devices. The game should still be available for download.

“I’m up to thirty something seconds now.  So it’s all about the dancing pentagons.  Whee!” – Jeff Brown

I’m grateful that the Android team was kind enough to help out, as I’m sure they were all swamped with work for the next Android release.
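The gist of the Choreographer fix is to drive the physics step from the vsync timestamp that Choreographer.FrameCallback.doFrame(long frameTimeNanos) delivers, instead of a free-running timer. A minimal sketch of the timing part (class and method names are my own invention, not Super Hexagon’s actual code):

```java
// Called from Choreographer.FrameCallback.doFrame() on each vsync.
// Illustrative sketch; not the actual Super Hexagon game loop.
class FrameTimer {
    private long lastFrameNanos = -1;

    // Returns the elapsed time in seconds since the previous vsync,
    // or 0 on the first frame. Feeding this into the physics update
    // keeps the simulation in step with rendering.
    public double deltaSeconds(long frameTimeNanos) {
        double dt = 0.0;
        if (lastFrameNanos >= 0) {
            dt = (frameTimeNanos - lastFrameNanos) / 1e9;
        }
        lastFrameNanos = frameTimeNanos;
        return dt;
    }
}
```

On the Android side you would register a Choreographer.FrameCallback that calls deltaSeconds(frameTimeNanos), steps the physics by that amount, renders, and then re-posts itself with postFrameCallback().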
The Ouya port on a 720p TV…
Running on the Nexus One, BlackBerry PlayBook, Dev Alpha B device (which would become the Z10)
Running on a square screen, Dev Alpha C device (which would become the Q10)
After the different ports of Super Hexagon, I continued writing some apps for BB10. It was basically like the early Android days: not a lot of apps were out there, so if you were the first, it was easier to become successful. I wrote a utility app, SMS Backup for BB10, which allowed you to import/export SMS messages. It became a success and received the Built for BlackBerry badge, which boosted sales by a lot.
Early 2014 I was contracted by a startup whose technology allowed you to run Android apps on non-Android platforms such as Tizen. For about 9 months I worked on their internal test tool as well as rewriting their own Android distribution app, the AppMall. It was a great learning experience with many challenges. At some point I had to make the app work on devices like the Pine Neptune, which had a tiny screen (320×240). Unfortunately, the company wasn’t going to make it, and I had to look for a new job.
The AppMall
The Neptune device had such a tiny screen but I made it work.
Late 2014 I joined Wanderu and helped create a successful ground transportation travel app from scratch (for iOS and Android) that is loved by many.

Inspecting Network Traffic of any Android App

Like to see what your favorite Android app is doing? I wrote an article about how you can inspect the network traffic of an app using the Android emulator and mitmproxy; check it out here: Inspecting (HTTPS) Network Traffic of any Android App

Creating my own Cycling Trainer App

I’m a big fan of Zwift, it’s a virtual cycling app that allows you to train and compete online in a virtual world.

I started training indoors in November 2018, when the weather was getting colder and I wanted a home setup I could use when it was raining or snowing outside. Instead of buying an indoor cycling bike or spin bike, I invested in a smart bike trainer (Wahoo Kickr 2018) that I could use with my road bike (Trek Domane SL5). This allows me to keep using my comfortable bike and lets an app manage the resistance automatically (which isn’t common on a spin bike or a Peloton).

The fun part is that Zwift would also allow you to train on courses inspired by real ones. For example, there is Alpe du Zwift which is a virtual copy of Alpe d’Huez in France:
Zwift only allows you to ride the courses they provide; some are made up, others are inspired by real cities and courses.
But… what if I want to ride somewhere more familiar to me? I spent some weeks understanding how to control the smart trainer via ANT+ / Bluetooth and started to create my own bike riding software using Unreal Engine 4.
It’s still a WIP and doesn’t have pretty rider graphics, but it does read the sensor data (power in watts), run physics calculations on it, and turn that into speed (similar to what Zwift does). It then applies the slope % to the Wahoo smart bike trainer for the appropriate resistance, and also reads the cadence/heart-rate sensor data. During the ride it records your stats into a FIT file that you can upload to services like Strava.
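At its core, the power-to-speed conversion balances the rider’s watts against rolling resistance, gravity on the slope, and aerodynamic drag, then solves for the speed where they match. A minimal sketch of that calculation (the constants and class name are illustrative assumptions, not my app’s exact model):

```java
// Solves for steady-state speed given rider power and slope.
// Constants are typical road-bike values, picked for illustration.
class BikePhysics {
    static final double MASS = 85.0;   // rider + bike, kg
    static final double G = 9.81;      // gravity, m/s^2
    static final double CRR = 0.005;   // rolling resistance coefficient
    static final double CDA = 0.32;    // effective drag area, m^2
    static final double RHO = 1.225;   // air density, kg/m^3

    // Power (watts) needed to hold speed v (m/s) on a slope (rise/run).
    static double powerAt(double v, double slope) {
        double rolling = CRR * MASS * G * v;
        double climbing = MASS * G * slope * v; // small-angle approximation
        double drag = 0.5 * RHO * CDA * v * v * v;
        return rolling + climbing + drag;
    }

    // Invert powerAt() with bisection: find v such that powerAt(v) == watts.
    static double speedFromPower(double watts, double slope) {
        double lo = 0.0, hi = 30.0; // m/s search range
        for (int i = 0; i < 60; i++) {
            double mid = (lo + hi) / 2;
            if (powerAt(mid, slope) < watts) lo = mid; else hi = mid;
        }
        return lo;
    }
}
```

With these constants, 200 W on the flat comes out to roughly 9 m/s, while the same power on an 8% grade yields under 3 m/s, which is why the trainer has to crank up the resistance on climbs.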
Basically I created a flow (by writing custom UE4 plugins) where I can import an elevation map (DTM) and road information and turn them into a UE4 landscape. This means that I don’t have to create the virtual world with too much manual effort, and it allows me to recreate a map of any place in the world as long as there is enough (elevation) data available. Fortunately, for the Boston area there is very precise data available (1-meter precision).
I took a ‘virtual ride’ through my old neighborhood, with the super-steep Lowell St.:
Another experiment was Mount Greylock, which is in Western Mass:


NodeBeat featured on the BlackBerry Dev blog

Interested to read how we (Seth and I) ported NodeBeat to the BlackBerry PlayBook platform?

The BlackBerry Developer Blog wrote an article about us, our app and the journey that was involved with it:

PlayBook add-on (ofxQNX)

After receiving my BlackBerry PlayBook through the developer offer in March, I started to think of the possibility of porting NodeBeat to the PlayBook platform.

As this is my first tablet (yea I know, as a multitouch enthusiast it’s quite weird not having an iPad or Android tablet), I have to say that I am really enjoying the 7-inch form factor. It feels much nicer in the hand compared to most of the larger tablets (10+ inch), which can tire out your wrists quite a bit. Spec-wise this device is great (dual-core CPU, 1 GB RAM, dedicated graphics chip), so it seemed worth testing whether the PlayBook would be a good platform to run NodeBeat on.

Since I am already quite familiar with the NodeBeat code base, this would be a nice new challenge. After taking a brief look at the Native SDK, I figured that this would not be as difficult as the Android port. Unlike on Android, there is no need for JNI bindings. Actually, I would say that developing for the PlayBook is very similar to writing applications for Linux (you can use Makefiles if you want), and the compiler (qcc) is quite similar to gcc.
RIM includes a modified IDE (based on Eclipse) for app development, and it works quite well. All the required tools are tightly integrated, and the wizards help you get through the setup. RIM also did a good job documenting the Native SDK, including the documents on porting.


NodeBeat is built on top of the popular openFrameworks (OF) platform, so the first thing I had to do was find out which dependencies I had to build to make it work, and write a PlayBook add-on (ofxQNX) to extend the framework.

At first sight this seemed quite a difficult task due to the number of dependencies OF relies on and the large code base I had to patch, but with help from the openFrameworks and BlackBerry communities, it fortunately did not take long before I had some stuff up and running.

I won’t bore you guys to death with the details of getting the dependencies compiled using the NDK, as most of them worked straight out of the box (the readme has some details on what to patch). The ones that were troublesome and caused me a lot of frustration were the ones that use custom build scripts (yes, you, POCO). I am sure those devs have a valid reason for using them, but it makes things a bit overcomplicated. In the case of POCO, the QNX build (PlayBook target) could only be built on a Linux machine. I ended up installing Ubuntu in VirtualBox to compile FreeImage and POCO.

It was worth the effort though, as I managed to get OF to run properly on my tablet:

Where to get it?

Details about the project can now be found here:

The ofxQNX add-on is available on GitHub, in the developPlayBook branch:

It is licensed under the New BSD license and will hopefully become part of the mainline branch. For now I will maintain it in my own OF fork, as quite a lot of the OF core had to be patched.

Included are ten example projects that explain how to use various features of ofxQNX. All project files contain settings for development on the Simulator (x86) as well as on the PlayBook/BB10 hardware (ARM).

Is it ready for prime time?

It sure is!

With NodeBeat as our guinea pig, Seth and I have been working pretty hard over the past few weeks to get NodeBeat up and running just in time for BB 10 Jam. Since NodeBeat uses a lot of different OF features, it was a perfect way to test ofxQNX and the stability of the add-on.

Below is some footage I shot of an early NodeBeat beta running on the PlayBook. I love that, compared to the Android build, this device gives us much lower input and audio latencies, which really enhances the experience.

We have submitted NodeBeat to BlackBerry App World, so hopefully it will be available soon for your listening pleasure!

NodeBeat beta:


Feedback / Todo

Since ofxQNX is now available to everyone on GitHub, I’d love to hear what you think about using openFrameworks for your PlayBook projects. While a lot of basic OF functionality is already available in ofxQNX, the things that are still lacking are:

  • ofSoundPlayer, used for controlling the audio levels from the app and play wav/mp3 audio files. For now I recommend using SDL for this purpose.
  • Camera support. Unfortunately the current PlayBook SDK doesn’t allow access to the cameras. As soon as those are supported (probably with the next release) I’ll be porting the cameraExample and openCVExample.
  • GPS Support with gpsExample

Probably the most important one to focus on is ofSoundPlayer. As there is already an OpenAL implementation (ofOpenALSoundPlayer) in the repository, I am currently investigating if we will be able to (re)use it for ofxQNX.

Anyway, let me know how ofxQNX runs and don’t hesitate to report bugs or submit patches (on GitHub). Enjoy!

[update 05/05/2012]
ofxQNX now also supports the BlackBerry 10 Beta platform!

[update 14/09/2012]
Updated link to the ofxQNX project, now compatible with OF0071

Rewriting and porting fIRC

Late 2008 I created my first Android application (fIRC) after obtaining the T-Mobile G1. The project was an early attempt to master the Java programming language and also a way for me to learn more about the Android mobile platform.

Since the Android Market was still young in 2008, my chat application fIRC became a success. While internet group chat wasn’t anything new, chatting in real time with other people on your smartphone was. People loved using fIRC because it was easy to join the chatroom for some small talk (by default fIRC connects to the chatroom #android-chat on Freenode).

The reviews back in 2009 were pretty favorable, but went downhill after new Android devices were released onto the market. One of the beginner’s mistakes I made when creating fIRC was that I designed the app specifically for one device, the device I owned: the T-Mobile G1. This means it was created for a device with a hardware keyboard and a 480×320 display used in landscape mode.

On newer devices (let’s say Android 2.x or newer) fIRC didn’t behave that well. UI elements weren’t aligned properly, and the bundled resources were only meant to be used on low- or medium-DPI displays, so on those new devices fIRC looked quite horrible.

As the old chat code turned into spaghetti while I was trying to fix the problems, I decided to rewrite the app from scratch. I did most of the core last summer and some of the UI during fall. The newly rewritten UI of fIRC now scales properly on any phone or tablet device.

New Features

The main new features:

(1) Profile wizard

IRC is not that common these days; one of the complaints about the previous version was that nobody knew how to connect to a different server or how to enter multiple chat rooms (or, actually, so-called “channels”). To address this problem, I’ve made a profile wizard that helps you out with some quick settings and a list of commonly used IRC servers (such as DALnet, EFnet and Freenode).

(2) Multi-server

More advanced users requested multi-server support because they often like to join multiple IRC servers. This new version allows you to do that: simply create one profile per server, hit the menu button and choose “Connect all profiles” to connect.

(3) Fast channel switching

No need to dive into the menu anymore to switch channels, just use a swipe motion (left/right) to switch between chat rooms.

(4) Fully customizable chat

fIRC now allows you to customize the incoming messages and color them the way you want. You decide which font, font size, background color and text color you want to use per message type.

(5) File transfer support

A unique feature among Android IRC apps: fIRC supports DCC file transfers (3G and WiFi). Currently only DCC receive is implemented.

(6) Store chat logs

fIRC allows you to store your chat logs on your SD card for future reading.

Porting to the BlackBerry Playbook

Earlier this month Alex Saunders from RIM tweeted:
“Shh…. Android Devs – submit your Android app to BB AppWorld by Feb 13 and get a free Playbook”

I wasn’t aware that the BlackBerry Playbook had some kind of emulation layer to run Android apps, so after watching the video below, I decided to give it a shot. I mean, how hard could it possibly be?

Basically you will have to sign up as a vendor on RIM’s vendor portal. RIM will send you an e-mail (it took about a day here) requesting some documents proving you are a company. Fortunately I still had a copy from the Chamber of Commerce that I could use.
While waiting for RIM to approve your account, you should request a code signing key. You will need this key to sign your app later on. After you’re done with that, open up Eclipse and install the BlackBerry Plug-in for Android Development Tools.

Since fIRC doesn’t use any of the advanced features of the Android SDK or Android NDK (not supported), the only things I had to do were:

  • Convert the Android project by adding a “blackberry nature”
  • Remove the Android references in the app (CTRL-F “Android”)
  • Resize the application icon to 86×86 pixels
  • Test the app in the simulator

From the list above, I’d say that testing the app on the simulator was the most time consuming. BlackBerry provides a “BlackBerry Tablet Simulator”, but it is actually a virtual drive image with the PlayBook software pre-loaded that you run in VMware Player.

If you thought the Android emulator was slow compared to the iOS simulator, think again. Booting up the PlayBook takes quite a while (it feels like booting a desktop OS), and the interaction with the mouse feels a bit sluggish. After the Tablet OS has fully booted, it’s time to put it in development mode. This allows you to connect to the simulator from Eclipse. Recompiling the Android project and installing it into the simulator feels almost the same as using the Android emulator: just hit the play button.

Unfortunately the PlayBook simulator isn’t all that great for testing your converted BlackBerry apps. In my case the screen flickering was really bad and fonts were blurred (native BB apps were alright though). Reading the BlackBerry Dev forums, it seems to be an issue with the simulator, so hopefully it looks good on the real device.

The only real trouble I had was signing the app itself to prepare it for BlackBerry App World. I was following the steps from the YouTube video above; however, BB changed the names of the dropdown menus in Eclipse. Fortunately the documentation describes how to do this properly: Sign your app.

Final thoughts

For me, porting fIRC to the BlackBerry platform was a quite painless experience. However, I would advise you to test apps thoroughly on the Android emulator and, when you’re ready for release, test them on a real PlayBook instead of the simulator. That will probably give you a much smoother workflow.

For me that will hopefully soon be possible. Last week I received an e-mail from RIM that my application was accepted into BlackBerry App World and that I would receive a free PlayBook soon :).

The latest version of fIRC is now available for your chatting pleasure from the Android Market (Android 2.1 and higher) and BlackBerry App World (BlackBerry PlayBook)!


Epic Windmill

It’s official now, I’m proud owner of my new startup company Epic Windmill.

While this blog has served as an online portfolio of the research work I contributed to, Epic Windmill will serve as a place for my creative and casual software projects. For the time being, it will focus on the development of applications for handheld devices such as smartphones and tablets (Android and iOS).

Currently two apps are already available from the Android Market: NodeBeat (a creative audio sequencer) and fIRC (a free chat app). We’re preparing a new NodeBeat release, so be sure to follow @NodeBeat for more details. A new fIRC will be released later this month.

– Laurence

p.s. Don’t forget to follow us on Twitter @EpicWindmill and like us on Facebook!

NodeBeat, openFrameworks and Android

Last month we (Seth Sandler and yours truly) released the Android port of the popular iPhone/iPad music application NodeBeat.

NodeBeat was created by Seth Sandler and Justin Windle earlier this year and released in April for the iOS platform. It is an experimental node-based audio sequencer and generative music application. By combining music and playful exploration, NodeBeat allows anyone to create an exciting variety of rhythmic sequences and ambient melodies in a fun and intuitive fashion.

How Does it Work?

Octaves and Notes make up the two types of nodes. Octaves pulse and send messages to Notes within proximity. Each Octave is assigned a random octave and each Note, a random note; therefore, a Note will play in several octaves depending on the Octave it’s connected to. Pause nodes to create your own beats or let them roam free to have them generate their own.

Cross platform development

Because NodeBeat was developed using the C++-based open source framework openFrameworks, I did not expect a lot of trouble getting the core to work on Android. However, since the Android port of openFrameworks is still pretty new (we’re using the development branch) and officially only supported on Mac and Linux, I decided to put some effort into making it work on Windows as well. I’m a Windows user and developer, so if I can avoid dual-booting, I will :P.

Native Development Kit

As soon as you want to use C or C++ in your Android projects, you will have to install the Native Development Kit (NDK). It basically allows you to compile your code into a library which you can access using JNI calls. While in general it is recommended to code your Android projects in Java using the SDK (the Dalvik VM with JIT shows really good performance), lazy coders (like me :P) are always trying to find ways to reuse existing code. Instead of having a native codebase for iOS (in Objective-C) and Android (in Java), it is nicer to have a shared core in C++ with a thin layer (Obj-C or Java) to interface with it. Sure, the NDK might sound intimidating at first sight, and Google doesn’t recommend it unless you know what you’re doing, but honestly I don’t think it’s rocket science either. After downloading the NDK, you will need to set up a Unix-like environment such as MinGW or Cygwin.

For my previous projects I already had MinGW installed (you could use Cygwin, but in general I don’t like its approach). I did a fresh checkout from GitHub and started to mess around with the Makefiles to see if it would compile.

It turned out that all I had to do was replace a few IF statements (the ones that check the build platform) and make them point to the NDK location on my Windows computer. I’ve created a tutorial that explains the steps if you want to try it out yourself. However, if you want to use openFrameworks for your own Android applications, I would highly recommend just using my openFrameworks fork instead (until they accept my pull request). It includes all the patches from the tutorial and should be compatible with the latest NDK. The tutorial also explains how to run one of the examples, so be sure to check that out.

Porting the GUI

For the UI I wanted to stay as close to the iOS version as possible. As I don’t own an iOS device, Seth gave me some screenshots of NodeBeat running on iOS which I used as a reference.

Since the iOS and Android frameworks are quite distinct, there were cases where I had to do an alternative implementation. For example, on Android most devices have the following buttons:

  • Back
  • Menu
  • Home
  • Search
The iOS devices only have one button, which brings you back to the home screen. In the original implementation of NodeBeat on iOS, there is a shortcut on the canvas that pops up a menu bar allowing you to access the different option menus. On Android, however, we can use the options menu, which allows us to control the flow of the application.

Example: Menu bar

So instead of writing the menu code in C++, I only had to create an XML file for the options menu. It looks like this:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/node"
          android:title="Node" />
    <item android:id="@+id/audio"
          android:title="Audio" />
</menu>
Basically you define a unique id (this allows you to reference it in the Java source code), optionally an icon, and a title. You need to do this for every entry, but apart from that, the Android framework will handle how to display it (depending on orientation and the number of menu items).

After the user touches the menu button on Android, it will pop up the options menu:


Example: Audio menu

Other menu elements, such as the popup menus for Audio, Rhythm and Settings, required a different approach. I could have switched Views on Android, but in my opinion that would be a bad UI design decision: the user would be taken away from the NodeBeat activity. Instead I much prefer the context menu that the Android framework provides. This menu pops up over your current Activity.

While it is in the foreground, the activity in the background is still visible and continues running. Another benefit of this approach is that the user gets immediate feedback when adjusting the audio settings. Like the options menu, this UI layout is created entirely in XML.


Example: Recording dialog

In some cases the context menu might be a bit of overkill if you just want to ask the user a question. For example, below we want to inform the user how to record their NodeBeat creation. All we need is a simple dialog that lets the user either confirm or decline the action. For such cases Android provides dialogs, which can be built with an AlertDialog.Builder:
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setMessage("You can stop recording by pressing the record button in the menu again")
       .setPositiveButton("Yes", new DialogInterface.OnClickListener() {
           public void onClick(DialogInterface dialog, int id) {
               // Start recording
           }
       })
       .setNegativeButton("No", new DialogInterface.OnClickListener() {
           public void onClick(DialogInterface dialog, int id) {
               dialog.cancel(); // Cancel dialog
           }
       });
AlertDialog alert = builder.create();
alert.setTitle("Start recording");
alert.show();


Using JNI Callbacks

After we’ve ported the UI, we still need to pass our settings on to the core application. Fortunately we can use JNI callbacks to get and set NodeBeat’s properties. It is good to know that you should minimize the number of JNI callbacks for performance reasons (so don’t go mental and call tons of JNI methods each time you render a frame).

Let’s say we would like to pass a value from one of the sliders in the Audio context menu to our NodeBeat core. In that case we first create a new static method in our Java source file:

public static native void sliderChanged(float v);

It is important to use the static and native keywords when you define your method. This is all we need to do in our Java code, and the method can be used anywhere in our Java class.

Now the tricky part is how to implement the function on the C++ side of your application. It’s not exactly complex, but you will have to pay attention to a few details. Three things are important here:

  1. The namespace (in Java)
  2. The class name
  3. The method name

If we assume we’re implementing this callback in one of the openFrameworks examples, this means:

  1. The namespace (in Java): cc.openframeworks.androidEmptyExample
  2. The class name: OFActivity
  3. The method name: sliderChanged


Here is the code to implement in C++:

#include <jni.h>

extern "C" {
void Java_cc_openframeworks_androidEmptyExample_OFActivity_sliderChanged(JNIEnv* env, jclass thiz, jfloat value) {
    // Do something here
}
}

As ugly as this method looks, take a brief look at how it is constructed. It starts with the return type (void, just like we specified in Java). The name starts with Java_, followed by the namespace, the class name and the method name. All dots in the namespace are replaced by underscores, and between each element we place an underscore as well.

In the argument list, the “JNIEnv* env, jclass thiz” part is mandatory (so if you have something like public static native void methodname(), it becomes void methodname(JNIEnv* env, jclass thiz)). For our method we want to pass a float as an argument. You can’t just pass a plain float, though; you need to use the JNI mapping types, so the float becomes a jfloat.
Note: for booleans you need to compare the value to JNI_TRUE or JNI_FALSE, not to true or false.
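The naming rule is mechanical enough to express in a few lines. This tiny helper is my own illustration (and it ignores JNI’s additional escaping of underscores and non-ASCII characters in names); it builds the C symbol from the Java package, class and method names:

```java
// Builds the JNI function name for a native method: "Java_" + package
// (dots replaced by underscores) + "_" + class name + "_" + method name.
// Note: real JNI also escapes underscores in names as "_1"; this sketch
// assumes names without underscores.
class JniName {
    static String symbol(String packageName, String className, String methodName) {
        return "Java_" + packageName.replace('.', '_')
                + "_" + className + "_" + methodName;
    }
}
```

Calling JniName.symbol("cc.openframeworks.androidEmptyExample", "OFActivity", "sliderChanged") produces exactly the function name used in the C++ code above.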

Honeycomb Tablets

Unlike the iOS devices from Apple, Android devices come in so many different configurations and API levels that it can be a bit tricky to support all of them. For NodeBeat we decided to create two versions: a phone version and a tablet version. We basically distinguish between phones running Froyo or Gingerbread (2.2+) and tablets running Honeycomb (3.0). On the Android Market it looks like we only provide one version, but depending on what device you’re using to download the app, it will give you the matching version.
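The device-dependent delivery works through the Market’s multiple-APK support: each APK declares a different minimum API level in its manifest, and the Market picks the right one for the device. Roughly like this (a sketch, not our exact manifest entries):

```xml
<!-- Tablet APK: Honeycomb (API level 11) and up -->
<uses-sdk android:minSdkVersion="11" />

<!-- Phone APK: Android 2.2 (API level 8) and up -->
<uses-sdk android:minSdkVersion="8" />
```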

To maximize the use of the display, the phone version runs the application fullscreen via a setting in the AndroidManifest.xml file. While this works for anything running Android 2.2+, it is a problem on Honeycomb devices. Honeycomb tablets don’t have any physical buttons, and when the application is running in fullscreen mode, there is no shortcut to the menu button, which is normally placed in the top right corner.

This means that for the tablet version, we run the application in regular display mode.
Other than that, NodeBeat provides a rich user experience on tablet devices such as the XOOM or the ASUS Transformer.

Earbleeding masterpiece created by Sharath Patali (professional coder, horrific musician)

Android Market

Publishing the app to the Android Market is no hassle at all (we were just a bit unfortunate and had our app pulled by accident). There is no annual fee (just a one-time $25 registration fee) and apps are approved instantly. The dev guide provides a comprehensive overview of how to build your project in release mode and how to sign it.

Go get it!

NodeBeat is available on the Android Market:

Try it out! It’s just a dollar 🙂


An introduction to emulation

In the early 90s I primarily used my Commodore 64 (C64) for gaming. As Santa never gave us a Nintendo, I probably used my C64 until the late 90s. At that point I was introduced to the concept of emulators: computer programs that allow you to run software that wasn’t designed for your computer platform. For example, with emulators you can run games from the NES or Sega Master System on a PC. Nowadays computers have enough resources to run games from modern consoles such as the Nintendo GameCube/Wii and PlayStation 2.

Of course, there is always the debate about whether it is legal to build an emulator. Console manufacturers will always forbid any type of emulation of their systems, although it’s a rather grey area, as most documents on the chip designs are publicly available (e.g. the MIPS and ARM instruction sets).

Since I got excited about emulation in general, I started to do some research on how to build one myself. As I didn’t want to spend days or even weeks to get results, I looked for a simple system, and a friend introduced me to a small one called the Chip 8.
The Chip 8 never existed as a hardware system; it was implemented in software. Due to its small instruction set (~35 opcodes), it was a very good candidate for a first emulation project. The instruction set defines the functionality of the CPU: it contains, for example, instructions that let the CPU load or store data, but also perform mathematical tasks such as multiplication.
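To give an idea of what an interpreter deals with, here is a minimal fetch-and-decode sketch in Python (my own illustration, not code from the guide; only two opcode families are decoded):

```python
def step(memory: bytes, pc: int) -> tuple[int, str]:
    """Fetch and decode one Chip 8 instruction (simplified sketch).

    Every Chip 8 opcode is two bytes, stored big-endian; the top
    nibble selects the instruction family. Returns the next program
    counter and a mnemonic for the decoded instruction.
    """
    opcode = (memory[pc] << 8) | memory[pc + 1]
    family = opcode >> 12
    if family == 0x1:                       # 1NNN: jump to address NNN
        return opcode & 0x0FFF, "JP"
    if family == 0x6:                       # 6XNN: load NN into register VX
        return pc + 2, f"LD V{(opcode >> 8) & 0xF}, {opcode & 0xFF}"
    return pc + 2, "UNIMPLEMENTED"

print(step(bytes([0x12, 0x34]), 0))         # 0x1234 jumps to address 0x234
```

A real interpreter wraps this in a loop and adds registers, timers, input, and the 64x32 display, but the fetch/decode skeleton stays this simple.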

Having worked on a couple of emulation projects (and emulator plugins), I started to write a guide that will hopefully help aspiring emulator authors and inspire emulation enthusiasts. The guide explains how emulators work and provides a detailed overview of how to write a Chip 8 interpreter from scratch.

Writing a Chip 8 emulator shouldn’t be too time consuming; on average, people tend to finish such a project in one or two days. It’s a fun project to test your programming skills and of course quite educational (and it isn’t just for CS majors ;))

If you’re ready to take up the challenge, click the link below! Have fun!

Earth Friends, a social network visualization

Last week I blogged about a new project I was working on. Over the past few days I went through the code again and cleaned it up a bit for release. It is pretty much complete now; therefore, I have made it available on Facebook.

“Earth Friends is a free Facebook application to visualize your social network on Google Earth. Locate your friends by using the Google Earth Webplugin or download your personal datafile for use with the Desktop version of Google Earth.”


While my code from last week was running fine, there was a lot of room for improvement. Basically three major parts changed:

  1. The database structure
  2. Using a template engine
  3. File compression


The first thing I worked on was the database itself. The local database is primarily used for converting a location (defined by a city, state and country name) to a latitude and longitude.
Each time we need a location, we query the database, simple as that.

So let’s look at the following example. Assume you have about 100 friends on Facebook. Most of them probably share their current location, but some might not want to share it with Facebook apps, or simply never filled out the location field at all.
If 75 friends filled in their current location, the old code would query the database 76 times (75 times for your friends + 1 time for your own location).

While this looks like a lot of queries, the time a query takes also depends on the design of the database. As I was using the MaxMind cities database, I initially imported all the data into one MySQL table, simply because it was convenient to work with. However, the data set contains about 2.699.356 entries (cities).
Doing a search 75 times within this table was not going to be fast…

Besides the users’ patience, I am also limited by PHP’s execution time limit (max_execution_time defaults to 30 seconds), and personally I can’t stand a website that takes longer than 10 seconds to show up.

So what to do next? Can we split the database into multiple smaller tables? Just limiting the number of entries per table to a fixed number (e.g. 100.000) wasn’t going to work, as I would need some kind of lookup system to figure out in which table a given city could be found.

Improvement one
The easiest solution I found (not saying it’s perfect) was splitting the database by country. In my case there are 231 countries in the database, so 231 tables are created.

Taking a look at the top 3 countries in our database reveals the following:

  • Russia – 176.934 cities
  • USA – 141.989 cities
  • China – 117.508 cities

The average is around 20.000 cities per country. While the top 3 have significantly higher counts than the average, lookups are performed much faster than before.
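The idea behind the split can be sketched like this (Python dicts standing in for the per-country MySQL tables; the names and coordinates are made-up examples):

```python
# Dicts stand in for the per-country MySQL tables; the entries
# below are fabricated examples.
cities_by_country = {
    "NL": {"amsterdam": (52.37, 4.89), "utrecht": (52.09, 5.12)},
    "US": {"austin": (30.27, -97.74)},
}

def lookup(country: str, city: str):
    """Route the lookup to the right country 'table' first, then
    search only within that country's cities. Returns (lat, lon)
    or None if the city is unknown."""
    return cities_by_country.get(country, {}).get(city.lower())

print(lookup("NL", "Amsterdam"))   # searches only the NL table
```

The win is that each query now scans at most one country’s worth of rows instead of all ~2.7 million.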

Improvement two
We could have stopped improving the database here, as the results were reasonable. However, it takes only a small effort to tweak it a bit more.

In our example, we query 76 locations from the database. But wait… what did we query? The locations of friends! And what do a lot of friends have in common? Right, they share the same location. So by first building a list of the distinct cities we need to look up, we can reduce the number of queries required to collect our data.
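The deduplication step might look like this (a Python sketch with hypothetical field names, not the actual PHP code):

```python
def unique_locations(friends):
    """Collect the distinct locations of a friend list so each city
    is queried from the database only once. The "location" field
    name is a made-up example."""
    return {f["location"] for f in friends if f.get("location")}

friends = [
    {"name": "A", "location": "Amsterdam, NL"},
    {"name": "B", "location": "Amsterdam, NL"},
    {"name": "C", "location": "Austin, US"},
    {"name": "D"},                        # no location shared
]
print(unique_locations(friends))          # 2 queries instead of 3
```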

Smarty Template Engine

While PHP provides a nice environment for rapid prototyping, it can also easily become a mess. Using print or echo statements is fine for testing purposes, but it is better to keep code separated from HTML.

This is where template engines come in. When using a template engine, you first collect data from your database and then pass your variables and arrays to the template engine. In your templates you specify where this data needs to be placed. (In summary: PHP collects the data and prepares it into arrays; the template engine consumes these arrays and takes care of the actual design of the website.)
Fortunately, there are free open source template engines such as Smarty.

Take a look at the following code:

{* Add placemark for friends *}
{foreach $friendlist as $friend}
  <Placemark>…</Placemark>
{/foreach}

This is an actual code snippet from the template responsible for generating the KML. This particular section is used to display your friends’ icons on Google Earth.

Within the template I can specify a block (in the example it’s a Placemark) that will be looped. After setting up Smarty and collecting data from the database, I pass the result ($friendlist) to Smarty. The template engine then loops through $friendlist and places the variables in the correct locations.

Another benefit of using a template engine is that you can store the results in a cache. By caching the results we can skip ‘expensive’ MySQL queries when we know the page hasn’t changed. By specifying a cache lifetime (for example 30 minutes), we make sure that Smarty regenerates the page only if the cached copy is older than 30 minutes.


In the previous version of Earth Friends, I embedded the KML file into the header of the website (in JavaScript). While this method works fine for small data sets, it has a large impact on page load and render times as data sets grow.

KML files are plain text files formatted as XML. Besides KML files, Google Earth also accepts compressed KML files, which have the extension KMZ. KMZ files are basically KML files compressed with ZIP.
Tests show noticeable differences in loading times when using KMZ. For example, my test data set in KML was about 693 KB. After compressing this file with zip (max. compression), the size was reduced to 92 KB, around 13% of the original file size! As a result, loading times were reduced significantly.
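Producing a KMZ is as simple as zipping the KML (a Python sketch; by convention the main document inside the archive is named doc.kml):

```python
import zipfile

def write_kmz(kml_text: str, path: str) -> None:
    """Compress a KML document into a KMZ archive.

    A KMZ is just a ZIP file; by convention its main document is
    stored as doc.kml inside the archive.
    """
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as kmz:
        kmz.writestr("doc.kml", kml_text)

write_kmz("<kml>…</kml>", "friends.kmz")
```

Since KML is verbose, repetitive XML, DEFLATE compresses it very well, which is where the 693 KB → 92 KB reduction comes from.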

Where can I find the application?

Ready to try out this application on facebook? Just click the following link to open up Earth Friends: Earth Friends application on Facebook.

After authorizing Earth Friends to access your profile data, it will reload the page and launch the Google Earth browser plugin. If the plugin is not installed, please follow the instructions that are displayed instead. The plugin should work in Windows and Mac OS X.

Important: Make sure you set your own location (with the correct privacy settings) or the curves will not appear!

More information can be found on Earth Friends Community page on Facebook.

How to use this?

To help you get started with Earth Friends, I have created two screencasts which demonstrate how to add Earth Friends to your Facebook account and how to use the application.

Tutorial 1: How to use Earth Friends (View in 720p HD)

Tutorial 2: How to download the KMZ file for Google Earth Desktop (View in 720p HD)

Where are you?

Last weekend I worked on a new project. Since I already had some experience generating KML files for use with Google Earth (the Wikileaks projects!), I started to think of something else I could visualize… perhaps where my friends are?

Because I use Facebook to connect with my friends, I decided to dig into the documentation of the Facebook APIs. Apparently there are multiple ways to get hold of your own and your friends’ information. The most commonly used APIs are the Graph API and FQL. The first lets you retrieve information about a friend or page by loading a specific URL; the second lets you actually send an SQL-like query to retrieve the information.

Since I want to make this a hassle free experience, I decided to make a Facebook application which would use the Google Earth Web plugin. This way, users only need to download the plugin, but everything works just in the browser.

Finding friends on Google Earth

How does it work?

Basically a Facebook app is just a website running on some server. In my case, I’m hosting my application on the same domain as my blog. Since the application is embedded into the Facebook website, normal users won’t notice. The app itself can be written in all kinds of languages but for the sake of simplicity I used PHP.

First we need to connect to Facebook using an API/SDK. This allows us to authenticate and securely connect to the Facebook servers. After establishing a connection, we use FQL to query two things: our friends list and the locations of our friends. Unfortunately the friends list only contains the name of each location, not its geospatial coordinates.

Therefore I had to create a lookup database that translates a city/state/country name into geospatial coordinates (latitude and longitude). This was done by downloading a free database from MaxMind.

Now we have all the data needed to create our KML file on the fly. For now I embed the KML result into the JavaScript header, which seems to work fine for ~200 friends. I still need to do some benchmarks to see how well this scales. A demonstration video of the result can be found below:


Can I try?

Since this project is still a WIP, it is not yet available in the Facebook Application Directory. I’m planning to release this application for free soon.

Wikileaks mirror spread

In my previous post I presented a visualization of the Wikileaks mirrors spread of December 8th.

While it is interesting to see the spread of a certain day, it is even more interesting to see how the spread is evolving over time. By keeping track of updates of the mirror page on Wikileaks, I was able to collect enough data for an animated version of the spread. My current dataset contains a 7 day period covering December 5th to December 12th.

Wikileaks mirror spread

As some commenters pointed out, the edges (curves) in the previous data set didn’t always follow the shortest path. This was due to the (simple & stupid) algorithm I was using to draw the path between two points (basically just the mathematically shortest straight line). In the latest data set (download link at the bottom) this is corrected; a website on great-circle navigation was very useful for figuring out the correct path.
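For the curious, a great-circle path can be sampled by spherical linear interpolation between the two endpoints (a Python sketch of the idea, not the code I actually used):

```python
import math

def great_circle_points(lat1, lon1, lat2, lon2, steps=32):
    """Sample points along the great-circle (shortest) path between
    two coordinates given in degrees, via spherical linear
    interpolation on the unit sphere."""
    def to_xyz(lat, lon):
        la, lo = math.radians(lat), math.radians(lon)
        return (math.cos(la) * math.cos(lo),
                math.cos(la) * math.sin(lo),
                math.sin(la))

    a, b = to_xyz(lat1, lon1), to_xyz(lat2, lon2)
    dot = max(-1.0, min(1.0, sum(p * q for p, q in zip(a, b))))
    omega = math.acos(dot)                  # angle between the endpoints
    points = []
    for i in range(steps + 1):
        t = i / steps
        if omega < 1e-9:                    # endpoints (nearly) coincide
            x, y, z = a
        else:
            s1 = math.sin((1 - t) * omega) / math.sin(omega)
            s2 = math.sin(t * omega) / math.sin(omega)
            x, y, z = (s1 * p + s2 * q for p, q in zip(a, b))
        points.append((math.degrees(math.asin(z)),       # latitude
                       math.degrees(math.atan2(y, x))))  # longitude
    return points
```

Interpolating latitude and longitude linearly instead (the “simple & stupid” approach) produces curves that wander off the shortest route, which is exactly what the commenters noticed.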

* update 21-12-2010 11pm CET *
– Got featured on ReadWriteWeb, Thanks!

* update 23-12-2010 10pm CET *
– Embedded Google Earth web plugin in demo section

Mirror spread

The result of plotting the spread in a line chart:

Growth of the number of mirrors

Top 10 locations

An overview of the spread based on country:

Mirrors spread around the world

  1. Germany: 498
  2. United States: 394
  3. France: 194
  4. Netherlands: 152
  5. United Kingdom: 72
  6. Sweden: 67
  7. Canada: 49
  8. Spain: 47
  9. Switzerland: 36
  10. Russian Federation: 32

Screencast / Video

Short screencast (Watch it in HD 720p, in fullscreen mode)

Online Demo

To view this demo you will need to install the Google Earth Browser Plug-in


Note: You need to move the range marker all the way to the left to make the timeline work:

  • Move the time slider all the way to the right
  • Move the range marker (the small attachment on the left of the time slider) all the way to the left
  • Now you can move the time slider as you want

KML source

If the online demo doesn’t work for you, you can also try it in Google Earth!

Visualizing Wikileaks mirrors

For the past few weeks, Wikileaks has drawn a lot of attention from the media, mostly because of Cablegate.

Whether Julian Assange should be considered a hero (for publishing information) or a terrorist is an open question. Because of threats from governments, companies in the USA started to deny payment, hosting and DNS services to Wikileaks. Soon Wikileaks moved from servers based in the USA to Sweden and Switzerland.

For now Wikileaks seems to be safe; however, they have also started a call for mirrors.

Curious to see who around the world is supporting Wikileaks, I had the idea to visualize the Wikileaks mirrors on Google Earth.

Visualizing Wikileaks mirrors

* update 09-12-2010 8pm CET *
– Added Google Earth web plugin links

* update 10-12-2010 10am CET *
– Got featured on ReadWriteWebNY Times, thanks guys!
– Stay tuned for more updates!

* update 23-12-2010 10pm CET *
– Embedded Google Earth web plugin in demo section

Data mining

In order to get the data, I went to the official Wikileaks website, which lists all mirrors on its mirrors page. I wrote a small PHP script that opens the mirror page and scans the document for URLs; each mirror found is stored in a file. At this moment there are about 1334 mirrors on the website.
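The scraping step can be sketched as follows (Python instead of the original PHP; the regex is a simplification and the real page may need a more careful pattern):

```python
import re

def extract_mirrors(html: str) -> list[str]:
    """Pull mirror URLs out of a page's HTML with a simple regex,
    keeping first-seen order and dropping duplicates."""
    urls = re.findall(r'href="(https?://[^"]+)"', html)
    return list(dict.fromkeys(urls))

# Hypothetical page fragment for illustration:
page = ('<a href="http://mirror-one.example">m1</a> '
        '<a href="http://mirror-two.example">m2</a>')
print(extract_mirrors(page))
```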

Data manipulation

At this point I only have the URLs of the mirrors, but how do I know where these servers are located?

To find out where a server is located, I used GeoLite City, a service from MaxMind. GeoLite City allows you to resolve most IPs to a geospatial location. Of course it doesn’t give the exact location, but usually it can tell in which city the server is located, which is good enough for my purpose.

After obtaining the GeoLite City database (there is a free version!), I wrote a PHP script that first resolves each URL to an IP address (PHP function: gethostbyname()) and then uses this IP address to look up the latitude and longitude.
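The lookup pipeline can be sketched like this (Python instead of PHP; the GeoLite City database is stubbed with a tiny dict of fabricated entries):

```python
import socket

# Fabricated example row; the real GeoLite City database maps
# IP ranges to coordinates.
GEO_DB = {"93.184.216.34": (34.05, -118.25)}

def resolve(hostname: str) -> str:
    """Resolve a mirror's hostname to an IPv4 address
    (the same job PHP's gethostbyname() does)."""
    return socket.gethostbyname(hostname)

def locate_ip(ip: str):
    """Look up an IP in the (stubbed) geo data, returning
    (latitude, longitude) or None."""
    return GEO_DB.get(ip)
```

In the real pipeline, locate_ip would query the MaxMind database instead of a dict.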

The last step was converting the data into the KML format (and adding some artificial altitude information) for use with Google Earth.
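The conversion step boils down to emitting one Placemark per mirror (a Python sketch; note that KML lists longitude before latitude):

```python
def placemark(name: str, lat: float, lon: float, alt: float = 0.0) -> str:
    """Render one KML <Placemark> with a point at the given
    coordinates. KML orders coordinates as lon,lat,alt."""
    return (f"<Placemark><name>{name}</name>"
            f"<Point><coordinates>{lon},{lat},{alt}</coordinates></Point>"
            f"</Placemark>")

print(placemark("mirror", 52.37, 4.89))
```

The artificial altitude mentioned above would simply go into the third coordinate.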

Data visualization

Below are some of the early results.

Currently the main server seems to be located in Sweden, and when we view the data in Google Earth we can see that a lot of mirrors are actually located in Europe.
Visualizing Wikileaks mirrors

Red pin: Wikileaks server
Yellow pins: Wikileaks mirrors
Green lines: Connections from the Wikileaks server to a Wikileaks mirror

Mirrors in Europe

Wikileaks mirrors in the USA
Mirrors in the USA

Since some of the servers are at the same location (probably sharing the same data center), we can click on a pin and it will expand to show all mirrors located at this data center.

Mirrors in Sweden

Screencast / Video

Short screencast (Watch it in HD 720p, in fullscreen mode)

Online Demo

To view this demo you will need to install the Google Earth Browser Plug-in



KML source

Want to try it out locally on your computer?

No problem! Here is how:

Research projects

Since my graduation back in 2008, I have been working on different research projects at universities. Some of them got published, others never left the ‘prototype’ stage.

Recently I started to organize the footage I made and collected over the years and decided to put some of it online. To keep an overview, a new section has been added to the site: Research Projects. It includes most of the projects I have worked on at the University of Amsterdam (UvA) and Harvard University.

Each project includes a short description, pictures and a video. Enjoy!

(Click the project title for more information)

Interactive Networks

This project introduces the Interactive Network concept and describes the design and implementation of the first prototype.


Twilight is an interactive graph exploration tool for multi-touch systems. It provides a flexible environment that can be used to visualize and analyse graphs and networks found in computational science.


This project involves the visualization of large phylogenetic tree structures such as those found in the Tree of Life. By combining high-performance computer graphics with multi-touch interaction methods, this project will create an interactive exploration environment that allows us to view the data interactively and in different representations. This research should lead to a better understanding of the evolutionary tree.


INVOLV is a research project that combines cutting-edge interactive technology with emerging information visualization techniques to create innovative explorations for large hierarchical data sets.


This application is designed to be a collaborative activity to teach undergraduate students about phylogeny and to prevent misconceptions about evolution. The system guides the students through a set of steps required to construct a phylogenetic tree based on morphological and DNA sequence data.

Since this project is still active, more media content will be released in future!

* update: 27 November 2010 *

Old footage from personal projects: Touch tracer and Real time fluid dynamics running on the UvA-MTT

Touch Tracer v0.3

Real time fluid dynamics


Earlier this month Adafruit started a contest for the first person to hack the Kinect, Microsoft’s latest gadget for the Xbox 360. The contest was won by Spanish hacker Héctor Martín Cantero, who published his proof of concept only 3 hours after the European launch (last week).

So what exactly is the Kinect? Is it similar to the PS2 EyeToy?

Actually it is much more advanced than the PS2 EyeToy. Unlike the EyeToy, the Kinect contains two cameras: an RGB camera used for ‘normal’ images and a depth camera used to figure out the position of objects in its view. A nice explanation (and an overview of the components) can be found at iFixit.

Because Adafruit required the contest winner to open-source his code, others can now enjoy hacking their Kinect as well! Currently the code is available on GitHub under the name libfreenect (IRC: #openkinect @ freenode):

Getting the code to compile might be a bit tricky, as it involves CMake to create the project files. Running it on Linux is trivial (just make sure you’ve installed all dependencies); on Windows and Mac OS X there are some extra steps involved to compile the library and demo application.

For the Windows version you will need libusb, GLUT and pthreads. Also, don’t forget to select the win32 branch when you check out from git. After creating the Visual Studio project files you will need to manually fix the paths to the include and lib directories (the current CMake file is broken).

If you can’t be bothered, I have compiled a Windows binary (VS2008):

Before running the binary, make sure you’ve installed the drivers from GitHub (first the Xbox motor, then camera and audio). To control the motor in the Kinect, you can use this code: NUI_Motor.cpp

My experiments

Last Friday I actually bought myself a Kinect. In the video below you can see the Kinect running on my machine. For now it just retrieves both camera streams and puts them on display.
Basically the library (libfreenect) provides you with two images through callbacks: one depth image and one RGB image. The depth image maps depth to color; in this video, for example, red/white means something is really close to the camera and green/blue means it is further away.
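Such a mapping can be sketched like this (my own simplified palette in Python, not libfreenect’s actual color table; raw Kinect depth values are 11-bit, i.e. 0–2047):

```python
def depth_to_rgb(depth: int, max_depth: int = 2047) -> tuple[int, int, int]:
    """Map a raw 11-bit Kinect depth value to a color: near objects
    become red, far objects blue (a simplified two-color palette)."""
    t = min(max(depth / max_depth, 0.0), 1.0)   # 0 = near, 1 = far
    return (int(255 * (1 - t)), 0, int(255 * t))

print(depth_to_rgb(100))    # something close to the camera: mostly red
print(depth_to_rgb(2000))   # something far away: mostly blue
```

A real viewer applies a function like this to every pixel of the depth image in the callback before drawing the frame.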

Hopefully I will have some nice apps later this month :).

Kinect hacks by others

3D mapping by Oliver Kreylos

ofxKinect 3D draw by Memo

Multitouch hack by Florian Echtler

Kinect Point Cloud by cc laan


P.s.: Before I forget, Matt Cutts (Google) started another contest. Check it out!


Seth Sandler, a good friend of mine, released his new website, which he describes as “a social platform for people that are sparked (inspired) by creative and emergent technology”.

Personally I like how he organized the site. Basically he created a portal that lets you rapidly find (multitouch) applications, open source programming frameworks, and community projects, all in one single place!

Check it out and don’t forget to add your own projects as well!

Interactive Tabletops and Surfaces 2010

If you’re doing research on interactive tabletops and surfaces, you might want to check out this year’s ITS 2010 conference. This year it will be hosted in Saarbrücken, Germany!

Check out the details below (more information after the break).


5th Annual ACM Conference on Interactive Tabletops and Surfaces 2010

ITS 2010
November 7-10, 2010
Saarbrücken, Germany

The Interactive Tabletops and Surfaces 2010 Conference (ITS) is a
premiere venue for presenting research in the design and use of new
and emerging tabletop and interactive surface technologies. As a new
community, we embrace the growth of the discipline in a wide variety
of areas, including innovations in ITS hardware, software, design, and
projects expanding our understanding of design considerations of ITS
technologies and of their applications.

Building on their success in previous years, ITS again features Papers
and Notes presentations, as well as tutorials, posters, and
demonstrations tracks. For the first time, ITS 2010 will also include
a doctoral symposium.

ITS 2010 will bring together top researchers and practitioners who are
interested in both the technical and human aspects of ITS technology.
On behalf of the conference organizing committee, we invite you to
begin planning your submissions and participation for this year’s
conference.

The use of interactive surfaces is an exciting and emerging research
area. Display technologies, such as projectors, LCD and OLED flat
panels, and even flexible display substrates, coupled with input
sensors capable of enabling direct interaction, make it reasonable to
envision a not-so-distant future in which many of the common surfaces
in our environment will function as digital interactive displays. ITS
brings together researchers and practitioners from a variety of
backgrounds and interests, such as camera and projector based systems,
new display technologies, multi-touch sensing, user interface
technologies, augmented reality, computer vision, multimodal
interaction, novel input and sensing technologies, computer supported
cooperative work (CSCW), and information visualization.

The intimate size of this single-track symposium provides an ideal
venue for leading researchers and practitioners to exchange research
results and experiences. We encourage submissions on (but not limited
to) the following topic areas as they relate to interactive tabletops
and surfaces:

* Applications
* Gesture-based interfaces
* Multi-modal interfaces
* Tangible interfaces
* Novel interaction techniques
* Data handling/exchange on large interactive surfaces
* Data presentation on large interactive surfaces
* User-interface technology
* Computer supported collaborative systems
* Middleware and network support
* Augmented reality
* Social protocols
* Information visualizations
* Interactive surface hardware, including sensing and input
technologies with novel capabilities
* Human-centered design & methodologies