Well, just an idea: an application which 'reads' the OSD text in the video? I mean like an OCR, which would then be able to do anything with that data: for example, display the position in Google Earth, display the speed and altitude on a virtual cockpit, or save everything for post-flight analysis; draw the polar curve of the plane, compare different propellers or motors, know the mAh per vertical meter, etc.

This way, it can be adapted to any OSD without having to know a specific OSD supplier's protocol.

If the software is good, more people will buy OSDs (from any manufacturer), so this could be good for everyone.

Interesting idea that I had been thinking about some time ago too. Guess it's a bit like what the EZOSD does via one of the audio lines. If the concept you are talking about were to work, I can definitely see the benefits, although there are numerous challenges to meet.

This is going to be very font specific. Each manufacturer's output will have to be tweaked, and I suspect each camera/VTx/VRx combo will need tweaking too, because these characters are not going to appear the same on everyone's screen. I'm impressed with what you've got so far, but it's going to be a nightmare to try and get it to work on everything.

First, congratulations on your ground control station; I can imagine the amount of work such a project represents.

For the OSD decoder, yes, it will be font specific, but I'll include the fonts of all current OSDs in it, so you won't have to.

But the interest resides in the fact that it will be the first 'telemetry system' compatible with all suppliers.

I made it quite insensitive to the exact shape of the characters: it's not looking for a precise pixel color at a given position, but only choosing the most 'similar' candidate among the 10 digits.

This way the characters can be more or less blurred, to take this example, or have more or less contrast, luminosity, etc., and it will still work.
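As a sketch of what 'choosing the most similar candidate' could look like (my own illustration, not the actual code; I'm assuming a simple sum-of-absolute-differences score over the 15x15 character cell):

```python
import numpy as np

def match_digit(cell, templates):
    """Pick the most similar digit template for a character cell.

    cell: 2-D grayscale array (the character region cut from the frame).
    templates: dict mapping digit -> reference 2-D array of the same size.
    Returns the digit whose template has the smallest summed absolute
    difference, i.e. the 'most similar candidate' rather than an exact match.
    """
    scores = {d: np.abs(cell.astype(float) - t.astype(float)).sum()
              for d, t in templates.items()}
    return min(scores, key=scores.get)

# Toy demo: fake 15x15 templates, and a 'cell' that is template 7
# with a small brightness shift (simulating camera/link degradation).
templates = {d: np.full((15, 15), d * 25.0) for d in range(10)}
cell = templates[7] + 3.0
print(match_digit(cell, templates))  # still recognized as 7
```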

I made tests with different videos taken from Vimeo (meaning already compressed, so of average quality). They were obviously taken with different equipment, but it was possible to decode them all (except a very blurry one) without changing the parameters. The only conditions are to have the same video resolution (640x480) and to use the right character set (in the current system, the EagleTree one).

To give another example, the characters are on a transparent background (you see the image around and in the holes of the digits), but in the current algorithm all the pixels of the character are considered (a square 15 x 15 pixels wide). This means that normally it should only decode correctly characters that are on a black background, like the reference digits I chose. But if we have another background, it continues to work, because the relative probability between the digits is conserved:

If, for a given value to decode, we had with a black background:
0 : 86% probability it is a 0
1 : 25% probability it is a 1
2 : 30%
3 : 42%
etc.

then with another background every score changes (say they all drop a bit), but the ranking between the digits is conserved, so the most probable digit is still the same. The same applies to other video transformations that can appear, such as camera / TX-RX / video grabber card noise, blur, contrast, etc.
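A toy way to check this ranking argument (purely my illustration; I'm assuming a zero-mean normalized correlation as the similarity score, which may differ from the decoder's actual metric): any brightness/contrast change of the form a*x + b with a > 0 leaves such a score exactly unchanged, so the winning digit is preserved.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two patches.

    Invariant to affine intensity changes a*x + b (a > 0), which models
    a different background brightness or a contrast shift.
    """
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

# Random 15x15 'templates'; the observed cell is template 3 after a
# contrast halving and a large brightness offset.
rng = np.random.default_rng(0)
templates = {d: rng.random((15, 15)) for d in range(10)}
cell = 0.5 * templates[3] + 40.0
best = max(templates, key=lambda d: ncc(cell, templates[d]))
print(best)  # the most probable digit is still 3
```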

This holds because we only care about a reduced set of symbols (the 10 digits), so it's relatively easy to find the right one.

I mention in the video that the EagleTree character set is not the easiest one, so it's a good test: some characters, such as the 6 and the 8, differ only by a few pixels, while other character sets have bigger differences and, most importantly, larger characters.

What about adding a contrast filter to each frame before processing it to identify the numbers with the 'OCR'?

You may get even better results.

propaway looks quite cool

You're right, right now it's really the worst case: no prefiltering, and the color information isn't used to remove the background (the image is converted into gray levels and then analyzed). So there's room for improvement!
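For instance, a simple percentile-based contrast stretch could be applied to each grayscale frame before matching (a hypothetical prefilter, just to illustrate the suggestion above; the percentile values are my own arbitrary choice):

```python
import numpy as np

def stretch_contrast(gray, lo_pct=2, hi_pct=98):
    """Linearly stretch a grayscale frame so the lo/hi percentiles map to 0/255.

    Using percentiles instead of min/max makes the stretch robust to a few
    outlier pixels (noise specks from the video link or grabber card).
    """
    lo, hi = np.percentile(gray, [lo_pct, hi_pct])
    if hi <= lo:
        return gray.copy()  # flat frame: nothing to stretch
    out = (gray.astype(float) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)

# Demo: a washed-out frame whose values only span 100..140
frame = np.linspace(100, 140, 256).reshape(16, 16)
enhanced = stretch_contrast(frame)
print(enhanced.min(), enhanced.max())  # full 0..255 range after stretching
```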

Well done Ben. What do you plan on doing with this software once/if you get it perfected?

Well, the global project is the creation of a virtual cockpit, I mean a 3D cockpit around you, coherent with the head/goggles movements.
The project includes the creation of telemetry hardware and a means of sending telemetry data over the video channel (on several video lines at the bottom of the screen). The thread for this project is here: http://www.rcgroups.com/forums/showthread.php?t=1312788

In the meantime, I thought I could distribute the protocol I'm using to encode the telemetry data in the video, so that OSD vendors can easily (just a software upgrade) encode their data for use in the virtual cockpit (to display on the instrument panel, virtual GPS, etc.). Or for other software to come.
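The 'data on the bottom video lines' idea can be sketched as follows (a toy model entirely of my own making; the line width, cell size, and black/white signalling are assumptions, not the actual protocol):

```python
import numpy as np

LINE_WIDTH = 640   # assumed active pixels per video line
CELL = 8           # pixels per bit, for robustness to blur and noise

def encode_line(data: bytes):
    """Render a byte string as black/white cells on one video line.

    Each bit becomes a CELL-pixel-wide block: 255 (white) for 1, 0 (black)
    for 0. The remainder of the line is padded with black.
    """
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    line = np.repeat(bits * np.uint8(255), CELL)
    return np.pad(line, (0, LINE_WIDTH - line.size))

def decode_line(line, nbytes):
    """Recover the bytes by averaging each cell and thresholding at 128."""
    cells = line[:nbytes * 8 * CELL].reshape(-1, CELL).mean(axis=1)
    bits = (cells > 128).astype(np.uint8)
    return np.packbits(bits).tobytes()

# Round trip through a 'noisy' video line (uniform brightness lift)
msg = b"ALT=0123"
line = encode_line(msg)
noisy = np.clip(line.astype(int) + 20, 0, 255).astype(np.uint8)
print(decode_line(noisy, len(msg)))  # b'ALT=0123'
```

Averaging over each cell before thresholding is what buys the tolerance to blur: a few corrupted pixels inside a cell don't flip the bit.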

Then I realized that there is little chance OSD vendors will adapt their stuff or share their own telemetry protocols. So I decided to see if we could grab the data from the video... and here we are.

The basic equation is that classic embedded hardware such as an OSD controller or autopilot uses a microcontroller of maybe 30 MIPS or a little more.
And a modern computer has 147,600 MIPS (OK, that was for the Intel Core i7 Extreme Edition), plus the huge computing power of the graphics card. That's almost 5000 times more computing power; what can we do with all this? Why continue to use simple black-and-white video overlays when we can use 3D graphics in millions of colours? Then it's a question of having ideas, but IMO we have a lot of very good OSD and telemetry systems, and not many impressive things on the software side.

OK, it's maybe less easy to have a computer in the field, and it can be less reliable than a simple OSD, etc. But we're seeing the arrival of powerful intermediary devices such as the iPad, other tablet PCs and laptops. Probably in the next few years we'll have a computer as powerful as our desktops, completely embedded in the goggles...