Success: Robot36 encoder works…

Well, it works! Using the information in the paper I linked earlier in the day, I spent some time and managed to code up a Robot36 SSTV encoder. Above, you see the image decoded by Multiscan on my Macbook. Here’s a link to the .wav file:

Addendum: Some time ago, I noted that I was having difficulty with the Audio Shield that I built as an audio output for my Arduino. Yesterday, I finally got around to sorting out the (as it turns out) software issue, and it began to play back .wav files from an SD card. So, I took the WAV file I generated, put it on the card, and then used the Arduino to play back the file into my Macbook. You can see it decoding in the video below. The timing of the Arduino is apparently not that precise, because the image has a pretty strong skew, but not beyond what Multiscan can automatically correct. I’ll perhaps figure out how to do better with this later.

https://www.youtube.com/watch?v=VMULetCtRVc

Addendum2: Oddly enough, one year ago today I linked to DJ1YFK’s webpage, where he has a Martin 1 SSTV encoder. Which reminds me: I forgot to put a link to my code. It’s ugly and inefficient, and I can think of a couple of things that are probably wrong with it, but it works. Check it out here.

Addendum3: Link above was broken, so I uploaded it to github’s “gist” service.

I found your website and it is very useful for my project. Thank you very much for this resource.

I have a problem to ask, if you can help. I found that the time per pixel in an image scanline is shorter than one cycle of the audio frequency: for example, 0.25 ms (250 µs) per pixel. How do you modulate a 1500 Hz tone in 0.25 ms? One cycle of 1500 Hz needs 666.6 µs (0.6666 ms).

I’m not 100% certain I understand your question. The scanline time for Robot36 is 88 ms for the Y data. There are nominally 320 pixels per scanline, which means that each pixel takes up only 0.275 ms. As you pointed out, this is not a full cycle of 1500 Hz (it won’t be a full cycle of any frequency less than about 3.6 kHz), but we don’t _have_ to output a full cycle. The code in ScanlinePair just makes sure we end up with the right number of samples, and makes sure (in a fairly crude and suboptimal way) that each pixel in the image makes a contribution to some samples in the output. The result isn’t easy to explain without more math and signal processing than I feel comfortable with, particularly in a comment, but it works out fine for slowly varying signals; it has more difficulty with lots of high frequency changes. A more detailed analysis of the mode would be fun to do. I might give it a try sometime.

One way to think about the demodulator is to imagine that you are looking at a window of samples around your sampling time. The question you want to answer is “what frequency do I think he was trying to emit _at this moment_?” If your window is wide (you have lots of samples) and you have a very slowly varying (or constant) signal, you can probably do a really good job. But a wide window doesn’t give you much time resolution: it includes information from other nearby pixels. In most realistic images, nearby values are fairly correlated with the pixel you are looking at, but as they get further and further away, the information you gain is less and less useful. At a certain point, you don’t get any extra use from having those extra samples.

You can actually make a demodulator which attempts to just look at windows that contain two samples. (You can think of this as taking your complex input signal and solving for the angle that rotates one sample onto the next, accumulating the frequency.) In fact, my first demodulators did precisely this. The result can be noisy and can alias, but it works surprisingly well. In each case, you are estimating the frequency just on the basis of two adjacent samples. Not ideal, but it works…

Comment from Natthapong, 8/30/2012 at 9:52 pm

Thank you very much for your explanation!

Comment from WS4E, 6/5/2013 at 7:00 pm

Where did the robot36.c code go? I would love to use this as a starting place for a field day project.

Okay, I added a fancy embedded plugin thing above, where you can view and download the code. If you are a programmer and know how to use “git”, then: “git clone https://gist.github.com/5723053.git” will grab the code automatically.


About Myself…

I'm Mark VandeWettering, husband, proud father of a U.S. Airman, technical director at Pixar Animation Studios, telescope maker, computer science and math aficionado, an Extra class radio amateur licensed as K6HX, and all around geek. I hope you enjoy my website.