Posted
by
timothy
on Thursday December 26, 2013 @12:10PM
from the wax-cylinders-all-the-way dept.

New submitter mni12 writes "I have been working on a Bayesian Morse decoder for a while. My goal is a CW decoder that adapts well to different ham radio operators' rhythm, sudden speed changes, signal fluctuations, interference, and noise, and that decodes Morse code accurately. While this problem is not as complex as speaker-independent speech recognition, there is still a lot of human variation where machine learning algorithms such as Bayesian probabilistic methods can help. I posted a first alpha release yesterday, and despite all the bugs one brave ham has already reported success. I would like to collect thousands of audio samples (WAV files) of real-world CW traffic captured by hams, via some sort of online system that would allow hams not only to upload captured files but also to provide relevant details such as their callsign, date and time, frequency, radio/antenna used, software version, comments, etc. I would then use these audio files to build a test library for automated tests to improve the Bayesian decoder's performance. Since my focus is on improving the decoder, not on building a digital audio archive service, I would like suggestions for open source (free) software packages, online services, or any other ideas on how to effectively collect a large number of audio files without putting much burden on alpha/beta testers submitting their captures. Many available services require registration and don't support metadata or aggregation of submissions. Thanks in advance for your suggestions."

Nah, amateur radio transmissions are some of the most boring conversations known to man (and I am a ham radio operator). No sex, drugs and rock and roll - no eavesdropping. Besides, we're mostly harmless.

Back to the topic. Because the bands are partitioned, i.e., there are frequencies that are just CW (and others that are phone or digital or whatever), it would seem an easy job to just record a band for a while to grab some samples. Use a software-defined receiver (to allow for easy scripting) and work the grey line [qsl.net] in your area. Even if your software isn't tuned well yet, I would hazard a guess that it is smart enough to detect CW vs. radio noise. Use that to start and stop the file. You probably don't need WAV; that's sort of overkill for CW. Even cruddy ol' MP3 ought to give you more than enough headroom for further processing.
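For the CW-vs-noise gating, a single-bin Goertzel detector would probably do; here's a sketch (the 700 Hz sidetone and the power threshold are assumptions you'd tune against your actual noise floor):

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Single-bin DFT power at `freq` via the Goertzel algorithm."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / sample_rate)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def has_cw_tone(block, sample_rate=8000, tone=700.0, threshold=1e3):
    # Threshold is a placeholder; calibrate against recorded band noise.
    return goertzel_power(block, sample_rate, tone) > threshold
```

Run it over short blocks (say 0.1 s) and open/close the recording file on the transitions.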

150 kHz to roughly 2 GHz does happen to encompass the ham bands. There is a small region around 1.27 MHz where the local oscillator won't lock, but that doesn't jibe with an amateur band (IIRC, too lazy to look it up).

It's not free. It's a low volume device, so it costs a bit (125 GBP). Life is hard.

A simple phase-locked loop circuit is generally adequate to discriminate between tone/no-tone and you can buy them for pennies.

Once you have that done, tie your input data line to the PLL output and measure the widths of the tone pulses. A dash is going to average about 3 times as long as a dot, the inter-tone spaces are going to be about 1 dot-width with inter-character spacing being 1 dash-length. Actual dot and dash timing can be expected to vary from about 5WP
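That width-measuring step can be sketched as follows (the 2x cutoff is a rough rule of thumb given the ~3:1 dash-to-dot ratio, and it assumes the stream contains at least one dot):

```python
def classify_pulses(durations_ms):
    """Split tone-pulse widths (from the PLL output) into dots and dashes.
    Crude: takes the shortest pulse seen as the dot length and thresholds
    at twice that; a real decoder would track the estimate over time."""
    unit = min(durations_ms)
    return ['-' if d > 2 * unit else '.' for d in durations_ms]
```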

Totally agree. Ham radio has become one of the most boring things one can do in one's spare time.
Years ago I was into it, and I was developing some advanced DSP stuff (sort of what is now known as software-defined radio, but the algorithms I was using were different from, and better performing than, those used by radio amateurs). As I started leaking some details about what I was developing, I suddenly realized that radio amateurs were not interested in experimenting with new technologies: they just wanted to buy hi

I suddenly realized that radio amateurs were not interested in experimenting with new technologies

I'm not a radio amateur but I certainly am interested in experimenting with new technologies. The other day I was wondering whether it would be possible to combine a GPS unit, a PRNG, frequency hopping, exotic modulation schemes and SDR into a low-bandwidth, virtually undetectable means of clandestine communication. But I suspect that this is not exactly what amateurs are allowed to do anyway.

I'd recommend using e-mail. It's open to everyone to use, and they probably already have registered one. They can provide any and all metadata in the free-form text field known as "body", and it even supports multiple file attachments!

But it also means getting the metadata as free-form text, which will likely need interpreting before processing. An HTML form, on the other hand, provides a comparatively standardised data format. It also provides an easy file-upload facility.

You know, you could hardly pick a less controversial topic than amateur radio. If you want to get everyone all wound up about your favorite boogeyman, at least start off on one of the more irritable subjects we tend to yammer on about. The level of angst here is likely to be too low to channel.

I agree. Go to a WebSDR like http://websdr.ewi.utwente.nl:8901/ [utwente.nl] and automate the process. You can get a large amount of OTA signals to examine, in the correct ratios, styles and weightings. This requires you to decide whether the signal under test is CW, but that's part of your algorithm anyway.
n6gn

I already have many samples of CW contest traffic recorded from my Flex3000. Because most of it is computer-generated, the decoding challenge relates mostly to signal-to-noise ratio and interference, not so much to the personal rhythm variance you get when people are using a straight key.

The idea presented was to collect many different kinds of CW samples. I am looking more for variation than uniformity. Having an adaptive decoder algorithm that adjusts itself automatically to all kinds of CW is a challenge.

Picasa came to mind - this service supports audio files and, last time I looked, allows you to share stuff. Although I should add that it has been a while since I looked at this service. Compliments on your clearly written post... days of /. gone by

So.... I guess you've never heard of Skimmer, the various remote receivers out there, and the SDRs that people are using to record large swathes of shortwave spectrum? You know people have been working on the problem for a while, as in decades? Skimmer decodes multiple streams of Morse at once. Wake me when your stuff outperforms Skimmer.

I am using CW Skimmer fairly actively - in fact I have been corresponding with Alex, VE3NEA, who wrote CW Skimmer. He gave me the idea of pursuing a Bayesian framework [blogspot.com] as I have been progressing in developing a well-working CW decoder. The main difference here is that I am focusing on improving FLDIGI [w1hkj.com], which is open source software, while CW Skimmer is a commercial package. I do agree with you that CW Skimmer does a great job decoding multiple streams simultaneously. Once the algorithm works, decoding multiple streams [blogspot.com] is not that difficult.

So it isn't that hard to record 200 kHz wide segments of an HF band using something like this: http://rfspace.com/RFSPACE/SDR-IQ.html [rfspace.com] How many hours of test audio do you need? If you had a few volunteer owners of SDRs do some recording for you, you would have a large test base quickly. Unless I completely misunderstand the scale of the test base that you are going after. But it seems to me that 100 or 200 hours should not be difficult to get from volunteers -- and 200 hours times 200 kHz is a lot of CW a

I have two SDR receivers myself and use them actively. The problem is not the volume of data but having a set of data with enough variability to find the limits where the decoder stops working correctly. I integrated the decoder into FLDIGI in the hope that other hams will try it out and report back [eham.net] when they observe conditions where the decoder stops working.

So to get this variety, what you really need is a network of volunteers with SDRs set up in listening posts around the globe. I think you'll get the most participation by making a solution as turn-key as possible for volunteers. Perhaps what you could do is to wrap up a software package that you could distribute to all these people. It could install the SDR drivers, and run the capture program on the appropriate frequencies. Set up a server where your volunteers can upload their captures. Set up the ca

Synthetic files at least have the advantage of automatically generated expected-results files to go with them. Coming up with a good noise model seems to be the hard part. Perhaps recording off-the-air noise and summing it into clean, computer-generated CW receive audio is a way to get a reasonable start on a channel model with a more realistic HF noise model. Fading is easy enough to add to a propagation model, which could be extended to auroral flutter and backscatter. Of course, you also need a sen

You can collect a lot of Morse code traffic in the wild. Just get yourself a good HF receiver with some filtering (notch filter and a DSP). Set up a dipole as your receive antenna, with each leg cut to 1/4 the wavelength of the band you will be monitoring. Here is a handy band plan [arrl.org] to guide you to where you will be able to find Morse code, which is normally called CW, for continuous wave communications.

I recommend this over any attempt to collect samples directly from hams. I know I do morse code differently when using the

Write an article and submit to ARRL's QST [arrl.org] and join and post to the AMSAT mailing lists [amsat.org] as there are quite a few keys there as well. Talk to your local amateur radio club and get the word out and you might even talk to your area coordinator.

Great suggestion - thank you! Looks like the site requires registration but it has been created exactly for this kind of audio related research. It has even APIs to access the data. I will investigate this a bit further.

Since this is a binary-signal problem, another approach to consider would be Markov Random Fields (MRFs), which could be used as an initial de-noising pass or even as a full decoder if you set the cost functions right.
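As a minimal sketch of that idea, iterated conditional modes over the demodulated on/off sequence (the unary and pairwise costs here are purely illustrative, not tuned):

```python
def icm_denoise(obs, unary_cost=1.0, pair_cost=0.6, iters=5):
    """Iterated conditional modes on a 1-D binary chain (simple MRF).
    Unary term penalizes disagreeing with the observation; pairwise
    term penalizes label changes between neighbours, which suppresses
    isolated noise flips shorter than a dot."""
    x = list(obs)
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            best, best_e = x[i], float('inf')
            for v in (0, 1):
                e = unary_cost * (v != obs[i])
                if i > 0:
                    e += pair_cost * (v != x[i - 1])
                if i < n - 1:
                    e += pair_cost * (v != x[i + 1])
                if e < best_e:
                    best, best_e = v, e
            x[i] = best
    return x
```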

Your idea of user adaptation is pretty reasonable, but my guess is that the primary thing that matters is an overall speed scaling. IOW, for good decoding you probably just need to normalize the average letter rate between users.

Thanks @SnowZero. I have looked at HMMs and in fact I wrote a simplistic decoder version using RubyHMM just to learn more about how HMMs really work. You would be surprised at the mathematical rigor of the original thesis [archive.org]. Many of the ideas are very relevant today, just much easier to implement with the current generation of computers.

The current decoder actually uses a Markov model - the software calculates conditional probabilities based on a 2nd-order Markov symbol transition matrix. The framework itself allows to
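As a toy illustration of the transition-matrix idea (tiny made-up corpus; the real model is of course trained on far more text):

```python
from collections import defaultdict

def train_trigram_model(corpus):
    """Count 2nd-order character transitions and return P(c | prev2).
    '^' pads the start of a word, '$' marks its end."""
    counts = defaultdict(lambda: defaultdict(int))
    for word in corpus:
        padded = '^^' + word + '$'
        for i in range(2, len(padded)):
            counts[padded[i - 2:i]][padded[i]] += 1
    def prob(prev2, c):
        total = sum(counts[prev2].values())
        return counts[prev2][c] / total if total else 0.0
    return prob

# Hypothetical mini-corpus of ham abbreviations
prob = train_trigram_model(['cq', 'cq', 'call', 'code'])
```

Given noisy dit/dah evidence, these conditional probabilities bias the decoder toward likely continuations.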

You might also like to have a look at this paper on using HMMs to convert a (continuous) chromatographic signal into (discrete) base pairs "calls" during DNA sequencing: Link [mit.edu]. The problem seems similar to the one you are working on, in many respects.

I did listen to parts of the conversation between WOOH, NMN, LJKR and other boats in the vicinity. Scary indeed. BTW, FLDIGI had a hard time decoding this correctly, partly because the signal quality was so poor. Thanks for sharing.

Obviously you realize there are differences in how people send CW. While I applaud your drive to make a smarter decoder - the reality is that you need to make sure it works on live traffic. So in that respect, you should hook it into some kind of SDR software like HRD, or even make your own that can decode multiple streams of CW. If you don't have a radio, I suggest maybe a SoftRock receiver? 1. It gives you actual live conversations with all the mistakes and alterations. Not everyone uses computer genera

@jfalcom -- I do realize the differences between live traffic and recordings. The example links I provided above demonstrated a live feed of the ARRL W1AW code bulletin on 12/24 at 3.58105 MHz, which I decoded using an experimental version of FLDIGI v3.21.75 connected via a SignaLink USB to an Elecraft KX3 radio.

However, there is a difference between debugging software and listening to live feeds. I posted this question to figure out ways to get a test set of boundary conditions captured by other hams so that I cou

This is a problem that affects all kinds of machine learning. It is always very difficult to collect enough samples to teach good recognition skills, whether it is handwriting, speech or, as in this case, Morse code. I'm wondering if some open library that this kind of thing could be uploaded to might already exist; if not, it might be a good idea.

Internet access is pretty cool, irrespective of the delivery method, but get back to me on how well that works without power (storms or other disasters). If you have a ham license and a battery-backed transceiver, you can communicate easily over long distances. Because of its narrow bandwidth, CW works very well.

During a recent contest, a ham in the northeast U.S. communicated with Wake Island using CW and four watts of power. Pretty impressive.

Couldn't you just create a computer generator for this audio that uses a PRNG to intersperse pauses and other variations? You could create a much wider variety of conditions to put your parser through by controlling how much variation there is in the length of each beep, the pauses between beeps, and the pauses between letters. You could create a really bungling case or a perfect case, and anything in between. Why not just do that?
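Something along these lines, say, with Gaussian jitter on each element (the jitter fraction and the two-character code table are just placeholders):

```python
import random

MORSE = {'c': '-.-.', 'q': '--.-'}  # tiny excerpt of the code table

def keyed_durations(text, wpm=20, jitter=0.15, rng=random):
    """Emit (tone_ms, gap_ms) pairs with Gaussian timing jitter.
    Uses the PARIS convention: dot = 1200/wpm ms. The jitter fraction
    is a guess; crank it up to simulate a sloppier fist."""
    dot = 1200.0 / wpm
    out = []
    for ch in text:
        code = MORSE[ch]
        for i, sym in enumerate(code):
            tone = dot if sym == '.' else 3 * dot
            gap = dot if i < len(code) - 1 else 3 * dot  # inter-char gap
            wob = lambda t: max(1.0, rng.gauss(t, jitter * t))
            out.append((wob(tone), wob(gap)))
    return out
```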

Great idea, and in fact I have been using this strategy to create a number of different synthetic test cases. I have synthetic audio files with various signal-to-noise levels, different speeds and so on. The variable timing (rhythm) is more difficult to simulate, as there is no clear distribution (like a Gaussian) to use as a model. Only if you aggregate over many users and normalize by speed do you start to observe some sort of Gaussian distribution in dits and dahs. I wrote about this problem wh

First off, thank you Slashdot UI, for having me retype this whole thing again.

I did this back in the early 90s with my Amiga. The hardware interface consisted of a transistor, filter capacitor, and variable resistor (I don't remember the exact design I came up with) to interface to the Amiga's joystick port (which used standard Atari controller wiring). I wrote the software decoder in Blitz Basic, and it used a scrolling window of 20-30 seconds over which it would average the pulses to determine the current dit and dah length. Any pulses deviating significantly from the current dit and dah length indicated a likely change in operator (one station finished keying and the other began their response), and the window would be repositioned using that as the edge point.

The system worked extremely well, and was far more accurate than my AEA PK-232MBX when it came to decoding morse code. It decoded most anything I threw at it. Decoded output was sometimes delayed until it had received enough code to determine the current transmission rate and style, and then it would output a chunk of text at one time as it decoded the whole buffer at once. Then it would output real-time until a deviation in dit-dah lengths had been exceeded and the window repositioned so the dit and dah length could be recalculated.
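In modern terms, the sliding-window bookkeeping might look something like this (the window length and change threshold here are guesses, not the values from my old Blitz Basic code):

```python
from collections import deque

class DotTracker:
    """Rolling estimate of the current dot length from recent pulse
    widths; a pulse far outside the model hints at an operator change."""
    def __init__(self, maxlen=50, change_ratio=2.5):
        self.widths = deque(maxlen=maxlen)
        self.change_ratio = change_ratio

    def dot_estimate(self):
        if not self.widths:
            return None
        # Mean of the shorter half approximates the dot length.
        short = sorted(self.widths)[:max(1, len(self.widths) // 2)]
        return sum(short) / len(short)

    def add(self, width):
        """Record a pulse; returns True if it suggests a new operator."""
        est = self.dot_estimate()
        self.widths.append(width)
        if est is None:
            return False
        return not (est / self.change_ratio < width < 3 * est * self.change_ratio)
```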

There are two discrete problems to address, and it sounds like you're lumping them together, which may not be a good way to proceed. First is the audio filtering / notch filter, which tries to isolate a specific Morse code signal from other transmissions in the adjoining frequencies and general background noise. The other is simply decoding the Morse code message. Ideally, step 1 should be the analog portion, and step 2 should be purely digital.

I did some testing using classifiers in the WEKA package [waikato.ac.nz] but was quite disappointed in the results. My next attempt was to leverage a PNN (Probabilistic Neural Network) [blogspot.com] and I got somewhat better results. In test runs with noisy audio files containing Morse code I got up to 90% accuracy in classifying dits and dahs. I have not used the FANN package a lot, though I installed it on my development machine 1-2 years ago. What are your thoughts about FANN exactly? How would you go about using the package?

Thanks for your advice, junior. I am not retired, but I just happen to be interested in machine learning methods, and this problem seems difficult enough, since only a few people have created anything that comes even close to performing at a skilled human operator's level. I did investigate some speech recognition algorithms such as HMMs and SOMs. I have also spent some time collecting data and training software to recognize real-world noisy and messy signals. In fact the current shipping version of FLDIGI pack [blogspot.com]

As others have posted, I think you're making the problem far more complex than it needs to be by insisting on "machine learning" techniques. All you need is some basic filtering to pick the beeps out of the background noise, some averaging over a time window to determine which are dashes and which are dots, and the rest is just simple lookup tables.
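The lookup-table step at the end really is that simple; for example (only a few characters of the table shown):

```python
MORSE_TO_CHAR = {  # excerpt; the full table has every letter and digit
    '.-': 'A', '-...': 'B', '-.-.': 'C', '--.-': 'Q',
    '...': 'S', '---': 'O',
}

def decode(symbol_stream):
    """Decode '.'/'-' groups: letters separated by ' ', words by ' / '.
    Unknown groups come out as '?'."""
    words = symbol_stream.split(' / ')
    return ' '.join(
        ''.join(MORSE_TO_CHAR.get(s, '?') for s in w.split())
        for w in words
    )
```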

I made the same mistake back in university. We applied machine learning techniques to playing a game (I forget which one), but it turned out that after

Dead as in "There are few people left on the planet who actively work CW on a high proficiency level without using a keyboard and a screen reader".

Today ham shacks without a CW keyer are the norm, and if you do see a CW keyer, the owner can only in rare cases go beyond 20 WPM without breaking a sweat, making lots of errors along the way and getting frustrated at hearing others do perfect CW, albeit with a keyboard.