Here are the transition probability plots for the 700B bit fields when the ve9qrp_10s sample is processed.

It is interesting to note the quite striking difference in appearance between the very uniformly dispersed 700B VQ bit field transition probability mesh plots and the "peakier" 700 scalar LSP bit field mesh plots, which show more marked central tendencies (see: codec2-700-mode-trellis-decoding).

This suggests that the VQ is doing a pretty good job of encoding information without much redundancy, which is likely to have implications for maximum likelihood decoding strategies. It is harder to derive a useful measure of the central tendency, and then meaningfully apply it, when the mesh plot looks like a square of uniformly cut lawn, as opposed to a nice mound in the middle of the lawn.
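One way to put a number on this "lawn versus mound" distinction is to score each bit field's transition matrix by how peaked its rows are. A minimal Python sketch follows; the `peakiness` helper and the toy matrices are illustrative assumptions on my part, not part of the codec2 tooling:

```python
def peakiness(counts):
    """Mean of per-row maximum transition probabilities.

    counts is a square list-of-lists of transition counts for one bit
    field. The result is near 1.0 when each value strongly predicts its
    successor (a 'peaky' mesh plot with a central tendency), and near
    1/n for the uniformly dispersed 'cut lawn' look of the VQ fields.
    """
    row_maxima = []
    for row in counts:
        total = sum(row)
        if total:
            row_maxima.append(max(row) / total)
    return sum(row_maxima) / len(row_maxima)

# A uniform 8x8 matrix (VQ-like lawn) versus one with a dominant
# diagonal (scalar-LSP-like mound):
uniform = [[1] * 8 for _ in range(8)]
peaked = [[8 if i == j else 1 for j in range(8)] for i in range(8)]
print(peakiness(uniform))  # 0.125, i.e. 1/8
print(peakiness(peaked))   # 8/15, about 0.53
```

A score near 1/n for a field is a hint that trellis decoding of that field, as-is, has little prior information to work with.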

codec2's author, David Rowe, also discusses this issue of information redundancy in his blog.

Judging by the large increases in errors from our attempt to apply maximum likelihood decoding to the VQ bit fields, it seems reasonable to conclude that the VQ encoded bit fields are doing an excellent job of conveying a lot of information with minimal redundancy. Unfortunately for us, this would suggest that we can't profitably employ maximum likelihood decoding for the VQ bit fields directly.

While experimenting, we have also seen that Reverend Bayes' insights are quite relevant to low bit rate audio codec R&D some 250 years later!

Here are some .wav files of the trellis decoded ve9qrp_10s sample, which has had additive white Gaussian noise added after 700B encoding, in keeping with the method used for the other codec2 mode trellis decoding experiments.

The intelligibility of the samples with bit fields 4, 5, or 6 decoded is not improved, as would be expected from the very uniform distribution of the encoded VQ values evident in the transition probability plots, and from the significantly increased number of bit errors and standard deviation seen in the summary statistics.

From the above samples, it is clear that bit fields 1, 2, and 3, either singly or in combination, benefit from direct trellis decoding, but the VQ bit fields would require decoding before attempting trellis decoding of their encoded information.

Further to the codec2 1600 bit/s mode experiments with trellis decoding, the Octave script has been modified further to allow any mode of interest to be specified.

As part of this refactoring and testing of the modified Octave script, the 700 mode has been put through the script, with a combination of trellis-decoded and passed-through bit fields.

Here are the transition probability mesh plots for all of the bit fields, showing the likelihood (as Z-axis height) of a given bit field transitioning from a given value (X axis) to a corresponding value on the Y axis. These plots were generated with the ve9qrp_10s sample which was also used for the 1600 mode trellis decoding experiments.

The voicing bit field appears to be the least impressive in terms of predictability, and this bitfield may not lend itself to trellis decoding without excess errors being introduced.

Here are the summary statistics for the bitfields following the addition of additive white Gaussian noise (AWGN) and then trellis decoding.

Having confirmed that the modified script still works with the codec2 700 mode, the next step is to support the 700B and 1300 bit/second modes.

In closing, all of this experimentation serves to highlight that the black box vocoders marketed by commercial equipment vendors, being subject to intellectual property protection, do not allow this sort of experimentation by interested amateurs.

that will contain the text. It will recognise spaces and any of the usual ASCII characters that PCB can ordinarily display as text, but you will need to escape characters that the shell might take exception to, and the escape character may end up getting rendered in the footprint text, until such time as I support escape characters a bit better.
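To illustrate the shell-escaping issue, here is a small Python sketch that quotes awkward text before handing it to the shell; the wrapper is purely hypothetical (the tool itself is Java), and `shlex.quote` is standard Python:

```python
import shlex

# Quote the text so the shell passes awkward characters
# (quotes, $, spaces, etc.) through to the tool untouched.
text = 'R1 = 4.7k $'
cmd = "java FootprintTextForPCB -t {} -m 1.5".format(shlex.quote(text))
print(cmd)  # the text argument arrives at the tool as one token
```

Running the quoted command line through a shell delivers the text as a single argument, escape characters and all.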

Step 3)

When generating the new footprint in PCB, use "File:Load Element To Buffer" to load the newly generated footprint onto the layout. Place the text in a suitable position. If it is the wrong size, go back and play with the magnification ratio option.

Step 4)

Select the text by clicking on it.

CTRL-x to cut the text to buffer

Go to "Buffer:Break Element To Pieces" to convert the text footprint
into silk line primitives, and click to place the broken up element
where it is needed.

Hit "Esc" to deselect.

Step 5)

Proceed now, as you normally would, to convert your collection of
elements (which now include the silk lines showing the text) making up
your footprint in its entirety into a footprint.

Usage:

java FootprintTextForPCB -t "my Text For Conversion To Silkscreen Stroke Elements" -m X.XXXX
"my Text For Conversion To Silkscreen Stroke Elements" is ASCII text, which can include spaces,
and X.XXXX is an optional magnification ratio; default = 1.0)
If run without any command line arguments, a demonstration footprint file
called demonstration1234567890.fp, will be generated

Codec2 is an open source low bit rate voice coder (vocoder) that enables voice to be carried on data channels at very low data rates.

Low bit rate vocoders are distinct from their higher bit rate encoder cousins, such as MP3, which seek to reproduce more than just voice, e.g. music.

Codec2 is particularly exciting owing to its potential to revolutionise HF voice communications, until now dominated by single sideband (SSB) transmission. Codec2 already offers better robustness than SSB in low signal to noise conditions. Codec2 also has significant potential in VHF and above amateur radio communications, where single frequency time division multiple access (TDMA) technologies, already in use commercially for mobile communications and telephony, can be introduced with significant spectrum saving benefits in amateur bands and on amateur repeaters.

Anyway, after playing with trellis.m in the codec2-dev/octave directory, I implemented support for trellis decoding of the 1600 bit/s mode.

Maximum likelihood decoding is employed, with the raw hts sample audio file, encoded at 1600 bit/s, as the source of the training database.

David Rowe has pointed out that the 1600 bit/s mode uses an underlying 1300 bit/s bit stream plus a 300 bit/s forward error correction (FEC) bit stream, for a total of 1600 bits per second.

Accordingly, experiments on the underlying 1300 bit/s mode are planned, in the absence of FEC.

Additive white Gaussian noise (AWGN) has been added to the ve9qrp_10s audio sample, followed by trellis decoding of various combinations of the bit fields.
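The noise step can be sketched as follows. Note that the channel model here, a BPSK-style mapping with hard decisions, is my assumption for illustration, and `awgn_channel` is a hypothetical helper, not part of the codec2 tools:

```python
import random

def awgn_channel(bits, sigma, seed=1):
    """Send a bit stream through an AWGN channel with hard decisions.

    Each bit is mapped to +/-1, Gaussian noise of standard deviation
    sigma is added, and the receiver decides by sign; this mimics
    adding noise to the codec2 bit stream before decoding.
    """
    rng = random.Random(seed)  # fixed seed for repeatable experiments
    received = []
    for b in bits:
        symbol = (1.0 if b else -1.0) + rng.gauss(0.0, sigma)
        received.append(1 if symbol > 0 else 0)
    return received

tx = [1, 0, 1, 1, 0, 0, 1, 0] * 100
rx = awgn_channel(tx, sigma=1.0)
errors = sum(a != b for a, b in zip(tx, rx))
print("bit errors:", errors, "of", len(tx))
```

With sigma = 1.0 the raw bit error rate lands in the vicinity of 16%, a usefully harsh channel for comparing decoded bit field combinations.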

This is the result of decoding after the addition of noise to the codec2 bitstream, with no trellis decoding:

Intelligibility in the presence of noise seems to be enhanced the most by maximum likelihood decoding of the LSPs and voicing bits, with next best being maximum likelihood decoding of just the LSPs.

Maximum likelihood decoding of just the LSPs in the 1600 bit/s mode is not too demanding given the small bitfield lengths. Based on this admittedly limited sample set, maximum likelihood decoding of the voicing bits in addition to the LSPs seems to reduce the occasional "stutter" type artifacts.

Errors introduced by maximum likelihood decoding of the energy bitfields seem to have an adverse effect on intelligibility. Interestingly, there were dense probabilities in the high order bits, and sparse probabilities in the low order bits... see transition probability plots below, and see the summary statistics showing significantly increased errors with trellis decoding.

Maximum likelihood decoding of the scalar W0 bitfields has not been performed, owing to their 7-bit length making processing by Octave quite challenging. This is due to the exponential (i.e. 2^(bitfield length)) demands of maximum likelihood decoding. Further experimentation with maximum likelihood decoding of the scalar W0 bitfields in C is planned. Also of interest were the uniformly dense probabilities in the low order bits, and sparse probabilities in the high order bits (see the transition probability plots below), which may have an impact on trellis decoding effectiveness; but this may reflect the effects of FEC, in which case it is actually desirable. Furthermore, the ability to adequately encode outliers is also important to convey and preserve intelligibility.
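That exponential cost is easy to see in a brute-force sketch of the decode step. Everything below is illustrative only: the independent bit-flip channel model and the `ml_decode` helper are my assumptions, not the actual trellis.m implementation. Each decode scores all 2^n candidate values, so a 7-bit W0 field means 128 candidates per frame:

```python
def ml_decode(received, prev_value, trans_probs, n_bits, p_flip=0.1):
    """Brute-force maximum likelihood decode of one bit field value.

    Scores every one of the 2^n_bits candidates by
    P(candidate | previous value) * P(received bits | candidate),
    using an independent bit-flip channel model. The loop over all
    2^n_bits candidates is what makes a 7-bit field heavy going.
    """
    best_value, best_score = 0, -1.0
    for cand in range(1 << n_bits):
        # Channel likelihood from the Hamming distance to the received value.
        d = bin(cand ^ received).count("1")
        likelihood = (p_flip ** d) * ((1 - p_flip) ** (n_bits - d))
        score = trans_probs[prev_value][cand] * likelihood
        if score > best_score:
            best_value, best_score = cand, score
    return best_value

# A 2-bit field whose value 0 strongly tends to stay at 0: a received
# value of 1 (one flipped bit) is pulled back to 0 by the prior.
trans = [[0.90, 0.05, 0.03, 0.02]] + [[0.25] * 4] * 3
print(ml_decode(1, 0, trans, 2))  # 0: the prior overrides the flipped bit
```

With the flat rows of the matrix (a uniform prior), the received value always wins, which mirrors why the uniformly dispersed fields gain nothing from this style of decoding.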

A couple of the bitfield transition probability plots also showed marked clustering into four peaks, perhaps suggesting that quantising could take better advantage of the available bits, although it may just be evidence of the FEC at work, adding robustness, or again, reflecting a required ability to adequately encode the occasional outlier.

The following plots are labeled and are presented in order of the 1600 bit/s bitfields. Basically, the Z axis shows the frequency at which a bitfield value on the X axis maps to another value on the Y axis. Accordingly, the more densely clustered the maxima, the more predictable changes from one bitfield to the next will be, and the greater the ability of the trellis decoding to make informed guesses about the most likely codeword:
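The tallying behind such a plot can be sketched as follows; the function names are illustrative, not taken from trellis.m:

```python
def transition_counts(values, n_bits):
    """Tally how often a bit field value on one frame (X axis) is
    followed by a value on the next frame (Y axis); normalising the
    table gives the Z-axis heights of the mesh plots."""
    n = 1 << n_bits
    counts = [[0] * n for _ in range(n)]
    for prev, nxt in zip(values, values[1:]):
        counts[prev][nxt] += 1
    return counts

def transition_probs(counts):
    """Row-normalise counts into P(next value | current value)."""
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

# e.g. a 2-bit field that mostly stays put from frame to frame:
field = [0, 0, 1, 1, 1, 2, 2, 3, 3, 3, 2]
p = transition_probs(transition_counts(field, 2))
print(p[1])  # row 1: P(next | current value 1)
```

The more the mass in each row concentrates on a few cells, the better the informed guesses the decoder can make.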

In conclusion, early indications are that trellis decoding has the potential to improve the performance of the 1600 bit/s codec2 mode in the presence of noise, as envisaged by codec2's author, David Rowe. Further investigation is planned of the 1300 bit/s mode prior to the addition of FEC in the 1600 bit/s mode, and also the lower bit rate 700B mode that does not employ FEC.

About Me

Licensed radio amateur - That doesn't mean I wait for floods and tornadoes with a battery and a radio, it means I like to pull stuff apart, put stuff together, and hack stuff generally, while avoiding electric shock, thermal burns, RF burns, fire, lightning and falls from heights.