convolutional coding: One of the coding schemes used in e.g. cellular phones to let you talk to each other. Convolutionally coded data is decoded with a so-called Viterbi decoder. In the commonly used versions, a convolutional coder adds a lot of redundancy to the data stream: at the common coder rates of 1/2 or 1/3, the data going into the coder accounts for only 1/2 (or 1/3) of the data coming out of it. In return, it hardly matters which bits of the coded stream get corrupted; the decoder can still recover the original data as long as the total number of errors stays within limits.
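To make the rate-1/2 idea concrete, here is a minimal sketch of a convolutional encoder with two delay registers and the textbook generator polynomials 7 and 5 (octal). This is an illustrative toy choice, not the coder of any particular phone standard:

```python
# Sketch of a rate-1/2 convolutional encoder, constraint length 3,
# generator polynomials 7 (binary 111) and 5 (binary 101) -- a common
# textbook example, not a specific standard's coder.
def conv_encode(bits):
    s1 = s2 = 0  # two one-bit delay registers, initially zero
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # first output bit: generator 111
        out.append(b ^ s2)       # second output bit: generator 101
        s1, s2 = b, s1           # shift the input into the registers
    return out

# Every input bit produces two output bits -> rate 1/2.
print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Because each output bit mixes several neighbouring input bits, the information about any one input bit is spread across the stream, which is exactly what lets the Viterbi decoder repair scattered bit errors.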

turbo coding: Another coder/decoder pair used in wireless telecommunications. The coder uses a feedback system, feeding some of the coded data back to the coder's input. Again, at the common coder rates of 1/2 or 1/3, a turbo coder adds a lot of redundancy to the data stream, and again all bits in the coded stream are equally (in)sensitive to corruption. The description of how a turbo coder does what it does, and the proof that it does it correctly, fills several PhD theses.
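The feedback idea can be sketched with one recursive systematic constituent encoder of the kind used inside a turbo coder. The tap positions below are an illustrative assumption, not taken from any particular standard:

```python
# Sketch of a recursive systematic convolutional (RSC) encoder, the
# building block of a turbo coder. The register state is fed back to
# the input (hypothetical generator choice, for illustration only).
def rsc_encode(bits):
    s1 = s2 = 0
    out = []
    for b in bits:
        fb = b ^ s1 ^ s2    # feedback: input XOR register taps
        out.append(b)       # systematic bit: the input passed through
        out.append(fb ^ s2) # parity bit computed from the feedback path
        s1, s2 = fb, s1     # shift the feedback value, not the raw input
    return out

# Two output bits per input bit -> rate 1/2; the even positions are
# simply the input ("systematic"), the odd positions are parity.
print(rsc_encode([1, 0, 1, 1]))  # -> [1, 1, 0, 1, 1, 0, 1, 0]
```

A full turbo coder runs the data through two such encoders, the second one via an interleaver (a pseudo-random reordering of the bits), and the decoder iterates between the two parity streams; that iterative exchange is where the real complexity lives.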

So why would anybody apparently waste 1/2 or 2/3 of the available (and expensive) bandwidth by transmitting redundant data? Simply because it's not a waste but a necessity if you want to receive your data intact.

So, in order to cope with streaming real-time data, a system is needed that can correct errors in the incoming data without waiting for a retransmission, and these bandwidth-hungry FEC schemes are the best we have today.