What determines upstream speed?

We are all familiar with the idea that the target SNR is used to determine what downstream sync we receive. A higher target reduces the errors but also reduces the sync speed. But what determines the upstream speed? Is there a target SNR for the upstream direction? Is it the same target? If not target SNR, what does govern the upstream speed?

As an example, my downstream has a target SNR of 6 dB and a sync of around 5700. But my upstream sync is 888 with an actual SNR of 13.4. So either I have a target SNR of (let's say) 12 dB (why?) or something else is governing the upstream sync speed. Any suggestions?
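For what it's worth, the textbook picture is that the modem loads bits onto each tone according to the measured SNR minus the target margin, using the so-called gap approximation. A minimal sketch of that relationship (the gap value and tone SNR below are illustrative assumptions, not figures from any real line):

```python
import math

def bits_per_tone(snr_db, margin_db, gap_db=9.8):
    """Gap approximation: bits = log2(1 + SNR / (gap * margin)).

    gap_db ~9.8 dB is a commonly quoted SNR gap for ADSL's target
    bit error rate; margin_db is the target noise margin. All
    figures here are illustrative, not taken from a real modem.
    """
    snr = 10 ** (snr_db / 10)
    gap = 10 ** ((gap_db + margin_db) / 10)
    return max(0, math.floor(math.log2(1 + snr / gap)))

# The same tone SNR supports fewer bits at a higher target margin:
snr_db = 40
for margin in (6, 12):
    print(margin, bits_per_tone(snr_db, margin))  # 6 -> 8 bits, 12 -> 6 bits
```

This is why, all else being equal, a higher target margin (downstream or upstream) means a lower sync rate: the margin is subtracted from the usable SNR before bits are allocated.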

Re: What determines upstream speed?

Thank you for that. I do not believe that there is anything contractual that limits the package to 888 kbps upstream, so what technical reason might there be to cap the upstream speed, and who controls it?

Re: What determines upstream speed?

Can you explain that in a bit more detail please? I understand FEC to be a method of error correction that uses bytes of data included in the datastream to identify faults in the monitored part of the data and correct them without retransmission. I do not see why that should affect the raw sync speed.

Similarly, my understanding is that interleaving relates to the organisation of data blocks and error correction blocks within the datastream. Again, it is at a higher level than the raw dsl connection and would not limit the upstream sync.

Re: What determines upstream speed?

The capped upstream on Plusnet is around 443 or 440 kbps sync. If they uncap it, it is 888 kbps fixed interleaved. If the DLM switches off interleaving upstream, it's whatever the target upstream SNR margin of 6 dB on the line will give, usually around 1000 kbps.

Re: What determines upstream speed?

One aspect is that the bandwidth used to carry the FEC data is not included in the bandwidth displayed as the line rate reported by your modem. That explains why FEC can lower the line rate (although the coding gain from the FEC can outweigh that, resulting in an overall increase in bandwidth), but it does not explain the limit to 888k.

I think the limit is rather complicated, and due to limitations of the exact framing parameters available on the upstream, and the minimum INP value and maximum delay value configured in the line profile. Not all levels of FEC/interleaving will result in the 888k limit, but the level typically used by the DLM does, especially when it's configured as interleaving on instead of auto.
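To illustrate the kind of framing arithmetic involved: with Reed-Solomon FEC, each codeword of N bytes carries only K payload bytes, so the net rate is the line rate scaled by K/N. A rough sketch, where the codeword sizes and line rate are illustrative assumptions rather than the actual DLM profile values:

```python
def net_rate_kbps(line_rate_kbps, k_payload, r_check):
    """Net payload rate after Reed-Solomon overhead.

    A codeword carries k_payload data bytes plus r_check check
    bytes, so only k/(k+r) of the line rate is payload. The
    values used below are illustrative, not a real profile.
    """
    n = k_payload + r_check
    return line_rate_kbps * k_payload / n

# e.g. 16 check bytes on a 255-byte codeword is roughly 6% overhead:
print(round(net_rate_kbps(960, 239, 16)))  # -> 900
```

The actual reported rate also depends on which discrete combinations of codeword size, interleave depth, INP and delay the framing allows, which is why only certain rates (such as 888k) come out of particular profiles.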

Also note that you can't explicitly switch off FEC+interleaving on only the upstream. Switching interleaving off switches it off for both the downstream and the upstream. With interleaving set to auto, the DLM can have interleaving on the downstream but not on the upstream, or at a lower level on the upstream which does not impose the 888k limit.

Re: What determines upstream speed?

Just to be clear, the 888 figure I am quoting is the figure reported as DSL upstream bandwidth, not the speed I get if I do a line speed test on thinkbroadband or wherever. The figures from thinkbroadband type tests are a chunk lower.

The bit where Kitz does not seem to support you is in regard to interleaving. Kitz seems to say that pure simple interleaving introduces latency, but does not reduce the total amount of data that can be transmitted per unit time.

Forward Error Correction does reduce the amount of useful data that can be transmitted because it involves transmitting extra bytes of error correction data that are not normally transmitted if FEC is not applied. That is news to me as we are usually told to ignore FEC errors as they cost us nothing. If you and Kitz are correct then they clearly do cost us throughput.

Annoyingly, I get lots of FEC errors on download, but I get none on upload, so am presumably wasting those upstream bytes.

In any case, I am "happy" that there is a difference between the thinkbroadband type of speed test and the DSL bandwidth due to the header (and other) bytes that need to be transmitted but are not part of the data I am interested in.
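That sync-rate versus speed-test gap is largely ATM cell overhead: ADSL carries data in 53-byte ATM cells, of which only 48 bytes are payload, with PPP/AAL5 and IP headers taking a further slice. A quick sketch of just the cell-tax part:

```python
def atm_payload_kbps(sync_kbps):
    """Each 53-byte ATM cell carries 48 bytes of payload, so
    roughly 48/53 of the sync rate is usable before PPP/AAL5
    and IP headers reduce it further."""
    return sync_kbps * 48 / 53

print(round(atm_payload_kbps(888)))  # ~804 kbps before higher-layer headers
```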

I am less happy with your statement that the bytes used for FEC are not included in that difference but instead reduce the DSL bandwidth in some artificial way. The implication is that the bandwidth is still physically 988kbps (or whatever) but that only 888kbps is reported because 100kbps is used for FEC (my numbers here are illustrative, not actual). Also, if the physical figure is still 988kbps, then why would the SNR increase? I cannot believe that the modem would artificially increase the SNR to match the artificial decrease in bandwidth.

So it seems to me that the physical upstream bandwidth is being physically reduced. Everything I have seen so far justifies a reduction in the apparent data rate (the thinkbroadband type of rate). Nothing accounts for a physical reduction in physical bandwidth.

Re: What determines upstream speed?

Is that just the upstream interleaving that is being turned off, or will the downstream change as well? Also, are you turning off FEC as well (not pushing, just asking)? And finally, do I need to drop the connection and allow it to re-sync, or will that happen automatically?

Re: What determines upstream speed?

It can be argued that though FEC takes up some of the data bandwidth, you can still get more throughput than you might without it.

Data packets which contain errors that can be fixed by FEC do not need to be retransmitted. Depending on the error rate, without FEC a data packet might need several retransmissions (at the higher speed) before an error-free packet is obtained. With FEC, the error-containing packet (sent at the slower throughput) might be corrected locally, negating the need to retransmit.
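A toy model of that trade-off (all numbers illustrative): without FEC, an errored packet costs a full retransmission, so each packet needs on average 1/(1-PER) transmissions; with FEC you pay a fixed overhead but, assuming the errors are correctable, fix them in place.

```python
def goodput_no_fec(rate_kbps, per):
    """Expected goodput when every errored packet is resent.
    Each packet needs on average 1/(1-per) transmissions, so
    the useful rate is rate * (1 - per)."""
    return rate_kbps * (1 - per)

def goodput_with_fec(rate_kbps, overhead):
    """Goodput with FEC: a fixed fraction of the rate is spent
    on check bytes, but (we assume) all errors are corrected
    in place, so nothing is retransmitted."""
    return rate_kbps * (1 - overhead)

# With ~8% FEC overhead, FEC wins once the packet error rate
# without it would exceed ~8%:
rate = 888
print(goodput_no_fec(rate, 0.10))    # 799.2
print(goodput_with_fec(rate, 0.08))  # 816.96
```

Under this simplified model, FEC pays for itself whenever the uncorrected error rate exceeds the overhead fraction; on a clean line (like an upstream with zero FEC errors) the overhead is pure cost.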

Re: What determines upstream speed?

For information, the following is a cut and paste from the Kitz site that goes some way to explaining why the DSL speed might reduce: fewer sub-channels in use to carry the real data. However, I still feel it is only half the story. It does not explain the high SNR reading, which to me still implies that the sub-channels the SNR is based on (presumably the reduced set) are not working hard. If it is based on more than the in-band channels, then the out-of-band channels that are included are really slacking.

From Kitz:-

"ADSL uses out-of-band signals to carry redundant data for FEC whereby certain sub-channels are specifically assigned to carry the overhead data. These sub-channels are not available for transmission of normal data and used purely for FEC overheads. Assigning sub-channels as out-of-band has the effect of reducing the maximum sync speed available for normal data transmission."

Re: What determines upstream speed?

I am not against FEC when appropriately used, which it would be if it were saving more in re-transmissions than it was using in extra data transmission. However, I read that the overhead is about 8%, which, if not used for FEC, would allow quite a bit of re-transmission.

I am also willing to put up with the rough (unnecessary FEC on upstream, in my case) if the smooth (necessary FEC on downstream) gives an overall benefit when they cannot be split.

My real problem is that the figures I see for SNR (upstream) do not make sense given my current understanding of what is going on. There appears to be margin for extra data to be sent, FEC or no FEC.