Meet DOCSIS, Part 2: the jump from 2.0 to 3.0

In this installment of our in-depth look at the DOCSIS protocol that powers …

In Part 1 of this series, we covered DOCSIS 2.0. Now, DOCSIS 2.0 is no slouch, but with fiber-to-the-home, WiMAX, and LTE breathing down its neck, it was clear to cable providers early on that DOCSIS needed to up its speed. DOCSIS 2.0 improved upload speeds over DOCSIS 1.0 using denser QAM constellations and CDMA, but unfortunately, there's a limit to simply cramming ever more bits into a single QAM symbol. The problem is noise.

Suppose that we have a channel where our signal and the noise are exactly the same strength, and we can still distinguish the two symbols -1 and +1 to a usable degree. (We’ll let one little detail slide: when the signal is +1 and the noise -1, that would be indistinguishable from a signal of -1 and noise of +1.) If we now increase our signal strength by a factor of three, our two symbols are -3 and +3, making our signal stand out from the noise much more. We can then make use of this increase in the signal-to-noise ratio by using a larger number of distinct symbols. Previously, the distance between our two symbols was two. If we keep that distance, we can now have four symbols: -3, -1, +1, +3. Four symbols give us two bits per symbol, so our three-fold SNR gain gave us a doubling of our bitrate. The relationship between the two is expressed in the Shannon-Hartley theorem:

C = B log2(1 + S/N)

C is the highest possible channel capacity in bits per second, B the channel bandwidth in hertz, S the signal power, and N the noise power. Note that the signal-to-noise ratio S/N is not expressed in decibels, but as a linear power ratio.
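The formula is easy to play with in a few lines of Python. This sketch just evaluates the Shannon-Hartley limit; the 6MHz channel width and the 35dB plant SNR used in the examples are illustrative assumptions, not figures from the article:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N), with S/N a linear power ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def db_to_linear(db: float) -> float:
    """Convert a power ratio in decibels to a linear ratio."""
    return 10 ** (db / 10)

# Signal and noise equally strong (S/N = 1): at best one bit per hertz.
print(shannon_capacity_bps(6e6, 1) / 1e6)                  # 6.0 Mbit/s in a 6 MHz channel
# A cleaner plant (say ~35 dB SNR) supports far denser constellations.
print(shannon_capacity_bps(6e6, db_to_linear(35)) / 1e6)   # roughly 70 Mbit/s
```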

This little foray into information theory makes it clear that we can’t simply increase the number of distinct symbols to reach ever higher bitrates; we’re limited by the available signal-to-noise ratio. The noise floor is dictated by the thermal noise that all electronics produce, by interference from outside the cable system, and by the other signals carried by the cable. The maximum signal strength is limited by the cable’s properties and the limits on the interference that we can reasonably impose on others—don’t forget, we’re using frequencies inside the coax cable that are used for other purposes over the air, and in practice no cable is shielded perfectly.

DOCSIS 3.0 gets around these limits using brute force: it bundles multiple channels. By using multiple channels at the same time for the same data stream, that stream can be faster than a single channel allows. By default, in the downstream direction, the CMTS will send individual packets over different channels, but each packet is labeled with a sequence number, so if packets arrive out of order, they can be put back into their original order before they’re handed to the user. The Internet Protocol (IP) explicitly allows for out-of-order arrival of packets—but if this happens routinely, it does slow down transfers. Alternatively, the CMTS can use different channels for different types of communication, where each channel has its own interleave level appropriate for the type of traffic flowing over it.
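The resequencing step can be sketched in a few lines. This is a generic illustration of reordering by sequence number, not the actual DOCSIS bonding header format:

```python
class Resequencer:
    """Buffer packets that arrive out of order; release them in sequence."""

    def __init__(self):
        self.next_seq = 0   # next sequence number we expect to deliver
        self.pending = {}   # out-of-order packets, keyed by sequence number

    def receive(self, seq: int, payload: str) -> list:
        """Accept one packet; return any payloads now deliverable in order."""
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

# Packets bonded across channels may arrive as 0, 2, 1, 3:
r = Resequencer()
print(r.receive(0, "p0"))  # ['p0']
print(r.receive(2, "p2"))  # [] -- held until packet 1 arrives
print(r.receive(1, "p1"))  # ['p1', 'p2']
print(r.receive(3, "p3"))  # ['p3']
```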

Interleaving is basically slowing down the transmission of packets so that when there is a spike in interference, for instance, from the power grid as an appliance turns on, fewer bits of the packet are lost. This way, the forward error correction (FEC) has a better chance of repairing the damage and packet loss is avoided. By interleaving multiple packets, the aggregate bandwidth remains high, but the latency goes up. The highest interleave level of 128:1 adds a delay of 4 milliseconds to the downstream path with QAM64 and 2.8ms with QAM256. The typical interleave level is 32:4, which adds 1 or 0.7 ms of latency, respectively.
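The core idea can be shown with a simple block interleaver (a generic illustration; the real DOCSIS downstream uses a convolutional interleaver): symbols are written into rows and read out in columns, so a burst of errors on the wire ends up spread thinly across many codewords, each of which the FEC can then repair. The latency cost is visible too: the receiver has to wait for a whole block before it can put the symbols back in order.

```python
def interleave(symbols: str, depth: int) -> str:
    """Write symbols row by row into a depth x width grid, read out column by column."""
    width = len(symbols) // depth
    rows = [symbols[i * width:(i + 1) * width] for i in range(depth)]
    return "".join(rows[r][c] for c in range(width) for r in range(depth))

def deinterleave(symbols: str, depth: int) -> str:
    """Inverse operation: interleave again with the transposed dimensions."""
    return interleave(symbols, len(symbols) // depth)

data = "AAAABBBBCCCCDDDD"        # four 4-symbol codewords
sent = interleave(data, 4)      # "ABCDABCDABCDABCD" on the wire
# A 4-symbol burst wipes out the first chunk of the transmission...
hit = "XXXX" + sent[4:]
# ...but after deinterleaving, each codeword has lost only ONE symbol:
print(deinterleave(hit, 4))     # XAAAXBBBXCCCXDDD
```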

A high level of interleaving is appropriate for video streaming, where the additional latency is inconsequential but lost packets immediately impact quality. For VoIP, losses are also bad, but latency is much worse, so a lower level of interleave is better. The CMTS may thus be configured to send video packets over a channel with a high level of interleave and VoIP packets over a channel with a low level of interleave.

In the upstream direction, cable modems make requests for bandwidth when they have data to transmit. The CMTS then “grants” minislots to the CM for that transmission on any combination of bonded upstream channels, depending on available bandwidth. Unlike downstream channels, where there's just QAM64 and QAM256, there are many different ways an upstream channel can be configured, and the interference in the upstream direction also varies a lot more for different channels, as strong sources of interference tend to be lower in frequency. It's even possible for certain (upstream or downstream) channels to become completely unavailable. In that case, communication continues over the remaining channels in the bonded group until the lost channel(s) can be brought back into service.
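A toy model of the request/grant step might look like the following. The minislot counts and the first-come-first-served policy are illustrative assumptions only; real CMTS schedulers are far more sophisticated (and vendor-specific):

```python
def grant_minislots(requests, channel_capacity):
    """Grant each modem's requested minislots across bonded upstream channels,
    first come first served, until each channel's map interval is full.

    requests: list of (modem_id, slots_wanted)
    channel_capacity: dict of channel -> free minislots this interval (mutated)
    Returns: list of (modem_id, channel, slots_granted)
    """
    grants = []
    for modem, wanted in requests:
        for channel in channel_capacity:
            if wanted == 0:
                break
            take = min(wanted, channel_capacity[channel])
            if take:
                grants.append((modem, channel, take))
                channel_capacity[channel] -= take
                wanted -= take
    return grants

# Two bonded upstream channels with 10 free minislots each; note that a
# single modem's burst can be split across both channels, and that the
# last modem simply doesn't get everything it asked for this interval:
print(grant_minislots([("cm1", 6), ("cm2", 8), ("cm3", 10)],
                      {"up1": 10, "up2": 10}))
```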

The DOCSIS standard doesn’t specify a specific number of channels that must be supported, but it does mandate a minimum capability of four downstream channels. That adds up to 172Mbps of raw bandwidth, before taking various types of overhead into account.
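The arithmetic behind the 172Mbps figure, assuming standard US 6MHz downstream channels at 256-QAM:

```python
# US DOCSIS downstream: 6 MHz channel, 256-QAM, ~5.36 Msym/s symbol rate
symbol_rate_msym = 5.360537
bits_per_symbol = 8            # 256-QAM carries log2(256) = 8 bits per symbol
channels = 4                   # the DOCSIS 3.0 minimum bonding group

per_channel_mbps = symbol_rate_msym * bits_per_symbol   # ~42.9 Mbit/s raw
total_mbps = per_channel_mbps * channels
print(round(total_mbps))       # 172
```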

Sharing the bandwidth: statistical multiplexing

So far, we’ve mostly talked about the maximum number of bits per second that a DOCSIS system can carry—more than 100 megabits per second, which is nothing to sneeze at. However, that bandwidth is shared among multiple users.

WiFi and EDGE 2G mobile work the same way: one user gets to use the capacity at a time, so the speed each user experiences goes down as the number of active users goes up. Some other technologies don’t have this limitation. With today’s Ethernet, where everyone has their own cable to a switch, everyone gets to use the full Ethernet speed regardless of what other users do. ADSL works the same way: every subscriber has their own circuit to the central office. The mobile 3G standards are based on CDMA, which also manages to give different users each the maximum speed they’re capable of at the same time—within limits.

So maybe 1.5Mbps ADSL is a lot slower than cable, but at least you always get to use that 1.5Mbps, right? Unfortunately, no. Suppose the local phone company has 650 ADSL customers in a neighborhood of a medium-size city in one of the flyover states. That’s 650 x 1.5Mbps, or nearly 1Gbps. But does the ISP in question reserve 1Gbps of capacity from that neighborhood to one of the big cities, where it connects to the rest of the Internet? Of course not, because 650 users never all use their connection to capacity at the same time and thus never generate that much traffic. It’s the same way with all other infrastructure: it’s designed for average usage patterns with some headroom, not for the possibility that every user needs the full capacity at the same time. If everyone picks up their landline phone at the same time, many won’t get a dial tone. Same thing with cell phones, electricity, water, and of course the roads. If every car owner pulls out of their driveway at the same time, good luck reaching the speed limit.

Interestingly, one of the main reasons the ARPANET, the illustrious forebear of the Internet, was created was to enjoy the benefits of statistical multiplexing. Before the ARPANET, communication networks, such as the phone network, were circuit switched—as the user dials a number, a path is created through the network. That path is then dedicated to that user for the duration of the call in both directions. This is a tremendous waste of resources, as people typically take turns talking, and sometimes there are gaps in the conversation where nobody speaks.

Computer communication is the same way. For instance, when surfing the Web, the browser will send requests when the user clicks a link, and then one or more remote servers send back data. Then the communication link is idle for some time while the user looks at the page. But the Internet is packet-switched rather than circuit-switched, so the unused capacity doesn’t have to go to waste. Packet switching is similar to traffic on the road—there is no need to reserve a path through the city when driving from point A to point B; cars simply enter the road if there is room for them. This approach works even better with packets, because congestion control algorithms make sure new packets are only injected into the network at a rate that the network can accommodate—if the network is overloaded, communication sessions simply slow down.

Through statistical multiplexing, it’s entirely possible to have more than a thousand users share 100Mbps worth of capacity. Many users won’t even be using their network connection at any given time. Of the ones that do, most will generate intermittent traffic (such as in our Web browsing example) or use other applications that generate limited amounts of traffic (such as listening to streaming audio). Only a few users will be engaged in bulk data transfers (uploads or downloads) at any given time. A good deal of the time, downloaders won’t see 100Mbps transfers—that would be a little over 11 megabytes per second. But just in case nobody else is using the network, DOCSIS and other shared networking technologies make it possible for one user to use almost all the bandwidth. That is, as long as cable operators don’t limit their users’ speeds to something much lower to avoid creating unrealistic expectations.

So if you get 107Mbps cable service, which is now available in several places in the US, that probably means that “under the hood” four or probably more channels are bonded for a raw speed much higher than 107Mbps.

Could we maybe get a follow up article detailing what a national fiber roll-out would look like in this context? I ask because some countries are already doing this / debating it seriously and I think the US should be topping that list. Obviously this would have to be government driven, but a large scale fiber rollout would change these figures SIGNIFICANTLY.

On a similar note though: Any readers have/find a good guide for troubleshooting cable modem issues? Not to be too snarky, but I've found that phone-in support doesn't manage much beyond rebooting the cable modem, and sending a tech out doesn't help with intermittent issues. Sorry to drag the thread down so early, but I've just had issues earlier today.

Basically the reason for this is that there isn't much they can do from their end, since 90% of the DOCSIS issues I've seen originate in the cable plant. So if the back end system sees your cable modem/account/.cm file and the CMTS has the MAC in the table, then it is almost always a bad modem or a bad signal level in the coax. The only way to remedy that is with a tech.

I've actually found that link before myself. My up and down signal levels seem to be good, things just go silent every so often and the logs don't say anything (at least unless the modem decides it's been out of contact for too long and reboots itself). But anyway, not to take over the comments section...

With regards to video sent by cablecos, statmux has a totally different meaning. Since that's outside the scope of the article, I will elaborate.

The downstream channels in DOCSIS are organized into slices of frequency with guard bands between them, typically 6MHz wide. This is handy for the US because before cablecos really stressed the quality of their plant (cable infrastructure), broadcast television signals would intrude into the cable plant and render those channels unusable.

A 6MHz channel at 256QAM modulation will result in a bandwidth of 38Mbit/s after overhead. From memory, so maybe wrong...

When a cableco has hundreds of SD and HD streams, slicing those channels up gets a bit tricky, especially for HD.

OTA (over the air) TV transmits at a rate of 19Mbit/s. 18.5Mbit/s streams are about the max, so CBS 1080i and NBC 1080i could completely saturate an entire 6MHz channel. That makes a cableco sad, because they are used to using bandwidth more efficiently.

Enter the magical savior of digital cable, and the primary reason you can now get hundreds of channels. The almighty MPEG groomer/transcoder/magic box.

A groomer can take in an MPEG-2 stream, "relieve" it of unnecessary data, and cut down the bitrate in real time. A groomer allows cablecos to make streams fit. That's all well and good, but it has obvious downsides (quality). Enter the stat-mux enabled groomer. A groomer in charge of a 6MHz channel (or 10, or all of them) will examine the MPEG frame contents and assign a certain picture quality level to each stream.

When a video stream doesn't have a lot of motion, the required bitrate to maintain quality isn't that high. Those are expendable bits. During periods of fast motion, every bit counts. Stat-mux enabled grooming allows cableco's to cram more channels in the same bandwidth, at equivalent quality levels.
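A toy sketch of that allocation idea: divide a fixed channel budget across streams in proportion to momentary scene complexity, with a guaranteed floor per stream. All the numbers here are hypothetical, and real statmux runs this decision continuously per GOP or frame:

```python
def statmux(complexities, budget_mbps, floor_mbps=1.0):
    """Divide a channel's bit budget across streams proportionally to
    momentary scene complexity, guaranteeing each stream a minimum rate."""
    spare = budget_mbps - floor_mbps * len(complexities)
    total = sum(complexities)
    return [floor_mbps + spare * c / total for c in complexities]

# One 38 Mbit/s 256QAM channel, ten SD streams; two have fast motion right now,
# so they get the "expendable bits" the low-motion streams don't need:
rates = statmux([5, 5, 1, 1, 1, 1, 1, 1, 1, 1], budget_mbps=38.0)
print([round(r, 1) for r in rates])
assert abs(sum(rates) - 38.0) < 1e-9   # the channel is always exactly full
```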

Stat-mux has even more advantages when you factor in MPEG-4 transcoding of MPEG-2 streams. Assuming a cableco decides to output MPEG-4, the efficiency combined with statmux allows a cableco to cram even more into a single channel at the same quality. It makes HD relatively cheap, compared to what it used to cost in bandwidth, and it makes SD dirt-cheap (2Mbit per stream is doable).

When I called a groomer a magic box, it's because integration has pushed so many of the functions of a video headend into a single box. The first groomer that took off was the Terayon CherryPicker. You could statmux/remux/groom/splice many channels with one box. I'm not sure just how many per box, but it had limits, which meant you had to have quite a few CherryPickers. Cisco recently rolled out their statmux/remux that allows up to 350 SD streams or 80-ish HD streams in a single 2U box. That's an insane amount of processing. It's their DCM 9900, and you can buy boards so it'll do integral MPEG-2 to MPEG-4 transcoding.

So with the DCM 9900, you can conceivably have a bunch of satellite receivers that wire directly into the magic box and output a single (or multiple) Ethernet IP video stream. That is modulated by whatever gear the cableco happens to use, and sent over fiber to the nodes for conversion to coax in the designated areas.

As far as cost goes, HFC is probably the most efficient. What we'll see is cable companies split nodes until we're at 100 users per node, and then from there they'll switch their operating paradigm from mostly-broadcast to something resembling multicast (using switched digital video).

This article says 4 channels is ~100mbit, but then goes to say that bandwidth is shared.

DOCSIS3 uses CDMA which means ALL clients are talking at the same time. This means each modem has its own 100mbit to the node. I understand the bandwidth being shared at the node, but not at the channel level.

The only way to cause shared bandwidth is to have one client talk at a time. TDMA does that, but not CDMA.

My understanding is the only limit on the amount of bandwidth you have is the amount of noise adding a new modem causes. CDMA does allow for hundreds to thousands of devices to talk at the same time, but the limit is based on how clean the signal is.

If cell phones can have hundreds of devices talking at the same time with -70db of signal, I should think a cable modem could have quite a few more with +34db.

Can someone help me? My teacher only spent 1 week going over the math and theory of CDMA back in school and this does not sound anything like what my teacher said.

-70dB is a received signal level, indicative of power. You're confusing signal strength with SNR; signal strength doesn't carry information as to where the noise floor is.

Bandwidth is still shared because the combined channels can still only support a certain amount of data throughput without freaking out.

I think you're confusing TDMA the cellular phone spec with TDMA the concept. TDMA cellular splits timeslots evenly between user devices. TDMA muxing in general doesn't necessarily mean all devices get equal bandwidth: TDMA slots can be flexibly assigned based on demand, so one device can get consecutive slots for increased data carriage, so long as there is a method for that device to ask for the extra slots and for them to be awarded.

While I can't explain the intricacies of CDMA off the top of my head, it was only about a week for me too and was too long ago, I can promise that multiple devices on a single DOCSIS segment and frequency cannot talk at the same time. I am wondering if it's a typo that should say CSMA.

I'm not an L1/L2 expert in DOCSIS but it looks something like so.

Time is divided up into data and control slices. Modems MUST NOT send in an upstream data slice unless they have permission from the CMTS. The modem will use a control slice to signal an RTS (request to send) to the headend. If the headend approves the RTS, then the modem may send in the assigned data slice. However, the control channel is collision based because it is in-band and shared by all modems. So your RTS could collide with another modem's RTS, and you'll both have to "back off" (via, I think, CSMA) and try your RTS again. Excessive RTSes can overload the control slices and cause the data slices to be underutilized. The most likely cause is LOTS of small packets from lots of modems.
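The collision/backoff behavior on the contention slots can be sketched roughly like this. The window sizes and retry policy below are made-up illustration, not the actual DOCSIS contention resolution parameters (which the CMTS advertises in its MAP messages):

```python
import random

def send_request(attempt: int, max_window: int = 1024) -> int:
    """Pick a random contention slot using binary exponential backoff:
    after each collision the window doubles, spreading retries out."""
    window = min(2 ** attempt, max_window)
    return random.randrange(window)

def contend(num_modems: int, attempt: int) -> int:
    """Count how many modems' requests survive one contention round.
    A request survives only if no other modem picked the same slot."""
    slots = [send_request(attempt) for _ in range(num_modems)]
    return sum(1 for s in slots if slots.count(s) == 1)

random.seed(1)
# Early attempts (small windows) collide constantly; later ones thin out:
for attempt in (2, 5, 8):
    print(attempt, contend(num_modems=20, attempt=attempt))
```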

While their logic did not pan out, this is what Comcast (I think it was) said the problem was when they started sending TCP RSTs to BitTorrent users who were seeding.

whoops. yes, correct on the SNR vs power mix-up.

But I wasn't talking about TDMA; I was talking about CDMA, which does not use time slots at all. All devices talk at the same time and all devices see other devices as "noise". So for devices on CDMA the only concept is increased noise, not sharing.

Again, I don't work with CDMA, but my teacher did cover it. In a nutshell, she said each device effectively gets its own dedicated bandwidth. But the practicality of it is that the more devices are added, the more noise is added, and you hit an effective limit.

Well, actually, it was more like we went over just CDMA for one day, then TDMA, then FDMA, then we compared them all. So I guess it wasn't "one week" of CDMA. It's been quite a few years.

"Interleaving is basically slowing down the transmission of packets so that when there is a spike in interference, for instance, from the power grid as an appliance turns on, fewer bits of the packet are lost."

I am sorry but this is not what interleaving is. While FEC may be a part of any good interleaving implementation it is not what interleaving means. Interleaving is essentially multiplexing.

The reason interleaving increases latency is pretty much just like cut-through versus store and forward switching. To interleave you must wait for the whole frame to arrive before you can decide how to interleave it and also apply FEC but even if you were not doing FEC you would still be doing a store and forward not cut through type operation.

Ahh, thanks.

DOCSIS must not be using CDMA completely then.

One of the exercises we had in class was for several of us students to get into a group, and each of us in the group to make a small stream of binary data. Then we took those bits, ran them through the CDMA algorithm via Excel. Then we overlapped our results together.

So we started off with, let's say, 8 bits each. Then each of us ran those 8 bits through CDMA. Then we overlapped them to combine our three 8-bit streams into a single 8-bit stream. But instead of just having 1s and 0s, we now had 2s and 3s also, caused by the constructive/destructive interference of a collision.

Then we gave our output to another group and gave them our keys and we deciphered each of the other group's streams.

So, a single "byte" of data can hold 3 bytes of data with CDMA. I use byte loosely because you're no longer talking about just 1s and 0s.

The cool thing is CDMA has no collisions, just noise.

If DOCSIS does actually only allow one device to talk at a time, then they must be using CDMA as noise cancellation, which means DOCSIS 3.0 is actually TDMA, not CDMA.
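The classroom exercise described above is easy to reproduce. Here's a sketch with three users and length-4 Walsh codes; the codes, user names, and bit patterns are arbitrary examples:

```python
# Orthogonal Walsh codes: the dot product of any two distinct codes is zero.
CODES = {
    "alice": [+1, +1, +1, +1],
    "bob":   [+1, -1, +1, -1],
    "carol": [+1, +1, -1, -1],
}

def spread(bits, code):
    """Map each data bit (0/1 -> -1/+1) onto the user's chip code."""
    return [(1 if b else -1) * chip for b in bits for chip in code]

def despread(signal, code):
    """Correlate the composite signal against one code to recover that
    user's bits; orthogonality makes the other users cancel to zero."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(1 if corr > 0 else 0)
    return bits

data = {"alice": [1, 0, 1], "bob": [0, 0, 1], "carol": [1, 1, 0]}
# All three transmit at once; the channel simply adds the chips together.
composite = [sum(chips) for chips in
             zip(*(spread(b, CODES[u]) for u, b in data.items()))]
print(composite)  # odd values between -3 and +3: the "2s and 3s" of the exercise
for user in data:
    assert despread(composite, CODES[user]) == data[user]  # everyone recovered
```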

I really appreciate you writing these articles. It gives a great consolidated introduction into the whole field of modern telecommunications technologies. It puts lots of various terms that a novice may hear, but not be familiar with, into perspective.

You're oversimplifying CDMA. The rules of bandwidth vs. signal and noise remain. You talk about "it's just noise" and forget that noise is a very very very bad thing. At some point, all that noise eats into the literal number of bits that the line can support. So CDMA may be more robust against noise, but each user is adding to the total noise on the channel for every other user. At some point the laws of physics exact their vengeance, and so not everybody gets the full theoretical bandwidth of the line.

Also, a comment on tech support.

There are 4 problems that your internet service could have.

1: Your cables in the home are bad, intermittently or because you changed something.
2: The HFC plant/head end could be borked.
3: Your modem could be borked.
4: The ASIC/firmware in the modem froze and stopped working / the system hit a corner case.

Item 4 is the most likely failure of a consumer broadband device. The ASIC is in a bad state. How do you fix an ASIC in a bad state? You power it down and power it back up.

So when you call tech support, they ask you to power cycle the modem, because 90% of the time that is the only correct solution. There is nothing more "WRONG" than an ASIC in a corner case state or some bug in the firmware that it got released with. They aren't going to send you a new modem if they don't have to. Tons of modems are sent back as broken, are checked and work just fine, and get sent back out. If it's a problem with the plant, then one user isn't going to justify a repair unless the problem is acute. If it's the wiring in your home, then that's still a truck roll, and nobody wants to spend money. So they always try the 90% case first.

As for why the ASIC might hang, well Broadcom is the number one provider of DOCSIS ASICs in the world and their original parts the 3300/3310 and 3550 were known to have a number of design flaws that caused them to hang. They've stepped through a number of iterations that improve things, but I as a former employee in the DOCSIS HW world don't trust them.

And of course we're talking about modems that are made at the lowest possible costs. Consumer Electronics firmware is a world of risk mitigation. If there's a bug that hangs the modem every once in a great while and it won't cause a truck roll to fix, then it gets shipped. There's no time for perfect products here.

So there's almost no point in getting any more troubleshooting tips. If your modem stops working try the following

Power cycle.
Double check all your cables are tight and in good shape.
Call up tech support and see if there's an outage.

Sure you could check your own RF numbers, but unless it's your cables causing a problem, there's nothing you can do about it and the tier 1 support on the other end doesn't care what you did before you called.

I can't help it, I'll jump in again.

I don't disagree with what you've said - I'm certain it's true. But, the problem is - to support it's _always_ true. Even if the problem shows up in the cable modem logs - sometimes I have trouble convincing support that the modem even has logs (I fondly remember one trying to convince me that I'm looking at my router as opposed to my modem). My downstream power level tends to be around -1 and -2 db, even if I'm having trouble and my upstream values are around 35 db. As far as I can tell, those are all pretty close to perfect. I have intermittent pauses in connection - more often during the day than at night (my wife generally works at home). The modem always comes back though so I really don't think it's the hardware. (This is also relatively new - only a few months, and I keep the modem on a ups, so it shouldn't have gotten zapped.)

What really kills me is that the tools are actually there, just hidden from the customers. I understand that my modem supports snmp, but it's locked away from the customer side so I can't even simply read it. The best I can come up with is to continually ping my modem and my default gateway and record the times (and possibly go further upstream if those never change) but I don't see how I could even get support to look at those logs (the last time I called, the tech decided to help me with my wireless. All computers I used to test with are wired and I never mentioned wireless once.)

CDMA can be combined with other schemes, such as TDMA (TD-SCDMA) or CSMA/CA (WiFi). That is, at any given instant, only some devices will be talking over the channel. Note: I don't know what DOCSIS uses!

With CDMA, it's not as simple as having a hard limit for the number of devices. As you add more devices to a channel, each device sees more noise -- that part you got right. But there's no hard limit to the number of devices. It's, as in any system, a trade off between bandwidth, power to noise ratio, and bitrate. DOCSIS targets bitrates much higher than 3G.

It's nice that people are responding. I've been doing more digging on CDMA and DOCSIS 3.0, and the most I found was that it uses S-CDMA (synchronous CDMA). Outside of that, I haven't found out how it actually uses it.

I do believe DOCSIS 3.0 is very nice and a lot of people give it more crap than it deserves, but damn... I can't wait for 100% fiber instead of HFC.

They should just go all Ethernet once 40Gb/100Gb come out. A 100Gb fiber link to a node that has 4 40Gb child nodes that split into 8 10Gb child nodes that split into 16 1Gb links to the homes. A single 100Gb link could be feeding 512 homes with 1Gb. Oversubscribed, but switched for maximum speed. TDMA sucks.

Man, it would be awesome to just have a whole city wired up like a LAN.

Anyway, I suspect I'm over it (for the time being anyway).

I'll tell you right now, that most cable companies never ever wanted that webpage viewable to the customer. All it does is make their support calls take longer.That webpage is there for the initial install. Some providers leave it open, but really it's only useful if you aren't getting MAC link.

Remember this, a knowledgeable end user usually results in a longer tech support call. It's a case of a little bit of knowledge being a bad thing.The SNMP support is there for them to manage their network. The idea that their customers even know what SNMP is scares the bejebus out of them. Which sounds bad, but being on the manufacturing side, I have some sympathy, because there are a lot more people out there that know SNMP and a little ethernet than know even what this article tells you about DOCSIS. I've seen it on 3rd party support forums all over the place. People try to apply their little bit of knowledge to a problem well beyond their scope and come up with something completely crazy.

Or think of it another way. When my car is broken and I take it into the shop, My googling of the problem and trying to provide help to the mechanic with it just makes it take longer and annoys the mechanic...and I'm usually wrong anyway. Might as well shut my yap and let him do his job.

As to the Reboot. Part of the 10% case of modem failure could also be a tuner in a bad state, or a stuck power algorithm. Again, there could be problems in the firmware. Starting over will often clear those problems. So it's always a good place to start. If the problem comes back, then you have to get more serious. In your case, I'd be curious to see the corrected and uncorrected errors numbers. Just because the power and SNR look good, doesn't mean the QAM constellation looks good. That would be a plant or a tuner issue.

CDMA can be combined with other schemes, such as TDMA (TD-SCDMA) ou CSMA/CA (Wifi)That is, at any given instant, only some devices will be talking over the channel.Note: I don't know what DOCSIS uses!

With CDMA, it's not as simple as having a hard limit for the number of devices.As you add more devices to a channel, each device sees more noise -- that part you got right.But, there's no hard limit to the number of devices.It's, as any system, a trade off between bandwidth, power to noise ratio and bitrate. DOCSIS targets bitates much higher than 3G.

It's nice that people are responding. I've been doing more digging on CDMA and DOCSIS 3.0 and the most I found was it uses SCDMA(Synchronous). Outside of that, I haven't found out how it actually uses it.

I do believe DOCSIS 3.0 is very nice and a lot of people give it more crap that it deserves, but damn.. I can't wait for 100% fiber instead of HFC.

They should just go all ethernet once 40gb/100gb come out. 100GB fiber link to a node that has 4 40gb child nodes that splits into 8 10gb child nodes that split into 16 1gb links to the homes. A single 100gb link could be feeding 512 homes with 1gb. Over subscribed, but switched for maximum speed. TDMA sucks.

Man, it would be awesome to just have a whole city wired up like a LAN.

They should just go all ethernet once 40gb/100gb come out. 100GB fiber link to a node that has 4 40gb child nodes that splits into 8 10gb child nodes that split into 16 1gb links to the homes. A single 100gb link could be feeding 512 homes with 1gb. Over subscribed, but switched for maximum speed. TDMA sucks.

Man, it would be awesome to just have a whole city wired up like a LAN.

It would be nice but it's just not feasible. Even FTTH is not a pure switched network. It's fiber versus coax which makes all the diff in the world but if you look at a pure FTTH setup its architected much like a coax one. Smart headend and dumb/cheap devices in between the headend and the CPE(customer premise equipment). Full switched would be WAYYYY too expensive.

The PON in GPON, 10G PON, whatever PON in FTTH networks is Passive Optical Network. There are a number of un-powered splitters that demux the signal outbound from the headend, and mux inbound to the headend. Passive optical splitters are what made FTTH economically viable however many years ago now. Its a shared media(though maybe not frequency -- TDMA for data and FD for video? -- unsure) but there's just SOOOO much bandwidth at L1 that it does not feel that way. The first switched device, in the ethernet sense, is the headend device.

I have read a little about running point to point fiber to each home but its really not an issue until you try to offer ~10G per sub and then you would have backhall problems even with 100G.

On a similar note though: Any readers have/find a good guides for troubleshooting cable modem issues? Not to be too snarky, but I've found that phone in support doesn't manage much beyond rebooting the cable modem, and sending a tech out doesn't help with intermittent issues. Sorry to drag the thread down so early, but I've just had issues earlier today.

Basically the reason for this is that there isn't much that they can do from their end since 90% of the DOCSIS issues I've seen originate from the cable plant. So if the back end system sees your cable modem/account/.cm file and the CMTS has the mac in the table then it is almost always a bad modem or a bad signal level in the coax. The only way to remedy that is with a tech

Good read BTW

Mine was bad signal level. About a year of off again, on again finally got it resolved (I hope). Illegal taps, bad splitters, and a leak somewhere in their system (we use to be Adelphia)

I'll tell you right now, that most cable companies never ever wanted that webpage viewable to the customer. All it does is make their support calls take longer.That webpage is there for the initial install. Some providers leave it open, but really it's only useful if you aren't getting MAC link.

Remember this, a knowledgeable end user usually results in a longer tech support call. It's a case of a little bit of knowledge being a bad thing.The SNMP support is there for them to manage their network. The idea that their customers even know what SNMP is scares the bejebus out of them. Which sounds bad, but being on the manufacturing side, I have some sympathy, because there are a lot more people out there that know SNMP and a little ethernet than know even what this article tells you about DOCSIS. I've seen it on 3rd party support forums all over the place. People try to apply their little bit of knowledge to a problem well beyond their scope and come up with something completely crazy.

Or think of it another way. When my car is broken and I take it into the shop, My googling of the problem and trying to provide help to the mechanic with it just makes it take longer and annoys the mechanic...and I'm usually wrong anyway. Might as well shut my yap and let him do his job.

As to the Reboot. Part of the 10% case of modem failure could also be a tuner in a bad state, or a stuck power algorithm. Again, there could be problems in the firmware. Starting over will often clear those problems. So it's always a good place to start. If the problem comes back, then you have to get more serious. In your case, I'd be curious to see the corrected and uncorrected errors numbers. Just because the power and SNR look good, doesn't mean the QAM constellation looks good. That would be a plant or a tuner issue.

Sure, I can see where it's a bit of a double edged sword and definitely where it results in a longer tech support call - but not where it was necessarily that bad.

In my case, phone support tried to convince me that the CTMS timeouts were on the ethernet link between my router and modem. I explained that I've never heard of CTMS on ethernet before, and was in the process of getting the MAC addresses of my equipment to point out that the second MAC address mentioned didn't match any piece of equipment that I had. Apparently, her technical lead overheard part of the conversation (and I never had to complete the list) as she did check further into the signal problems I'm having and scheduled a tech to come out (who couldn't determine anything himself - he thought that the signals looked weird, but didn't have the ability to correct anything himself. So another visit is scheduled by someone higher up). Interestingly enough, she never did correct herself and admit that it wasn't my router that was having the problems, she just dropped that part and went on.

So, from your example, no I wouldn't presume to google auto repair tips and tell the mechanic what to do because any moderately competent mechanic should know more than me. I haven't been so lucky when it comes to ISP phone support.

As regards the tech support thread of comments, I think this xkcd comic applies: http://xkcd.com/806/

I have to believe if the customer was not expected to be dumb as a pile of rocks, and could describe the contents of the signals page to the tech support (or the remote tech supports had easy access to such info) then the decision to roll a truck or not could be reached much more quickly.

Let's assume two modems, both trying to talk to the main office at the same time and on the same line/channel. Normally this would be like trying to listen to two people talking at the same time. But CDMA (code division multiple access) sets up conventions by which the modems are only allowed to sing certain notes in order to communicate. Let us say that each modem can communicate a 0 with a certain note and a 1 with a certain note. Each modem's notes are different from the other modem's notes.

So when the main office hears modem 1's notes only or modem 2's notes only, it can easily understand that one of the modems is not presently communicating and can easily translate the talking modem's notes into 1s and 0s.

But when both modems sing a note at the same time, the result is a recognizable chord or a harsh beat frequency sound. The result is effectively one of four chords (0-0,0-1,1-0,1-1). Because the main office knows how the modem's notes mix mathematically to create the chords, it can decode whether each modem is singing a 0 or a 1.

This works both ways. The modems listen to and decode the main office chords (the main office is actually a bunch of modems or one big modem capable of singing in multiple voices at once.)

Iljitsch van Beijnum / Iljitsch is a contributing writer at Ars Technica, where he contributes articles about network protocols as well as Apple topics. He is currently finishing his Ph.D work at the telematics department at Universidad Carlos III de Madrid (UC3M) in Spain.