The first day of the CRTC's network management hearing featured some interesting discussion from deep packet inspection providers Sandvine and Juniper as well as Canadian consumer groups. A summary of the day, thanks to University of Ottawa student Frances Munn, is posted below. There is additional coverage from the National Post liveblog, CBC.ca, and lots of posts on Twitter [update: CRTC transcript].

While the Sandvine comment that an unmanaged network is not a neutral network captured the headline, I thought the more important part of the DPI provider discussion was Juniper's focus on consumer-controlled prioritization. This is not currently available in Canada, but the notion that consumers should choose how to prioritize the bandwidth they pay for would address many concerns.

It was the consumer presentation that did the most to link network management to the law, and it also highlighted reasons for great concern. I think that the consumer groups rightly focused on who should bear the burden of demonstrating that DPI and other Internet traffic controls are consistent with current Canadian law. The groups argued that these are prima facie violations of Section 36 of the Telecommunications Act and that the onus therefore should fall on the carriers to show that there is a serious problem, that the solution minimally impairs users' rights, and that it is proportional to the harm.

Unfortunately, the questions that followed suggest that the CRTC Commissioners start these hearings having accepted the carriers' claims that congestion is a problem and that inhibiting the use of deep packet inspection could result in increased consumer costs for Internet access. This suggests that there is a steep mountain to climb in these hearings, leading me to believe that the issue will ultimately be a political one with pressure on the Conservatives to join with the Liberals and NDP in supporting net neutrality.

CRTC Net Neutrality Hearings, July 6, 2009

Sandvine Incorporated

Opening remarks: Don Bowman, Chief Technology Officer

Sandvine is a Canadian company that provides technology for internet service providers. Their primary argument was that congestion on the internet is inevitable and that an unmanaged network is not a neutral network. Applications that require large amounts of bandwidth can slow down the internet for time-sensitive activities like VoIP and online gaming. As such, they design technologies that prioritize traffic in times of congestion.

They argued that prioritization can serve several legitimate purposes such as scrubbing malicious traffic from networks and guaranteeing quality for emergency transmissions.

Prioritization is done in a variety of ways and through a variety of technologies. As such, Sandvine argued that any new regulations for traffic management would quickly become outdated. For instance, the policies for DSL and Cable would have to differ because DSL is not usually oversubscribed due to upstream use, while the opposite is true for Cable.

Part of the problem is the difficulty of predicting networks over time. Congestion occurs during peak times, but peak times cannot always be predicted due to unforeseen events such as Michael Jackson’s death.

Sandvine also cautioned against a policy targeted at disproportionate users of bandwidth. For example, neither VoIP nor online gaming uses much bandwidth, yet a policy that targets all disproportionate users would also hurt those using VoIP or other low-bandwidth activities. Instead, Sandvine advocated a policy that prioritizes time-sensitive applications that are not bandwidth intensive, so that even in times of congestion their quality is maintained.

Sandvine briefly addressed the privacy concerns raised by traffic management. They argued that the technologies do not raise privacy concerns because they do not read content such as email or video. They pointed out that privacy issues depend on how the technology is used rather than the technology itself.

Finally, Sandvine argued that there is an understanding that some traffic management is necessary. However, they argued against the imposition of absolute rules. Since traffic management practices quickly become outdated, they submitted that each traffic management practice be judged individually at a moment in time.

Sandvine finished by implying that regulations were unnecessary. They pointed to the small number of complaints and asked whether anything was broken that required a real-world need for new guidelines.

Questions

CRTC Chair von Finckenstein opened the questions by asking Sandvine to clarify what they meant by traffic management, and how their submission conformed to the objectives of the hearing. In response, Sandvine argued that their submission was based on how to best manage internet traffic to create an equal space for everyone. They claimed their main purpose is to manage congested internet traffic. For instance, their technology will delay a live streaming video if there is a call attempting to go through the network.

Von Finckenstein then asked how voice and gaming can always be prioritized. Sandvine described how VoIP packets are prioritized over other packets. First, a service provider has to identify the VoIP packets. All VoIP packets have similar qualities – they consist of small packets and use relatively little bandwidth. When given priority, the idea is for the VoIP packets to precede non-voice packets.
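The mechanism Sandvine describes – classify small, low-bandwidth VoIP packets and send them ahead of other traffic – amounts to simple priority queueing. A minimal sketch of the idea; the 200-byte size threshold is a hypothetical stand-in for a real classifier, which would use much richer signatures:

```python
import heapq
import itertools

# Illustrative assumption, not Sandvine's actual rule: treat packets at
# or under this size as VoIP-like and give them the higher priority class.
VOIP_MAX_BYTES = 200

def classify(packet_size):
    """Return 0 (high priority) for VoIP-sized packets, 1 for bulk traffic."""
    return 0 if packet_size <= VOIP_MAX_BYTES else 1

def drain(packet_sizes):
    """Dequeue packets strictly by priority class, FIFO within a class."""
    counter = itertools.count()  # tie-breaker preserves arrival order
    queue = []
    for size in packet_sizes:
        heapq.heappush(queue, (classify(size), next(counter), size))
    return [size for _, _, size in
            (heapq.heappop(queue) for _ in range(len(queue)))]

# Small VoIP packets arriving behind large transfers are sent first.
print(drain([1500, 1400, 160, 1500, 120]))  # → [160, 120, 1500, 1400, 1500]
```

Within each class, arrival order is preserved, so bulk traffic is delayed rather than dropped – matching the "delay a live streaming video if there is a call attempting to go through" behaviour described above.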

Von Finckenstein pointed out that there has been some criticism levelled at congestion remedies and he wanted to know how advanced the technology was. Sandvine said that the technology is improving over time and that Sandvine products are evolving to measure and detect congestion in a simple fashion. Von Finckenstein followed up by inquiring into the cost of investing in these technologies. Sandvine said it depended on the goals of the company. Sandvine is a commercial business out to make a profit and so it designs its technologies to meet the demands of its customers.

Von Finckenstein asked about a system where users are charged extra for the excessive amount of bandwidth they use. Sandvine said that this was an area of active research, where the person pays for the amount of network capacity they actually use. However, Sandvine expressed scepticism that monthly usage charging could impact congestion. An economic model that deviates from the simple flat-rate 40 dollar/month model would have to have transparent and visible measures of when charges are and are not incurred. Sandvine argued that it could become a powerful model in the long run, but that it would require a change in consumer understanding and acceptance. For the most part, consumers do not understand the value of a byte, so shifting to an economic model would require further consumer education.

On the privacy question, Von Finckenstein asked for a simple explanation on why the network requires any information on content to manage traffic. He said he did not understand why they had to look into it or use it. In response, Sandvine claimed to have no interest in the user’s actual content – e.g. "Spiderman" or "a phone call to your niece." Sandvine did say that knowing how many people are engaged in an activity allows the operator to provide a better service. Sandvine claimed it is useful to know the duration of the session, the amount of bandwidth used, and the number of people engaging in the activity.

Von Finckenstein finished his questioning by asking about the difference between wireless and non-wireless technologies. Sandvine explained that wireless operators suffer from mobility issues. For instance, wireless networks overloaded after Michael Jackson was taken to the hospital. As such, Sandvine admitted that additional tools for wireless technology were likely necessary.

Leonard Katz proceeded with the questioning. He opened by asking whether networks can design their technologies around predictable peaks. Sandvine replied that long-term peaks are predictable, but stressed that events like Michael Jackson's death or the Olympics lead to congestion at unpredictable non-peak hours.

Leonard Katz then asked how network providers can turn Sandvine products into profits. In response, Sandvine claimed that providers strive to maintain quality for all users. Sandvine products help measure events that lead to diminished capacity so that they can protect users when the internet is congested.

Leonard Katz asked about how much the provider needs to know about his online activities – e.g. when Leonard Katz plays online games. Sandvine said that measurements are made in several ways, but that the technology has not reached a point where it can measure how long a single individual is playing a game and how much bandwidth he is using. However, Sandvine said that the technology could be heading in that direction. Theoretically, the user could one day call their network provider and ask how much bandwidth they use when they do a particular activity.

Katz pointed out that providers advertise their speeds "up to" a certain level when, in reality, the speed level is often below the advertised level. He wanted to know if it would be possible for providers to advertise a service with a minimum guaranteed speed. Sandvine replied that this was "absolutely" possible and that some businesses already had this guaranteed service.

Suzanne Lamarre, the Regional Commissioner for Quebec, opened her questioning by asking for a clarification on "malicious traffic" and from whose perspective they were defining it. In response, Sandvine claimed that they look at everything from the consumer's perspective – e.g. spam or unsolicited pornography. In addition to being unsolicited, they said that "malicious" traffic can also have an economic or quality cost to an operator by slowing down the system.

Timothy Denton, the National Commissioner, inquired into the Sandvine position that "an unmanaged network is not a neutral network." In response, Sandvine claimed that when the internet was first created, users were on equal footing. Over time, however, users have become self-interested and use more bandwidth than others. Consequently, an unmanaged internet leads to some users being able to serve their interests at the cost of others.

Juniper Networks

Juniper focuses on building networks that are fast, reliable and secure. In their submission, they made a differentiation between two methods of traffic controls called "White Listing" and "Black Listing."

White Listing: Users that require a certain amount of "enhanced" treatment on a network. E.g. when it comes to VoIP there is a higher priority given to 911 calls.

Black Listing: Specific users or applications are restricted from using the network. E.g. malicious users that are blocked from accessing the network.

Juniper explained that there are several traffic management tools that can be used for traffic management:

1. Network Level Processing Controls: These refer to how actual packets are controlled over the network. They are "static" in nature and unable to respond to new network requirements.

2. Application Processing Controls (DPI): These are more dynamic and are used for security, to prevent malware and attacks, etc.

3. Policy Management Processing Controls: These create communication links between the network and the customer or application and are used primarily for VoIP and video.

These tools give providers the ability to respond to new applications in dynamic ways. The ability to interact with user experiences gives the network the flexibility to innovate and keep up with the innovation currently occurring in applications today.

In sum, they stressed that the ability to innovate and to interact with users is key to moving forward. Fifteen years ago, the internet was primarily made up of file transfers and some email. The types of applications running today are fundamentally different than the ones used fifteen years ago. Applications like VoIP, gaming, and live streaming change the network and the way it is delivered to the customer. Consequently, it is important that users are able to interact with the network.

Questions

Von Finckenstein opened by asking how dynamic controls can properly deal with congestion problems. He wondered if, like Sandvine, Juniper believed the technology was still unable to deal with congestion problems. Juniper responded that dealing with congestion can be the "weakest link" because you have to determine where it is and how to deal with it. They stressed that strategies depend on where the congestion occurs.

Candice Molnar, the Regional Commissioner for Manitoba and Saskatchewan, asked about the consumption model as a form of managing traffic. In response, Juniper said that consumption models are useful in dealing with the five percent of "heavy users" that take up 50 percent of internet traffic. However, they added that the consumption model does not help in dealing with unexpected world events. For instance, the internet became congested after Michael Jackson’s death, but such congestion was not a predictable peak. The congestion model is only useful when peaks can be predicted. They also added that most heavy users are not aware that they are heavy users.

Molnar asked about using consumption models to manage all internet traffic on both off peak and peak hours. Juniper argued that part of the problem in the consumption model is that consumers do not understand when peak and off peak hours occur. For instance, consumers understand that phone calls are more expensive during the day. However, consumers do not have such an understanding with regards to IP technology.

Molnar then asked for a clarification of the Black Listing and White Listing methods. Specifically, she inquired about an "open garden" White Listing application in which a user prioritizes certain activities (i.e. live streaming of news). Juniper said that the idea is for a user to signal that they want an application prioritized – e.g. gaming or live streaming. In this way, consumers are able to indicate what applications are most important to them. Further, Juniper pointed out that while it is impossible to guarantee a certain amount of bandwidth, this system could allow providers to guarantee a certain application as an alternative form of a minimum guarantee. For instance, if live streaming of news is guaranteed, another user in the household would find his or her traffic slowed down to guarantee the live streaming. Juniper went on to clarify that while the technology exists, it is not widely deployed.

Juniper pointed out that, in some cases, service providers decide which applications to prioritize. For instance, some providers have publicly stated that VoIP will take priority in times of congestion. However, Juniper argued that this technology is static and that more dynamic options exist, such as allowing individual users to indicate to the provider which applications they value.

Juniper clarified that while the technology does exist, it has not been deployed in Canada yet. In addition, they also pointed out that prioritizing cannot satisfy everyone. For instance, if Scott Stevens prioritizes live streaming news, his kids will not have a good gaming experience.

Molnar changed the focus to deep packet inspection (DPI). She asked how it can be used to manage traffic – i.e. peer-to-peer downloading. In response, Juniper claimed that it can be used in many different ways, such as for security purposes. They called it a "broad" technology that goes beyond regulating peer-to-peer.

Molnar shifted her question to privacy concerns. She referred to DPI being used to "sniff" out traffic flows by observing what consumers are using and the amount consumed. She asked what the purpose of this monitoring is. In response, Juniper claimed that monitoring was important to the quality of the service. Further, Juniper explained that it is important to identify the high-bandwidth users. If congestion is being caused by a small number of users, taking action on that small number benefits everyone. This way, the operator can respond to a certain number of users rather than everyone at once.

Katz wondered if there were tools available to shift non real-time downloading to reduce congestion in networks. Juniper argued that some consumption based models can shift when downloads are done. However, Juniper pointed out that it does not address unexpected events. It is possible to move people around to deal with predictable problems on a daily basis, but not the unexpected events.

Lamarre finished the questioning by asking at what point privacy becomes a concern when these technologies are sold to clients. Juniper argued that their focus was not on obtaining information, but on protecting users from malicious software.

Canadian Consumer Groups

The panel described themselves as representing the interests of Canadian consumers. They argued that traffic management should be used to help consumers access the activities they would like over the Internet. They submitted that where there is a conflict between providers and consumers, the priority should be in delivering content to the consumer.

They addressed some of the Commission’s assumptions that it is necessary to violate some of the consumer rights under the Telecommunications Act in order to protect the integrity of the network. Further, they argued that it is up to the providers to justify interfering with the purpose of s. 36, which prevents Canadian carriers from interfering with content except when the Commission approves otherwise.

They argued that all DPI and other Internet traffic controls are prima facie violations of s. 36, and that carriers should therefore have to file individual applications for approval. They went on to argue that none of the ISPs who presented fully explained their DPI levels and how they are carried out. Since such understandings are key to evaluating how content is being impacted, they presented a three-part "test" with the burden of proof on the provider to show that the violation of s. 36 is necessary:

1. There is a serious problem and a pressing need to address it.

2. The solution minimally impairs the user's rights.

3. The solution is proportional to the harm.

Further, if the application is granted, the consumer should be fully informed of the impact on their content rights.

The panel argued that legitimate violations of s. 36 would protect the end users from harmful and unsolicited content such as malware and spam. Further, the guidelines should treat all providers and consumers equally and protect the privacy and security of users.

In Canada, the Commission retains the right to regulate Internet traffic in a way that is not available in the United States. As such, the panel urged them not to give up this authority. In the U.S., Obama and others have indicated their support for net neutrality. In addition, bills have been passed in the U.S. that promote the equality of applications and provide notice to consumers.

The Consumer Groups reiterated that DPIs should only be used to protect users from unsolicited and malicious content. They stressed that it is "totally unacceptable" when DPIs are used for a provider's financial gain – such as for advertising goals or targeting specific kinds of content.

Finally, the Consumer Groups addressed disclosure – that is, whether the ISP has made its practices known to its customers. The Consumer Groups argued that such disclosure should be made prominently on the ISP's website. Bell, for instance, has refused to indicate how its throttling policies are carried out. The Consumer Groups seek information on the time of day when throttling is carried out and whether it could lead to extra billing for customers. They also expressed concern when ISPs claim confidentiality over their DPI practices. They argued that if DPI practices are used solely for traffic management reasons, then there is no need for confidentiality.

For their final point, the Consumer Groups addressed privacy. They argued that the policies are "highly invasive" and that DPIs look deeply into packets that reveal what websites are visited and for how long. They argued that this clearly violates the Telecommunications Act's goal of protecting the privacy of users. ISPs do not get their customers' permission to collect this information, nor do they make it clear that they are collecting it.

Questions

Von Finckenstein opened the questioning by asking the Consumer Groups to clarify their position on DPIs and whether they are by themselves a violation of the Telecommunications Act. He wanted to know why it was a privacy violation for ISPs to use DPI technology solely to manage their networks. In response, the Consumer Groups argued that privacy violations occur even when the DPIs are only used to preserve the integrity of the network. Further, they argued that the burden is on the ISPs to show that the congestion justifies the privacy violations. The Consumer Groups argued that unregulated DPI use could lead to abuses of the system.

The Consumer Groups went on to explain that the burden of proof is key. In the United States, the burden is on the carrier to prove that its actions are necessary. In Canada, the Telecommunication Act stipulates that carriers must provide their services without discrimination. Per the Consumer Groups, the problem is that carriers are not fully explaining how and why they are using this technology, which opens the door to possible abuses.

Von Finckenstein argued that he was not prepared to accept that using DPI technology is a violation of privacy in and of itself, but moved on with the questioning.

Suzanne Lamarre proceeded with the questioning. She asked about the pricing differentiations in services. The Consumer Groups pointed out that the ability to tailor Internet prices to individual users currently exists. For instance, under this type of scheme, gamers might have to pay an extra 30 dollars a month for service just for being gamers.

Lamarre pointed out that different pricing systems are in themselves discriminatory and asked if the Consumer Groups were suggesting that everyone pay the same rate regardless of their usage. In response, the Consumer Groups said it was not unreasonable to charge users extra for using a certain amount of bandwidth (e.g. more than 200). However, they were concerned that DPI technology might be used to charge users extra based on their activities rather than their actual usage. For example, they pointed out that a gamer could be charged more simply for gaming even though he might not be using more bandwidth than someone who watches YouTube videos.
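The distinction the Consumer Groups draw – charging for measured usage rather than for the kind of activity – can be illustrated with a toy overage-billing function. The flat rate and cap echo the $40/month and "more than 200" figures mentioned at the hearing, but the numbers (and the assumption that "200" means gigabytes) are purely illustrative, not any ISP's tariff:

```python
def monthly_bill(gb_used, flat_rate=40.0, cap_gb=200, per_gb_over=1.0):
    """Usage-based billing sketch: a flat rate up to a monthly cap,
    then a per-GB overage charge. The bill depends only on volume
    consumed, never on which applications produced the traffic."""
    overage = max(0, gb_used - cap_gb)
    return flat_rate + overage * per_gb_over

print(monthly_bill(150))  # under the cap → 40.0
print(monthly_bill(260))  # 60 GB over the cap → 100.0
```

Under this model the gamer and the YouTube watcher who transfer the same number of gigabytes pay the same amount – the activity-neutral outcome the Consumer Groups said they could accept.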

Lamarre went on to address the five percent "high end" users who use more bandwidth than others. She asked whether it was fair for light users to in effect subsidize their usage. The Consumer Groups reiterated that it was appropriate to charge for usage, but expressed concern that prices would go too high too quickly since ISPs remain an unregulated industry. Further, Lamarre pointed out that it is difficult for users to know how much bandwidth they are actually using, and the Consumer Groups agreed that such information would be helpful for users.

Finally, Lamarre went on to address their three part test, comparing it to the Oakes test. She asked whether s. 36 can be considered a "fundamental right" that deserves a high standard of Oakes protection. The Consumer Groups argued that s. 36 – the right for content to pass without it being observed or changed – is a fundamental right that deserves a high standard of protection.

Katz opened his questioning by pointing out that operators strive to make a profit. He criticized the Consumer Groups for hindering their ability to make money. He argued that without the ability to preserve the integrity of the network, carriers will face higher costs, and consumers will have to pay more. In response, the Consumer Groups claimed they have heard statements from Canadians saying that they would be willing to pay more for unhindered Internet access. However, they doubted that prices would significantly increase in the near future.

Katz pressed the Consumer Groups to recognize that if the carriers' costs go up, so will the price of Internet usage for consumers. As a result, demand will decrease and Canada will fall further behind the rest of the world. The Consumer Groups responded that they were not trying to create a blanket rule against carriers. They pointed out that they were arguing against unjust discrimination, not no discrimination at all. They said there should be a reason why a provider has to discriminate – such as to avoid prohibitive costs.

The Consumer Groups expressed concern with regards to what DPI practices were doing and the effects they were having. They argued that the reason the burden of proof was so important was because it should fall on the provider to justify their actions. They pointed out that one of the problems with consumption models was the lack of disclosure beforehand to give the consumer the choice to alter their behaviour. The Consumer Groups expressed concern that providers are acting unilaterally and that the public lacks information on what is and is not necessary to protect the integrity of the network.

Timothy Denton’s main concern was the notion that the CRTC should investigate whether or not carriers have made reasonable investments in preventing congestion. He wondered whether the Consumer Groups were taking congestion seriously enough, and he expressed reluctance at second-guessing carriers with regards to the congestion issue. In response, the Consumer Groups argued that DPI is not the only technology available for controlling congestion. They pointed out that Telus is not using DPI and is doing fine. Further, they argued carriers have a requirement to meet the needs of Canadians and to provide a reliable and affordable service. Consequently, the CRTC has a responsibility to look at whether there is some shred of evidence that carriers are addressing their problems. Denton finished his questions by admitting that the Consumer Groups made good points on both issues.

The answer is already in the envelope – as it always is with the CRTC

The fundamental problem is that CRTC commissioners get an extra "pension" by getting hired by Telcos after their terms are up. As a result, the Telcos get favoured treatment. Surprising? Hardly. Illegal? Yes – but Canadians have been conditioned to accept government that is corrupt to the core.

The technical side of this is a "solved problem in computer science"

I'm a professional capacity planner, and my customers want certain levels of service, which they use bandwidth (and other) controls to achieve.

For example, if VOIP requires a response time of, say, 1/3 second, then we provide the network speed to achieve it in the normal case, and limit other services’ use if the VOIP response time starts to rise.

To achieve this, the ISP has to tell me which are the priority services – the time-sensitive ones – and which are the low-priority services, ones like file transfer, downloading new copies of Linux and the like. And, of course, the services in the middle, like web pages.

If there’s a spike in traffic, the VOIP users get enough to barely meet their minimums, the FTP users get almost nothing, and the web users get whatever is left. Everything does run rather slowly, but only during the spike.

The ISPs then worry about having enough bandwidth for the normal case, plus some extra: the rule of thumb is to have about 20% or so “headroom”.
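The sharing scheme this commenter describes – VoIP served first, web next, file transfer last, on a link provisioned with roughly 20% headroom over the normal load – is strict-priority allocation. A minimal sketch, with illustrative class names and numbers:

```python
def allocate(capacity, demands):
    """Strict-priority allocation: serve traffic classes in order,
    each getting min(its demand, whatever capacity remains).
    `demands` is a list of (class_name, demand) pairs, highest
    priority first."""
    granted = {}
    remaining = capacity
    for name, demand in demands:
        granted[name] = min(demand, remaining)
        remaining -= granted[name]
    return granted

# Normal case: a 100-unit link carrying an 80-unit typical load
# (~20% headroom), so every class gets its full demand.
print(allocate(100, [("voip", 5), ("web", 40), ("ftp", 35)]))
# → {'voip': 5, 'web': 40, 'ftp': 35}

# During a spike, VoIP still gets its full 5, web most of its demand,
# and FTP almost nothing -- matching the behaviour described above.
print(allocate(100, [("voip", 5), ("web", 90), ("ftp", 60)]))
# → {'voip': 5, 'web': 90, 'ftp': 5}
```

Note this is the shallow, application-class form of management the commenter advocates: it needs only a priority label per class, not inspection of packet contents.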

The first part is technical: if the rules are arguably fair and are published, then we can have a debate in the CRTC if we think they’re wrong, and get them fixed.

The second part is financial. If one ISP can't buy enough bandwidth for the 80% "normal" load, then we need to have another competitor who can. It becomes a CRTC issue if there's a monopoly or, the more common case, a duopoly of cable and telephone companies offering the ISP services. Then we need to treat them like regulated monopolies and open their books in front of the CRTC in order to strictly regulate the prices they charge, so that they can meet a public performance standard 80% of the time.

In my considered opinion, debates about the technology are wasted effort. We need to make the rules for sharing public, so we know what the ISPs are guaranteeing us, and we need to audit their performance, so we can act when they fall short.

Pay more get less…

The common trend since 2000 is that ISPs don't invest in infrastructure anymore; they invest in software to manage far more users over an infrastructure that was put in place for far fewer.

It's clear that the Telcos want to make the most money without providing the services customers pay for. How is it clear? Just compare Canada's broadband costs and service levels to the rest of the world.

Corporations shouldn't be the road builders of the new information highway. Just as countries develop road infrastructure for cars, they should develop technology infrastructure that is not tied to maximum profit.

One way regulation

Why is the problem only with the consumer?
The page providers are also creating unnecessary traffic that should be regulated FIRST.
I have found that leaving myself logged into Facebook generates constant traffic. I now log out, but why are they allowed to push so much my way when I don't want it?
Why can Yahoo mail display ads that make my usage crawl?
Why is there no limit on the size of the ads on a page? The CRTC can limit the ad volume on TV so why not on the internet? I’m not sure but I think they don’t limit ads currently but did so before.
If the CRTC needs to protect bandwidth why are they not looking at unnecessary push technology?

Let's not forget

…that IPv6 incorporates capabilities such as QoS (Quality of Service), which would allow applications such as VoIP to request bandwidth. They are correct that some low-bandwidth applications such as VoIP are particularly sensitive to loading… the human ear is very sensitive to stuttering in the audio stream.

One problem, however, is that the ISPs, and the backbone carriers in particular, have not invested in upgrades to their networks to support things such as IPv6 (which would have gone a long way to relieving a number of QoS issues that they are facing). Your computer at home is probably capable of it. However, that is of no use if the backbone network doesn't support it. Instead, they have spent the money on things that do DPI and other mechanisms. Given that the main backbones are controlled by the same groups that have a commercial presence in competing technologies, I have to ask myself if the management tools are there to manage throughput, or to position them to throttle competitors.

Bigger pipes?

In all this talk of network management, costs, prices, and profits, where is the discussion of free markets and competition? The fundamental problem here is, if you believe the providers, that we're trying to cram more data through the same sized pipes. The ultimate solution is to build bigger pipes, and any discussions on network management need to be framed in that context.

So where is the economic motivation to build bigger pipes? Basic economics tells us that high prices and profit come from scarcity. If you have control over a scarce resource that everybody wants, you set the prices and reap the profits. DPI simply enables the duopoly to maintain the status quo scarcity. What is needed is economic pressure to increase the abundance of bandwidth to ultimately solve network management problems and reduce prices. The duopoly, and their suppliers like Sandvine, benefit by keeping this from happening.

Network congestion is not a new problem unique to ISPs

Road planners manage network congestion without scanning the contents of your vehicle and throttling the transport of certain goods that they consider less important. Hydro companies manage network congestion without scanning what appliances you plug in and throttling certain appliances that they consider less important. Water, gas, sewers – every utility manages network congestion without logging everything you use that utility for and throttling certain types of activities. The internet is no different.

Re: Network congestion is not a new problem unique to ISPs
Stephen mostly has it correct. This is a simple math problem. If an ISP has UB upstream bandwidth and NC customers, each customer is able to achieve a minimum of UB/NC bandwidth.

No DPI is needed for this. This requires only the most shallow and basic kind of inspection (the kind of traffic management that has been around since the dawn of the Internet): determining traffic source and destination (“whose traffic is this,” not “what traffic is this”). Such inspection has no idea what the traffic is (VoIP, p2p, WWW, etc.); it just knows to which customer the traffic belongs.
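To illustrate the “whose traffic, not what traffic” point, here is a minimal sketch of shallow inspection: it reads only the IPv4 header’s source address and never touches the payload. The function name and the addresses are invented for the example.

```python
import struct

def owner_of(packet: bytes) -> str:
    """Shallow ('whose traffic is this') inspection: read only the IPv4
    header's source address. No payload bytes are examined, so nothing
    is learned about what the traffic is (VoIP, p2p, WWW, ...)."""
    assert packet[0] >> 4 == 4, "IPv4 only in this sketch"
    return ".".join(str(b) for b in packet[12:16])  # src addr = bytes 12-15

# Build a minimal 20-byte IPv4 header (made-up addresses) to demonstrate:
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 20, 0, 0,         # version/IHL, TOS, length, id, flags
                     64, 6, 0,                  # TTL, protocol (TCP), checksum
                     bytes([192, 168, 1, 7]),   # source address
                     bytes([10, 0, 0, 1]))      # destination address
```

A DPI box, by contrast, would keep reading past byte 19 into the transport payload to guess the application; nothing in the fair-sharing argument above requires that.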

Now, of course, most of the time not every customer is active, so in reality the active customers (NAC) get a minimum of UB/NAC bandwidth, and in fact even amongst active customers, not all of them are using their full UB/NAC allocation. In that case, the unused capacity should be divided evenly amongst the users that want more than UB/NAC bandwidth. But again, this is always done fairly and never to the detriment of anyone’s desired use of their UB/NAC allocation. And still, no “what is this traffic” discrimination is being done.
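The sharing scheme described here is essentially max-min fairness: guarantee each active customer UB/NAC, then redistribute any unused capacity among those who want more. A minimal sketch, with hypothetical numbers:

```python
def fair_share(total_bw, demands):
    """Max-min fair allocation of total_bw across per-customer demands.

    Each round, the remaining capacity is split equally among customers
    with unmet demand; customers wanting less than their share return
    the surplus to the pool for the next round. No customer's traffic
    type is ever examined -- only how much each customer is asking for."""
    alloc = {c: 0.0 for c in demands}
    remaining = dict(demands)           # unmet demand per customer
    capacity = float(total_bw)
    while remaining and capacity > 1e-9:
        share = capacity / len(remaining)  # equal split of what's left
        capacity = 0.0
        satisfied = []
        for c, want in remaining.items():
            given = min(want, share)
            alloc[c] += given
            capacity += share - given      # surplus returns to the pool
            if given == want:
                satisfied.append(c)
            else:
                remaining[c] = want - given
        for c in satisfied:
            del remaining[c]
        if not satisfied:                  # everyone capped at share
            break
    return alloc
```

For example, with 100 units of upstream bandwidth and three active customers demanding 10, 50, and 60, the light user gets all 10 and the two heavy users split the rest evenly at 45 each: nobody’s guaranteed share is impaired, and nothing about *what* the traffic is was ever consulted.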

So what of these VoIP-type protocols? If an ISP wants to offer a customer an additional service of prioritizing certain traffic *within* that customer’s UB/NAC allocation into their backbone, that is fine. But once the traffic coming from the customer is on the shared portion (i.e. the ISP’s backbone and onward to their upstream Internet connection), no prioritization must be allowed.

If that means that a customer’s prioritized (i.e. within that customers allocation) VOIP traffic still doesn’t get the priority it needs, chances are it means the ISP is too oversubscribed for that customer’s needs and he/she needs to find an ISP that is not oversubscribing their Internet connection so much.

Now the logistics are a bit different, since they offer a discount for participation, but that is only because they cannot impose control; it has to be willingly ceded by their customers. If they did not need your permission to throttle your air conditioner, I’d be willing to bet they’d take the same approach as the ISPs.

Road Management – also a bad example
Those road managers get to use things like stop and yield signs, red lights, detours and the like, which control what flows where and for how long. A road at 5AM is the same road at 5PM, but the traffic has become horrendous and we all wish someone would do something about the rush hour traffic we’re stuck in. But when it comes to our internet, do we really expect that the pipe should handle all kinds of traffic regardless of time of day? If we can’t get that from our roads, why should we expect it from the technology providers?

Stop signs, traffic lights, etc. are all stop-gap repairs when the real problem is simply volume. Volume can be handled in one of three ways:

* build more capacity
* make using the existing capacity more expensive
* build alternate routes with tolls that fund the building of the routes

The third option is really just a flavor of the second, but implementing the third in place of the second is usually not achievable.

I’m not sure which of those I want to see applied to the ISP situation in Canada.

But on the other hand, I am not even slightly convinced that there really is general congestion on our portion of the Internet (i.e. the ISPs that connect us Canadian consumers to the greater Internet). I might accept that there is occasional congestion, but that is handled like driving at 5pm. You wanna drive then, you drive slow. You don’t widen highways because there is congestion for an hour or two a day. You let the congestion itself prompt people to alter their behavior and make better use of the resource.

I think “congestion on the Canadian Internet backbone” is something invented by the major carriers to justify being able to inspect content for marketing purposes and ratcheting up fees and ratcheting down service, despite “speeds up to” claims.

NOC Tech
Our roads in Toronto are mismanaged. If you look at traffic lights, sometimes you can hit 10 reds in a row, and that, my friends, shouldn’t ever happen. The fact is we do not have proper road management, for one.

Another thing: I agree with Brian. There is no congestion; this is all a scheme to get money from us all, to make us pay over a certain limit.

They should be looking at QoS, and shouldn’t be charging that much to begin with, nor for their home phones (there are 1000 competitors out there for VoIP, like magicJack or Vbuzzer, where you pay pennies and they love your business).