A prototype radio is using new technology to gain access to large swathes of wireless spectrum that are not being used to their full potential. Built by Qualcomm, it is the latest attempt to communize the wireless spectrum used by billions of smartphones, tablets, and other devices.

On Monday, the company teased the new radio, which has been built to flip between licensed and unlicensed spectrum using a technique known as spectrum sharing. With it, the radio listens in on different frequency bands, finds the fastest ones, and lets smartphones and other devices pluck signals from them.

Efficiently sharing spectrum could help address the explosive demand for mobile data spurred by new technologies like virtual reality and sensor networks in factories. In the view of Qualcomm executives, it could also form a central part of the next generation of wireless technology, also known as 5G.

The 5G New Radio, as the prototype is called, searches up and down the wireless spectrum for potential openings. The radio can broadcast on frequency bands below 6 GHz, where most of today’s devices send communications, and then leap into higher bands in the millimeter-wave range.

The concept of sharing spectrum faces uncertain regulatory hurdles, though. Wireless carriers pay billions for exclusive rights to thin slices of radio spectrum, and many might not want to surrender those rights. The government, for its part, will likely have to keep a database of different spectrum bands, a kind of traffic report, that devices will review before picking a channel. The database would also keep track of which bands are off-limits or used by the military.
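The "traffic report" database described above could be modeled as a simple lookup that a device consults before picking a channel. The sketch below is purely illustrative: the band names, statuses, and API shape are assumptions, not any actual regulator or carrier interface.

```python
# Hypothetical sketch of a spectrum-availability database lookup.
# Band names, statuses, and the API shape are illustrative assumptions,
# not a real regulator or carrier interface.

SPECTRUM_DB = {
    "600MHz": {"status": "licensed",   "licensee": "CarrierA"},
    "3.5GHz": {"status": "shared",     "licensee": None},
    "5.8GHz": {"status": "unlicensed", "licensee": None},
    "28GHz":  {"status": "military",   "licensee": None},  # off-limits
}

def usable_bands(db):
    """Return the bands a device may consider before picking a channel."""
    return [band for band, info in db.items()
            if info["status"] in ("shared", "unlicensed")]

print(usable_bands(SPECTRUM_DB))  # ['3.5GHz', '5.8GHz']
```

In practice a real database would also be keyed by location and time, since a band that is free in one market may be licensed or reserved in another.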

Macom bid $770 million for Applied Micro in a deal that aims to quickly sell off Applied’s X-Gene ARM server SoC unit. The deal is a sign that big data centers continue to drive lucrative communications markets aggressively but are not poised to embrace ARM servers in the near future.

Macom believes it got Applied’s “gold nugget” of CMOS comms chips for a bargain at a 15.4% premium in a semiconductor merger frenzy that has seen premiums above 30%. In an indication of the pressure for Applied to cut its losses on its X-Gene ARM server SoC, the mixed cash and stock deal started at a 10% premium until Macom’s stock price rose.

With the march to the end of the year underway, the gigabit Internet craze has shown no sign of slowing. In November alone, there already has been an onslaught of announcements. Let’s take a look at a few of them.

Canadian operator Videotron has begun testing DOCSIS 3.1 in its network and is deploying DOCSIS 3.1 modems. Beta trials are underway at Montreal-area homes and businesses to assess users’ behavior and reactions, as well as the reliability and performance of the technology in a real-world environment.

Comcast (NASDAQ:CMCSA) has signed a seven-year bulk agreement to provide gigabit Internet service to residents in the Coda, a multiple dwelling unit in the Cherry Creek neighborhood in Denver.

Comcast also deployed DOCSIS 3.1 in Detroit and announced plans for 2017 rollouts in some California markets, as well as in Utah, Oregon and southwestern Washington. The company already offers its fiber-based symmetrical 2 Gbps Gigabit Pro in the last three areas.

Hawaiian Telcom has made gigabit Internet available to Hawai’i Island’s Pu’u Lani Ranch subdivision and the surrounding area.

AT&T recently launched its AT&T Fiber gigabit service.

Atlas Networks has deployed V-band wireless gigabit Internet access in Seattle. The service is based on Vubiq Networks’ HaulPass V60s Gigabit Ethernet wireless solution. This uses a 60 GHz V-band millimeter wave broadband wireless radio with an integrated two-port Gigabit Ethernet switch. The Atlas network reaches more than 200 buildings in Seattle.

After testing is complete, Videotron expects to roll out DOCSIS 3.1 quickly, since the technology is compatible with the operator’s upgraded network: only a new modem for the user and a software upgrade on the network will be needed.

DOCSIS 3.1, which has just begun to be deployed at scale, is designed to deliver up to 10 Gbps downstream Internet speeds over existing HFC networks. Most deployments to date have featured 1 Gbps speeds.

In the early days of the internet, communication was by email. Originally siloed by companies like CompuServe, AT&T and Sprint so that messages could only be exchanged with others on the same system, email is now ubiquitous. Pretty much anyone can communicate with anyone else without worrying about app or device or browser.

Today there are additional methods of communicating via the internet, such as chat and voice. These new methods, however, are currently similar to early email: siloed by different vendors so that users can communicate only with other users on the same system. Matrix.org aims to change this, so that any user on one system can communicate with any user on a different system; just like email today.

Matrix is an open standard for interoperable, decentralized, real-time communication over IP. It can be used for any type of IP communication: IM, VoIP, or IoT data.

To this end, Matrix has announced and launched the formal beta of the new Olm end-to-end encryption implementation across Web, iOS and Android. “With Matrix.org and Olm,” commented Hodgson, “we have created a universal end-to-end encrypted communication fabric — we really consider this a key step in the evolution of the Internet.”

Olm is the Matrix implementation of the Double Ratchet algorithm designed by Trevor Perrin and Moxie Marlinspike.
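The core idea behind Double Ratchet designs, deriving a fresh key for every message from a rolling chain key so that old keys cannot be recovered, can be sketched with a standard HMAC construction. This is a simplified illustration of the symmetric ratchet step only, not Olm's actual implementation.

```python
# Simplified illustration of the symmetric-key ratchet step used by
# Double Ratchet designs: each message key is derived from a chain key,
# and the chain key is advanced so earlier keys cannot be recomputed.
# This is NOT Olm's real implementation, just the core idea.
import hmac, hashlib

def ratchet_step(chain_key: bytes):
    """Derive (next_chain_key, message_key) from the current chain key."""
    next_chain = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    msg_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain, msg_key

ck = b"\x00" * 32  # stand-in for the shared secret from the initial handshake
for i in range(3):
    ck, mk = ratchet_step(ck)
    print(f"message {i} key: {mk.hex()[:16]}...")
```

Because HMAC is one-way, compromising the current chain key does not reveal previous message keys, which is the forward-secrecy property these protocols aim for. The full Double Ratchet additionally mixes in fresh Diffie-Hellman exchanges, which this sketch omits.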

A new industry-funded research study, titled “Broadband competition helps to lower prices and faster download speeds for U.S. residential consumers,” analyzed DSL, cable, and fiber broadband plans from the 100 largest designated market areas in the U.S. It found that when a city has gigabit internet speeds, the price of plans with slower speeds drops, so customers who don’t purchase gigabit internet plans still benefit from their availability.

- The presence of gigabit service in a market is associated with a $27 decrease in the average monthly price of broadband plans with speeds of 100Mbps or greater but less than 1Gbps. That’s a 25 percent price reduction.
- Markets with gigabit Internet also see smaller price decreases for plans as slow as 25Mbps.
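The two figures in the first bullet together imply an average baseline price, which a quick back-of-the-envelope calculation recovers:

```python
# Back-of-the-envelope check of the study's first finding:
# a $27 drop that equals a 25% reduction implies the baseline price.
drop = 27.0
fraction = 0.25
baseline = drop / fraction     # average monthly price without gigabit competition
with_gigabit = baseline - drop
print(baseline, with_gigabit)  # 108.0 81.0
```

That is, the 100Mbps-to-1Gbps tier averages roughly $108/month in markets without gigabit service and about $81/month where gigabit is present.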

One limitation in this study relates to the size of the markets analyzed. The average market studied has a population of 1.45 million people and 7.38 Internet providers. Obviously, there isn’t much overlap among wired ISPs, as cable companies in particular avoid each other’s territory. There might be seven providers across a sizable area, but any individual neighborhood is unlikely to have much, if any, broadband choice.

Separate research from the Federal Communications Commission found that most Americans have no choice when it comes to high-speed Internet providers at home. As of June 30, 2015, only 22 percent of developed census blocks had at least two ISPs offering plans at the FCC’s broadband threshold of 25Mbps downstream and 3Mbps upstream. There were zero such providers in 30 percent of census blocks and one provider in 48 percent of blocks. About 55 percent of census blocks had no 100Mbps/10Mbps providers, and only about 10 percent had multiple ISPs offering those speeds.
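The FCC's three competition buckets at the 25Mbps/3Mbps threshold should account for every developed census block, which a quick consistency check confirms:

```python
# Consistency check on the FCC competition figures: the three buckets
# (zero, one, two-or-more providers at 25/3 Mbps) should cover all
# developed census blocks.
shares_25_3 = {"zero_providers": 30, "one_provider": 48, "two_or_more": 22}
assert sum(shares_25_3.values()) == 100

no_choice = shares_25_3["zero_providers"] + shares_25_3["one_provider"]
print("blocks with zero or one provider:", no_choice, "%")  # 78 %
```

Put another way, 78 percent of developed census blocks had at most one provider meeting the broadband threshold, which is the basis for the "no choice" finding.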

MBBF2016 Mobile carriers should only adopt 5G “if we’re able to create new markets”, Craig Ehrlich of the Global TD-LTE Initiative (GTI) has warned the mobile network industry, adding: “If 5G does not focus on that then we’re buying a lot of equipment and [just] talking a lot of hype.”

Huawei, organiser of the Global Mobile Broadband Forum where Ehrlich was speaking, is an enthusiastic advocate of 5G. Hearing a speaker sound a warning over the blind adoption of 5G for its own sake was a refreshing novelty.

“We need to find a way for our industry, for operators, to have a bigger piece of the pie,” continued Ehrlich, lamenting how the mobile industry “has been just that pipe that we feared”.

This is a repeated theme at MBBF so far, with virtually everyone queueing up to announce how they’re going to lead traditional value-creating industries – manufacturing being a key sector – by the nose into this shiny new world of industrial sensors plastered over every available machine.

China Mobile’s chief exec, Li Yue, painted a rather rosier picture, telling the audience of his company’s 497 million subscribers and 1.4 million LTE mobile phone masts – “the biggest VoLTE network in the world” as he put it.

“Before 4G, mobile apps were not that popular,” Yue said. “In the past various apps were isolated from people’s lives. With 4G networks those mobile apps have become integral parts of our lives. What kind of things will 5G change?

“5G will change our society,” he continued, introducing another conference trope, the Internet of Everything. Asia’s mobile operators see a future where everything is connected as a matter of course – the Internet of Things (IoT) but applied to every single item we use or interact with in any way.

“When it comes to objectives, by 2020 it is the hope that we will double the total number of connections, compared to the number in 2013. That means we will have more than 1.75 billion connections,” said Yue.

“Only when there’s a great number of IoT devices and modules,” he continued, “will the threshold be very low for various industries to get into IoT markets. Through our efforts in devices, we want to work on the smart home, reduce cost and increase adoption of these technologies.”

The good news about the Internet of Things (IoT) is that it demonstrates just how pervasive high-speed communication technology has become. Addressing software issues within the IoT is pretty straightforward: create code that people can readily download to their hardware devices to maintain the operating integrity of their various communication devices.

Addressing hardware issues is not so simple. Even experienced hardware developers are challenged by these issues. Part of the problem is attributable to the nature of hardware technology itself. Printed circuit boards (PCBs) and the various other pieces of hardware associated with them have essentially “run out of gas”. As a result, wringing the last ounce of performance out of these devices often requires unprecedented and very creative engineering efforts.

The state of technology

At the start of the 21st century, providers of equipment for the Internet struggled to design large routers and switches containing backplanes and plug-in line cards with long internal connections running at 3.125 Gb/s. The primary concern was how to manage loss in those long paths.

Fast forward to 2016 and the picture has changed radically. Manufacturers of the semiconductors used in route processors and switch ICs have managed to engineer them so they operate at speeds as high as 32 Gb/s with a very high tolerance for loss along the signal paths. The ICs of 2001 could tolerate as little as 10 dB of loss in the signal path at 3.125 Gb/s. The ICs of 2016 can tolerate as much as 38 dB of loss at 32 Gb/s.
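The practical effect of a larger loss budget can be seen with a simple reach estimate: the distance a signal can travel is roughly the budget divided by the per-inch loss. The dB/inch figures below are illustrative assumptions only; real values depend on the laminate, trace geometry, and the signal's Nyquist frequency.

```python
# Rough reach estimate from a loss budget: reach = budget / loss_per_inch.
# The dB/inch figures are illustrative assumptions, not measured values.
def reach_inches(budget_db, loss_db_per_inch):
    return budget_db / loss_db_per_inch

# 2001-era: 10 dB budget, assuming ~0.25 dB/in at the 3.125 Gb/s Nyquist rate
print(reach_inches(10, 0.25))  # 40.0 inches
# 2016-era: 38 dB budget, assuming ~1.0 dB/in at the 32 Gb/s Nyquist rate
print(reach_inches(38, 1.0))   # 38.0 inches
```

Under these assumed numbers, the 38 dB budget roughly preserves the usable path length even though per-inch loss grows sharply with frequency, which is why a tenfold increase in data rate did not shrink backplanes tenfold.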

These changes have exposed a number of microdefects in the signal path that were of little consequence in previous products running at lower data rates. These microdefects include:

1. The parasitic capacitance of the plated through holes required to mount the connectors can introduce substantial bandwidth degradation.
2. Crosstalk between transmit and receive signals can be severe because those signals that tolerate 38 dB of loss at the receivers are far more susceptible to interference from a signal leaving a transmitter at full amplitude.
3. The difference in travel time of the two sides of a differential pair (skew) induced by the irregularities in the weave of the glass cloth required to provide mechanical strength in the PCB can cause a signal path to fail.
4. Signal loss along the data paths is still an issue but, in most cases, can be handled with the materials currently available used to fabricate PCBs and backplanes. However, as the shift is made to 56 Gb/s and higher, loss in the data path comes back into the equation as a major issue.
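The skew item (3) can be quantified: each half of a differential pair travels at a speed set by the local dielectric constant, so a Dk mismatch sustained over the routed length produces a time difference between the two halves. The Dk values and routing length below are illustrative assumptions.

```python
# Skew estimate for a differential pair whose two traces see different
# effective dielectric constants (e.g. one over a glass bundle, one over
# resin). Dk values and length are illustrative assumptions.
import math

C_IN_PER_NS = 11.8  # speed of light, roughly 11.8 inches per nanosecond

def skew_ps(length_in, dk_a, dk_b):
    """Skew in picoseconds between pair halves over the given length."""
    t_a = length_in * math.sqrt(dk_a) / C_IN_PER_NS
    t_b = length_in * math.sqrt(dk_b) / C_IN_PER_NS
    return abs(t_a - t_b) * 1000.0

skew = skew_ps(10.0, 3.2, 3.6)  # 10 inches, resin Dk ~3.2 vs glass Dk ~3.6
ui_ps = 1e6 / 32e3              # unit interval at 32 Gb/s = 31.25 ps
print(round(skew, 1), "ps of skew,", round(skew / ui_ps, 2), "UI")
```

Under these assumptions the skew is roughly 92 ps, almost three unit intervals at 32 Gb/s, which illustrates how weave-induced Dk variation alone can cause a path to fail.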

Solving the first three problems has met with varied success.

The first problem (excess capacitance in the plated through holes) has been dealt with by using a technique called back-drilling to remove the excess capacitance of the connector plated through holes that extend below the layer in which the signal traces are routed.

The second problem (excess crosstalk) has been dealt with by routing the signals farther and farther apart from each other so the coupling is minimized. However, when receive signals can be only 2 or 3% of the amplitude of transmit signals, this becomes mechanically very difficult to accomplish.
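The 2-3% figure follows directly from the loss budget, since a dB loss figure converts to a voltage-amplitude ratio as 10^(-loss/20). The specific loss values chosen below are just to illustrate the conversion:

```python
# Convert a dB loss budget into the fraction of launch amplitude that
# actually arrives at the receiver: fraction = 10**(-loss_db / 20).
def amplitude_fraction(loss_db):
    """Voltage amplitude remaining after a given dB of path loss."""
    return 10 ** (-loss_db / 20)

for loss in (30, 34, 38):
    print(f"{loss} dB -> {amplitude_fraction(loss) * 100:.1f}% of launch amplitude")
```

A path with roughly 30-34 dB of loss delivers 2-3% of the launch amplitude, which is why a full-amplitude transmit signal coupling into an adjacent receive trace is so damaging.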

The third of these (skew or difference in travel time in the two sides of a differential pair), is a result of the uneven distribution of the glass in the woven cloth and the resin used to bind the composite together. This unevenness is due to the fact that the glass bundles used to weave the cloth are much larger than the width of the traces.

Dealing with signal-path loss

As mentioned at the start of this article, advances in semiconductor technology have resulted in transceivers that can tolerate as much as 38 dB of loss in the signal path at 32 Gb/s. This has made it possible to design systems with large backplanes and plug-in modules. When the move to 56 Gb/s is made, however, the materials available as laminates no longer have loss values that allow the design of the very large routers required in server farms and large IT centers.

Notice that the two curves labeled “cable” have far lower loss than any of the laminate systems used to manufacture current products. This loss is representative of what twinax cable can achieve. This solves the problem of how to achieve 56 Gb/s in large systems without the need to resort to optical interconnects.

More reliable & economical than PCB laminates

Since the signal integrity problem at high data rates in large systems is directly traced to microdefects in the PCBs and connectors used to manufacture very large, high performance systems, removing these signals from those PCBs and backplanes can solve the problem. This is not a new idea.

Conclusion

Advances in semiconductor technology are making it possible to connect components in products such as switches and routers at rates as high as 56 Gb/s. As these higher speeds are achieved, micro-scale variations in the materials used to fabricate PCBs and backplanes can significantly degrade signals. Among the problems encountered are loss, skew, crosstalk, and degradation due to the parasitic capacitance of the plated-through holes required to mount the connectors to the backplanes and daughter cards.

By using twinax cables to make these connections instead of implementing them in PCBs and backplanes with traditional traces, skew, crosstalk, and degradation from the plated-through holes can be virtually eliminated. Due to the ultra-low loss of the twinax cables, path lengths can be longer, or the frequency of operation can extend much higher than is possible with the laminate systems currently available.

We’ll compare a few types of frequently used filters, and look at how to start with single-ended filter design and then transfer that to a differential filter design. We’ll also examine a few points on how to optimize differential-circuit PCB design.

The International Telecommunication Union (ITU) is concerned about the adequacy of spectrum for the tens of billions of devices expected to connect to the Internet of Things. A workshop that began yesterday in Geneva is considering the IoT spectrum issue more broadly.

The ITU points out that IoT networks will be introduced in various countries on both existing and new radio technologies. Some of them operate in fully licensed frequencies, others in license-free bands.

At last year’s WRC radio conference, the ITU already decided to examine the technical and operating conditions of different radio networks and systems in order to support the harmonized use of narrowband, short-range and long-range networked sensors.

The modern human’s worst nightmare: a power outage. Left without cat memes, Netflix, and — of course — Hackaday, there’s little to do except participate in the temporary anarchy that occurs when left without internet access. Lamenting over expensive and bulky uninterruptible power supplies, Youtube user [Gadget Addict] hacked together a UPS power bank that might just stave off the collapse of order in your household.

This simple and functional hack really amounts to snipping the end off of a USB power cable.

TeliaSonera announced that it will bring 5G network technology to Helsinki together with Nokia within the next two years. The two companies made the announcement today in connection with the Slush event.

The operator can move toward 5G flexibly via the 4.5G and 4.9G technologies offered by Nokia.

5G opens up new possibilities for the development of mobile services.

For consumers, the Sonera announcement does not mean that the operator’s subscribers will have gigabit connections at their disposal in 2018. 5G standardization will probably not be finished within two years, so any early deployment would be some kind of pre-standard tuning. In addition, no terminals yet exist for such networks.

Especially during the holidays, it is nice to find a cafe that offers a free wireless internet connection along with your cup of coffee. According to security company Kaspersky Lab, however, you may not want to join such a network: nearly one-third of these hotspots are completely unprotected and are just waiting for someone to steal your information.

Kaspersky Lab analyzed as many as 31 million free Wi-Fi base stations in different parts of the world. Of these, as many as 28 percent were classified as security risks. In practice, all data passing through these base stations (personal messages, passwords and documents) can be intercepted.

A quarter (25 percent) of the world’s Wi-Fi networks are not encrypted or protected by any kind of password. Three percent encrypt traffic with the WEP protocol, which can be broken in minutes using tools that can be downloaded for free online.
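Applied to the roughly 31 million hotspots analyzed, the reported shares translate into large absolute counts; the arithmetic below is purely illustrative, using integer math to avoid rounding artifacts:

```python
# Translate Kaspersky's reported percentages into absolute hotspot counts
# out of the ~31 million analyzed. Illustrative arithmetic only.
total = 31_000_000
for label, pct in (("risky", 28), ("unencrypted", 25), ("WEP-only", 3)):
    print(f"{label}: {total * pct // 100:,} hotspots")
```

That is on the order of 8.7 million risky hotspots worldwide, including roughly 930,000 still relying on broken WEP encryption.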

Finnish telecom operator AinaCom has begun selling wireless machine-to-machine subscriptions that operate all over the world. At its cheapest, machine IoT access is available for EUR 1.50 per month in Finland and EUR 3 per month abroad.

AinaCom’s machine subscriptions are suitable for control, management, monitoring and data collection, as well as for more general data communication solutions.

Customers can manage their subscriptions and monitor their traffic. Data packages of 10 and 50 megabytes are available.

As the quest for gigabit and faster Internet speeds ramps up, it’s becoming increasingly clear that there’s no “one true path.” Rather, it’s akin to all roads leading to Rome: One destination, multiple ways to get there.

Some of the more common options include fiber-to-the-home (FTTH), DOCSIS 3.0 and 3.1 over cable’s HFC plant, G.Fast over telco DSL networks, 5G cellular, and fiber-to-the-building coupled with point-to-point wireless. A report commissioned by Liberty Global (NASDAQ:LBTYA) indicates that achieving ubiquitous gigabit speeds will require the deployment of all of these, matching technologies to local conditions. While the Liberty report focuses on the European market, the findings are equally applicable here in the States.

Until recently, FTTH has been the dominant technology for gigabit, with numerous deployments by Google Fiber (NASDAQ:GOOG), telcos, municipalities and their local power companies, and some cable operators. Unfortunately, the technology is expensive and physically disruptive to deploy, particularly in heavily built-up areas such as city centers. Though there are some notable exceptions, most FTTH deployments to date have been in greenfield areas such as new residential subdivisions. Telco FTTH deployments have been numerous, but not ubiquitous; they tend to be relatively small cherry-picked areas rather than network-wide upgrades. Google Fiber has backed off from its FTTH strategy and is examining other options, including wireless.

Cable’s DOCSIS 3.0 and 3.1 are cheaper and less disruptive than FTTH in that they do not require a rip-and-replace of the existing outside plant. Gigabit and near-gigabit services based on DOCSIS 3.0 have been rolling out for a few years now, most notably in Suddenlink – now Altice (Euronext:ATC) – markets. Some other ops deploying D3 gigabit include Mediacom and Cable ONE (NYSE:CABO). DOCSIS 3.1 finally started ramping up earlier this year, with deployments from Comcast (NASDAQ:CMCSA), RCN, Atlantic Broadband and WOW!. Other ops are trialing D3.1, including Midco and Videotron in Canada. 2017 is expected to see DOCSIS 3.1 deployments at a large scale.

Like DOCSIS 3.1, G.Fast is just beginning to come online with a few deployments of ADTRAN (NASDAQ:ADTN) technology by telcos. The technology is somewhat limited by its relatively short range (typically 500 meters or less) and low (in the gigabit sense) throughput. Most deployments thus far have been to apartments and similar multiple dwelling units (MDUs).

5G cellular technology is still in development, and standards for it do not yet exist, though several companies and organizations are working on specifications for it. Early lab trials suggest 5G could support multi-gigabit speeds.

Another promising wireless technology for delivering gigabit speeds is point-to-point millimeter wave, which uses spectrum between 30 GHz and 300 GHz. Google Fiber is looking into this and recently bought Webpass, a fiber-based ISP that has been experimenting with the technology.

As seen at PC World: “A standard is just a definition of what a 5G system is supposed to do, it’s not an actual technical design for that system. While there’s a lot of testing and development underway already – and we could even see the odd piece of “pre-standard” 5G technology released here and there – 5G tech won’t take the consumer market by storm until 2020 at the earliest, and possibly not until 2023 or later.”

In September the IEEE ratified the 802.3bz specification, widely known as “2.5 and 5GBASE-T.” BZ’s primary value proposition is that the installed base of Category 5e and Category 6 will support 2.5- and 5-Gbit/sec operation. To that end, the bz standard references a TIA document, TSB-5021, titled Guidelines for the Use of Installed Cabling to Support 2.5GBASE-T and 5GBASE-T. As of late October TSB-5021 was in the standards-creation step known as default ballot.

In the meantime the NBASE-T Alliance, the prime mover of the 802.3bz specification, produced a technical paper titled NBASE-T Performance and Cabling Guidelines. It provides guidelines on how to evaluate the readiness of existing Category 5e, 6 and 6A copper cabling infrastructure for 2.5 and 5G. Specifically, the paper states in part, “Certification of category cabling requires measurements of ‘internal’ parameters such as insertion loss, return loss, and crosstalk.”

The paper introduces and describes alien limited signal to noise ratio (ALSNR), which is “a calculation that combines insertion loss, alien NEXT and alien FEXT to estimate the response of the PHY. This determines if the channel has adequate SNR for supporting the new data rates under worst-case conditions.”
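An ALSNR-style margin check can be sketched by attenuating the signal by the insertion loss and power-summing the alien crosstalk contributions. This is a hedged illustration of the general approach; the NBASE-T Alliance's exact formula, weighting, and limit values are not reproduced here, and the numbers used are assumptions.

```python
# Hedged sketch of an ALSNR-style margin check: attenuate the signal by
# insertion loss and power-sum alien NEXT/FEXT coupling to estimate SNR.
# The exact NBASE-T formula and limits are not reproduced; the values
# below are illustrative assumptions.
import math

def power_sum_db(levels_db):
    """Power-sum several contributions expressed in dB (relative to launch)."""
    total = sum(10 ** (lvl / 10) for lvl in levels_db)
    return 10 * math.log10(total)

def alsnr_db(insertion_loss_db, anext_db, afext_db):
    signal = -insertion_loss_db                   # received signal, dB rel. launch
    noise = power_sum_db([-anext_db, -afext_db])  # combined alien noise
    return signal - noise

margin = alsnr_db(insertion_loss_db=20.0, anext_db=55.0, afext_db=50.0)
print(round(margin, 1), "dB SNR under these assumed values")
```

The useful property of the power sum is that the worst coupler dominates: here the 50 dB alien FEXT path contributes most of the noise, so improving the best-isolated pairs barely moves the margin.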

This paper describes the evaluation of cabling infrastructure for network owners and designers looking to implement NBASE-T™ technology on existing cabling, as well as expected NBASE-T performance under worst case cabling configurations, and mitigation techniques to provide the best opportunity for cabling channels to support NBASE-T. This paper also outlines the current work developing measurement procedures to qualify installed cabling for NBASE-T support.

By joining the Open Network Operating System (ONOS) and Central Office Re-architected as a Datacenter (CORD) projects led by ON.Lab, Comcast diversifies ON.Lab’s open source communities, comprising service providers, vendors, individual contributors, and other collaborators, all working to redefine network access through SDN, NFV and cloud computing.

Altice USA (Euronext:ATC) announced plans to build a fiber-to-the-home (FTTH) network capable of delivering broadband speeds of up to 10 Gbps across its U.S. footprint, including its Optimum and Suddenlink markets. The MSO plans to extend fiber deeper into its existing hybrid fiber/coax (HFC) network and leverage proprietary technologies developed by Altice Labs, the company’s global research and development arm, to create the system, dubbed Generation GigaSpeed.

Altice says it’s the first major U.S. cable provider to announce an FTTH deployment across its entire footprint.

“Across the globe Altice has invested heavily in building state-of-the-art fiber-optic networks, and we are pleased to bring our expertise stateside to drive fiber deeper into our infrastructure for the benefit of our U.S. Optimum and Suddenlink customers.”

The five-year deployment plan is scheduled to begin in 2017, and the company expects to reach all of its Optimum footprint and most of its Suddenlink footprint during that timeframe. Initial rollout markets will be announced in the coming months. Altice expects to reinvest energy efficiency savings to support the buildout without a material change in its overall capital budget.

Since Altice USA’s inception with the acquisition of Suddenlink followed by Cablevision/Optimum, the company has been aggressive in rolling out enhanced services to its customers, tripling Internet speeds to up to 300 Mbps for residential customers and 350 Mbps for business customers in its Optimum footprint more than a year ahead of schedule.

It’s funny to think that the most interesting tech products of 2016 have been routers. The router has transformed from a utilitarian box with antennas sticking out of it to multiple sleekly designed pods placed throughout your home. This method of using more than one device as a wireless access point is known as a mesh system, and it promises to fix the dead-zone problems traditional routers often succumb to.

Eero is the most well-known mesh router system, though others have quickly hit the market, including efforts from Netgear and other startups. Now Google is getting into the mesh router game with the Google Wifi, a multi-point router system that shares more than a few similarities with Eero.

Plume is announcing today that its Adaptive WiFi system is now available for purchase, following pre-orders earlier this summer. Plume is a mesh-based home Wi-Fi system that uses compact “pods” to provide coverage in every room of your home. The system is managed by a backend system that monitors the network and adjusts it according to devices and load.

Like Eero and other mesh systems, Plume is meant to prevent signal drops and dead spots in your home. However, unlike Eero, which is meant to cover a home with a few nodes, Plume’s pods are designed to go in each room where you want internet access. Each pod has a single Ethernet port and plugs directly into a power outlet. The pods are then wirelessly linked together to provide continuous coverage throughout your home. The system updates its traffic management patterns periodically based on how you use the network and how much demand is on certain pods.

Verizon has finalized a deal to hand over control of 29 data centers in the US and Latin America to Equinix, in a deal that will net the telco $3.6bn.

The sell-off includes 24 customer-facing locations and is expected to close in 2017. After the hand-off, Verizon customers will be able to continue their managed hosting and cloud services, which are not part of the deal. Verizon will also continue to operate its data center locations outside of the US and Latin America.

“This transaction aligns with Verizon’s strategy to focus resources in areas that will help drive digital transformation for enterprise customers, while providing world-class service,” Verizon said in announcing the deal.

Big Switch Networks is taking aim at the kinds of IoT-based attacks that have rocked the Internet this year.

Headlining its BigSecure Architecture release today is a service chaining solution the company’s chief product officer Prashant Gandhi told Vulture South can scale up to deflect a terabit-scale attack in about ten minutes, but will also “give you the ability to survive for hours”.

For a purely volumetric attack, Gandhi said the software-defined networking (SDN) controller in the demilitarised zone (DMZ) can reconfigure the service chain “so the traffic is redirected to the [security] infrastructure for mitigation”.

The controller then uses flow-based policies and access control lists to tell switches to drop the attack traffic.
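The controller's role here can be sketched as generating match/drop entries and handing them to switches. The rule format below is a hypothetical model for illustration; it is not Big Switch's actual API or any specific SDN protocol's wire format.

```python
# Hypothetical sketch of an SDN controller building drop rules for attack
# traffic. The rule structure is an illustrative assumption, not Big
# Switch's actual API.
def make_drop_rule(src_cidr, dst_ip, priority=100):
    """A single flow entry: match attacker prefix -> victim, action drop."""
    return {
        "match": {"src": src_cidr, "dst": dst_ip},
        "action": "drop",
        "priority": priority,
    }

def build_acl(attack_sources, victim_ip):
    """One drop rule per attacking source prefix, pushed to every switch."""
    return [make_drop_rule(src, victim_ip) for src in attack_sources]

acl = build_acl(["203.0.113.0/24", "198.51.100.7/32"], "192.0.2.10")
print(len(acl), acl[0]["action"])  # 2 drop
```

For a Mirai-style attack the source list is huge, which is why the article's point about distributing mitigation across a pool of x86 servers matters: per-source rules alone do not scale, so the controller steers traffic to scrubbing capacity instead.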

However, as we’ve seen in the attacks against Dyn’s domain name services and Krebsonsecurity.com, Mirai-based botnet attacks may be volumetric but they’re coming from a host of different source IP addresses – all those compromised Internet of Things devices.

“You can leverage a pool of x86 services,” Gandhi said. “The virtual machines can be scaled out, and the SDN allows the traffic to be distributed across the servers.”

Putting the defences in software on a bunch of x86 servers isn’t expensive, making it affordable to activate the defences only when they’re needed.

That’s where the fast response comes from, Gandhi said: it should be possible to activate, program, and validate the infrastructure within ten minutes or so when an attack is detected.

T-Mobile CFO Braxton Carter spoke at the UBS Global Media and Communications Conference in New York City, and he touched a bit on President-elect Donald Trump and what his election could mean for the mobile industry.

Carter expects that a Trump presidency will foster an environment that’ll be more positive for wireless. “It’s hard to imagine, with the way the election turned out, that we’re not going to have an environment, from several aspects, that is not going to be more positive for my industry,” the CFO said.

He went on to explain that there will likely be less regulation, something that he feels “destroys innovation and value creation.”

Speaking of innovation, Carter also feels that a reversal of net neutrality and the FCC’s Open Internet rules would be good for innovation in the industry, saying that it “would provide opportunity for significant innovation and differentiation” and that it’d enable you to “do some very interesting things.”

The T-Mobile CFO touched on consolidation, too. T-Mobile is regularly named as an acquisition target, and with the incoming Trump administration, some have suggested that a T-Mobile merger is more likely than it has been in years past.

T-Mobile US CFO Braxton Carter cheered the incoming administration of President-elect Donald Trump, arguing that less regulation—including the dismantling of the FCC’s net neutrality rules—and less onerous corporate taxes would be “positive for my industry.”

However, Carter declined to address a potential merger between Sprint and T-Mobile, a transaction that industry observers have speculated may be possible under a Trump White House.

Christmas is coming, and the airwaves are filled with holiday commercials, tempting people to buy more, spend more for the holidays. This year, one of the advertisements shows gatherings of people, around the decorated tree, for example, wearing virtual reality (VR) devices and exclaiming with glee about whatever it is they are experiencing.

Today, to be truly immersed in an alternate environment one has to be tethered to a powerful computer, Blair said. “What we truly want is to be able to go anywhere and take the capability in the mobile sense. 5G will be really important to take VR to the next stage.”

To make for a really great VR experience, the viewer must not be able to see individual pixels, and the images have to be refreshed rapidly. The human brain is confused when it expects an image to move in a certain way and it does not.

“This has got to have a lot of processing power and feel like a real-time environment,” Blair said.

For a near-pixel-less picture, the headsets need UltraHD feeds, and 16 to 24 image feeds must be delivered to convey a sense of participation. Consider also that the VR application likely won’t be the only one running in a house with multiple connected devices.

“This is going to blow up our bandwidth capacity demand,” Blair said.
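Some back-of-the-envelope arithmetic gives a sense of that demand. The 25 Mbps per-UltraHD-feed bitrate below is an assumed figure for illustration, not a number from the article:

```python
# Rough aggregate bandwidth for a multi-feed UltraHD VR session.
# PER_FEED_MBPS is an assumed compressed-stream bitrate.
PER_FEED_MBPS = 25
for feeds in (16, 24):
    total = feeds * PER_FEED_MBPS
    print(f"{feeds} feeds -> {total} Mbps aggregate")
# 16 feeds -> 400 Mbps aggregate
# 24 feeds -> 600 Mbps aggregate
```

Even before adding the rest of the household’s devices, that is several times a typical broadband connection today.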

For their part, cable operators will need to be able to support the latency and delay requirements necessary for real-time interaction, as in a gaming application. The Internet was designed to support connectivity, and it is now evolving into a consumption tool, with projections indicating that 90% of traffic will be video or video-formatted by 2020, Blair said.

The University of California, San Diego, working with Keysight, has demonstrated the world’s longest radio link in the 60 GHz frequency range. The system consists of 32-element antenna arrays and software to steer the beam pattern. The technology can be used in future high-speed 5G and radar systems.

Over a 300-meter link, the data rate was two gigabits per second across all ±45-degree beam orientations. Over a 100-meter link the data rate was 4 gigabits per second, while an 800-meter link still delivered 500 megabits per second.

Keysight’s measurement equipment and software enabled rapid implementation of the prototype system. The system’s characteristics were measured using a Keysight M8195A arbitrary waveform generator, an E8267D vector signal generator, and a DSOS804A oscilloscope.

Mobile base station sales totaled $10 billion in the third quarter. IHS notes that the market shrank by 11 percent from a year ago. Nokia is the third-largest manufacturer overall, but the market leader in LTE base stations.

The downward trend in the figures indicates that the peak of LTE network construction is behind us. The next big boost may be a long wait, as 5G deliveries will not begin for another four to five years. Of course, operators still have to continuously improve the capacity of their networks.

According to IHS, Huawei became the largest base station supplier in the third quarter. Ericsson is now a close second, with Nokia breathing down its neck; Nokia has the potential to become the market’s number two. In LTE base stations, Nokia is the leader with a market share of 34 percent.

Base station sales are shifting increasingly from traditional iron to software. According to IHS, nearly $25 billion of LTE base station equipment was sold last year, against about $15 billion of software. By 2020, those shares will have swapped.

In the future, the mobile network will be a kind of hybrid solution, where part of the computation is done in the cloud and latency-critical functions run closer to users at the edge of the network. Nokia, working together with Vodafone, has now tested such a cloud-based radio network solution.

The test was carried out at Vodafone’s test center in Italy. The network access point ran on Nokia’s AirFrame server, with baseband processing split into real-time and non-real-time functions.

This split means that many functions can be processed at the edge of the radio network. In practice, it requires an Ethernet-based connection between the NFV server and the radio network.

The Nokia AirScale Cloud Base Station Server is, in practice, a virtualized LTE base station. Some of its functions can be processed on the AirFrame server.

Folks using Windows 10 and 8 on BT and Plusnet networks in the UK are being kicked offline by a mysterious software bug.

Computers running the Microsoft operating systems are losing network connectivity due to what appears to be a problem with DHCP. Specifically, it seems some Windows 10 and 8 boxes can no longer reliably obtain LAN-side IP addresses and DNS server settings from their BT and Plusnet broadband routers, preventing them from reaching the internet and other devices on their networks.

The cause of the bug is so far unclear, although Plusnet has blamed an unspecified “third-party update.”

The United States International Trade Commission issued a new ruling (PDF) last Friday in the patent litigation between Cisco and Arista, finding that the latter company is in violation of two Switchzilla patents.

The two patents are U.S. Patent 6,377,577 (“Access Control List Processing In Hardware”) and U.S. Patent 7,224,668 (“Control Plane Security and Traffic Flow Management”).

Distributed denial-of-service (DDoS) made lots of headlines in late October when a massive DDoS attack on Domain Name System (DNS) service provider Dyn temporarily disrupted some of the most popular sites on the internet.

DDoS attacks are clearly on the rise. A report by content delivery network provider Akamai earlier this year said such incidents are increasing in number, severity and duration. It noted a 125 percent increase in DDoS attacks year over year and a 35 percent jump in the average attack duration.

When the Software Engineering Institute (SEI) at Carnegie Mellon University recently posted a blog titled, “Distributed Denial of Service Attacks: Four Best Practices for Prevention and Response,” it became SEI’s most visited post of the year after only two days, according to a spokesman for the institute.

Architecture. To fortify resources against a DDoS attack, it is important to make the architecture as resilient as possible.

The following steps will help disperse organizational assets so as to avoid presenting a single rich target to an attacker:

Locate servers in different data centers.
Ensure that data centers are located on different networks.
Ensure that data centers have diverse paths.
Ensure that the data centers, or the networks that the data centers are connected to, have no notable bottlenecks or single points of failure.
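A minimal sketch of auditing that guidance, assuming a simple inventory that maps each server to its data center, network, and path (all names below are hypothetical):

```python
# Flag any placement dimension where all assets share a single value --
# a potential single point of failure under the dispersal guidance above.
def single_points_of_failure(placements):
    dims = ("data_center", "network", "path")
    findings = []
    for i, dim in enumerate(dims):
        values = {p[i] for p in placements.values()}
        if len(values) < 2:
            findings.append(dim)
    return findings

placements = {
    "web-1": ("dc-east", "as64500", "path-a"),
    "web-2": ("dc-west", "as64500", "path-b"),
}
print(single_points_of_failure(placements))  # ['network'] -- shared AS
```

Here both servers sit behind the same network, so the audit flags that dimension even though the data centers and paths are diverse.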

Hardware. Deploy appropriate hardware that can handle known attack types and use the options that are in the hardware that would protect network resources. Again, while bolstering resources will not prevent a DDoS attack from happening, doing so will lessen the impact of an attack.
In particular, certain types of DDoS attacks have been in existence for quite some time, and a lot of network and security hardware is capable of mitigating them. For example, many commercially available network firewalls, web application firewalls, and load balancers can defend against layer 4 attacks.
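One common layer-4 mitigation such devices implement is per-source rate limiting. A toy token-bucket version, with arbitrary parameters chosen purely for illustration, might look like:

```python
# Per-source token-bucket rate limiter: each client IP gets a bucket that
# refills at `rate` tokens/sec up to `burst`; each connection attempt
# spends one token, and attempts with an empty bucket are dropped.
class RateLimiter:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.buckets = {}  # ip -> (tokens, last_timestamp)

    def allow(self, ip, now):
        tokens, last = self.buckets.get(ip, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[ip] = (tokens - 1.0, now)
            return True
        self.buckets[ip] = (tokens, now)
        return False

rl = RateLimiter(rate=2.0, burst=5.0)  # 2 conn/s sustained, burst of 5
# A flood of 10 attempts within 0.1 s: only the burst gets through.
decisions = [rl.allow("203.0.113.7", t * 0.01) for t in range(10)]
print(decisions.count(True), "allowed of", len(decisions))  # 5 allowed of 10
```

Against Mirai-style attacks from many source IPs this alone is insufficient, which is why the scale-out approaches described earlier matter.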

Bandwidth. If affordable, scale up network bandwidth. For volumetric attacks, the solution some organizations have adopted is simply to scale bandwidth up to be able to absorb a large volume of traffic if necessary.

Outsourcing. There are several large providers that specialize in scaling infrastructure to respond to attacks. These providers can implement cloud scrubbing services for attack traffic to remove the majority of the problematic traffic before it ever hits a victim’s network.
An ISP can offer DDoS mitigation services that will help organizations respond in the wake of an attack.

Setting the stage for new leadership at the Federal Communications Commission, the Senate on Friday failed to reconfirm Democratic Commissioner Jessica Rosenworcel.

That means when (or if) Chairman Tom Wheeler, the current head of the FCC, steps down, Republicans will hold a majority. And their first order of business will likely be to reverse the historic network neutrality rules that were finalized in 2015.

The FCC is tasked with regulating wireless carriers, cable, radio and television broadcast, and internet infrastructure.

No one should have to fear losing their internet connection because of unfounded accusations. But some rights holders want to use copyright law to force your Internet service provider (ISP) to cut off your access whenever they say so, and in a case the Washington Post called “the copyright case that should worry all Internet providers,” they’re hoping the courts will help them.

Will Internet providers have to start cracking down harder on their own customers for suspected copyright infringement?

That’s one of the big questions being raised in the wake of an obscure court ruling that finds that Cox Communications is liable for the illegal music and movie downloads of its subscribers.

Earlier this week, a federal judge said Cox Communications will have to pay a $25 million penalty that a jury had awarded in December to BMG, the music rights company.

BMG had been using a third-party company called Rightscorp to monitor the Internet for filesharing activity and notify Internet providers when it found evidence of it. The expectation was that Cox would pass along Rightscorp’s notices to consumers.

The finding that Cox is liable for its customers’ piracy should absolutely worry other Internet providers, according to legal analysts at the consumer group Public Knowledge. The precedent raises fresh questions about what else Internet providers may be liable for beyond copyright, for example, and what the risk of litigation could mean for their ability to grow and provide reliable service to their subscribers. It may also lead to greater monitoring and control of individual customers.

From our R&D lab in Ottawa, Ciena’s Patrick Scully demonstrates how simple it is to steal massive amounts of data by quickly and easily tapping a fiber-optic cable, and explains how optical encryption can be used to protect against this threat, ensuring the security of all in-flight data.

Sonera, together with Nokia, has tested future data rates on its LTE network. The tested rates were about 700 megabits per second from the network to the terminal and 150 megabits per second from the terminal to the network. The tests used carrier aggregation and other new techniques.

TeliaSonera plans to crank up the speed of its 4G network. In the tests, the downlink reached 640 megabits per second and the uplink 131 megabits per second. Making use of the new top speeds requires a Cat 12-class terminal, which is already available on the market to some extent.

In follow-up testing, even higher speeds can be achieved by optimizing network settings. Currently, Sonera’s 4G network offers maximum speeds of 375 Mbit/s to the user and 50 Mbit/s from the user back to the network.

“Increasing 4G network speeds is one part of building 5G readiness.”

TeliaSonera will launch its first 5G services in Helsinki together with Nokia in 2018. Telia’s president and CEO Johan Dennelid spoke about the new network project today at the Slush investor event.

Providing new 5G services will bring new opportunities for the development of mobile services in the Helsinki region. High-speed mobile broadband, significantly lower latency than today’s networks, and support for IoT devices are likely to interest developers of new services and solutions.

“We want to catalyze change and to be at the forefront of the industry. We will ensure, together with Nokia, that Sonera customers have access to the best networks and that we are able to bring 5G to our partners in time for the development of future services.”

Semiconductor engineering teams have been collaborating with key players in the data center ecosystem in recent years, resulting in unforeseen and substantial changes in how data centers are architected and built. That includes everything from which boxes, boards, cards and cables go where, to how much it costs to run them.

The result is that bedrock communication technology and standards like serializer/deserializer (SerDes) and Ethernet are getting renewed attention. Technology that has been taken for granted is being improved, refined, and updated on a grand scale.

Some of this is being spurred by the demands and deep pockets of Facebook, Google, and their peers, with their billions of server hits per hour.

“There has been a relentless progression with performance and power scaling to the point where computation almost looks like an infinite resource these days,” said Steven Woo, distinguished inventor and vice president of enterprise solutions technology at Rambus. “And there is a lot more data. You need to drive decisions on what you put, where, based on that data.”

In the context of today’s cutting-edge IEEE 802.3by standard, which uses 25-Gbps lanes of the kind that achieve 100 Gigabit throughput speeds, this is one place where chipmakers get involved.
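The lane arithmetic behind those numbers is simple: 100 Gigabit Ethernet aggregates four 25-Gbps lanes, and each lane’s line rate carries 64b/66b encoding overhead on top of the data rate:

```python
# 25G Ethernet lane arithmetic: data rate, encoded line rate, and the
# four-lane aggregation used for 100G.
DATA_GBPS = 25.0
LINE_RATE_GBD = DATA_GBPS * 66 / 64   # 64b/66b encoding overhead
print(round(LINE_RATE_GBD, 5))        # 25.78125  (GBd per lane)
print(int(4 * DATA_GBPS))             # 100       (Gbps from four lanes)
```

Reusing the per-lane signaling rate proven in 100G systems is what makes a single-lane 25G standard attractive for SerDes designers.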

“A lot of these are concepts and waves of thinking in data flow architectures of the 1980s, and they’re making their way back,” said Woo. “But they’re very different now. Technologies have improved relative to each other and the ratios against each other are all different. Basically, what you’re doing is taking the data flow perspective and optimizing everything.”

Minor considerations, big impact
Optimizing everything is how Marvell Semiconductor sees it, as well. Marvell continues to churn out Ethernet switch and PHY silicon, but performance demands are rising—and the payoff for meeting those demands is greater. The cabling between the top-of-rack Ethernet switches and the array of servers beneath them may seem like a minor consideration, but it has a big impact on data center design, cost and operation. The best SerDes enable 25Gbps throughput, but they also have long-reach capability that allows for ‘direct attach’ without supplemental power.

This potential brought together a worldwide “meeting of the minds” among power users like Google, the rest of the industry, and IEEE to have a 25Gbps standard, and not go directly from 10Gbps to 40Gbps. Not only is power supply removed within the rack, but equally as important, the backplane can be copper, not fiber.

Engineering teams are working overtime to develop 802.3by-capable silicon and systems in light of all of this.

“We also just introduced something called ‘link training’ where you are decoding a communications link between Ethernet transceivers and replicating that link between 10Gbps and 25Gbps.”

Marvell uses ARM cores in many of its switch families, which helps keep the silicon power consumption low. ARM has spent decades perfecting that.

“The CPU must use DDR,” said Amit Avivi, senior product line manager at Marvell. “But the switch-level bandwidth is way too high to use DDR. Advanced switch (silicon) within the switch (device) optimizes the traffic to minimize the memory needs. There is lots of prioritization, and there are lots of handshakes to optimize that traffic.”

Michael Weissenstein / Associated Press:
Google to install servers in Cuba to host its most popular content in hopes of speeding up load times of its sites by up to 10x.

Google and the Cuban government signed a deal Monday allowing the internet giant to provide faster access to its data by installing servers on the island that will store much of the company’s most popular content.

Storing Google data in Cuba eliminates the long distances that signals must travel from the island through Venezuela to the nearest Google server. More than a half century after cutting virtually all economic ties with Cuba, the U.S. has no direct data link to the island.

The deal removes one of the many obstacles to a normal internet in Cuba, which suffers from some of the world’s most limited and expensive access. Home connections remain illegal for most Cubans and the government charges the equivalent of a month’s average salary for 10 hours of access to public WiFi spots with speeds frequently too slow to download files or watch streaming video.

The agreement does not affect Cuba’s antiquated communications infrastructure or broaden public access to the internet.

The number of connected devices per household is higher than ever before. Recent forecasts estimate that more than 1 billion new Internet users are expected to join the global Internet community in the near future, growing from 3 billion in 2015 to 4.1 billion by 2020. Meanwhile, global IP networks will grow by an additional 10 billion new devices and connections in that period.

For communication service providers, this presents a conundrum — subscribers’ increasing appetite for IP-based applications is driving bandwidth usage and causing network congestion, which leads to poor quality of experience (QoE) and/or expensive network upgrades. But could such changing subscriber habits also offer service providers an opportunity?

The very devices and applications favored by subscribers today are also a goldmine of data. This information could be your company’s most valuable asset — but few providers are maximizing this resource to its full benefit. Focusing on data monetization while still respecting subscriber privacy is possible — and necessary — to stay afloat in an increasingly competitive market.

As cable operators’ business services arms court larger enterprise customers, SD-WAN is emerging as an increasingly important tool.

Network functions virtualization (NFV) and software defined networking (SDN) hold promise for helping operators offer agility and dynamic change for a variety of services and solutions. Adding to the mounting list of possibilities, software defined wide area networking (SD-WAN) is a way to provide a virtual private network (VPN) over broadband networks instead of using dedicated multiprotocol label switching (MPLS) service to provide WAN optimization.

While SD-WAN is well-suited for small businesses that can’t afford the cost of a dedicated MPLS, SD-WAN also shows potential as a managed service for larger enterprises, said Kevin Wade, senior director and product marketing team leader, Ciena Blue Planet (NYSE:CIEN). A promising scenario is for an operator to offer a dedicated WAN augmented by SD connectivity.

“The enterprise gets the benefit of agility and policy-based routing of their applications or flows … (as well as) the prioritization of mission critical applications and the benefit of SD-WAN without having to own or manage appliances,” Wade said.

A policy manager allows the enterprise to specify that all telepresence content, for example, should use the MPLS because of the guaranteed latency, while large file transfers should use the Internet-based VPN.
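A hypothetical sketch of that policy decision (the application classes and path names below are invented for illustration, not Ciena’s actual configuration model):

```python
# Policy table: latency-sensitive application classes ride the MPLS path
# with guaranteed latency; bulk traffic rides the Internet-based VPN.
POLICIES = {
    "telepresence": "mpls",
    "file_transfer": "internet_vpn",
}

def select_path(app, default="internet_vpn"):
    # Unclassified applications fall back to the cheaper broadband path.
    return POLICIES.get(app, default)

print(select_path("telepresence"))   # mpls
print(select_path("file_transfer"))  # internet_vpn
print(select_path("web_browsing"))   # internet_vpn
```

The real value of SD-WAN is that this table is managed centrally and pushed to edge devices, rather than configured per router.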

Australia’s communications minister Mitch Fifield has put a price on Australian Reg readers’ heads: a lousy dollar and twenty five cents.

That’s the price would-be-bidders will need as table stakes in the forthcoming auction for Australia’s 700 MHz spectrum, which has set that sum as the per-MHz, per-head price for the slice of the skies suited to use by 4G networks.
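To see what that reserve price implies, multiply price × bandwidth × population. The population figure below (roughly 24 million Australians) and the lot sizes are assumptions for illustration:

```python
# Reserve price implied by A$1.25 per MHz, per head of population.
PRICE_PER_MHZ_POP = 1.25
POPULATION = 24_000_000  # assumed ~2016 Australian population
for mhz in (10, 30, 60):
    reserve = PRICE_PER_MHZ_POP * mhz * POPULATION
    print(f"{mhz} MHz -> A${reserve / 1e9:.2f}bn")
# 10 MHz -> A$0.30bn
# 30 MHz -> A$0.90bn
# 60 MHz -> A$1.80bn
```

So even a modest slice of 700 MHz spectrum carries a reserve in the hundreds of millions of dollars.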

The Internet of Things has been touted as a business worth hundreds of billions, but it is not there yet. According to the research firm Berg Insight, operators will book a total of EUR 11 billion in IoT revenue this year, so the truly astronomical sums are still some way off.

2016 is also practically the first year in which operators have begun to report IoT revenue. For example, Vodafone and Verizon each recorded IoT net sales of approximately EUR 200 million in the third quarter.

Berg Insight estimates that there will be half a billion IoT devices connected to mobile networks next year. This year, one IoT node produces an average of EUR 1.40 per month for the operator.
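Annualizing those two figures gives a rough sense of scale (simple arithmetic on the numbers above; it ignores growth during the year):

```python
# Berg Insight's figures above, annualized: half a billion cellular IoT
# devices at EUR 1.40 per node per month.
DEVICES = 500_000_000
ARPU_EUR_MONTH = 1.40
annual = DEVICES * ARPU_EUR_MONTH * 12
print(f"EUR {annual / 1e9:.1f} billion per year")  # EUR 8.4 billion per year
```

That connectivity revenue sits in the same ballpark as the EUR 11 billion total, underlining how small per-node revenues currently are.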

The increasing complexity and scalability found in the next generation of routers and switches has put pressure on power supply manufacturers to improve efficiency, reduce solution size, and provide flexible solutions that can be scaled across multiple platforms. System designers will frequently have several variations of a base architecture, allowing them to offer high-, medium- and low-end systems, each with different feature sets. Examples of device types that can be added, removed or sized according to system needs are: content-addressable memory (CAM), ternary content-addressable memory (TCAM), application-specific integrated circuits (ASIC), full custom silicon and field-programmable gate arrays (FPGA).

Due to its parallel nature, CAM is much faster than RAM. However, it consumes much more power and generates a higher level of heat. CAMs are expensive, so they are not normally found in PCs. Even router vendors will sometimes skimp, opting instead to implement advanced software-based searching algorithms.

CAMs are found in network processing devices, including Intel IXP cards and various routers or switches. The most commonly implemented CAMs are called binary CAMs. They search only for ones and zeros. You can be assured that any switch capable of forwarding Ethernet frames at gigabit line-speed is using CAMs for lookups.

A TCAM is a specialized type of high-speed memory that searches its entire contents in a single clock cycle. The term “ternary” refers to the memory’s ability to store and query data using three different inputs: 0, 1 and X. The “X” input, which is often referred to as a “don’t care” or “wildcard” state, enables TCAM to perform broader searches based on pattern matching, as opposed to binary CAM, which performs exact-match searches using only 0s and 1s. Routers can store their entire routing table in these TCAMs, allowing for very quick lookups.
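A toy software model of that ternary match may make it concrete. Real TCAMs compare every entry in parallel in a single clock; the priority-ordered loop below only emulates the first-match behavior:

```python
# Each TCAM entry is a pattern over '0', '1' and 'X' (don't care).
# Entries are stored in priority order; the first match wins, which is
# how longest-prefix routes are made to beat shorter ones.
def tcam_match(key, entries):
    for pattern, result in entries:
        if all(p in ("X", k) for p, k in zip(pattern, key)):
            return result
    return None

# 8-bit toy "routing table": more specific patterns first.
table = [
    ("1100XXXX", "port-3"),
    ("11XXXXXX", "port-2"),
    ("XXXXXXXX", "default"),
]
print(tcam_match("11001010", table))  # port-3
print(tcam_match("11110000", table))  # port-2
print(tcam_match("00001111", table))  # default
```

The hardware wins because all three comparisons above (and thousands more) happen simultaneously, at the cost of power and die area.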

An FPGA is yet another device used in routers and switches, and is an integrated circuit that can be programmed. FPGAs are used in the design of specialized systems and allow users to tailor microprocessors to meet their own individual needs.

Scalability

The number of CAMs and TCAMs allocated to a particular router depends on how the networking company positions its offering of low-, medium- or high-end routers. The more expensive routers will normally have sufficient CAMs and TCAMs to support the highest speeds, fastest lookups and highest throughputs. However, some customers won’t want to purchase a high-end router unless they can justify the added cost.

Most everyone has heard of the Internet of Things (IoT), where connectivity to people and things comes from Earth-bound wired and wireless networks. But fewer technologists know about the evolving Internet of Space (IoS), where connectivity comes from space-based satellites and—in the near future—lower altitude airborne platforms based on drones and even balloons. This article will look at the controversy and challenges surrounding the IoS, from the phrase itself to the technical RF and microwave issues, the business model viability, and finally the competition with terrestrial 5G and LTE networks.

Conversely, microwave and RF engineers would probably recognize the Internet of Space as a reference to satellite-based technology. For the last several decades, many wireless devices have been designed to service communication satellite Ka- and Ku-bands.

In 2016, the IEEE MTT Society initiated a major discussion of the IoS during the International Microwave Symposium in San Francisco. The essence of this discussion among the major satellite and RF-microwave vendors and investors will be highlighted shortly.

IoT vs. IoS

Today’s Internet of Things (IoT) is connected via wired and wireless Earth-based networks. But with the insatiable hunger for ever-higher data bandwidths, congestion and slower data rates are becoming common. Adding to these network stresses are the cost-viability challenges of connecting remote and underserved regions of the world to the internet.

These factors have led to recent investments in space-based internet platforms. These orbital, high-data-rate communication networks are being collectively referred to as the Internet of Space (IoS) or sometimes satellite-based IoT (S-IoT). For now, satellites serve mainly as backhaul for hard-to-reach geographic locations and sparsely populated areas.

The Internet of Space comprises a variety of satellites in different orbits and, potentially, lower-altitude airborne platforms. Geostationary orbit (GEO) and low Earth orbit (LEO) are the two traditional orbits for the IoS.

Low Earth orbits are preferred when low latency is needed. Low orbits also reduce power-amplifier requirements and antenna sizes, among other things. However, global coverage from low Earth orbit requires large numbers of satellites to ensure continuous data links.
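The latency point is straight propagation-delay arithmetic. Using typical altitudes (exact values vary by constellation):

```python
# One-way, straight-up propagation delay from typical orbital altitudes.
C_KM_S = 299_792.458  # speed of light in km/s
for name, altitude_km in (("LEO", 1_200), ("GEO", 35_786)):
    one_way_ms = altitude_km / C_KM_S * 1e3
    print(f"{name}: {one_way_ms:.1f} ms one-way")
# LEO: 4.0 ms one-way
# GEO: 119.4 ms one-way
```

A GEO round trip through a ground station therefore approaches half a second, which is why low-latency services push toward LEO despite the larger constellations required.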

Backhaul and Beyond

From the IoT perspective, the IoS is seen as an evolving communication network. In particular, the IoS can provide critical backhaul for remote locations devoid of cellular or wireless LANs.

However, not everyone agrees. Maurizio Brignoli, co‐founder and CTO of Avanix srl, a wireless technology company, doesn’t see satellites as yet being cost-effective with 5G and LTE cellular. “SIGFOX and LoRa will be very competitive compared to satellites or low-altitude airborne alternatives like balloons,” he says. SigFox and LoRa are wireless carrier platforms in the terrestrial-based low-power wide-area network (LPWAN) market.

The overwhelming value of IoT is in enabling connectivity to “things,” whereas the overwhelming value of IoS is as a ubiquitous access point to the internet, explains Wallace. While there will be many alternate ways for the IoT to connect to the internet infrastructure, IoS will offer unique benefits given its vantage point hundreds of miles above Earth.

Technical Challenges Abound

Unlike ground-based systems, the Internet of Space relies on satellites as the primary communication mechanism. To compete with terrestrial systems, satellite designers like OneWeb must keep the costs low. The OneWeb satellite constellation is a proposed collection of roughly 648 satellites that are expected to provide global internet broadband service to individual consumers as early as 2019.

As an example of an extremely low-power and efficient system, Bettinger mentioned a 10-W, Ku-band Scanning Spot Beam Antenna (SSBA) device. Several scannable spot beams or a large number of multispot beams are required to provide ultra-high data-rate transmission while simultaneously ensuring wide coverage.

“You get a lot of cost savings with high-rate production for space technology,”

Enabling IoS

Whether satellite-based IOT is used as backhaul or directly connected to end-user terminals (like satellite TV) or handsets, providing internet globally to all users will require a mix of technologies. Most will rely heavily on RF and microwave connectivity, especially in sparsely populated rural and underdeveloped countries.

To meet this capacity challenge, the best of each connectivity technology should be selected: space satellites (mainly GEO and LEO), High Altitude Platforms (e.g., drones and balloons), and existing terrestrial networks (like cellular and fiber optics). The same or similar RF and microwave technologies will be used in all of these implementations, so the main differentiator is cost.

A business case will have to be made to determine which connectivity option is best for a given application in a given geographic region.