Posted by timothy on Thursday August 04, 2011 @05:16PM from the injecting-signal-into-the-noise dept.

holy_calamity writes "Inventor of the Quicktime codec Steve Perlman has unveiled a new wireless technology he claims can deliver thousands of times more bandwidth to mobile devices than existing technology. Each user is served by multiple transmitters, which send out waves carefully designed to combine into a data signal only at a device's location. That technique enables every user to be targeted with a signal with the same total bandwidth that would usually be shared between users, says Perlman."

No, but the fixed costs of the infrastructure eventually end up being far exceeded by the revenue coming in, which is his point. Or do you care to point out how 1GB of "overage" data somehow costs AT&T 10 times more than a "regular" GB of data?

Given that no amount of bandwidth is truly "boundless" or "infinite", some joker will come up with some way to saturate the line 24/7.

Nope, not buying it. The demand for bandwidth is not infinite. Just look at Ethernet in a typical business network with a normal file server, traffic to the internet, etc. There is just no scarcity there anymore: even if you have a "fast" file server, the bottleneck is the disk array, not the 10GbE uplink to the switch. Nothing normal people do even remotely taxes a network with gigabit to the users and 10GbE to the servers.

The fact is that consumption does not increase linearly with capacity because the mo

AT&T is nowhere near saturating their lines, nor are any of the other major carriers. They cried wolf in Canada, and Canada made them actually reveal the data; nowhere in their network were they anywhere near 50% saturation at peak.

Why is it that so many /. posts demand socialistic freebies but expect a capitalist paycheck?

Because capitalism assumes low barriers of entry to a market (so that if AT&T charges too much, new startups pop up to eat their market share), while the phone industry has extremely high barriers. Because the assumptions behind capitalism don't hold for the phone industry, it would be stupid to insist on running that industry along capitalist lines, unless one has some kind of ideological commitment to capitalism - and we only need to look at th

The economic cost is different in the short term and the long term. The financial cost to the company will differ from the economic cost, because markets and technology never work properly, but it's probably a good start.

In the short term, the economic cost of using extra bandwidth is zero until capacity is reached. So is the financial cost to the company. The infrastructure and organizations have to be maintained whether it is being used or not. When it reaches capacity, the economic cost is the cost of failing to deliver th

Right, it's not as if they inflate their actual costs or anything. I mean, it makes perfect logical sense that a 200MB data plan from AT&T costs $15 while 2GB costs $25. Care to explain to me why the former plan costs over 6x more per MB than the latter?

Care to explain to me why the former plan costs over 6x more per MB than the latter?

Because you're making the common mistake of not understanding (or pretending not to) that both plans carry administrative costs and overhead that cost a lot more than the bandwidth. You're thinking that the only thing built into those prices is the actual bandwidth. Which is exactly wrong.

If admin overhead were the reason, then graphing cost vs. data allocation should result in a straight line that crosses the price axis at a value equal to the overhead. Instead, the higher data rate plans become progressively cheaper even factoring in some constant amount of overhead. For example, my local telco has the following plans for mobile internet:

No, he was assuming that the administrative costs and overhead of the two data plans are identical. Considering the only thing that changes is the bandwidth or data cap, it is a reasonable assumption. Actual operation of the service is automated and doesn't have proportionate cost increases.

For example, a sysadmin who can manage a 100 Mbps Fast Ethernet switch doesn't cost a tenth of what one who can manage a GbE switch costs. Nor is there 10x the work involved.

Let's agree that each plan has equal administrative overhead costs, and that for each plan this cost is $10 (actual admin cost should be nowhere near that, but hey, this is AT&T we're talking about). So, that leaves $5 for 200MB of data, and $15 for 2GB of data. So, 1/3 the cost for only 1/10 the data? OP's point still stands. They're screwing you on the high(er) end plans, but they're screwing you even worse on the low end.
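For what it's worth, the straight-line model from a few comments up can be run directly on the two AT&T prices. A back-of-the-envelope Python sketch (the only inputs are the plan prices quoted above):

```python
# If price = overhead + rate * data, two plan points pin down the
# implied fixed overhead and per-MB rate.
plans = [(200, 15.0), (2048, 25.0)]   # (MB, USD): the AT&T plans quoted above

(mb1, p1), (mb2, p2) = plans
rate = (p2 - p1) / (mb2 - mb1)        # implied $/MB, roughly 0.0054
overhead = p1 - rate * mb1            # implied fixed cost, roughly $13.92

print(f"per-MB rate: ${rate:.4f}, fixed overhead: ${overhead:.2f}")
# A third plan point would show whether the plans really sit on one
# line; if they don't, constant overhead alone can't explain pricing.
```

Notably, the two-point fit implies an even larger fixed component than the $10 assumed above, which is exactly why a third data point is needed to separate overhead from margin.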

Of course, utilisation might well vary according to plan, and cost is really only driven by utilisation during peak periods. More importantly, this cost has to cover the cost of customer acquisition and infrastructure build.

Here's the reality, though:

Companies charge the fee which they think will maximize their profit.

They don't really know what that is, so they wave their hands a little (actually a lot). They try to charge fees which will cover their investment and generate a profit on each segment whilst bu

That depends on how fine the resolution is and how you define 'slightly'. I'd guess (in a totally unscientific manner) your sweet spot would be at least a 1-2 meter diameter sphere. The problem becomes more interesting if you are a moving target.

In the 90s, Arraycomm [arraycomm.com] had developed a technology that relied on multipath and was capable of putting a spherical signal in less than a half-meter of space. This technology, called "Intellicell", was first used to nearly quadruple the capacity of Japan's PHS system (tiny phones meant to be low-mobility additions to wired phones, serviced by microcell stations located on metropolitan buildings). This was later expanded to a metropolitan data service using more typical cell stations promising over a megabit/se

As an RF engineer, when I hear people mention the words "simple" and "radio" in the same sentence, I smile inwardly and anticipate a project that gets to the desperation phase more rapidly than usual, without any design input to allow tuning the performance of each circuit block.

802.11n already does this; they call it "beam-forming". Cisco features it in their high-end access points, using multiple antennae to send the same payload but with varying phase shifts, which recombine at the receiver to produce a stronger coherent wave.
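For the curious, a minimal numerical sketch of that phase-shift trick; the geometry and carrier frequency are made-up illustration values, not anything from Cisco's hardware:

```python
import numpy as np

# Narrowband beam-forming toy: pre-shift each antenna's carrier by its
# path delay so all contributions arrive in phase at the target.
c = 3e8
f = 2.4e9                          # assumed 2.4 GHz carrier
k = 2 * np.pi * f / c              # wavenumber

antennas = np.array([[0.0, 0.0], [0.06, 0.0], [0.12, 0.0]])  # metres
target = np.array([5.0, 3.0])

dists = np.linalg.norm(antennas - target, axis=1)
phases = -k * dists                # per-antenna phase pre-shift

# Field at the target: every term is exp(0), i.e. fully coherent.
field = np.sum(np.exp(1j * (k * dists + phases)))
print(abs(field))                  # 3.0 == number of antennas
```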

I love how the summary introduces him as the "inventor of the Quicktime codec". Yeah, he provided the RPZA ("road pizza") codec, which is so damn simple it made Bink Video look like fine art, back in the day.

That's just to help the signal. It still shares bandwidth amongst all users. With this, each user can theoretically get full-spectrum downstream. It's also different in that it broadcasts from multiple access points, which is hardly trivial.

Again, Ciscos can do this. I don't care much for the company, but I've a client with more money than brains and they have a HUGE deployment of these things. The WiFi is actually faster than the wired LAN, despite having 300+ clients.

It sounds like they demonstrated really-high bandwidth to a single specific user, but how well does it provide high-bandwidth for multiple users? Wouldn't it get calculation-intensive, as each signal is modulating the others?

Bingo. It sounds like he's trying to take a "cloud-sourced" approach to MIMO, with a little meshiness thrown in for good measure.

Plus I think the whole "support for non-stationary receivers is a huge issue" and "needs to avoid interference that's not of its own making" aspects will make this a non-starter. Good luck getting that spectrum, or finding a big enough group of fixed-wireless customers to make this either useful or profitable.

WiMAX and LTE are already doing MIMO and beamforming (perhaps to varying

May suck for cell phones, but it'll be great for me. I live in an area where I'm just barely out of range of DSL and cable. Not anything like Montana or Alaska... southern Alabama, right between Mobile and Pascagoula. I'm currently using 3G wireless from Verizon, and it pretty much sucks: 1.1 Mbps down, when it works; the rest of the time, SOL.

My stationary USB card in a 3G router would love to sit nice and still for this to work.

802.11n already does this; they call it "beam-forming". Cisco features it in their high-end access points, using multiple antennae to send the same payload but with varying phase shifts, which recombine at the receiver to produce a stronger coherent wave.

Which is a variant on "steerable null" - a multi-antenna hack that lets the antennas at a cell site send out beams configured such that, at each active remote device paired with the site, the signals intended for all the OTHER active receivers cancel out. (

I love how the summary introduces him as the "inventor of the QuickTime codec".

A common enough confusion, I suspect. To be pedantic: QuickTime is a media container, not a codec. It's similar to the way that AVI and Ogg aren't codecs. They're containers for stuff like MP4 (confusingly, sometimes also the name of a container format), Vorbis (the codec behind most Ogg audio files), or MP3.

This is the same kind of technology used to take 3 innocuous beams of light and explode your head at the point where they cross over, and my phone has alarmingly accurate location information these days...

I can see it now. Sprint hires hacker to hack the T-mobile phone network and in a single keystroke explode their customer base; other networks follow moments later. The first and final act in what is later to be known as the Carrier Wars.

Seems kind of dumb to kill all the customers of a competing company. Those are paying customers you'd much rather have for yourself; dead men pay no bills. I'd hire the hacker to limit the head 'splosions to the people actually running the competitor. Then when the competitor collapses due to having no more employees, where are all the people needing cell service going to turn? That's right.

So no cause for alarm here, people... Unless you happen to work for a telco, that is...

In his June 4 presentation [youtube.com] he states that it's "not beam-forming". He doesn't say much about what it is, though.

His white paper [rearden.com] (PDF) gives a bit more detail, though still not much. It sounds akin to MIMO, but instead of phase-aligning multiple signals to increase the strength (i.e. beam-forming), the antennae are more widely distributed, and complex-formed signals are broadcast from each antenna in careful sync, so that they interfere at each receiver to produce the desired signal.
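For what it's worth, one way to read that description is as zero-forcing precoding over a known channel matrix. A minimal sketch of that reading (my interpretation, not Perlman's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # transmitters == users, for simplicity

# Complex channel gains from each transmitter to each receiver.
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Independent QPSK symbols, one per user.
s = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=n)

# Precode so the superposed broadcasts interfere at each receiver
# to yield exactly that receiver's own symbol: solve H @ x = s.
x = np.linalg.solve(H, s)

y = H @ x                               # what each receiver actually hears
print(np.allclose(y, s))                # True: full rate to every user at once
```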

Thanks for sharing a link to my story on Steve Perlman's DIDO hype, on the EE Daily News. Since writing my report I have spoken with several communications experts regarding this "invention". In a nutshell, the general consensus is "no technical merit".

Not mentioned in this article, but called out elsewhere in the press surrounding this, is that this new interference formula only works where nobody else is broadcasting. This can't be used in the existing wi-fi spectrum, for example, because the interference from non-DIDO devices will corrupt his receivers. Unlike FM, for example, which grabs the strongest local signal, this tries to grab and combine all signals under the belief that they will combine properly. If anyone else is emitting on that same spectrum (intentionally or not), it will be troublesome.

Nope. Sorry. You're wrong. Electromagnetic waves add very nicely, so that your signal remains there, even if many other signals are simultaneously being transmitted.

The overall idea is fine, in principle. As other people have said, it is 802.11n beam-forming on steroids. If you had 1000 transmitters, and if you could know the exact time delay and attenuation from each of those transmitters to your cell phone, then (indeed) you could make them all add together precisely where your cell phone is.
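A small sketch of that "all add together precisely where your cell phone is" step, assuming each path's delay and attenuation are rolled into one perfectly known complex gain (random illustration values):

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx = 1000

# Per-transmitter complex path gain to the phone: attenuation and
# phase delay in one number, assumed perfectly known.
h = rng.uniform(0.1, 1.0, n_tx) * np.exp(2j * np.pi * rng.uniform(size=n_tx))

# Matched-filter weights: conjugate each path so contributions align.
w = h.conj() / np.abs(h)

coherent = np.abs(np.sum(h * w))        # all paths add in phase at the phone
incoherent = np.abs(np.sum(h))          # unweighted: phases mostly cancel
print(coherent / incoherent)            # large gain at the phone's location
```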

Based on what I read in other news articles, Perlman himself says that interference from others is a problem.

'Perlman estimates that the first commercial use of DIDO technology could come as early as the end of next year. But even then, the first deployments are likely to be outside of the United States.
In a DIDO system, a data center on the Internet determines the wireless signals that each transmitter will send based in part on the location and number of other DIDO transmitters in the area. In order for the data center to know what the resulting interference patterns will be, there can't be any other sources of signals outside those generated by the DIDO transmitters.
What that means is that a DIDO system would have to be used on currently unused or completely reclaimed spectrum. So DIDO won't be improving your Wi-Fi or 3G experience, because those parts of the spectrum are already crowded with transmitters. It's more likely to be embraced in the near term in countries that have a lot more unused spectrum than does the United States, Perlman said.' (Source: San Jose Mercury News, 8/3/2011 http://www.mercurynews.com/business/ci_18603178 [mercurynews.com])

So Perlman himself says that, yes, in a closed system, his design works -- but only in that closed system. Given that some stray RF emissions are generated from various things (as we know from the amateur radio folks whenever BPL comes up, for example), there is an interference problem.

[*...actually, if you're really gonzo, you can adjust all the transmitters to make their signals cancel out exactly at everyone else's cell phones, so long as you have more transmitters than cell phones. In principle. But I don't think anyone is seriously proposing that...]

From the above-mentioned article, Perlman says you do not need more transmitters than cell phones. He suggests a

What that means is that a DIDO system would have to be used on currently unused or completely reclaimed spectrum.

Sounds to me like this would make it trivial to design cell blockers for, say, theatres, libraries, museums, etc. Or even for someone to make an even lower-powered one to shut up that asshole sitting behind you in a restaurant blabbing at 80 dB who just won't shut the fuck up. Obviously they'd be illegal for personal use, but damn they'd be satisfying to use at times...

But you could easily have a company that owns a chunk of spectrum in say a single urban environment deploy a fixed wireless DIDO system. This would provide huge amounts of competition to the duopoly DSL/cable ISPs.

How do you come up with signals that not only constructively and destructively interfere in precisely the right spot in precisely the right way to deliver data to a device, but also for those same signals to simultaneously interfere at other points to deliver different data?

Designing radio signals that will interfere with one another in just the right way takes complex mathematics and careful coordination among the different DIDO transmitters. "The computational requirements are very large, but we solved that by using a cloud server," says Perlman.

Oh! The cloud. I thought he might dodge the question with some hand-waving. But he's got the cloud on it.

Where do I sign up? And how do I make sure the guy sitting next to me isn't stealing my signal?

How do you come up with signals that not only constructively and destructively interfere in precisely the right spot in precisely the right way to deliver data to a device, but also for those same signals to simultaneously interfere at other points to deliver different data?

For each of the N partners you compute a signal that puts a null on all the N-1 OTHER partners but not on the partner of interest. That can be heard everywhere except near the other partners - and the one you're talking to is somewhere
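A minimal sketch of that per-partner null placement, assuming the channel vectors to all N partners are known; an illustration of the idea only, not anything from actual cell-site firmware:

```python
import numpy as np

rng = np.random.default_rng(2)
n_tx, n_users = 6, 4                     # need more antennas than nulls

# Channel row vectors: H[j] maps antenna weights to what user j hears.
H = rng.normal(size=(n_users, n_tx)) + 1j * rng.normal(size=(n_users, n_tx))

i = 0                                    # the partner we want to reach
others = np.delete(H, i, axis=0)         # the N-1 partners to null out

# Columns of null_basis span the right null space of `others`.
_, _, vh = np.linalg.svd(others)
null_basis = vh[n_users - 1:].conj().T   # n_tx x (n_tx - (n_users - 1))

# Project partner i's matched-filter direction into that null space.
w = null_basis @ (null_basis.conj().T @ H[i].conj())

print(np.abs(others @ w).max())          # ~ 0: silent at the other partners
print(np.abs(H[i] @ w))                  # > 0: still audible to partner i
```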

I can't seem to find any reference to it, but I read about a similar system several years ago, where communications for a submarine would be split up into several waves, which only combine into a useful signal at the point the submarine is supposed to be.

Don't know whether it was an idea or something that was actually implemented, though.

Waves don't only interfere constructively at one point. They interfere constructively at many points, to varying degrees. What happens when two devices are using mirrored interference points?

Instead of targeting specific devices, what about dividing the landscape into many physical regions, using constructive interference to cover an area rather than a single device. It would be like space-division multiplexing.

My biggest concern with this tech is not transmission from towers to individual devices, but rather the return call. What are the computational requirements for a receiver using this technology?

According to the whitepaper, the coefficients that weight each transmitter signal to constructively interfere at your location set up mathematically orthogonal channels (at least orthogonal to some SNR, with some leakage from other channels depending on the number/location of devices and antennas).

The device can send a signal back which will interfere with other devices, but incoming signals at the antennas can be weighted by the same coefficients (or at least derived from the same) to again cancel all th
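A rough sketch of that receive-side reuse, assuming channel reciprocity (the uplink channel as the transpose of the downlink one); this extrapolates from the truncated comment above, not from the white paper's math:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # downlink channel

s_up = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=n)  # symbols devices send
r = H.T @ s_up                     # antennas hear all uplinks superposed

# Unmix with the same channel knowledge used for the downlink precoding.
est = np.linalg.solve(H.T, r)
print(np.allclose(est, s_up))      # True: per-device streams separated
```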

This is cute, but it won't let you beat information theory limits on signal bandwidth. After all, no matter how these signals are intended to be combined through spatial interference, each antenna is emitting a signal which varies only in time. So the more signals you try to pack into one antenna's output, the more those signals project onto one another ("overlap"), and so the more they get mixed together. From each antenna's perspective, this is just a baroque form of time-division multiple access (TDMA) --