
imamac writes with this excerpt from news out of Carleton University:
"Atif Shamim, an electronics PhD student at Carleton University, has built a prototype that extends the battery life of portable gadgets such as the iPhone and BlackBerry, by getting rid of all the wires used to connect the electronic circuits with the antenna. ... The invention involves a packaging technique to connect the antenna with the circuits via a wireless connection between a micro-antenna embedded within the circuits on the chip. 'This has not been tried before — that the circuits are connected to the antenna wirelessly. They've been connected through wires and a bunch of other components. That's where the power gets lost,' Mr. Shamim said."
The story's headline claims the breakthrough can extend battery life by up to 12 times, but that seems to be a misinterpretation of Shamim's claim that his method reduces the power required to operate the antenna by a factor of about 12 (to 3.3 mW, down from 38 mW). The research paper (PDF) is available at the Microwave Journal. imamac adds, "Unlike many of the breakthroughs we read about here and elsewhere, this seems like it has a very high probability of market acceptance and actual implementation."

I don't think he's separating the amplifier from the antenna, but perhaps feeding the amplifier directly attached to the antenna. The signal loss between source and antenna over the length of the run has to be made up, which is done by stepping up the output of the amplifier stage.

This configuration isn't uncommon and many microwave systems employ this technique. (Attaching the amplifier nearly directly to the antenna.)

Though I would have to look a bit at the design, this is the only item I can think of.

"This configuration isn't uncommon and many microwave systems employ this technique. (Attaching the amplifier nearly directly to the antenna.)"

I agree, it sounds very much like some kind of impedance-matching technique where the inductive coupling is direct to the antenna. I'm not so sure that's as patentable as the university is drumming it up to sound. (I guess they hope to earn a lot of money from it, mainly from phone companies.) But impedance matching using windings to effectively wirelessly couple

There are many orders of magnitude more atoms in the tracing on the PCB than in the air the radio waves travel through from the antenna on the cell phone to the cell tower, and even fewer when we are talking a matter of mm. The more atoms you have to push your information through, the more amperage it takes to overcome the resistance [wikipedia.org], and since radio waves are a form of EM radiation they follow similar laws which just appear more complicated [wikipedia.org].

Umm, doesn't air have a lower conductivity than copper? Hence electricity runs happily along copper at low voltages but needs 1000 volts to jump just 1 cm through the air. TFA is hopeless; it almost sounds like he cut the wires on his iPhone, which stopped it transmitting, then declared a major breakthrough in battery life.

We're talking superhigh frequencies near 1 GHz. At such frequencies all of the electric/magnetic field generated "current" runs on the surface of wires anyway, not through the bulk, due to "skin effect". Or the electric/magnetic field can simply propagate through free space as electromagnetic radiation, like microwaves in your microwave oven, or light through empty space. Light propagates better through vacuum than through a copper wire, doesn't it?
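For a sense of scale, the skin effect mentioned above is easy to quantify; here is a quick sketch using the standard textbook formula (the room-temperature copper resistivity is an assumed value):

```python
import math

def skin_depth(resistivity, freq_hz, mu_r=1.0):
    """Skin depth in metres: delta = sqrt(rho / (pi * f * mu0 * mu_r))."""
    mu0 = 4 * math.pi * 1e-7  # permeability of free space, H/m
    return math.sqrt(resistivity / (math.pi * freq_hz * mu0 * mu_r))

rho_copper = 1.68e-8  # ohm-metres, room temperature (assumed)
delta = skin_depth(rho_copper, 1e9)
print(f"Skin depth in copper at 1 GHz: {delta * 1e6:.1f} um")  # ~2.1 um
```

So at 1 GHz the current rides in roughly the outer two microns of the conductor, which is why bulk conductivity tells you very little at these frequencies.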

How else will you put it into plain English terms? Derive your answers from the Maxwell equations? Like people really get the integrating of partial differentials and approximating of certain terms with constants under certain conditions. Let's start a discussion on how the solutions to the partial differential equations are affected by epsilon, mu, rho and nu: permittivity, permeability, charge density and frequency. I think explaining it like that on here would be even more confusing, even though

They also do this in recording studios. It takes far less power and wiring (or it can be done via RF or IR) to have each speaker have its own small amplifier than to try to power the whole room with a rack of giant units.

This also would create less interference, believe it or not, since running wires near live electrical components (even the tiny components on a circuit board make a difference - just stick an AM radio near your computer's motherboard) tends to cause interference. This is the other reason recording studios do this. They can run a very heavily shielded or wireless line-level signal to each speaker directly. Less power, less clutter, less interference.

Powered speakers are popular because they give monitor manufacturers a way to make line-level crossovers, power amps and speaker drivers work together. Having control over the specifications of all those components means better fidelity. It is tidier, too.

I don't think RF or IR is ever used with studio monitors. They would cause phase-alignment problems and a loss of fidelity. Simpler is better, so people use wires. Anyway, aren't we trying to avoid RF transmitters here? Speaker cables can be shielded too, but people don't bother, as any interference would be imperceptible.

Power loss in speaker cables is pretty tiny too. Powered speakers really are all about convenience and potential better fidelity.

They use wireless just fine with mics and pickups and so on on stage for these reasons all the time. Fewer cables, fewer problems; and if you've ever had to deal with grounding issues, wireless or a line-level signal that's amplified at the source is a huge improvement. I suspect that's the real problem here - too much background RF noise from the components. Rather than brute-forcing it, he decided to find a way to get around this and clean up the signal in the process.

Btw, most pros don't use wired mics any more. Too many issues. Most studios don't use non-powered speakers any more, either. You're right - I haven't found many setups that use IR or wireless (yet), but I can find many professional systems that use S/PDIF, optical, or other non-analog transmission methods. (Shoot, most home-theater interconnects are now HDMI for exactly these sorts of reasons.)

Wired mics sound better because they lack the companders involved in transmitting the audio signal. Performers like wireless because it's convenient, not because it sounds better. Those concerned with sound quality stick to wired.

Balanced signals use common-mode rejection to eliminate induced noise. This has been standard practice for years. Recording studios use either balanced wiring or digital, in the form of AES or optical ADAT.

I'm lost on how the antenna in a phone is a major power consumer. Aren't the screen, power converters, CPU and all the modulators in the radios each consuming more power than the wire that connects the transceiver to the antenna? If it were really consuming that much power, it stands to reason the wire should burn up.

The article is short on details and so poorly worded that I think the article should not have been published. Even if it's valid, the writing makes it look like pseudoscience.

The problem is that the antenna isn't a major power consumer. It's that the signal path between the circuitry and the antenna is so full of junk on many models, due to poor slapped-together designs, that the signal must be boosted a lot to communicate with the local cell tower. In the old days this wasn't a problem, as there weren't major limits on power - some old analog units transmitted as much as 10-20 W! Now they have to limit their power to a fraction of that. If the digital signal can't be boosted enough to communicate and it's already at the FCC-imposed limit, you're out of luck. No bars. Technically you never actually get "no bars" - you just get too little for the error correction to work any more.

Powered speakers exist because they reduce the cabling between the amp and speaker to a minimum, thereby reducing the resistance and maximising the damping factor.

Whilst active monitors are common in smaller control rooms, particularly broadcast/post-production and smaller music studios, you will still find passive monitor/discrete amplifier configurations in larger control rooms, particularly music studios.

Only a masochist or someone who mistakenly thinks it's easier to screen out int

Second, electricity moving through matter is technically a flow of holes where atoms are missing electrons. You get more resistance when dealing with electricity in this form; fewer atoms means more resistance, since there are fewer atoms available to make hole swaps with. The skin effect when operating at high frequencies makes the effective resistance of a PCB trace higher than at direct current, but still

I'm not an antenna designer, but by the looks of it, the design is basically a miniature on-chip waveguide, efficiently channeling the RF energy toward the external antenna and minimizing wasted radiation.

Wires radiate RF like mad unless they're heavily shielded, which is something you really can't do effectively in tight spaces. Of course, testing was done at 5.2GHz, so it will be interesting to see how it works at cellphone frequencies - packaging size might become a factor at lower frequencies.

Umm, dude... just because you shield a component doesn't mean it stops radiating. Shielding inhibits EM fields which are already present. To reduce radiated losses, you need to either improve the fundamental design of the circuit or make it radiate so well that you've built an antenna instead.

From the article: "The strategy is useful as it eliminates the need of isolating buffers, bond pads, bond wires, matching elements, baluns and transmission lines. It not only reduces the number of components and simplifies SiP design but also consumes lower power."

That was my initial guess. Electrical circuits include a lot of "glue logic" like resistors, caps, and inductors which burn off energy as heat. Find a way to eliminate those items (i.e. connect the antenna wirelessly) and you eliminate waste.

Except that omnidirectional range is proportional to the cube of the output. If, as the GP says, you use 1/12 the power of a conventional device with this design, but have 1/3 the range, you need to bump the power to 3^3/12 of a conventional device to get the same range, or 27/12: more than double. That doesn't seem like a win to me.

You can't violate the first law of physics: you don't get sumtin for nuttin.

The conventional LTCC package provides 3 times more range than the proposed design but consumes 12 times more power.

So you save power versus the conventional design, but you lose range.

To provide the same signal strength at triple the range, you need to broadcast 9 times as much power. To broadcast 9 times as much power with an equally compact transmitter, is it surprising that you need to spend 12 times as much power due to size/efficiency trade-offs?
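In numbers, here is a sketch assuming free-space 1/r² propagation and identical antennas (which the paper's two setups are not):

```python
def power_for_range(base_power_mw, range_factor):
    """Transmit power needed to hold received signal strength constant
    when the range is scaled by range_factor (free-space 1/r^2)."""
    return base_power_mw * range_factor ** 2

# Tripling the range of the 3.3 mW design would need ~9x the power:
print(f"{power_for_range(3.3, 3):.1f} mW")  # 29.7 mW
```

29.7 mW versus the conventional package's 38 mW, so a 12x power gap for a 3x range gap is in the right ballpark once you allow for some extra inefficiency.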

You are assuming an isotropic emitter, where field strength falls off as 1/r^2. That behavior is invalid for other antennas; for example a dipole's field strength falls off as 1/r (in the far-field approximation). The paper is complicated by the fact that the radiation patterns of the antennas used in this paper are directional and different. The "conventional" chip used a folded dipole with a "boresight radiation pattern", and the "proposed" chip used a custom design with a front-to-back ratio of 10dB.

Let's do some reckless hand-wavy extrapolation. The difference in power is 38/3.3 = 11.5 = 10.6 dB; if we assume perfect scaling of the new package to 38mW, we'd expect 10.6-2.3=8.3 dBi. This is an improvement of 9.3 dB over the conventional method -- it's almost 10 times as efficient.
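The arithmetic, for anyone checking along (only the two power figures from the paper are used):

```python
import math

p_conventional_mw = 38.0  # conventional LTCC package (from the paper)
p_proposed_mw = 3.3       # proposed wireless-coupled package
ratio = p_conventional_mw / p_proposed_mw
ratio_db = 10 * math.log10(ratio)
print(f"{ratio:.1f}x = {ratio_db:.1f} dB")  # 11.5x = 10.6 dB
```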

This analysis ignores, among other things, the relative directionalities of the antennas. I wonder why they didn't choose a more directional antenna for the "conventional" chip, or used the same sort of antenna in order to do a level comparison.

The other point of comparison is between the "standalone" chip and the "proposed" chip. A 32 dB improvement with no power increase is nothing to sneeze at!

You cannot get any gain in an on-chip antenna at these frequencies: it is too small. He is comparing the use of only an on-chip antenna, which is never used in mobile phones, with the use of a coupled external, somewhat bigger, antenna on a ceramic substrate. Not at all surprising that he gets better performance with the latter, as it is bigger. He would get even better performance with a classic mobile-phone antenna, though.

I.e., this will not revolutionize the battery life of your iPhone or BlackBerry. The losses in the coupling between the integrated PA and the antenna are very small (if we disregard detuning due to human-proximity effects, which is another story, and which is not influenced at all by the design in question).

The comparison between two different antennas at different powers is not very good science - it is somewhat surprising it got published. (But it is only a small conference, so it is not that surprising.)

The real strange thing is why they didn't compare their new capacitive coupling with a classic wired connection between the PA and the antenna. Instead they introduce an additional PA with corresponding power consumption when they test the wired connection.

The difference between the standalone chip antenna, with a maximum size of 1 mm, and the proposed antenna, with a size of 17 mm, is not revolutionary; it is expected, due to the very bad efficiency of electrically small antennas.

The paper describes a method of simply and efficiently coupling energy from the transmitter VCO chip to the main antenna, making good use of the R.F. energy that chip provides. It seems that most of the power savings is from avoiding the power used by an external buffer amplifier, by eliminating the amplifier. That's great if the chip can provide sufficient output power, and if the spectral purity is good enough to comply with F.C.C. or other requirements. I'd expect that most cell p

Yes, it is counterintuitive. And also not what is actually claimed in the paper.
In the paper three designs are compared:

(1) One with only an antenna on chip. That is, an antenna on the actual chip, with a size of 1x0.5 mm. Draws 3.3 mW, "range" 1m. ("Range" is a very strange measure in RF design...)
(2) The same chip but without the on-chip antenna. Instead the power is coupled to an additional PA-amplifier, and an external small folded dipole antenna: Size about 16x10 mm. Draws 38 mW, "Range" 75 m.
(3) The same chip without the PA, with the on-chip antenna coupling to an external patch antenna of size 17x17 mm. Draws 3.3 mW, "Range" 24 m.

In summary: Nice engineering work, but no conclusions can be drawn, as it is very much a case of apples and oranges. (No constant TX power, no constant size; not very much constant between the designs at all.)
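To make the apples-and-oranges point concrete, here is a crude figure of merit (range squared per mW of draw; my own yardstick, meaningful only under a free-space 1/r² assumption and ignoring the very different antenna directivities):

```python
# (power draw in mW, reported "range" in m) for the three designs in the paper
designs = {
    "on-chip antenna only":        (3.3, 1),
    "PA + external folded dipole": (38.0, 75),
    "coupled external patch":      (3.3, 24),
}

# range^2 / power is proportional to effective radiated power per mW drawn,
# IF a free-space 1/r^2 link and equal antennas are assumed (they are not).
for name, (p_mw, r_m) in designs.items():
    print(f"{name:30s} {r_m ** 2 / p_mw:8.1f} m^2/mW")
```

Even on this generous yardstick the three designs land at roughly 0.3, 148 and 175, which says more about antenna size than about the coupling scheme.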

And a classic mobile phone does not use an on-chip antenna at all. So this design will not give any benefit to your iPhone or Blackberry etc.

..and switch WiFi off (which is even more power-hungry, btw). 3G is only more power-hungry in weak areas (since it'll try to find the weak 3G antenna rather than the more powerful 2G one); in an area of good reception it makes no difference.

But cellphone antennas are already pretty power efficient compared to driving the display, backlight etc... and let's not even get started on the GPS. You aren't going to get multiples of battery life just from this invention.

Yeah, he'd basically be short-range broadcasting his long-range broadcast. If you got within several feet of him and used the right equipment, you might be able to listen in on everything he's broadcasting!

No, because he claims it is 12 times more efficient. If that is true, you would have to work 12 times harder to listen to what would get radiated anyway. This guy has figured out a way to patent a matching network.

The ramifications of sending data a short distance to the antenna, which is then relayed a much longer distance to the base station... yeah, I'm sure those hackers are gonna pull your data off this connection rather than the antenna's connection to the tower.

What are the security ramifications? That a 3rd party might be able to intercept the wireless transmission, just like they already can? Whether you use this technique or not, you're still going to be broadcasting the signal wirelessly. That's why GSM signals are supposed to be encrypted.

The GSM encryption was broken earlier this year [forbes.com]. The security ramifications of that are far more serious. Why would you be worried about someone intercepting this weak wireless signal when attackers can already eavesdrop on your conversation from miles away?

Heck, if they're close enough to intercept this signal, then they're already within earshot of you. They wouldn't need to intercept the wireless signal to the antenna. Anyone silly enough to do so would look rather conspicuous standing there with a laptop and a directional antenna pointed at your phone.

The explanation given on the website is very poor. The resistance of the wires connecting the transceiver and the antenna is low and little power is lost in them.

In addition, they quote him as saying "There are so many applications in the iPhone, it's like a power-sucking machine," but what they're talking about is the power lost at the antenna and not by the processor, which is what he implies. Therefore it wouldn't do anything to prolong battery life when using non-transmitting applications.

Definitely bad journalism. The culprit isn't wire resistance, it's reactance. The impedance mismatches at the junctions from amplifier to circuit board to connector to cable to antenna all create reflections and thus standing waves [wikipedia.org]. The power that goes into those standing waves is reflected back into the amplifier, where it is dissipated as heat. The result is that you need (in his example) a 38 mW amplifier in order to get 3.3 mW of radiated power out of the antenna.
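The reflection math behind that claim is standard transmission-line theory; here is a sketch with a hypothetical 25-ohm antenna on a 50-ohm system (the specific impedances are made up for illustration):

```python
def mismatch(z_load, z0=50.0):
    """Reflection coefficient magnitude, VSWR, and reflected power
    fraction for a purely resistive load z_load on a z0 line."""
    gamma = abs((z_load - z0) / (z_load + z0))
    vswr = (1 + gamma) / (1 - gamma)
    return gamma, vswr, gamma ** 2

gamma, vswr, reflected = mismatch(25.0)
print(f"|Gamma| = {gamma:.3f}, VSWR = {vswr:.2f}, reflected power = {reflected:.1%}")
# |Gamma| = 0.333, VSWR = 2.00, reflected power = 11.1%
```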

Oh my god. Please not another "informative" post. I really wish you people would stop commenting on these articles when you clearly have no clue what you are talking about. The reflected power (if it happens to exist in this case...which it doesn't because these transmitters are designed quite well and usually include a circulator or isolator at the output of the amplifier to ensure an excellent match) does not go back into the amplifier, because if it did the amplifier would not work as it was designed and would either oscillate or produce extremely poor waveform quality at the output.

Now, if you can bypass the circulator/isolator I mentioned above (which is what I gather they are trying to do in this article) then that is one less place power can be lost on the way to the antenna.

The amplifiers in question are linear amplifiers. A linear amplifier has maximum efficiency for a resistive load. A properly impedance matched antenna appears resistive at its design frequency. An improperly matched one has a reactive impedance component (and an elevated VSWR to go with it). The reactive nature of the load decreases the efficiency of the amplifier. Whether you want to say the power is reflected back into the amplifier or never leaves it in the first place is a matter of semantics. Of

Oh please, another software engineer? Amplifiers are by their very nature non-linear devices as a whole (they just happen to have a linear region which we can make use of). The amplifiers in question are operated within their linear region as much as possible, but certain requirements like efficiency force the designers to drive the transistor partly into its non-linear region (closer to P1dB). Some non-linearity is tolerated and is dictated by the FCC, ETSI or CRTC in the form of emissions m

Well done :) Parentheses have their place in technical writing for the non-technical. They allow you to set off portions of text as "tidbits" which may help the understanding but are not required for the technical reader.

How will this work with multiple frequencies? My phone speaks on the good old GSM bands (800/900 MHz) as well as 1.8 GHz and 3G (2.1 GHz, I think). I would have thought this kind of coupling very sensitive to the wavelength, needing either a narrowish range or multiples.

He's using a waveguide coupling to launch the wave to an external hunk of waveguide, rather than running it through pins, wires, PC board traces, etc. The latter are very lossy at cellphone frequencies.

(I'm working on something similar right now and lose virtually all my signal going through about 6" of PC board wiring. B-( )

From the systems perspective he made a better RF transmitter block. Digging into that block and looking at the RF design level, he simplified the circuitry normally used (such as a matching network for the antenna, transmission lines, and an oscillator for modulating the information onto the carrier frequency) into a discrete chip, as opposed to multiple printed-circuit-board components doing the same job.

Beyond that I'd need to study the paper and find more detailed examples of cell phone architecture to have a better idea of the advantages and disadvantages over the legacy design.

Never mind that he's apparently ignoring the true cause of a lot of the "lost" power, which is in the various band-limiting filters that any real cellphone pretty much can't do without. It's tough to get a good multiband filter that doesn't have 1 to 2 dB insertion loss. The apertures are also geometric, so you are automatically sensitive to odd-order harmonics in both directions.

And I wonder how his aperture's impedance matches the amplifier out of band? From what I've seen in bleeding-edge RF architectures over the last 20 years or so, it's far easier to make a poor oscillator than a good amplifier, with any given set of components.

Actually, what I think he's doing is isolating the oscillator while impeding the capacitive antenna, all the while the couplings' reactance which is usually between 1.85 GHz and 6.1 dB/mW is going to undergo a radical departure from its aperture (commonly also acting as the modulating amplifier) while the multiband waveguide is going to totally remove the need for the baluns. Now of course this won't have any measurable effect on the odd-order harmonics, which are going to continue to radiate (at 50 Ohms) to t

But what if we reroute the oscillator's output to the main deflector dish and convert it into a pulsed tachyon beam, thereby ignoring the impedance in the twelve lowest space dimensions? Of course the odd-order harmonics, if not compensated, might open a subspace rift, but if we tune the gravimetric scanning equipment to 139.47 THz we might be able to modulate the warp field to generate matching even-order harmonics perpendicular to the original waveguide, thereby reducing the chance of a catastrophic breach

I mean, my phone lasts for days if I don't use it and many hours if I'm just talking. The vast majority of power seems to be used when I'm watching video, playing games, or browsing the web. My guess would be this is more CPU-related.

So even if it saves 10x in the transmit/receive it still might only be a 2x overall savings or less. I suppose it depends on usage patterns.

Or use a Web browser. Phones typically communicate with the Internet through the cellphone network over the two-way radio. This might improve WiFi phones, too, as WiFi also (obviously) employs a (much lower-power) two-way radio.

Goes double for WiFi, which is an extremely chatty protocol and thus sucks power. Could make WiFi much more usable in smartphones. Right now, if you play with WiFi much, you'll find that your battery gets drained fast as compared to EVDO or the like.

The largest battery hog on your phone is the backlight and screen. After that, you have butt-loads of internal RF processing, and then, at a distant third, the antenna itself. The CPU, PMU, etc., are all eating from the same dish, as well. (I suppose if you have an Intel Atom, though, it would be sitting above the RF processor for power consumption.) My estimation on the increase in battery power would be in the range of low to moderate double-digit percentages, but it depends heavily on usage patterns,

Exactly. That means this gives exactly zero improvement over the current arrangement. Received power falls off with the square of range (assuming perfect isotropic radiation), so if you reduce the transmit power by 12 times, the range at which the same signal level would be detected drops by a factor of 3.46. How is this better? Apples and oranges. To show that one is better than the other, they would have to be compared at the same received signal strength at the same range. The fact that t
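Assuming free-space 1/r² propagation, the 3.46 figure is just the square root of the power ratio:

```python
import math

power_reduction = 12.0  # claimed power-saving factor
range_factor = 1 / math.sqrt(power_reduction)
print(f"Range drops to {range_factor:.1%} of the original")  # ~28.9%, i.e. 1/3.46
```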

(Only a software engineer)... but when you tell me that replacing copper wires with a wireless transmitter and receiver helps save power, well, I am a non-believer. Sorry. It just does not cut it, whatever the headlines say. How about quality?

For once, something that I'm actually qualified to post on! I was a weapons-system depot-level tech in the Navy, doing lots of work with waveguides, radar, etc. I went on to work in the private sector, doing among other things antenna design at Nortel. I can't help but say this is a bunch of shit. It is ALWAYS more energy-expensive to do wireless; it's just the way things are. If it is just the journalist making a mistake, I can see some possible advances in energy conservation using a waveguide, or even a virtual waveguide; anything else would only start to be possible if you enter the realm of high-energy physics. Unless this guy's name is Tesla, and/or they have developed a completely new principle...

The real question is how much you have to boost the signal to overcome the interference from the electronics nearby. Since we're talking about a digital transmission, this is very much a factor. Too much background noise and you get garbage at the other end (not quite like analog wireless). As such, digital cellphones have to boost their signal until they can get a connection - often quite a lot, in fact.

You can see this with an HDTV set and an antenna. Too low a signal and you get no picture at all.

You appear to be talking about the power of the signal between the cell phone and a tower. The article is nothing to do with that; it is regarding the signal between the cellphone's transmitter circuit and the cellphone's antenna. Technically you could achieve a similar effect using an LED and LDR to send data without wires or traces. But unless it saves power in receive mode as well, it won't help much overall. Receiving always needs more power, as I have found when working with 2.4 GHz radio.

I'm not as qualified as paganizer, as I usually work at much higher frequencies (mm-wave). However, losses from the PA to the antenna are typically pretty low. The claim of 12x improvement implies the current interconnects are at best 8% efficient (utter BS!).

From the PA to the radiated signal you typically have:

1. On-PA losses because of their design. For example, they typically have at least 3 different output stages to span from just a few milliwatts (single HBT cell) up to full power (hundreds of milliwatts, hundreds of HBT cells). The parasitics of driving the unused cells at less than full-power operation create small losses, but I don't know a hard number for this.

2. Baluns/impedance transformers. PAs are typically class-B operation with a load line that is just a few ohms (3 V Vcc and hundreds of mA of DC current, so the RF load line is pretty steep). Solutions are matching structures, or a push-pull architecture through a balun to transform up to 50 ohms. These usually account for 0.5-1 dB of loss (10-20% of power). The invention ignores this part of a cell phone's design.

3. Multi-band switch. Missing in this article is that most phones are designed to operate on at least 2, often 3, frequency bands. Several PAs are used, each designed to cover only one band. A GaAs pHEMT switch is usually used to switch between the two or more PA die. The invention does not address this aspect of cell-phone design. These switches are either integrated with the PA chip (separate die in the same carrier) or, in some cases, done in a different chip.

4. Line loss from the PA chip to the antenna is modest, usually just a few tenths of a dB (a few percent). The article addresses this aspect of things.

5. The antenna is a clusterfuck of design hassles, as it is often dual, or tri-band in nature. A lot of compromises go on with the antenna. Making it have multiple resonances to cover the bands is hard. Making it small is hard. Making it work with the crappy ground plane, user's hand and head, and technicolor plastic case is damn hard. The article glosses over all this, and talks about a single narrow band antenna scenario.
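The dB-to-percent figures in items 2 and 4 above convert like this (a quick sketch):

```python
def fraction_lost(loss_db):
    """Fraction of power lost for a given insertion loss in dB."""
    return 1 - 10 ** (-loss_db / 10)

for db in (0.3, 0.5, 1.0):
    print(f"{db:.1f} dB -> {fraction_lost(db):.0%} of the power lost")
# 0.3 dB -> 7%, 0.5 dB -> 11%, 1.0 dB -> 21%
```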

But what about the display, the backlighting, the Bluetooth/WiFi, the internal speaker... I can think of a lot of things in a cell phone that also cause background noise that must be overcome. The bare traces on the circuit board are essentially acting like a microphone for any stray RF signals. The mistake, I think, is that many people are equating this with analog signals. With RF interference on digital signals, it falls back to how much you can boost the signal to have the error correct

I'm not certain, but I think that just might be crazy enough to work! Speaking generally, though, I can see a few esoteric possibilities, but nothing that could do as much as claimed. Your main power usage is at 2 points: the display and the antenna. You can do some amazing things with the display, like using a low-power digital-paper display for normal ops and leaving your relatively power-hungry LCD off until you need something the paper can't handle. There is quite a bit of wastage across the EM spectrum

A waveguide is a far more efficient transmission line than coax or any other wireline can hope to achieve. If he's found a way to build a waveguide (or a reasonable facsimile thereof) by clever geometry, it could be very efficient.

Well, not sure what kind of software engineer you are if you did not study physics, mathematics, chemistry and economics at your university.

As of now, my studies and experience suggest that transmitting whatever over wireless is far more expensive (as in needs more effort) than doing the same thing over a solid connection (copper, aluminium, gold, zinc, silver, etc.)...

But hey, my studies are dated, as I finished my IT studies in 1996. Sure, with that attitude you are at least... hmm, for 3 years in "the

Well, not sure what kind of software engineer you are if you did not study physics, mathematics, chemistry and economics at your university

You have a very strange university where chemistry and physics are part of the software program.

"As of now my studies and experience suggests that transmitting whatever over wireless is far more expensive (as in needs more effort) then doing the same thing over a solid connection (copper, aluminium, gold, zinc, silver..... etc"

Lol... actually chemistry was not part of the program (my mistake here), but we had a strong physics education.

Other than that: yep, I have no experience in the field other than using the technology, and I still keep my opinion that whenever you leave wires behind and use the airwaves, you deal with interference and increased power consumption.

If the circuitry powering the antenna were the greatest consumer of power in the device, this would result in a significant improvement for the end user. However, it's all the other bits in the device which eat thousands of times more power - the CPU, the display, the speakers, etc.

I don't think this will "significantly extend" mobile-device battery life. As other people have pointed out, something that could practically save maybe 10 mW of battery power during transmit operation is interesting but not really all that dramatic. On the other hand, the author doesn't appear to claim that it will or won't significantly extend battery life. That may be a slashdottism :)

If I understood the abstract right, the gist of this is that he designed a transmit module with a small internal loop antenna, so that a larger transmit antenna could be inductively coupled instead of electrically driven. This means that all of the bias and driver circuitry internal to the transmit chip, and also all of the bias and transmit circuitry external to the chip, could be done away with. He coupled an antenna to the outside of a microchip to utilize what would essentially be 'waste' magnetic field in a conventional transmitter.

I would also bet that the big boys like Qualcomm probably do something similar already inside of their cell-phone modules. I would imagine that an approach like this eliminates much of the general purpose interfacing that needs to be done between some arbitrary microwave transmit module and some other arbitrary antenna, but things like cellphone transmitter chipsets are so tightly integrated that I bet they already implement something similar.

This idea is pretty useless, since it has been confirmed that cellular companies do not truthfully report the amount of battery life left, so people will make shorter calls and not take up their valuable bandwidth, which they oversold...

Electrical engineering involves an intricate set of tradeoffs. When choosing how to couple two transmitter stages there are many basic ways to do it: direct, capacitive, single-tuned, double-tuned, critically coupled, overcoupled, tapped, T-section, balun, and more. The one you choose depends on a lot of factors: efficiency, power level, bandwidth, phase linearity, space, shielding, cost, parts availability, reliability, feedback, adjustability, temperature stability, and more.