The consumer electronics industry has always had something of a love-hate relationship with remote controls. It’s painful to design and ship a new remote control with every product, but attempts to come up with an interoperable standard have been plagued with problems. As a result our homes are littered with lost and unused remote controls. A few independent companies have tried to solve the problem by producing decent, but generally expensive universal controls, but they’re still a rarity around the home.

In the early days of remotes, the dominant technology was ultrasonic, but they have evolved to the point where today almost all use infrared (IR) transmitters. IR is cheap and directional; the latter feature is useful in a world where there is limited interoperability, as interference can be mitigated by pointing the remote control in the right direction. However, it’s a one-way connection, as keeping a photodiode alive to look for a signal coming back from the TV would destroy the battery life.

As the audio-video equipment we buy has become more sophisticated, manufacturers have been looking for an alternative technology that would allow low power, two-way communication between equipment and remote. The obvious solution is wireless, but the question is which one? A few years ago chip vendors, who were looking for customers for their 802.15.4 radio ICs, decided to put together a standard to try and sell a few more of their chips. (802.15.4 is the underlying radio standard used by ZigBee and other specialist wireless stacks, none of which are shipping in the volumes required to make chip manufacture very profitable.) That standard became known as RF4CE (Radio Frequency for Consumer Electronics) and was eventually embraced by the ZigBee Alliance. The Japanese AV industry bought the story and has recently begun shipping RF4CE handsets into its local market. As the volumes have ramped up, rumours are growing that an increasing number are being returned because they don’t work. It’s too early to be sure what the reason is, but when you delve into the detail of the RF4CE standard it looks a bit flaky. That could herald a golden opportunity for Bluetooth low energy, which is charging onto the remote control scene like a wireless knight in shining armour.

The RF4CE specification was released by the ZigBee Alliance in March 2009. Although it carries the ZigBee name, it’s not a mesh network, nor is it interoperable with the better known ZigBee 2007 or ZigBee PRO standards. Instead it uses a much simplified network layer (NWK), which is designed for less critical applications where the two prime requirements are battery life and low latency. Low latency means that when you press a button, the effect at the other end, typically the TV, is essentially instantaneous. Users won’t accept a wireless standard where there’s a significant delay between pushing a button to change channel or mute the sound and seeing it take effect.

RF4CE operates in the same 2.4GHz spectrum that is used by ZigBee, Bluetooth, Wi-Fi, microwave ovens, baby alarms and proprietary wireless solutions, which gives it a problem that every other radio operating in this band has: it needs to cope with interference from other devices.

The method chosen by the RF4CE standard is to use the concept of frequency agility. Frequency agility allows a network coordinator to set up a network on the assumption that it will always work on one fixed frequency, but, if it discovers that it is experiencing interference, then the whole network moves to a different, fixed frequency. The hope is that the system will eventually find a frequency where it can operate without interference.
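In pseudo-Python, the scheme looks something like this. The standard leaves the interference-detection mechanism to the implementer, so the failure-rate threshold and the channel order here are my own illustrative assumptions:

```python
import itertools

# Sketch of frequency agility: the whole network sits on one fixed
# channel and only moves when the coordinator decides the current one
# is unusable. The 50% failure threshold is an illustrative assumption.

class FrequencyAgileCoordinator:
    CHANNELS = (15, 20, 25)          # the three RF4CE channels

    def __init__(self):
        self._cycle = itertools.cycle(self.CHANNELS)
        self.channel = next(self._cycle)

    def report_failure_rate(self, failed, attempted):
        # If too many transmissions fail, migrate the entire network.
        # In RF4CE this move is slow - it can take several seconds.
        if attempted and failed / attempted > 0.5:
            self.channel = next(self._cycle)
        return self.channel

coord = FrequencyAgileCoordinator()
print(coord.channel)                      # starts on channel 15
print(coord.report_failure_rate(8, 10))   # heavy interference: move to 20
print(coord.report_failure_rate(1, 10))   # channel now usable: stay on 20
```

Note that the whole network only ever occupies one channel at a time; there is no hopping, just an occasional wholesale move.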

To make this work, the people writing the standard needed to determine which frequencies the system should operate on and which one the network coordinator should choose when the system first starts. That means looking at what else is likely to be transmitting in the band. It’s difficult to predict where microwave ovens, baby alarms and the like will transmit, as manufacturers are free to let these transmit anywhere they want within the band. Rather counter-intuitively, Bluetooth is not normally a problem, as it is a frequency hopping system, changing the frequency it transmits on many times a second. So if it causes, or experiences, interference, it will have moved to a different frequency before it retries its connection. The big invisible elephant occupying the 2.4GHz spectrum is Wi-Fi. Wi-Fi access points operate at a fixed frequency, like ZigBee and RF4CE, which is generally set for the life of the access point when it is initially installed. It’s a big spectral elephant, with a bandwidth of 22MHz, compared to the slim-line 2MHz of an 802.15.4 channel or the 1MHz of Bluetooth, which means it can block a large chunk of the spectrum.

Wi-Fi operates on eleven overlapping channels. Because these channels are so wide, there are only three non-overlapping channels, which access points are normally set to operate on. They are spaced apart so they don’t interfere with each other, and are known as channels 1, 6 and 11. To keep the design simple, RF4CE only operates on three channels of its own, which are ZigBee channels 15, 20 and 25. (To confuse everyone, these numbers have absolutely nothing to do with the channel numbers of Wi-Fi. You don’t need to know why they are numbered that way, but if you do, buy the book.) What you do need to know is that they are neatly chosen to fit into the gaps between the common Wi-Fi channels to minimise the chance of interference.

That’s nice in theory, but the real world has a nasty habit of kicking theory in the teeth. To see what that means, we need to see what the frequency channels look like in real life. If you have Wi-Fi on your mobile phone or laptop, there are some neat pieces of software that you can use to display the local Wi-Fi networks. Have a look at the excellent WiFi Analyzer for Android, or the more complex NetSurveyor for a PC. These show you that the real world is not like the diagram above – there’s a lot more happening in the spectrum. If you don’t have one of these, read a piece of research done by Washington University in St. Louis. They mapped the spectrum usage in student accommodation on each of the 2MHz wide ZigBee channels over the course of a 24-hour day. What they found is shown below:

The graphs indicate what the throughput is for a ZigBee transmitter set to continuously send data on each channel. What it shows is that on every one of the ZigBee Channels in the 2.4GHz spectrum (channels 11 to 26), there are periods within the day when the throughput falls to zero. In other words, there are large chunks of time when an RF4CE transmission would not be able to get through. The research did not determine what caused this, but pointed out that a radio system that works on a fixed channel will experience periods where its transmissions will be blocked.

If you think about it, there’s quite a lot in common between student housing in St Louis and flats in Japan. In both cases rooms tend to be small and located close together, and both sets of occupants have a passion for new technology. So it’s not unexpected that RF4CE remote controls shipped to Japanese consumers might experience interference. As we mentioned above, RF4CE has a system to attempt to cope with interference, which is called frequency agility. When a node detects that a channel is not working, it sets in train a process to move every connected device to one of the other two available channels. The expectation is that this will be a rare event, and as a result, it’s not designed to be a quick change, but can take several seconds. When that happens once a month, a user can live with it. When it happens multiple times a day, it renders a remote control useless.

That’s not good, but matters get even worse for RF4CE in Japan. As we saw above, the three frequencies that RF4CE operates on have been chosen to avoid Wi-Fi channels 1, 6 and 11. When I said that these avoided the common Wi-Fi channels I wasn’t being totally honest. They do in the US, but they don’t anywhere else. I’m assuming that these channels were specified by engineers in the US, because in Europe and Japan the available 2.4GHz spectrum is wider, extending up to channel 13 or, in some countries, channel 14.

This means that the top channel for RF4CE falls bang in the middle of the Wi-Fi channel which is often set as the default for access points in Europe and the Far East. That effectively reduces RF4CE to operating over two fixed channels, and one of these partially overlaps Wi-Fi channel 7, which is also used outside the US. So it only has one channel which might be clear, and that’s not guaranteed. It means that RF4CE has nowhere to escape to, making it look like a standard which has been designed to fail.
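It’s easy to check the arithmetic. The channel centre frequencies below come straight from the 802.15.4 and 802.11 channel plans; the overlap test is a simplification that treats a Wi-Fi channel as 22MHz wide and an 802.15.4 channel as 2MHz wide:

```python
# Which RF4CE channels collide with which Wi-Fi channels?
# Centre frequencies per IEEE 802.15.4 (2.4 GHz band) and IEEE 802.11.

def zigbee_centre_mhz(ch):           # 802.15.4 channels 11-26
    return 2405 + 5 * (ch - 11)

def wifi_centre_mhz(ch):             # 802.11 channels 1-13
    return 2412 + 5 * (ch - 1)

def overlaps(wifi_ch, zigbee_ch):
    # Overlap when the half-bandwidths (11 MHz + 1 MHz) meet.
    return abs(wifi_centre_mhz(wifi_ch) - zigbee_centre_mhz(zigbee_ch)) < 12

RF4CE_CHANNELS = (15, 20, 25)

for wifi_ch in (1, 6, 11):           # the common US defaults
    print(wifi_ch, [z for z in RF4CE_CHANNELS if overlaps(wifi_ch, z)])
# All three lists are empty: 15, 20 and 25 sit neatly in the gaps.

for wifi_ch in (7, 13):              # channels in use outside the US
    print(wifi_ch, [z for z in RF4CE_CHANNELS if overlaps(wifi_ch, z)])
# Channel 7 overlaps ZigBee channel 20; channel 13 sits on top of 25.
```

Run it and the article’s claim drops out of the numbers: a Wi-Fi network on channel 13 blocks ZigBee channel 25 completely, and one on channel 7 takes out channel 20, leaving RF4CE a single escape route.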

And that appears to be what’s happening in Japan. As a result, manufacturers are looking to migrate to Bluetooth low energy for their next generation of remote controls. Bluetooth low energy employs adaptive frequency hopping, working across 37 different frequencies in the same 2.4GHz band. It’s a scheme which makes it very robust to interference, so that even in a noisy spectrum it has an excellent chance of working. Moreover its adaptive capability lets it dynamically exclude parts of the spectrum where it detects interference, giving it the best chance of low latency throughout the course of the day. As well as being robust to interference, Bluetooth low energy offers other advantages, not least of which is that, because it is being built into mobile phones, users will be able to use their phones to control devices. The first prototypes are currently being tested by AV equipment vendors. If all goes well, your next TV, DVD player or set top box is likely to be controlled by Bluetooth.
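For the curious, the original Bluetooth low energy channel selection algorithm is simple enough to sketch in a few lines of Python: hop a fixed increment around the 37 data channels, and remap anything the adaptive channel map has marked as bad onto one of the remaining good channels. The hop increment is negotiated per connection; the channel map below, marking eight channels as bad, is just an illustration:

```python
# Sketch of BLE channel selection algorithm #1 (from the Bluetooth
# Core Specification): hop, then remap bad channels onto good ones.

def next_channel(last_unmapped, hop_increment, channel_map):
    unmapped = (last_unmapped + hop_increment) % 37
    if channel_map[unmapped]:                 # channel currently good
        return unmapped, unmapped
    used = [ch for ch in range(37) if channel_map[ch]]
    return unmapped, used[unmapped % len(used)]   # remap to a good one

# Suppose channels 0-7 are blocked, e.g. by a Wi-Fi access point.
channel_map = [False] * 8 + [True] * 29

unmapped = 0
for _ in range(5):
    unmapped, ch = next_channel(unmapped, 7, channel_map)
    print(ch)   # every channel actually used avoids the blocked region
```

The point is that a blocked chunk of spectrum costs BLE a handful of its 37 channels; it doesn’t take the link down the way it takes down a fixed-frequency system.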

From the silicon industry’s point of view RF4CE was just what they wanted. I recall Freescale’s projections for the percentage of 802.15.4 chips they would sell into ZigBee mesh dropping from around 80% to 30% when they realised the potential of RF4CE. They and their competitors saw RF4CE as a market which would reach volume far faster than medical or smart energy.

I’m pleased Ember is profitable, but it’s been a long struggle and their investors still need to work out how to get their money back. And they’re the exception. Earlier this year Jennic ran out of cash and was sold. MeshNetics – albeit not a chip maker – also had to bail out, and it’s not clear whether the 802.15.4 lines at Freescale and TI are profitable.

However, my real point is the shortsightedness of the RF4CE specification. I fail to comprehend how they could limit it to three frequencies and forget that the rest of the world doesn’t work on the same channels as the US. (Sadly, I can guess how that happened, but it doesn’t help the progress of wireless.)

Texas Instruments is a supplier of Wi-Fi, Bluetooth, Bluetooth Low Energy and RF4CE and therefore has no reason to be biased as the author clearly is. He is a consultant to a Bluetooth supplier and the article has been written with a clear agenda and based on false claims. I have investigated his claims of RF4CE remotes being returned and there is no evidence to support this.

A little knowledge can be dangerous, as evidenced by the author’s lack of understanding of Wi-Fi, RF4CE and the CSMA-CA mechanism that is used to share the channel. Even when RF4CE operates in the middle of a Wi-Fi channel, RF4CE is able to transmit in the gaps in Wi-Fi activity. Readers can reference this document http://focus.ti.com/lit/an/swra285/swra285.pdf to see just how robust RF4CE is.

Like TI, I have experience of Bluetooth, Wi-Fi, ZigBee and proprietary radio and have worked with all of them. I also believe that there is no one radio perfect for every situation – each has its strengths and weaknesses. However, what I have always stressed to product designers is that the real world is very different from the development lab, in much the same way that a main road at rush hour is very different from an automotive company’s test track. The number of vehicles on the road is what determines how fast a car will go, not the engine specification.

As far as I am aware, the research that is being done by Washington University is totally independent. They are looking at the real world of wireless spectrum and showing that it is very different from what theory suggests. Bluetooth, Wi-Fi and 802.15.4 have tried to design their protocols to cope with each other, so have pretty well defined ways of accessing the spectrum to cope with mutual interference. But there are an equal number of proprietary radio implementations that don’t cooperate. They just shout, because they can.

That’s where real world problems appear. CSMA-CA (Carrier Sense Multiple Access – Collision Avoidance) is a very polite protocol where devices listen for other users and then back off until the spectrum is clear. It’s like Victorian gents at a door going through the “after you”, “no, after you”, “no, I insist, after you” routine. And after each failed attempt CSMA-CA does a random back-off before trying again. So if a transmitter constantly sees interference it can procrastinate for ever.
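For those who want the detail, the unslotted CSMA-CA algorithm from 802.15.4 looks roughly like this. The constants are the spec defaults; the clear-channel callback stands in for the radio’s clear channel assessment (CCA), and the back-off delay is computed rather than actually slept:

```python
import random

# Sketch of 802.15.4 unslotted CSMA-CA, the "polite" back-off
# described above. Constants are the 802.15.4 defaults.

UNIT_BACKOFF_US = 320        # 20 symbols at 2.4 GHz
MAC_MIN_BE, MAC_MAX_BE = 3, 5
MAC_MAX_CSMA_BACKOFFS = 4

def csma_ca(channel_is_clear):
    nb, be = 0, MAC_MIN_BE                    # back-off count, exponent
    while True:
        # Wait a random number of unit back-off periods (not slept here).
        wait_us = random.randint(0, 2 ** be - 1) * UNIT_BACKOFF_US
        if channel_is_clear():
            return ("TRANSMIT", nb)           # channel idle: go ahead
        nb += 1
        be = min(be + 1, MAC_MAX_BE)          # back off for longer
        if nb > MAC_MAX_CSMA_BACKOFFS:
            return ("FAILURE", nb)            # give up; higher layer retries

# A permanently busy channel makes the algorithm give up every time:
print(csma_ca(lambda: False))    # ('FAILURE', 5)
print(csma_ca(lambda: True))     # ('TRANSMIT', 0)
```

The key number is MAC_MAX_CSMA_BACKOFFS: after five failed assessments the packet is simply dropped back to the application, which is exactly the prevarication the gents-at-the-door analogy describes.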

A blind spot in CSMA-CA is that the sensing is done by the transmitter. That means that it is unaware of whether the receiver is being blocked by another transmitter which is too far away for it to sense. That’s sufficiently common in the real world to have a name for itself – the “hidden node”. 802.11 has recognised the importance of this and uses a Network Allocation Vector to inform other nodes of the intended duration of its conversation, which helps to ensure they don’t interfere. I don’t believe that ZigBee supports this. Instead it lets the transmission proceed, and the unit has to try again when it fails to receive an acknowledgement, adding further latency.
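The hidden node problem is easy to illustrate with a toy model. The positions and the 10 metre sense range below are purely illustrative:

```python
# Toy illustration of the hidden node problem: the sender's carrier
# sense says the channel is clear, yet the receiver is being jammed
# by a node the sender cannot hear.

SENSE_RANGE_M = 10.0

def can_hear(a, b):
    return abs(a - b) <= SENSE_RANGE_M

# Three nodes on a line (positions in metres):
sender, receiver, hidden = 0.0, 9.0, 18.0

sender_thinks_clear = not can_hear(sender, hidden)   # True: 18 m apart
receiver_is_jammed = can_hear(receiver, hidden)      # True: only 9 m apart

print(sender_thinks_clear, receiver_is_jammed)       # True True: collision
# Carrier sensing at the sender can never detect this; 802.11's
# RTS/CTS exchange and Network Allocation Vector let the nodes around
# the *receiver* learn to stay quiet instead.
```

The sender dutifully performs its CCA, sees a clear channel, transmits, and the packet dies at the receiver anyway.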

Both Hattori and Kobus highlight TI’s application note, which covers the same ground as this post, but in a development lab environment. It doesn’t include hidden nodes which may block the receiver, or any other source of interference. It makes much of the fact that a 40MHz 802.11n transmitter is the absolute worst case. If the RF4CE stack is employing frequency agility, and the paper implies it does, that’s irrelevant. If the frequency agility feature works, the RF4CE unit will move to channel 15 and be largely unaware of the Wi-Fi transmission. I would prefer to see the access point running in 802.11b mode. 802.11b is spectrally less efficient than 802.11n, with longer preambles on each packet. The simpler coding also means that the power per bit of the data portion of each packet is likely to be around 5dB greater. My guess is that it would be a substantially worse interferer than 802.11n.

The paper points out that designers can improve performance by tweaking some of the stack parameters, but to my mind that’s saying the specification is not well enough defined. It’s a technique that can work well if the same tweaks go into the products at both ends of the link, which implies that they contain the same chips and the same stacks. That’s certainly what any semiconductor manufacturer would want, but it’s not the real world.

One other item to address is the proximity of the RF4CE receiver to other transmitters. I’m seeing an increasing number of TVs that integrate Wi-Fi and Bluetooth. In this case the RF4CE transceiver is unlikely to be 20cm away from the Wi-Fi transmitter; it’s more likely to be 2cm away on the same PCB. Wi-Fi and Bluetooth have learnt to live together, and manufacturers provide protocols for them to talk to each other at the baseband level to ensure that collocated transmitters do not interfere with each other. These coexistence mechanisms extend to Bluetooth low energy. I am not aware of any work to do the same for RF4CE collocated with either Bluetooth or Wi-Fi. But collocation is what’s happening in consumer electronics goods, and once again, the standards need to reflect reality or the consumer experience will suffer.

I have a lot of respect for TI. I’ve used their RF chips in many products, which have sold in the millions. But radio standards must cope with the real world. I’d refer every product designer back to that Washington University study and suggest that they emulate that environment for their product testing.


About Creative Connectivity

Creative Connectivity is Nick Hunn's blog on aspects and applications of wireless connectivity. Having worked with wireless for over twenty years I've seen the best and worst of it and despair at how little of its potential is exploited.

I hope that's about to change, as the demands of healthcare, energy and transport apply pressure to use wireless more intelligently for consumer health devices, smart metering and telematics. These are my views on the subject - please let me know yours.
