More than 16 technical sessions were held in parallel over 3 days, covering a wide range of research areas, from 5G technologies to applications and services. It was interesting to see that quite a large number of sessions were on security and privacy in 5G, IoT, big data, and clouds. I also found several technical sessions on Cloud-RAN and the way it will contribute to 5G. Green communications was also an important part of ICC. My paper, “Fundamental Tradeoffs in Resource Provisioning for IoT Services over Cellular Networks” (paper, presentation), was presented in a session named “Green Enabling Technologies for IoT for 5G”.

Three interesting papers:

Among the many papers presented, I found the following three exciting and outstanding, and worth your time to read.

In particular, the second paper is close to our techno-economic analysis and has exciting results. The results presented in this paper show that deploying a smart network managing the lighting in the city center of Luxembourg will bring benefits from the first year, i.e., the energy savings will exceed the CAPEX and OPEX. It is also interesting to see that, for saving energy in lighting (which, the authors claim, consumes 40% of the municipality's budget), the authors propose the same ideas as we have in green communications, such as traffic-aware BS/light sleeping.
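As a back-of-the-envelope illustration of this kind of break-even reasoning, here is a minimal payback sketch; all figures are my own illustrative assumptions, not the paper's:

```python
# Hypothetical payback check for a smart-lighting deployment.
# All parameter values below are made-up illustrative numbers.

def payback_years(capex, annual_opex, annual_savings):
    """Years until cumulative net savings cover the initial CAPEX."""
    net_annual = annual_savings - annual_opex
    if net_annual <= 0:
        return float("inf")  # the deployment never pays back
    return capex / net_annual

capex = 400_000           # EUR: sensors, controllers, installation
annual_opex = 50_000      # EUR/year: maintenance, connectivity
annual_savings = 500_000  # EUR/year: energy saved by traffic-aware dimming

print(f"Payback in {payback_years(capex, annual_opex, annual_savings):.2f} years")
```

Whenever the annual savings minus annual OPEX exceed the CAPEX, payback lands within the first year, which is the shape of the claim the paper makes for Luxembourg.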

TUTORIALS:

A good set of tutorials was available on the first and last days of the conference. Among them, TS06, “Recent Progress in Non-Orthogonal Multiple Access”, was of particular interest to me. Beyond this tutorial, I saw quite a large number of research papers on non-orthogonal multiple access at the conference, which to me signals a paradigm shift.

INPUT FROM INDUSTRY:

Besides people from academia, there were several industry exhibitions and forums. As at every other event, National Instruments and their 5G prototypes were there. ICC was also a great place for LoRa (from France) to present itself; they had several accepted papers as well as exhibitions. Interestingly, the presenter was employed by LoRa in Lund, Sweden, and he claimed that LoRa has been deployed in many countries up to now (look at this image). It was strange that SigFox was absent from ICC. From my point of view, and to the best of my literature and web review, LoRa has succeeded in the competition with SigFox. Nokia was also at ICC, but Ericsson was absent (Ericsson is busy these days with 5G tests with Verizon in the USA).

In a recent COMSOC Technology News article, Thomas Hazlett and Michael Honig discuss how to deal with spectrum in those frequency ranges where WiFi and cellular collide. They argue that when spectrum is scarce, unlicensed, “free” spectrum is economically inefficient: free access makes sense when spectrum is abundant, but becomes inefficient when the resource turns scarce, with excessive congestion the expected result in populated locations. In this case, licensing with economic mechanisms would be preferable. The 2.4 GHz ISM band used for WiFi, Bluetooth, etc., is the popular example of this. Although one can agree in the 2.4 GHz case, the authors then extend their reasoning to the 5–6 GHz range. This is clearly an economist's view – spectrum bandwidth is treated as a resource independent of where it is located, what the intended (or even possible) use is, and what the associated infrastructure costs are. As the authors point out, there are basically two types of use – wide-area cellular, and short-range indoor use.

Frequencies below 3 GHz are more or less suited for wide-area systems. As these systems create interference over large areas, a spectrum-shortage situation arises that needs to be dealt with in an economically efficient way, as Hazlett and Honig claim.

However, frequencies above 3 GHz (“centimeter waves”) tend to behave in a very different way. Here propagation behaves more like light, which causes severe coverage problems for outdoor wide-area systems. Such systems require (too) large investments in infrastructure, leaving little money for spectrum fees. The frequencies are, however, ideally suited for indoor short-range systems (similar to WiFi). Very little radiation will leak out of the building where they are used, and thus little interference will be caused in neighboring buildings. As the deployment of indoor base stations is controlled by the building/facility owner, licensing will not be meaningful. There is only one “user”, and the spectrum value in a fictitious auction would be zero. Basically, all the centimeter-wave spectrum is available to the building owner, and thus we have a situation of abundance.
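A quick free-space path-loss comparison shows the frequency penalty at work; this is a minimal sketch with distances and frequencies I picked myself for illustration:

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

# Outdoor link over 200 m: a classic cellular band vs the 5-6 GHz range.
loss_2ghz = fspl_db(2e9, 200)   # roughly 84 dB
loss_6ghz = fspl_db(6e9, 200)   # 20*log10(3) ~ 9.5 dB worse than at 2 GHz

# Add a typical outer-wall penetration loss (assume 10-15 dB at these
# frequencies) and the outdoor-to-indoor budget erodes quickly, while an
# indoor short-range link over 20 m stays modest:
loss_indoor_6ghz = fspl_db(6e9, 20)  # 20 dB less than the 200 m link
print(loss_2ghz, loss_6ghz, loss_indoor_6ghz)
```

The same walls that cost an outdoor operator 10+ dB also keep the indoor system's radiation from leaking out – two sides of the same coin.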

Again, the “property” view on spectrum is more accurate than the concept of a uniform resource with the same properties regardless of location. Some property types are better suited for certain uses; other types fit other requirements. Location, location, location … is indeed true in spectrum management.

The 5G Myth is the title of a book recently published by William Webb. A short summary can also be found in the piece William published on LinkedIn. The author dissects and debunks some of the hype – in particular the “need for speed”. Among other things, he states:

“….that it is in the interests of all the key players to be supportive or even strong promoters of 5G. Academics benefit from 5G initiatives as sources of funding. Manufacturers rely on the roll-out of 5G to provide a boost in revenues. Operators fear if they step out of line they will suffer competitive disadvantage. Governments see political benefit in being supportive. It is in nobody’s interest to rock the boat”

The emperor's new clothes, indeed. Further, he notes that

“… the visions are too utopian. Achieving them would require astonishing break-throughs in radio technology and for subscribers to be prepared to significantly increase their spending. Both are heroic assumptions. In practice, most visions can be adequately achieved with existing technology such as evolved 4G, evolving Wi-Fi and emerging IoT technologies”

Indeed true. The parts of the visions that cannot be handled in an evolutionary fashion are mainly those related to very short delays and extreme reliability. It is hard to see the business case for a massive and costly deployment of the required “revolutionary” new system tailored for a few “extreme” applications. Sure, remote control of vehicles, remote surgery and virtual reality are exciting applications, but none of them is likely to be the “killer application” needed to pay for it all. Webb's vision of highly reliable wireless access everywhere at today's data rates makes much more sense to me too.

The 59th annual IEEE Global Communications Conference was held in Washington, DC, from the 5th to the 7th of December 2016. It was interesting for me to see so many researchers whom I had previously encountered only as names on journal papers.

Our contribution:

My colleagues Istiak, Vorvait, Ciceck, and Guowang and I are here to present our latest research. On the 5th of December, I presented the following paper: Battery Lifetime-Aware Base Station Sleeping Control with M2M/H2H Coexistence, which you can download from here, with the presentation file here.

I also attended the tutorial on Wireless Communications and Networking with Unmanned Aerial Vehicles, offered by Walid Saad and Mehdi Bennis. You can find more information about this topic here.

It is worth noting that in almost half of the technical sessions, the presented results and techniques were related to IoT, aiming at solving either scalability or energy-efficiency issues. Unfortunately, most presented works on energy efficiency for IoT tried to decrease the required transmit power, e.g. by sending a drone to a region in order to reduce the required transmit power from 4 mW to 2 mW. Those who attended my Lic seminar know what is wrong here! Most IoT devices want to send a few bits, not MBytes of data; hence, the increased signaling and the waiting time for the arrival of the UAV are much more dangerous for battery lifetime than the required transmit power. In some sessions, I tried to give the authors feedback on this issue based on our contributions.
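A rough energy budget makes the point; the numbers below are my own illustrative assumptions, not from any presented paper:

```python
# Energy budget for one short IoT report: transmit-power savings vs the
# cost of extra waiting. All parameter values are my own assumptions.

def tx_energy_j(payload_bits, rate_bps, tx_power_w):
    """Energy spent radiating the payload."""
    return (payload_bits / rate_bps) * tx_power_w

payload = 100 * 8   # a 100-byte report
rate = 50e3         # 50 kb/s uplink

e_4mw = tx_energy_j(payload, rate, 4e-3)  # 64 uJ
e_2mw = tx_energy_j(payload, rate, 2e-3)  # 32 uJ
saving = e_4mw - e_2mw                    # 32 uJ saved by halving Tx power

# One extra second of waiting for the UAV with the radio awake,
# at a modest 10 mW circuit power:
e_wait = 1.0 * 10e-3                      # 10 mJ

print(f"Waiting costs {e_wait / saving:.0f}x the Tx-power saving")
```

Halving the transmit power saves tens of microjoules, while a single second of extra waiting burns ten millijoules – hundreds of times more.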

Some points from the conference:

Vahid Tarokh, Professor at Harvard, gave a keynote talk on using new models for processing the big data coming from a massive number of IoT devices. The interested reader may refer to here for more information. I searched his work and found here that he has a pending grant on this topic; from his initial results at GC2016, it seems this topic will be important in the near future.

Similar to our work on techno-economic topics, I found that a group in France has worked on an interesting topic, which might be of interest to the techno-economic researchers in our group. They have modeled the cost of deploying sensors, of having workers replace batteries, and of wireless power transfer, in order to find when the CAPEX will be less than the OPEX. They found that for dense machine deployments, e.g. when the inter-device distance is less than 3 meters, it is worthwhile to use wireless power transfer. You may refer to their paper here.
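Their kind of break-even question can be sketched with a toy cost model; every parameter below is my own guess for illustration, not a value from the authors:

```python
import math

# Toy model: one wireless-power transmitter costing c_wpt covers an area
# a_wpt. With devices on a grid of spacing d, density is 1/d^2, so the
# WPT cost per device is c_wpt * d^2 / a_wpt. Replacing batteries by hand
# costs c_batt per device over the same horizon.

def wpt_cost_per_device(d_m, c_wpt=2000.0, a_wpt_m2=100.0):
    return c_wpt * d_m**2 / a_wpt_m2

def breakeven_spacing_m(c_batt=150.0, c_wpt=2000.0, a_wpt_m2=100.0):
    """Spacing below which WPT beats manual battery replacement."""
    return math.sqrt(a_wpt_m2 * c_batt / c_wpt)

print(f"WPT pays off below ~{breakeven_spacing_m():.1f} m spacing")
```

With these made-up numbers the threshold lands at a couple of meters – the same ballpark as the ~3 m the authors report, though that is a coincidence of my parameter choices rather than a validation.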

From my point of view, a tutorial on techno-economic topics was missing from this conference. Almost everyone to whom I offered the newly posted paper by Jens [1] became interested in the topic. I hope to see more contributions from CoS at the next events.

The Amsterdam-based Russian mobile operator Vimpelcom recently announced that they are embarking on a new path where, instead of charging their subscribers for data plans, the content providers would foot the bill. This comes at a time when a debate on “zero rating” (“toll-free services”, “sponsored connectivity”) is already in full swing. “Zero-rating” is the practice of letting ordinary subscribers with regular data plans access certain content, e.g. social media or music streaming, for “free”, i.e. without letting this consumption affect the monthly “GB bucket”. European regulators and proponents of net neutrality already see a problem here: by giving the paying content providers preferential access, the vast number of other services (including the competitors of the paying content providers) may be put at a disadvantage through less bandwidth and poorer service. It is easy to understand the MNOs – this is their way out of the low-end “bitpipe-provision swamp”. The proponents claim that this is nothing new; the MNOs have already been providing QoS enhancements to paying content providers in the past (e.g. online gaming and video-streaming services). The “jury is still out” in the EU Commission and in regulatory circles on whether zero-rating is in fact compatible with net neutrality.

Vimpelcom now takes this concept to a higher level. Instead of selling data and voice calls, the plan is to offer a single, app-based platform where users can communicate for free. Instead, the MNO will share revenue through partnerships with recognized Internet names, such as Uber. In addition, Vimpelcom wants to mine client data to target new services, e.g. location- and user-behaviour-based services. Vimpelcom has been testing a zero-rated mobile voice, video and text-messaging app, Veon, alongside regular subscriptions, but the step to a free subscription still seems quite significant.

“Rather than depending on shrinking monthly access fees from its customers, Vimpelcom covets the higher growth that comes from competing for a bigger share of the $200-$300 per month a consumer can spend on Internet services on their phones – on things like transport, music or meal delivery”, Reuters tech writers Anthony Deutsch and Eric Auchard conclude. Even though this is probably a wise move in the long run, gaining a “first-mover” advantage, an abrupt change of business model still seems a highly risky endeavor.

Is this a blow to net neutrality or the “open” internet? Well, it depends. Looking at the analogy of physical transport, we have had this situation for a long time. We can either pay to have packages containing goods sent to us using the mail service, or we can go to a store and buy those goods; in the latter case, the transportation of the goods to the store is included in the price of the merchandise. Here we have neutrality: the price and availability of the mail service are not affected by the transport of goods to the store. There is enough capacity on roads and rail, and the two modes of transport do not really need to compete for resources. In the case of your internet connection, the situation may be different. The paying content providers may let the operators reserve a large fraction of the bandwidth for their high-quality services, which means that their current and future competitors will have less bandwidth and less chance of providing competing services. This is a problem in particular for resource-demanding services, like high-quality video, which may consume a significant part of the bandwidth provided to the end user. The more bandwidth the user has access to (e.g. fiber to the home), the less of a problem this will be.
Current regulation and EU directives state that net neutrality should be preserved, but unfortunately give no clear and quantitative answer here. How large a part of the bandwidth can be sponsored? When will it affect the quality of “free” services? Do I have to pay a separate fee for these services (as in the post office example), creating a two-tier internet?

On the 2nd of December, I will present my thesis on “Energy Efficient Machine-Type Communications over Cellular Networks: A Battery Lifetime-Aware Cellular Network Design Framework”. You are most welcome to join us.

I was recently interviewed by the Swedish tech journal Ny Teknik about my thoughts on several operators claiming that they will launch commercial “5G” services in 2017 and 2018. Behind the tabloids' “War has begun”-style headlines, my answer can be summarized by the title of this post. Of course there will be trials and tests of some of the new 5G tech components in the next few years, but deployment of infrastructure at commercial scale of something that deserves the name 5G before 2020 – come on! Even for the new radio interface (which per se provides moderate performance improvements), there isn't even a standard set yet!

Been there, done that – the same happened with 3G and 4G. There are always players who feel they need to be first at the “newG” party, only to find that the party hasn't started yet. So they cook something up – mostly marketing with little technical substance – and call it “5G”. The market actors are of course free to do what they want, but this type of behaviour has three significant downsides for the industry that eventually also hurt the “too-early movers”:

You instill expectations in the customers that you cannot meet (until maybe many years later) – creating a lasting feeling of disappointment. “First impressions last”, unfortunately.

You push vendors to throw some tech components (e.g. a prototype of the new 5G radio interface) on the market in haste, before they have been standardized. Even if these are to be used for “niche applications” (e.g. Verizon's planned Gbit/s fixed wireless broadband-to-the-home products), they may “stick”, and it becomes more difficult to correct deficiencies in the standard when there are already volume products on the market.

Other operators enter a wait-and-see mode. “Oh, 5G is just around the corner, so why should we buy LTE/4G? Let's wait and see.” This could bring the industry to a standstill. Achieving much, much more capacity with LTE/4G and WiFi in a cost-efficient way is perfectly feasible, and the vast majority of the “connected society” and “Internet of Things” applications are best tackled by LTE and its low-power cousins (LTE-M and NB-LTE), which are about to hit the market now.

5G will come and provide significant improvements in several domains – and yes, there need to be trials and demos to validate the new technology. Meanwhile, in the commercial domain, let's hope that most market players keep their cool and focus on solving their customers' needs with the technology that keeps coming, instead of being caught up in some kind of wild race up Everest. The top may be a cold and lonely place …

In a recent report published by the European Commission, the costs and societal benefits of a future 5G deployment are estimated. The report concludes that deploying a 5G infrastructure will cost 56 B€ and that this will create 2.3 million jobs and at least 60.5 B€ in societal gains by 2025(!). An amazing result, while most of us are still trying to assess what 5G will actually mean, which parts will be just a straightforward evolution of 4G/LTE, and what the relation to evolved Wi-Fi is. In any case, the first standards for a new radio interface are not settled until maybe next year. But that's not all – in the report we learn that even in the most favorable scenario, at least 15–20 GHz of spectrum is needed to make 5G happen – clearly a showstopper. How on earth is it possible for 150 5G-PPP experts to come to all these strange and seemingly precise conclusions?

Well, as usual, the “devil” is in the assumptions you make. In the report, the performance- and spectrum-limiting application for 5G is thousands of viewers watching 4K/UHD TV in their cars on every mile of the motorway. As this is realized by a macro-cellular system and not by a dense infrastructure at the roadside, we of course end up with a system of moderate cost but with outrageous spectrum requirements. Wait a minute – mobile TV as a “killer app” – doesn't that sound familiar? Do we really need a new 5G radio interface to make this happen? And how will 50 Mb/s to cars on the motorway help create millions of jobs in Europe?

We had two keynotes, the first by Magnus Frodigh (Research Area Director, Wireless Access Networks, Ericsson) on 5G machine-type communication technologies. The focus was on the double effort to provide connectivity to (1) low-cost mMTC, with some NB-IoT modules expected to reach as low as 5 USD per unit, and (2) mission-critical communications, which is still a growing research area for 5G.

The second keynote speaker was Preben Mogensen (Professor at Aalborg University and Principal Engineer at Nokia – Bell Labs) on the IoT evolution towards 5G; an excellent overview of mMTC, Ultra Reliable Low Latency Communication (URLLC), V2X and Unmanned Aerial Vehicles (UAV)—all of them in the context of 5G.

Opinions on mMTC in 5G seem unanimous: there is not much to be done here; for massive, low-cost devices the answer lies elsewhere, and 5G work is looking into control and mission-critical scenarios. Even for V2X, 4G technologies can cover the baseline requirements. One slide that caught my attention was about the interference problems that might be generated when covering drones with cellular technologies: basically, as drones go up, the interference increases, since the line-of-sight probability and the line-of-sight radius increase, effectively putting more interfering neighbours in LOS… Is this a new dimension to consider in near-future network deployments?
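The growth behind that slide can be seen in a toy geometric model; the clutter angle and BS density below are my own illustrative assumptions, not from the keynote:

```python
import math

# Toy model: a drone at height h is in LOS of a ground base station when
# the elevation angle from the BS exceeds the surrounding rooftop/clutter
# angle theta. The LOS radius is then r = h / tan(theta), so the expected
# number of LOS neighbours grows as rho * pi * r^2, i.e. quadratically in h.

def los_neighbours(h_m, theta_deg=10.0, rho_per_km2=5.0):
    r_km = (h_m / math.tan(math.radians(theta_deg))) / 1000.0
    return rho_per_km2 * math.pi * r_km**2

for h in (30, 60, 120):
    print(f"h = {h:>3} m: ~{los_neighbours(h):.1f} BSs in LOS")
```

Doubling the altitude quadruples the LOS area in this model, so a drone that barely clears the rooftops sees far fewer interferers than one a hundred meters up.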