Wednesday, 7 March 2018

This year at MWC, I took the time to go and see as many companies as I could. My main focus was connectivity solutions, infrastructure, devices, gadgets and anything else cool. I have to say that I wasn't too impressed. I found some things later on Twitter or YouTube, but as it happens, one cannot see everything.

I have to be honest, I haven't seen a WOW demo yet at #MWC18. While there is lots of interesting stuff, it's all the same old, tired stuff.

I will be writing a blog on Small Cells, Infrastructure, etc. later on, but here are some cool videos that I have found. As it's a playlist, if I find any more, they will be added to the same playlist below.

The big vendors did not open up their stands for everyone (even I couldn't get in 😉) but the good news is that most of their demos are available online. Below are the names of the companies that had official MWC 2018 websites. I will add more when I find them.

Anyway, back in December, 3GPP and the Virtual Reality Industry Forum (VRIF) held a workshop on VR Ecosystem & Standards. All the materials, including the agenda, are available here. The final report is not there yet, but I assume there will be a press release when it is published.

Of the many presentations, here is what I found most interesting:

From presentation by Gordon Castle, Head of Strategy Development, Ericsson

For anyone wanting to learn more about six degrees of freedom (6-DoF), see this Wikipedia entry. According to the Nokia presentation, Facebook's marketing people call this "6DOF"; the engineers at MPEG call it "3DOF+".

XR stands for 'cross reality': any hardware that combines aspects of AR, MR and VR, such as Google Tango.

From presentation by Devon Copley, Former Head of Product, Nokia Ozo VR Platform

Some good stuff in the presentation.

From presentation by Youngkwon Lim, Samsung Research America; the presentation provided a link to a recent YouTube video on this presentation. I really liked it so I am embedding that here:

Sunday, 10 September 2017

Back in 2013, I spoke about Smart Batteries. I am still waiting for someone to deliver on that. In the meantime, I noticed that you can use an Android phone to charge another phone, albeit via a cable. See the pic below:

You are probably all aware of the Samsung Galaxy Note 7 catching fire. In case you are interested in the reasons, The Guardian has a good summary here. You can also see the pic below that summarises the issue.

Lithium-ion batteries have always been criticised for their tendency to catch fire (see here and here), but researchers have been working on ways to reduce the risk. There are some promising developments.

The electrochemical masterminds at Stanford University have created a lithium-ion battery with built-in flame suppression. When the battery reaches a critical temperature (160 degrees Celsius in this case), an integrated flame retardant is released, extinguishing any flames within 0.4 seconds. Importantly, the addition of an integrated flame retardant doesn't reduce the performance of the battery.

Researchers at the University of Maryland and the US Army Research Laboratory have developed a safe lithium-ion battery that uses a water-salt solution as its electrolyte. Lithium-ion batteries used in smartphones and other devices are typically non-aqueous, as they can reach higher energy levels. Aqueous lithium-ion batteries are safer because their water-based electrolytes are non-flammable, unlike the highly flammable organic solvents used in their non-aqueous counterparts. The scientists have created a special gel, which keeps water from reacting with graphite or lithium metal and setting off a dangerous chain reaction.

Starting about two years ago, fears of a lithium shortage almost tripled prices for the metal, to more than $20,000 a ton, in just 10 months. The cause was a spike in the market for electric vehicles, which were suddenly competing with laptops and smartphones for lithium-ion batteries. Demand for the metal won't slacken anytime soon; on the contrary, electric car production is expected to increase more than thirtyfold by 2030, according to Bloomberg New Energy Finance. Even if the price of lithium soars 300 percent, battery pack costs would rise only by about 2 percent.
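As a sanity check on that last claim, here is a quick back-of-envelope calculation. The pack cost and lithium cost share below are my own illustrative assumptions, not sourced figures, and I am taking a "300 percent" rise to mean the price quadruples:

```python
# Back-of-envelope: how sensitive is battery pack cost to the lithium price?
# All numbers are illustrative assumptions, chosen to match the ~2% claim.

pack_cost = 10000.0        # assumed $ per EV battery pack
lithium_share = 0.0067     # assumed share of pack cost attributable to lithium

lithium_cost = pack_cost * lithium_share

# A 300% price rise means the lithium price quadruples
price_multiplier = 4.0
new_pack_cost = pack_cost - lithium_cost + lithium_cost * price_multiplier

increase_pct = (new_pack_cost - pack_cost) / pack_cost * 100
print(f"Pack cost rises by about {increase_pct:.1f}%")
```

The point is simply that lithium is a small enough fraction of total pack cost that even a dramatic price spike moves the pack price very little.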

University of Washington researchers recently demonstrated the world's first battery-free cellphone, created with funding from the U.S. National Science Foundation (NSF) and a Google Faculty Research Award for mobile research. The battery-free technology harvests energy from the signal received from the cellular base station (for reception) and the voice of the user (for transmission) using a technique called backscattering. Backscattering for battery-free operation is best known for its use in radio frequency identification (RFID) tags, typically utilized for applications such as locating products in a warehouse and keeping track of high-value equipment. An RFID base station (called a reader) "pings" the tag with an RF pulse, which allows the tag to harvest microwatts of energy from it—enough to return a backscattered RF signal modulated with the identity of the item.

Unfortunately, harvesting generates very little energy; so little, in fact, that you really need a new standard. For instance, Wi-Fi signals transmit continuously, but harvesting that energy only enables transmission ranges of about 10 feet today. Range will be the big challenge in making this technology successful.

So, unfortunately, we won't be seeing them anytime soon.

Recycling of materials is always a concern, especially now that the use of Lithium-ion is increasing. Financial Times (FT) recently did a good summary of all the companies trying to recycle Lithium, Cobalt, etc.

Mr Kochhar estimates over 11m tonnes of spent lithium-ion batteries will be discarded by 2030. The company is looking to process 5,000 tonnes a year to start with and eventually 250,000 tonnes, a similar amount to a processing plant for mined lithium, he said. The battery industry currently uses 42 per cent of global cobalt production; cobalt is a critical metal for lithium-ion cells. The remaining 58 per cent is used in diverse industrial and military applications (super alloys, catalysts, magnets, pigments…) that rely exclusively on the material.

According to Wikipedia, the purpose of cobalt (Co) within LIBs is to act as a sort of bridge for the lithium ions to travel on between the cathode (the positive end of the battery) and the anode (the negative end). During charging, the cobalt is oxidised from Co³⁺ to Co⁴⁺; that is, the transition metal cobalt loses an electron. During discharge, the cobalt is reduced from Co⁴⁺ back to Co³⁺. Reduction is the opposite of oxidation: it is the gaining of an electron, and it decreases the overall oxidation state of the compound. Oxidation and reduction reactions are usually coupled together in what are known as redox (reduction-oxidation) reactions. This chemistry was utilised by Sony in 1990 to produce lithium-ion cells.

“Lithium-ion batteries were supposed to be different from the dirty, toxic technologies of the past. Lighter and packing more energy than conventional lead-acid batteries, these cobalt-rich batteries are seen as ‘green.’ They are essential to plans for one day moving beyond smog-belching gasoline engines. Already these batteries have defined the world’s tech devices.

“Smartphones would not fit in pockets without them. Laptops would not fit on laps. Electric vehicles would be impractical. In many ways, the current Silicon Valley gold rush — from mobile devices to driverless cars — is built on the power of lithium-ion batteries.”

What The Post found is an industry that’s heavily reliant on ‘artisanal miners’ or creuseurs, as they’re called in French. These men do not work for industrial mining firms, but rather dig independently, anywhere they may find minerals, under roads and railways, in backyards, sometimes under their own homes. It is dangerous work that often results in injury, collapsed tunnels, and fires. The miners earn between $2 and $3 per day by selling their haul at a local minerals market.

There is big potential for reducing waste and improving lives; hopefully we will see some developments on this front soon.

Friday, 12 May 2017

Dan Warren, the former GSMA Technology Director who created VoLTE and coined the term 'phablet', has been busy in his new role as Head of 5G Research at Samsung R&D in the UK. In a presentation delivered a couple of days back at the Wi-Fi Global Congress, he set out a realistic vision of what 5G really means.

A brief summary of the presentation in his own words below, followed by the actual presentation:

"I started with a comment I have made before – I really hate the term 5G. It doesn’t allow us to have a proper discussion about the multiplicity of technologies that have been throw under the common umbrella of the term, and hence blurs the rationale for one why each technology is important in its own right. What I have tried to do in these slides is talk more about the technology, then look at the 5G requirements, and consider how each technology helps or hinders the drive to meet those requirements, and then to consider what that enables in practical terms.

The session was titled ‘5G – beyond the hype’ so in the first three slides I cut straight to the technology that is being brought in to 5G. Building from the Air Interface enhancements, then the changes in topology in the RAN and then looking at the ‘softwarisation’ of the Core Network. This last group of technologies sets up the friction in the network between the desire to change the CapEx model of network build by placing functions in a Cloud (both C-RAN and an NFV-based Core, as well as the virtualisation of transport network functions) and the need to push functions to the network edge by employing MEC to reduce latency. You end up with every function existing everywhere, data breaking out of the network at many different points, and some really hard management issues.

On slide 5 I then look at how these technologies line up to meeting 5G requirements. It becomes clear that the RAN innovations are all about performance enhancement, but the core changes are about enabling new business models from flexibility in topology and network slicing. There is also a hidden part of the equation that I call out, which is that while technology enables the central five requirements to be met, it also requires massive investment by the Operator. For example, you won’t reach 100% coverage if you don’t build a network that has total coverage, so you need to put base stations in all the places where they don’t exist today.

On the next slide I look at how network slicing will be sold. There are three ways in which a network might be sliced – by SLA or topology, by enterprise customer and by MVNO. The SLA or topology option is key to allowing the co-existence of MEC and Cloud based CN. The enterprise or sector based option is important for operators to address large vertical industry players, but each enterprise may want a range of SLAs for different applications and devices, so you end up with an enterprise slice being made up of sub-slices of differing SLA and topology. Then, an MVNO may take a slice of the network, but will have its own enterprise customers that will take a sub-slice of the MVNO slice, which may in turn be made of sub-sub-slices of differing SLAs. Somewhere all of this has to be stitched back together, so my suggestion is that ‘Network Splicing’ will be as important as network slicing.

The slide illustrates all of this again and notes that there will also be other networks that have been sliced as well, be that 2G, 3G, 4G, WiFi, fixed, LPWA or anything else. There is also going to be an overarching orchestration requirement both within a network and in the Enterprise customer (or more likely in System Integrator networks who take on the ‘Splicing’ role). The red flags show that Orchestration is both really difficult and expensive, but the challenge for the MNO will also exist in the RAN. The RRC will be a pinch point that has to sort out all of these devices sitting in disparate network topologies with varying demands on the sliced RAN.

Then, in the next four slides I look at the business model around this. Operators will need to deal with the realities of B2B or B2B2C business models, where they are the first B. The first ‘B’s price is the second ‘B’s cost, so the operator should expect considerable pressure on what it charges, and to be held contractually accountable for the performance of the network. If 5G is going to claim 100% coverage, 5 9’s reliability, 50Mbps everywhere and be sold to enterprise customers on that basis, it is going to have to deliver, or else there will be penalties to pay. On the flip side, if all operators do meet the 5G targets, then they will become very much the same, so the only true differentiation option will be on price. With the focus on large scale B2B contracts, this has all the hallmarks of a race downwards and commoditisation of connectivity, which will also lead to disintermediation of operators from the value chain on applications.

So to conclude, I pondered what the real 5G justification is. Maybe operators shouldn’t be promising everything, since there will be healthy competition on speed, coverage and reliability while those remain as differentiators. Equally, it could just be that operators will fight out consumer market share on 5G, but that doesn’t offer any real uplift in market size, certainly not in mature, developed-world markets. The one thing that is sure is that there is a lot of money to be spent getting there."
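The nested slice, sub-slice and sub-sub-slice structure Dan describes can be sketched as a toy tree. The names, SLA labels and the flatten-based 'splicing' step below are purely illustrative assumptions, not anything from 3GPP:

```python
# Toy model of nested network slices (MNO -> enterprise/MVNO -> sub-slices).
from dataclasses import dataclass, field

@dataclass
class Slice:
    name: str
    sla: str                      # e.g. "low-latency", "best-effort" (assumed labels)
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

    def flatten(self):
        """'Splicing': walk the tree and list every leaf slice that must be stitched together."""
        if not self.children:
            return [self]
        leaves = []
        for c in self.children:
            leaves.extend(c.flatten())
        return leaves

mno = Slice("MNO network", "aggregate")
enterprise = mno.add(Slice("Enterprise A", "mixed"))
enterprise.add(Slice("Enterprise A / sensors", "low-latency"))
enterprise.add(Slice("Enterprise A / office data", "best-effort"))
mvno = mno.add(Slice("MVNO B", "mixed"))
mvno.add(Slice("MVNO B / Enterprise C", "low-latency"))

print([s.name for s in mno.flatten()])
```

Even this tiny tree shows the management problem: the operator sells one slice per customer, but what it actually has to orchestrate is the full set of leaves.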

Sunday, 11 September 2016

The above picture is a summary of the spectrum that was agreed to be studied for IMT-2020 (5G). You can read more about that here. I have often seen discussions around how much spectrum would be needed by each operator in total. While it's a complex question, we cannot be sure until 5G is completely defined. There have been some discussions about the requirements, which I am listing below. More informed readers, please feel free to add your views as comments.

Real Wireless has done some demand analysis on how much spectrum is required for 5G. A report by them for the European Commission is due to be published soon. As can be seen in the slide above, one of the use cases is a multi-gigabit motorway. If operators deploy 5G the way they have deployed 4G, then 56 GHz of spectrum would be required. If they move to a 100% shared approach, where all operators act as MVNOs and another entity deploys all the infrastructure, including spectrum, then the requirement drops to 14 GHz.
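The 56 GHz versus 14 GHz figures are consistent with a simple duplication argument: with independent networks, each operator has to carry the peak demand on its own spectrum. The four-operator count below is my own assumption to make the arithmetic work, not a figure from the report:

```python
# Sketch of why full network sharing cuts the spectrum requirement:
# N independent operators each duplicate the infrastructure and spectrum,
# while one shared network serves everyone once.

per_shared_network_ghz = 14   # spectrum needed by one fully shared network (from the slide)
num_operators = 4             # assumed number of competing operators

unshared_ghz = per_shared_network_ghz * num_operators
print(unshared_ghz)  # 56 GHz if every operator deploys independently
```

Reality is less clean than a straight multiplication (trunking gains, uneven market shares), but it shows the rough shape of the saving.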

This is in addition to all the other spectrum for 2G, 3G & 4G that the operator already holds. I have embedded the presentation below and it can be downloaded from here:

The UK Spectrum Policy Forum (UKSPF) recently held a workshop on Frequency bands for 5G, the presentations for which are available to download on the link I provided.

It's going to be a huge challenge to estimate how much spectrum each application will require and what its priority would be compared to other applications. mmMagic is one such group looking at spectrum requirements, use cases, new concepts, etc. They have estimated that around 3.1 GHz would be required by each operator for 99% reliability. This seems more reasonable. It would be interesting to see how much operators would be willing to spend for such a quantity of spectrum.

Sunday, 29 May 2016

Samsung is one of the 5G pioneers and has been active in this area for quite a while, working across different technology areas while also making its results and details available so others can appreciate and get an idea of what 5G is all about.

I published a post back in 2014 from their trials going on then. Since then they have been improving on these results. They recently also published the 5G vision paper which is available here and here.

Saturday, 2 April 2016

When I posted April Fools' jokes on the blog over the last couple of years (see 2014 & 2015), they proved very popular, so I thought it was worth posting them this year too. If I missed any interesting ones, please add them in the comments.

Wi-Fly: Gone are the days of unnoticed, unzipped trouser zippers upon exiting the restroom. Should your fly remain open for more than three minutes, the ZipARTIK module will send a series of notifications to your smartphone to save you from further embarrassment.

Get Up! Alert: Using pressure sensors, Samsung’s intelligent trousers detect prolonged periods of inactivity and send notifications to ‘get up off of that thing’ at least once an hour. Should you remain seated for more than three hours, devices embedded in each of the rear pockets send mild electrical shocks to provide extra motivation.

Keep-Your-Pants-On Mode: Sometimes it’s easy to get carried away with the moment. The Samsung Bio-Processor in your pants checks your bio-data including your heart rate and perspiration level. If these indicators get too high, Samsung’s trousers will send you subtle notifications as a reminder of the importance of keeping your cool.

Fridge Lock: If the tension around your waist gets too high, the embedded ARTIK chip module will send signals to your refrigerator to prevent you from overeating. The fridge door lock can then only be deactivated with consent from a designated person such as your mother or significant other.

Microsoft has an MS-DOS mobile in mind for this day. I won't be surprised if a real product like this becomes popular with the older generation. I personally wouldn't mind an MS-DOS app on my mobile. Here is a video:

It would have been strange if we didn't have a robot for a joke. Domino's has introduced the Domimaker. Here's how it works:

T-Mobile USA is not shy about taking shots at its rivals with its Binge On data plan, which lets people view certain video channels without using up their data. Here is the video, and more details on Mashable.

Sunday, 16 August 2015

Came across this paper from December 2000 recently. It's interesting to see that even back then, researchers were thinking about multiple networks that a user could access via handovers. Researchers nowadays think about how to access as many networks as possible simultaneously. I call this Multi-Stream Aggregation (MSA); others call it Multi-RAT Carrier Aggregation (MCA), and so on.

If we look at the different access technologies, each has its own evolution in the coming years. Some of these are:

Fixed/Terrestrial broadband: (A)DSL, Cable, Fiber

Mobile Broadband: 3G, 4G and soon 5G

Wireless Broadband: WiFi

Laser communications

LiFi or LED based communications

High frequency sound based communications

Then there could be a combination of multiple technologies working simultaneously. For example:

There has been interest in moving to higher frequencies. These bands can be used for access as well as backhaul. The same applies to most of the access technologies listed above, which can work as backhaul to enable other access technologies.

While planned networks will be commonplace, other topologies like mesh networks will gain ground too. Device-to-device and direct communications will help create ad-hoc networks.

Satellite networks, the truly global connectivity providers, will play an important role too. While backhauling small cells on planes, trains and ships will be an important part of satellite networks, they may be used for access too. OneWeb plans to launch 900 micro-satellites to provide high-speed global connectivity. While communications at such high frequencies mean that small form-factor devices like mobiles can't receive the signals easily, connected cars could use satellite connectivity very well.

Samsung has an idea to provide connectivity through 4,600 satellites that could transmit 200 GB monthly to 5 billion people worldwide. While this is very ambitious, it's not the only innovative and challenging idea. I am sure we all know about Google Loon. Facebook, on the other hand, wants to use a solar-powered drone (UAV) to offer free internet access to users who cannot get online.
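Out of curiosity, here is a back-of-envelope look at what 200 GB per month to 5 billion people would imply in raw capacity. Uniform traffic and 30-day months are my own simplifying assumptions:

```python
# Back-of-envelope on the 4,600-satellite idea: total and per-satellite capacity.
users = 5e9
gb_per_month = 200
satellites = 4600
seconds_per_month = 30 * 24 * 3600          # assumed 30-day month

total_bits_per_month = users * gb_per_month * 8e9   # GB -> bits
aggregate_bps = total_bits_per_month / seconds_per_month
per_satellite_bps = aggregate_bps / satellites

print(f"Aggregate: {aggregate_bps / 1e15:.1f} Pbit/s")
print(f"Per satellite: {per_satellite_bps / 1e9:.0f} Gbit/s average")
```

The answer, roughly 3 Pbit/s in aggregate and several hundred Gbit/s sustained per satellite, shows just how ambitious the idea is.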

As I mentioned, security and privacy will be a big challenge for devices able to connect to multiple access networks and other devices. An often overlooked challenge is the timing and sync between different networks. In an ideal world, all these networks would be phase- and time-synchronised to each other so as not to cause interference, but in reality this will be a challenging task, especially with ad-hoc and moving networks.

I will be giving a keynote at ITSF 2015 in November in Edinburgh. This is a different type of conference, looking at time and synchronisation aspects in telecoms. While I will be providing a generic overview of where the technologies are moving (continuing from my presentation at the Phase Ready conference), I am looking forward to hearing about these challenges and their solutions at this conference.

Andy Sutton (Principal Network Architect) and Martin Kingston (Principal Designer) at EE have shared some of their thoughts on this topic, which are as follows and available to download here.

Sunday, 14 June 2015

People often ask at various conferences whether TD-LTE is a fad or something that will continue to exist alongside FDD networks. TDD networks were a bit tricky to implement in the past due to the need for the whole network to be time-synchronised to avoid interference. If there was another TDD network in an adjacent band, it would have to be time-synchronised with the first network too. In areas bordering another country that might have its own TDD network in this band, that network would also have to be synchronised. This complexity meant that most operators were happy to live with FDD networks.

In 5G networks at higher frequencies, it also makes much more sense to use TDD, because the channel can be estimated accurately. Since the same channel is used in downlink and uplink, the downlink channel can be estimated from the uplink channel conditions. Due to the small transmit time intervals (TTIs), these channel estimates would be quite good. Another advantage is that a beam can be formed and directed exactly at the intended user while appearing as a null to other users.
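The reciprocity argument can be illustrated with a toy matched-filter (conjugate) beamformer: the base station reuses the uplink channel estimate as the downlink channel and transmits with its conjugate, so the per-antenna signals add in phase at the user. The random unit-magnitude channel here is an assumption purely for illustration:

```python
import cmath
import random

random.seed(1)
num_antennas = 8

# Uplink channel seen at each BS antenna (random phases, toy model)
h = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(num_antennas)]

# Conjugate (matched-filter) precoder, normalised to unit transmit power
norm = sum(abs(x) ** 2 for x in h) ** 0.5
w = [x.conjugate() / norm for x in h]

# Received signal at the user: sum of w[i] * h[i] combines coherently
rx = sum(wi * hi for wi, hi in zip(w, h))
print(abs(rx))   # ~ sqrt(8): full array gain, thanks to reciprocity
```

Without reciprocity the precoder would have to rely on quantised feedback of the channel, which is exactly what becomes painful at short TTIs and high frequencies.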

This is where 8T8R, or 8 transmit and 8 receive antennas at the base station, can help. The more antennas, the better and narrower the beam they can create. This can help send more energy to users at the cell edge and hence provide better and more reliable coverage there.
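The "more antennas, narrower beam" point can be checked numerically with the textbook uniform linear array factor; half-wavelength spacing and boresight steering are assumed here:

```python
import math

def array_factor_db(n, theta_deg):
    """Normalised array factor (dB) of an n-element ULA at angle theta from boresight."""
    psi = math.pi * math.sin(math.radians(theta_deg))  # phase step for d = lambda/2
    if abs(psi) < 1e-12:
        return 0.0
    af = abs(math.sin(n * psi / 2) / (n * math.sin(psi / 2)))
    return 20 * math.log10(af)

def half_power_beamwidth(n):
    """Scan outward to the -3 dB angle; double it for the full beamwidth."""
    theta = 0.0
    while array_factor_db(n, theta) > -3.0:
        theta += 0.01
    return 2 * theta

for n in (2, 4, 8):
    print(f"{n} elements: ~{half_power_beamwidth(n):.1f} deg beamwidth")
```

Doubling the element count roughly halves the beamwidth, which is where the extra cell-edge energy comes from.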

What do these antennas look like? 8T8R needs 8 antennas at the base station cell, typically delivered using four X-polar columns about half a wavelength apart. I found the above picture on antenna specialist Quintel's page here, where the four-column example is shown on the right. At spectrum bands such as 2.3 GHz, 2.6 GHz and 3.5 GHz, where TD-LTE networks are currently deployed, the antenna width is still practical. Quintel's webpage also indicates how their technology allows 8T8R to be effectively emulated using only two X-polar columns, thus promising slimline antenna solutions at lower frequency bands. China Mobile and Huawei have claimed to be the first to deploy these four-column X-polar 8T8R antennas. Sprint in the USA is another network that has been actively deploying 8T8R antennas.

Sprint's deployment of 8T8R (eight-branch transmit and eight-branch receive) radios in its 2.5 GHz TDD LTE spectrum is resulting in increased data throughput as well as coverage, according to a new report from Signals Research. "Thanks to TM8 [transmission mode 8] and 8T8R, we observed meaningful increases in coverage and spectral efficiency, not to mention overall device throughput," Signals said in the executive summary of the report.

The firm said it extensively tested Sprint's network in the Chicago market using Band 41 (2.5 GHz) and Band 25 (1.9 GHz) in April, using Accuver's drive test tools and two Galaxy Note Edge smartphones. Signals tested TM8 vs. non-TM8 performance, Band 41 and Band 25 coverage and performance, as well as 8T8R vs. 2T2R receive coverage/performance and stand-alone carrier aggregation.

Sprint has been deploying 8T8R radios in its 2.5 GHz footprint, which the company has said will allow its cell sites to send multiple data streams, achieve better signal strength and increase data throughput and coverage without requiring more bandwidth.

The company also has said it will use carrier aggregation technology to combine TD-LTE and FDD-LTE transmission across all of its spectrum bands. In its fourth quarter 2014 earnings call with investors in February, Sprint CEO Marcelo Claure said implementing carrier aggregation across all Sprint spectrum bands means Sprint eventually will be able to deploy 1900 MHz FDD-LTE for uplink and 2.5 GHz TD-LTE for downlink, and ultimately improve the coverage of 2.5 GHz LTE to the levels that its 1900 MHz spectrum currently achieves. Carrier aggregation, the most well-known and widely used technique of the LTE Advanced standard, bonds together disparate bands of spectrum to create wider channels and produce more capacity and faster speeds.

Alcatel-Lucent has a good article in their TECHzine, an extract from that below:

Field tests on base stations equipped with beamforming and 8T8R technologies confirm the sustainability of the solution. Operators can make the most of transmission (Tx) and receiving (Rx) diversity by adding Tx and Rx paths at the eNodeB level, and beamforming delivers a direct impact on uplink and downlink performance at the cell edge.

By using 8 receiver paths instead of 2, cell range is increased by a factor of 1.5, and this difference is emphasised by the fact that the number of sites needed is reduced by nearly 50 per cent. Furthermore, using the beamforming approach in transmission mode generates a specific beam per user, which improves the quality of the signal received by the end-user's device, or user equipment (UE). In fact, steering the radiated energy in a specific direction can reduce interference and improve the radio link, helping enable better throughput. The orientation of the beam is decided by shifting the phases of the Tx paths based on signal feedback from the UE. This approach can deliver double the cell edge downlink throughput and can increase global average throughput by 65 per cent.

These types of deployments are made possible by using innovative radio heads and antenna solutions. In traditional deployments, it would require the installation of multiple remote radio heads (RRH) and multiple antennas at the site to reach the same level of performance. The use of an 8T8R RRH and a smart antenna array, comprising 4 cross-polar antennas in a radome, means an 8T8R sector deployment can be done within the same footprint as traditional systems.
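The range and site-count numbers in the extract hang together under a simple circular-coverage argument; this is an idealisation (real deployments are messier), but it shows where "nearly 50 per cent" comes from:

```python
# If 8 Rx paths extend cell range by 1.5x, coverage area scales with range
# squared, so the number of sites needed scales as 1/range^2 (idealised model).

range_gain = 1.5
site_reduction = 1 - 1 / range_gain ** 2
print(f"{site_reduction:.0%} fewer sites")
```

This idealised model actually gives about 56 per cent fewer sites, in the same ballpark as the quoted "nearly 50 per cent"; irregular terrain and capacity-driven siting pull the real figure down.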

Anyone interested in seeing pictures of different 8T8R antennas like the one above can find them here. While this page shows Samsung's antennas, you can navigate to equipment from other vendors.

Finally, if you can provide any additional info or feel there is something incorrect, please feel free to let me know via comments below.

Sunday, 12 April 2015

We have talked about unlicensed LTE (LTE-U), re-branded as LTE-LAA, many times on this blog and the 3G4G Small Cells blog. In fact, some analysts have decided to call the current non-standardised Release 12 version LTE-U and the standardised version that will be part of Release 13 LTE-LAA.

There is a lot of unease in the Wi-Fi camp because LTE-LAA may hog the 5 GHz spectrum that is licence-exempt and available for Wi-Fi and similar (future) technologies. Even though LAA may be more efficient, as some vendors claim, it would reduce the spectrum available to Wi-Fi users in that band.

As a result, some vendors have recently proposed LTE/Wi-Fi Link Aggregation as a new feature in Release 13. Alcatel-Lucent, Ruckus Wireless and Qualcomm have all been promoting this. In fact, Qualcomm has a pre-MWC teaser video on YouTube. The demo video is embedded below:

The Korean operator KT was also involved in demoing this at MWC, along with Samsung and Qualcomm. They have termed this feature LTE-HetNet, or LTE-H.

As can be seen, the data is split/combined at the PDCP layer. While the example above shows a practical implementation using C-RAN, with a Remote Radio Head (RRH) and BaseBand Unit (BBU), the picture at the top shows the LTE anchor in the eNodeB. An ideal backhaul would be needed to keep latency in the eNodeB to a minimum when combining cellular and Wi-Fi data.
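A toy sketch of a PDCP-level split may help make this concrete. The splitter policy, function names and reordering logic here are my own illustrative assumptions, not the actual 3GPP design:

```python
# Toy LWA-style split: PDCP PDUs from one bearer are scheduled onto either
# the LTE or Wi-Fi leg; the receiver reorders by PDCP sequence number.

def split_bearer(pdus, lte_share=0.5):
    """Very naive splitter: distribute PDUs between legs per the given share."""
    lte, wifi = [], []
    credit = 0.0
    for sn, payload in enumerate(pdus):
        credit += lte_share
        if credit >= 1.0:
            credit -= 1.0
            lte.append((sn, payload))
        else:
            wifi.append((sn, payload))
    return lte, wifi

def reorder(lte, wifi):
    """Receiver side: merge both legs and deliver in PDCP sequence order."""
    return [payload for sn, payload in sorted(lte + wifi)]

pdus = [f"pdu{i}" for i in range(6)]
lte, wifi = split_bearer(pdus, lte_share=0.5)
print(reorder(lte, wifi))  # original order restored
```

The reordering step is exactly why backhaul latency matters: if one leg lags badly, the receiver must buffer everything behind the missing sequence numbers.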

The above table shows a comparison between the three main techniques for increasing data rates through aggregation: CA, LTE-U/LAA and LTE-H/LWA. While CA has been part of 3GPP Release 10 and is available in more or less all new LTE devices, LTE-U and LTE-H are new and would need modifications in the network as well as in the devices. LTE-H would in the end provide similar benefits to LTE-U, but it is a safer option from a device and spectrum point of view and would be a more agreeable solution for everyone, including the Wi-Fi community.

A final word: last year we wrote a whitepaper laying out our vision of what 4.5G is. I think we put it simply: in 4.5G, you can use Wi-Fi and LTE at the same time. I think LTE-H fulfils that vision much better than the other proposals.

Saturday, 27 September 2014

Four major Release 13 projects have been approved now that Release 12 is coming to a conclusion. One of them is Full-Dimension MIMO. From the 3GPP website:

Leveraging the work on 3D channel modeling completed in Release 12, 3GPP RAN will now study the necessary changes to enable elevation beamforming and high-order MIMO systems. Beamforming and MIMO have been identified as key technologies to address the future capacity demand. But so far 3GPP specified support for these features mostly considers one-dimensional antenna arrays that exploit the azimuth dimension. So, to further improve LTE spectral efficiency it is quite natural to now study two-dimensional antenna arrays that can also exploit the vertical dimension.

Also, while the standard currently supports MIMO systems with up to 8 antenna ports, the new study will look into high-order MIMO systems with up to 64 antenna ports at the eNB, to become more relevant with the use of higher frequencies in the future.

The idea is to introduce carrier- and UE-specific tilt/beamforming with variable beam widths. Improved link budget and reduced intra- and inter-cell interference might translate into higher data rates or increased coverage at the cell edge. This might go hand in hand with extensive use of spatial multiplexing, which might require enhancements to today’s MU-MIMO schemes. Furthermore, in active antenna array systems (AAS) the power amplifiers become part of the antenna, further improving the link budget due to the absence of feeder loss. Besides a potentially simplified installation, the use of many low-power elements might also reduce overall power consumption.

At higher frequencies the antenna elements can be miniaturised and their number increased. In LTE this might be limited to 16, 32 or 64 elements, while for 5G, with higher frequency bands, this might allow for “massive MIMO”.
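Here is a small numerical sketch of two-dimensional (azimuth plus elevation) steering on an 8x8 planar array, which gives the 64 ports mentioned above. The half-wavelength spacing, phase convention and angles are assumed values for illustration:

```python
import cmath
import math

def steering_weights(rows, cols, az_deg, el_deg):
    """Per-element phase weights pointing a rows x cols planar array at (azimuth, elevation)."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    u = math.pi * math.cos(el) * math.sin(az)   # horizontal phase step (d = lambda/2)
    v = math.pi * math.sin(el)                  # vertical phase step (d = lambda/2)
    return [[cmath.exp(-1j * (m * u + n * v)) for m in range(cols)]
            for n in range(rows)]

def array_response(weights, az_deg, el_deg):
    """Magnitude of the array output for a plane wave arriving from (azimuth, elevation)."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    u = math.pi * math.cos(el) * math.sin(az)
    v = math.pi * math.sin(el)
    rows, cols = len(weights), len(weights[0])
    return abs(sum(weights[n][m] * cmath.exp(1j * (m * u + n * v))
                   for n in range(rows) for m in range(cols)))

w = steering_weights(8, 8, az_deg=20, el_deg=10)
print(round(array_response(w, 20, 10), 1))   # full coherent gain of 64 toward the user
print(round(array_response(w, -40, 0), 2))   # much smaller off-beam
```

The contrast between the on-beam and off-beam responses is the whole point of elevation beamforming: energy goes to the intended UE in both planes rather than in a fixed vertical pattern.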

Saturday, 5 April 2014

It's very interesting to see all the companies proposing clever concepts on the 1st of April. I was told that not everyone knows what April Fools' Day means, so here is the link to Wikipedia.

Samsung Fly-Fi: Samsung has come up with some interesting ideas, the first being Wi-Fi for everyone, powered by pigeons. They have a website here with a video.

It looks like, since we have pigeons everywhere, they are always used in one way or another. The best prank ever, in my opinion, was PigeonRank by Google back in 2002. I spent a few hours that day trying to figure out how they were actually doing it.

Smart wear was always going to be the big thing. There are quite a few smart wearables this year.