One of ARM’s more unusual acquisitions in recent years has been Geomerics, a fellow UK company that specializes in video game lighting technology. Geomerics is a far cry from ARM’s day-to-day business of developing hardware blocks and ISAs to license to customers who want to put together their own chips, but Geomerics has been a long-term play for the company. By investing in a company with strong ties to the video gaming industry, ARM would in turn gain an important tool for bringing higher quality lighting to SoC-class GPUs, and would also help to ensure that such important middleware included SoC-class GPUs in its feature and performance targets.

With GDC 2015 taking place this week, ARM is seeing the first real payoff from their acquisition with the release of the latest version of Geomerics’ lighting technology, Enlighten 3. Enlighten 3 is designed to be one of the most advanced global illumination systems on the market, built to scale from mobile up to desktop PCs. Previous versions of Enlighten were already in several games and engines, including the Frostbite 2 engine backing Battlefield 3, and now with Enlighten 3 the company is hoping to extend its reach further with its inclusion in the Unity 5 engine – ever popular on mobile – and as an add-on for the similarly popular Unreal Engines 3 and 4.

From a feature standpoint Enlighten 3 introduces several improvements, including a greatly improved indirect lighting system. Also on the docket is a richer materials system, with improved support for transparent surfaces, which in turn allows the lighting to be updated when the transparency of a surface changes. Alternatively, for scenarios without real-time lighting, the middleware can also bake higher quality lightmaps than before.

Ultimately ARM tells us that they believe 2015 will be a big year for Geomerics in the mobile space, saying they expect a number of mobile titles to use the technology. To that end, as part of their GDC launch, ARM and Geomerics are showcasing several Enlighten 3 demos, including an in-house demo they are calling Subway, and a demo showcasing Enlighten 3 running inside Unreal Engine 4.

Continuing this week’s GDC 2015-fueled blitz of graphics API news, we have Khronos, the industry consortium behind OpenGL, OpenCL, and other cross-platform compute and graphics APIs. Back in August of 2014 Khronos unveiled their own foray into low-level graphics APIs, announcing the Next Generation OpenGL Initiative (glNext). Designed around similar goals as Mantle, DirectX 12, and Metal, glNext would bring a low-level graphics API to the Khronos ecosystem, in the process making it the first low-level cross-platform API. 2014’s unveiling was a call for participation, and now at GDC Khronos is announcing additional details on the API.

First and foremost glNext has a name: Vulkan. In creating the API Khronos has made a clean break from OpenGL – something that game industry developers have wanted to do since OpenGL 3 was in development – and as a result they are also making a clean break on the name as well so that it’s clear to users and developers alike that this is not OpenGL. Making Vulkan distinct from OpenGL is actually more important than it would appear at first glance, as not only does Vulkan not bring with it the compatibility baggage of the complete history of OpenGL, but like other low-level APIs it will also have a higher skill requirement than high-level OpenGL.

Naming aside, Vulkan’s goals remain unchanged from the earlier glNext announcement. Khronos has set out to create an open, cross-platform low-level graphics API, bringing the benefits of greatly reduced draw call overhead and better command submission multi-threading – not to mention faster shader compiling by using intermediate format shaders – to the entire ecosystem of platforms that actively support Khronos’ graphics standards. Which these days is essentially everything outside of the gaming consoles. This is also Khronos’s unifying move for graphics APIs, doing away with the separate branches of OpenGL – desktop OpenGL and the mobile/scaled-down OpenGL ES – and replacing them with the single Vulkan API.

Being announced this week at GDC are some additional details on the API, which given the intended audience is admittedly a bit developer centric. Vulkan is not yet complete – the specification itself is not being released in any form – but Khronos is further detailing the development and execution flows for how Vulkan will work.

Development tools have been a long-standing struggle for Khronos on OpenGL, and with Vulkan they are shooting to get it right, especially given the almost complete lack of hand-holding a low-level graphics API provides. For this reason the Vulkan specification includes provisions for common validation and debug layers that can be inserted into the rendering chain and used during development, allowing developers to perform in-depth debugging on the otherwise bare-bones API. Meanwhile conformance testing is also going to be heavily pushed and developed, having been something OpenGL lacked for many years and something that was a big help in developing Khronos’ more recent APIs such as WebCL. This being Khronos, even the conformance testing is “open” in a way, with developers able to submit new tests and Khronos encouraging it.
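Neither the specification nor official samples were public at the time of writing, but the two ideas above – per-thread command recording and optional layers inserted at instance creation – map to code along these lines. This is a minimal sketch using the names Vulkan 1.0 ultimately shipped with; the validation layer name (from LunarG's SDK) and the assumption that queue family 0 supports graphics are ours, for illustration only:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <cstdlib>
#include <thread>
#include <vector>

static void check(VkResult r, const char* what) {
    if (r != VK_SUCCESS) { std::fprintf(stderr, "%s failed\n", what); std::exit(1); }
}

int main() {
    // Instance creation is where the optional layers hook in: the loader
    // interposes each listed layer between the application and the driver.
    const char* layers[] = { "VK_LAYER_LUNARG_standard_validation" }; // assumed name
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_0;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    ici.enabledLayerCount = 1;
    ici.ppEnabledLayerNames = layers;   // dropped for a release build
    VkInstance instance;
    check(vkCreateInstance(&ici, nullptr, &instance), "vkCreateInstance");

    uint32_t count = 1;
    VkPhysicalDevice gpu;
    vkEnumeratePhysicalDevices(instance, &count, &gpu);

    float priority = 1.0f;
    VkDeviceQueueCreateInfo qci{VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO};
    qci.queueFamilyIndex = 0;           // assumed: family 0 supports graphics
    qci.queueCount = 1;
    qci.pQueuePriorities = &priority;
    VkDeviceCreateInfo dci{VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO};
    dci.queueCreateInfoCount = 1;
    dci.pQueueCreateInfos = &qci;
    VkDevice device;
    check(vkCreateDevice(gpu, &dci, nullptr, &device), "vkCreateDevice");

    // The low-overhead part: each thread owns its own command pool and
    // records into its own command buffer, with no global driver lock.
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
        workers.emplace_back([device] {
            VkCommandPoolCreateInfo pci{VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO};
            pci.queueFamilyIndex = 0;
            VkCommandPool pool;
            check(vkCreateCommandPool(device, &pci, nullptr, &pool), "pool");

            VkCommandBufferAllocateInfo cai{VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO};
            cai.commandPool = pool;
            cai.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
            cai.commandBufferCount = 1;
            VkCommandBuffer cmd;
            check(vkAllocateCommandBuffers(device, &cai, &cmd), "allocate");

            VkCommandBufferBeginInfo bi{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
            vkBeginCommandBuffer(cmd, &bi);
            // ... record draw/dispatch commands here ...
            vkEndCommandBuffer(cmd);
            vkDestroyCommandPool(device, pool, nullptr); // frees cmd as well
        });
    }
    for (auto& w : workers) w.join();

    vkDestroyDevice(device, nullptr);
    vkDestroyInstance(instance, nullptr);
}
```

The key contrast with OpenGL is in the worker threads: because every thread records into its own pool, submission scales with CPU cores instead of serializing behind the driver.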

The actual Vulkan API itself has yet to be finalized, however at this point in time Khronos expects it to behave very similarly to Mantle and DX12, so developers capable of working on the others shouldn’t have much trouble with Vulkan. In fact Khronos has confirmed that AMD has contributed Mantle towards the development of Vulkan, and though we need to be clear that Vulkan is not Mantle, Mantle was used to bootstrap the process and speed its development, making Vulkan a derivation of sorts of Mantle (think Unix family tree). What has changed from Mantle is that Khronos has gone through a period of refinement, keeping what worked and throwing out the portions of Mantle that didn’t work well – particularly HLSL and anything else that would prevent the API from being cross-vendor – and replacing them with the necessary or better functionality.

Meanwhile from a shader programming perspective, Vulkan will support multiple source languages for shaders. GLSL will be Vulkan’s initial shading language, however long-term Khronos wants to enable additional languages to be used as shading languages, particularly C++ (something Apple’s Metal already supports). Bringing support for other languages as shaders will take some effort, as those languages will need graphics bindings extended into them.

As for hardware support for Vulkan, Khronos tells us that Vulkan should work on any platform that supports OpenGL ES 3.1 and later, which is essentially all modern GPUs, and desktop GPUs going some distance back. To be very clear here, whether a platform’s owner actually develops and enables a Vulkan runtime is another matter entirely, but in principle the hardware should support it. This does create something of an interesting scenario, as a bare minimum of OpenGL ES 3.1 implies that tessellation and geometry shaders will not be a required part of the standard. As these are common features in desktop parts and in more recent, Android Extension Pack-capable mobile parts, they will instead be optional features for developers to either use (and require) or not at their own discretion.

Wrapping up our look at the API, Khronos tells us that they’re on schedule to release initial specifications this year, with initial platform implementations shortly behind that. Given the fact that Khronos tends to do preliminary releases of APIs first, this puts Vulkan a bit behind DirectX 12 (which will see its shipping implementation this year), but not too far behind. By which time we should have a better idea of what platforms and GPUs will see Vulkan support added, and what the first games are that will support the API.

By basing Vulkan around SPIR-V, developers gain the ability to write for Vulkan in more languages, being able to feed Vulkan almost any code that can be compiled down to SPIR. This is similar to what SPIR has done for OpenCL – which is what SPIR was initially created for – allowing many languages to work on OpenCL-capable hardware through SPIR. As a side benefit for Vulkan, this also means that Vulkan shaders can be shipped in intermediate format, rather than as the raw high-level GLSL code that OpenGL’s shader compiler path currently requires.

In putting together SPIR-V, what Khronos has done is essentially extend graphics constructs into the intermediate representation, allowing SPIR-V to service both compute and graphics workloads. In the short term this is unlikely to make much of a difference for developers (who will be busy just learning the graphics side of Vulkan), but in the long run this should allow developers to more freely mix graphics and compute workloads, as the underlying runtime is all the same. This is also where Vulkan’s ability to extend its shading language from GLSL to other languages comes from, as SPIR’s flexibility is what allows multiple languages to all target it.
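As an illustration of what the intermediate-format path removes, consider the load-time side: rather than handing the driver GLSL source to parse and compile, the application hands it a binary blob. A hedged sketch, again using the function and structure names Vulkan eventually shipped; "shader.spv" is a placeholder for a shader compiled offline:

```cpp
// Loads an offline-compiled SPIR-V binary and wraps it in a shader module.
// The driver only translates IR to GPU code -- no GLSL front-end at runtime.
#include <vulkan/vulkan.h>
#include <cstdint>
#include <fstream>
#include <vector>

VkShaderModule loadSpirv(VkDevice device, const char* path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    std::vector<char> bytes(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(bytes.data(), bytes.size());

    VkShaderModuleCreateInfo info{VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO};
    info.codeSize = bytes.size();                                  // size in bytes
    info.pCode = reinterpret_cast<const uint32_t*>(bytes.data());  // SPIR-V words
    VkShaderModule module = VK_NULL_HANDLE;
    vkCreateShaderModule(device, &info, nullptr, &module);
    return module;
}
```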

SPIR-V also brings with it some compute benefits as well, but for that we need to talk about OpenCL 2.1…

OpenCL 2.1 marks two important milestones for OpenCL. First and foremost, OpenCL 2.1 marks the point where compute (OpenCL) and graphics (Vulkan) come together under a single roof in the form of SPIR-V. With SPIR-V now in place, developers can write graphics or compute code targeting SPIR, forming a common language frontend that will allow Vulkan and OpenCL to accept many of the same high level languages.

But more significant about OpenCL 2.1 is that after several years of proposals and development, OpenCL is now gaining support for an official C++ dialect, extending the usability of OpenCL into even higher-level languages. Having originally launched using the OpenCL C dialect in 2008, there was almost immediate demand for the ability to write OpenCL code in C++, something that has taken the hardware and software some time to catch up to. And though C++ is not new to GPU computing – NVIDIA’s proprietary CUDA has supported it for some time – this marks the introduction of C++ to the cross-platform OpenCL API.

OpenCL 2.1’s C++ support comes in the form of a subset of C++, stripping out a few parallel-compute unfriendly features such as catch/throw, function pointers, and virtual functions. What remains then is virtually everything else, including classes, templates, and C++’s powerful lambda functionality. This opens up OpenCL programming to the same general benefits that C++ enables over C, giving developers access to a higher level language that is more capable, and generally speaking better known as well.
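Khronos isn't publishing the kernel language details alongside today's announcement, so the syntax below is strictly our own hypothetical sketch of what the subset enables: templates and a capturing lambda doing work that OpenCL C simply cannot express (the kernel/global qualifiers and get_global_id are carried over from OpenCL C):

```cpp
// Hypothetical OpenCL C++ device code -- illustrative of the C++ subset
// described above (templates and lambdas in, exceptions and virtuals out),
// not syntax taken from the provisional specification.
template <typename T, typename Op>
void map1(global const T* in, global T* out, size_t i, Op op) {
    out[i] = op(in[i]);   // generic element-wise transform
}

kernel void scale(global const float* in, global float* out, float a) {
    // A capturing lambda -- expressible here, impossible in OpenCL C.
    map1(in, out, get_global_id(0), [a](float x) { return a * x; });
}
```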

The addition of C++ to OpenCL is driven by the use of SPIR-V, with Khronos creating an OpenCL C++ to SPIR-V compiler to compile the C++ down to the intermediate representation, and the OpenCL runtime then executing the SPIR-V code from there. And while OpenCL C isn’t going anywhere for both compatibility and tuning reasons, this is the overall direction that Khronos wants to go with OpenCL, pushing everything through SPIR so that the languages supported are largely a function of what compilers are available, and not what the OpenCL runtime can do.
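On the host side this flow reduces to a small amount of boilerplate. A hedged sketch using the OpenCL 2.1 C API (error checking omitted; "kernel.spv" and the kernel name "scale" are placeholders for output from an offline OpenCL C++ compiler):

```cpp
// Hands precompiled SPIR-V to the OpenCL 2.1 runtime via clCreateProgramWithIL,
// the 2.1 entry point for intermediate-language programs.
#include <CL/cl.h>
#include <fstream>
#include <vector>

int main() {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);

    // Load the SPIR-V binary produced offline by a C++ (or OpenCL C) compiler.
    std::ifstream f("kernel.spv", std::ios::binary | std::ios::ate);
    std::vector<char> il(static_cast<size_t>(f.tellg()));
    f.seekg(0);
    f.read(il.data(), il.size());

    // The runtime consumes the IL directly -- no high-level source is shipped.
    cl_program prog = clCreateProgramWithIL(ctx, il.data(), il.size(), nullptr);
    clBuildProgram(prog, 1, &device, "", nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "scale", nullptr);

    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
}
```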


Meanwhile, in the long run C++ support should help Khronos and its partners better push and deploy OpenCL, with C++ support making the API more useful and accessible than before. Differences such as these have been a big part of the reason that NVIDIA’s CUDA has remained so popular despite being limited to NVIDIA platforms, and though OpenCL C++ arguably won’t erase the gap between the two APIs, it should narrow the gap significantly. That said, part of this may come down to whether NVIDIA implements OpenCL 2.1 support; given their current dominance with CUDA, NVIDIA has yet to even implement OpenCL 2.0 support, which greatly limits how many discrete GPUs can run the newest versions of OpenCL.

Finally, along with the addition of OpenCL C++, OpenCL 2.1 also adds a few extra features to improve the API’s flexibility. Low latency device timers should allow for much more reliable/accurate profiling of code execution than relying on potentially divergent clocks, and kernel cloning functionality has been introduced via the clCloneKernel command.
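Both additions are small, self-explanatory API calls. A brief sketch of how they might be used together, assuming a kernel and device created as in the previous listing:

```cpp
// clCloneKernel copies a kernel object along with any arguments already set,
// so each host thread can bind its remaining arguments without racing the
// other; clGetDeviceAndHostTimer returns a correlated device/host timestamp
// pair for low-latency profiling. Both are new in OpenCL 2.1.
#include <CL/cl.h>
#include <cstdio>

void profile_with_clone(cl_device_id device, cl_kernel kern) {
    cl_int err = CL_SUCCESS;
    cl_kernel clone = clCloneKernel(kern, &err);   // independent argument state

    cl_ulong device_ts = 0, host_ts = 0;
    clGetDeviceAndHostTimer(device, &device_ts, &host_ts);
    std::printf("device %llu / host %llu\n",
                (unsigned long long)device_ts, (unsigned long long)host_ts);

    clReleaseKernel(clone);
}
```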

Wrapping things up, as is common for Khronos, OpenCL 2.1 is initially being released as a provisional specification. While Khronos isn’t commenting on a finalization date just yet, given how early it is in the year, we would be surprised not to see a final version of the API before the year is out.

Much has been made over the advent of low-level graphics APIs over the last year, with APIs based on this concept having sprouted up on a number of platforms in a very short period of time. For game developers this has changed the API landscape dramatically in the last couple of years, and it’s no surprise that as a result API news has been centered on the annual Game Developers Conference. With the 2015 conference taking place this week, we’re going to hear a lot more about it in the run-up to the release of DirectX 12 and other APIs.

Kicking things off this week is AMD, who is going first with an update on Mantle, their in-house low-level API. The first announced of the low-level APIs and so far limited to AMD’s GCN architecture, Mantle has been the subject of quite a bit of pondering over its future in light of the more recent developments of DirectX 12 and glNext. AMD in turn is seeking to answer these questions first, before Microsoft and Khronos take the stage later this week for their own announcements.

In a news post on AMD’s gaming website, AMD has announced that due to the progress on DX12 and glNext, the company is changing direction on the API. The API will be sticking around, but AMD’s earlier plans have partially changed. As originally planned, AMD is transitioning Mantle application development from a closed beta to a (quasi) released product – via the release of a programming guide and API reference this month – however AMD’s broader plans to also release a Mantle SDK to allow full access, particularly allowing iit to be implemented on other hardware, has been shelved. In place of that AMD is refocusing Mantle on being a “graphics innovation platform” to develop new technologies.

As far as “Mantle 1.0” is concerned, AMD is acknowledging at this point that Mantle’s greatest benefit – reduced CPU usage due to low-level command buffer submission – is something that DX12 and glNext can do just as well, negating the need for Mantle in this context. For AMD this is still something of a win because it has led to Microsoft and Khronos implementing the core ideas of Mantle in the first place, but it also means that Mantle would be relegated to being a third wheel. As a result AMD is shifting focus, and advising developers looking to tap Mantle for its draw call benefits (and other features also found in DX12/glNext) to just use those forthcoming APIs instead.

Mantle’s new focus in turn is to serve as a testbed for future graphics API development. Along with releasing the specifications for “Mantle 1.0”, AMD will essentially keep the closed beta program open for the continued development of Mantle, building it in conjunction with a limited number of partners in a fashion similar to how Mantle has been developed so far.

The biggest change here is that any plans to make Mantle open have been put on hold for the moment with the cancellation of the Mantle SDK. With Mantle going back into development and made redundant by DX12/glNext, AMD has canned what was from the start the hardest to develop and least likely to occur feature, keeping the API proprietary (at least for now) for future development. Which is not to say that AMD has given up on their “open” ideals entirely, as the company is promising to deliver more information on their long-term plans for the API on the 5th, including their future plans for openness.

Mantle Pipeline States

As for what happens from here, we will have to see what AMD announces later this week. AMD’s announcement is essentially in two parts: today’s disclosure on the status of Mantle, and a further announcement on the 5th. It’s quite likely that AMD already has their future Mantle features in mind, and will want to discuss those after the DX12 and glNext disclosures.

Finally, from a consumer perspective Mantle won’t be going anywhere. Mantle remains in AMD’s drivers and Mantle applications continue to work, and for that matter there are still more Mantle enabled games to come (pretty much anything Frostbite, for a start). How many more games beyond 2015 though – basically anything post-DX12 – remains to be seen, as developers capable of targeting Mantle will almost certainly want to target DX12 as well as soon as it’s ready.

Update 03/03: To add some further context to AMD's announcement, we have the announcement of Vulkan (aka glNext). In short Mantle is being used as a building block for Vulkan, making Vulkan a derivative of Mantle. So although Mantle proper goes back under wraps at AMD, "Mantle 1.0" continues on in an evolved form as Vulkan.

Today Broadcom took the lead by announcing two new Wifi combo chip solutions meant for the smartphone and tablet market. The BCM4359 is a high-end 2x2 MIMO solution for high-performance smartphones, while the BCM43455 is an updated 1x1 MIMO 802.11ac solution for mass market phones.

Taking a closer look at the BCM4359, we see several innovative new features, the most distinctive being the inclusion, for the first time, of Real Simultaneous Dual Band (RSDB). RSDB enables the chip to connect to both the 2.4GHz and 5GHz bands simultaneously. This is achieved by doubling up on the baseband processors on the combo chip: Broadcom uses ARM Cortex-R4 cores as the processing units of the IC, and the 4359 uses two of them. What this enables is a sort of "full duplex" across the two frequency bands, instead of the baseband having to switch between them in an interleaved manner. The PHY bandwidth has also been upped to 867 Mbps in the two-stream MIMO mode.

In the demo that Broadcom showed us, we had two test devices and a TV as the showcase setup. One device running the BCM4356 was streaming a video over the 2.4GHz band to the secondary device, which employed the BCM4359 and in turn would stream via Wifi Display on 5GHz to the TV. As a comparison, the same setup sat next to it, but with both streaming devices equipped with only a BCM4356 solution. While the BCM4359 setup managed to achieve enough bandwidth to receive and forward the stream to the TV in full 1080p, the BCM4356 side was only fluid once the quality was reduced to 480p.

Another advantage of RSDB is that it enables the chip to scan for networks on both bands simultaneously, accelerating the time needed to show available Wifi networks, effectively giving a 2x speed improvement.

The BCM43455 is also a new member of the Broadcom family and serves as a solution for the mass market, meaning a cheaper price-point. It is a 1x1 HT80 802.11ac 2.4 and 5GHz solution, enabling up to a 433Mbps PHY rate at 80MHz channel bandwidth. The chip is able to reduce the BoM by 50%, although Broadcom didn't specify what this was compared against.

One key aspect of this new generation of Wifi chips is that SDIO has been retired as the connection interface to the SoC (though it remains available as a secondary option) and replaced by PCIe. The BCM4358 was the first such chip to take advantage of this switch, and was employed in, for example, the Galaxy Note 4. The PCIe interface not only provides higher bandwidth than SDIO is capable of, but also enables crucial power advantages such as low power states on the bus, and bonuses such as Direct Memory Access (DMA) for the Wifi chipset.

Both the BCM4359 and BCM43455 are sampling now and will be available in devices later in the year.

After day zero of Mobile World Congress already boasting some impressive releases, Intel is tackling day one on several different fronts. As part of a pre-briefing, we were invited into the presentation where Intel discussed the current state of their mobile portfolio along with looking to the future. The pre-briefing was run by Aicha Evans, Corporate Vice President and General Manager of the Wireless Platform Research and Development Group, who you may remember was interviewed by Anand in a series of videos back in 2013. Ms. Evans' focus is on the connectivity side of the equation, making sure that Intel’s portfolio develops into a strong base for future platforms.

One of the big elements for Intel is the rebranding of their mobile Atom line of SoCs. Up until this point, all the SoCs had difficult-to-follow and very similar names, such as Z3580 or Z3760. This is being adjusted into three different segments as follows:

Mirroring their personal computing processor line, the Intel Atom structure will take on x3/x5/x7 naming, similar to the i3/i5/i7 of the desktop and notebook space. This is not to be confused with Qualcomm’s modem naming scheme, or anything by BMW.

The x3 sits at the bottom, and comprises Bay Trail-based SoCs on the 28nm node, all previously part of the SoFIA program aimed at emerging markets. There will be three x3 parts – a dual core x3, a quad core x3 from the Rockchip agreement, and a final quad core x3 with an integrated LTE modem.

This set raises some interesting points to discuss. Firstly, the use of 28nm – not one of Intel’s own nodes – means these SoCs should be derived from a TSMC source. It is also poignant to note that for these SoCs Intel is using a Mali GPU rather than Gen 8 graphics and their own IP. This is due to the SoFIA program being aimed at bringing costs down and bringing functionality to low price points with a competitive time-to-market.

The Rockchip model, indicated by the ‘RK’ at the end of the name of the SoC, comes from the partnership with Rockchip we reported on back in May 2014. At the time Intel discussed the roadmap for producing a quad core SoC with 3G for the China market in the middle of 2015, which this provides.

The final part of the x3 arrangement revolves around combining a 5-mode LTE modem on the same die. Intel is going to support 14 LTE bands on a single SoC, along with PMIC, WiFi and geolocation technologies (GPS, GLONASS, BeiDou).

The Atom x5 and x7 SoCs represent the next step up, implementing Intel’s 14nm process and bringing Cherry Trail to market. The x5 and x7 SoCs are aimed primarily at tablets, and can find their way into sub-10.1 inch designs as well, providing an interesting counterbalance to the high price premium of Intel’s Core M 4.5W products based on Broadwell-Y. While the x3 line will focus first on Android before moving into Windows, the x5 and x7 are designed to target both, particularly with the bundled Gen 8 graphics and LTE via the XMM 726x series supporting Cat-6 and carrier aggregation.

Not a lot of detail was provided about the x5 and x7, suggesting that they are aimed more at late 1H/2H 2015. This coincides with the next generation of Intel’s modems, the XMM 7360, featuring up to 450 Mbps downlink and support for up to 29 LTE bands.

One interesting element of the x5/x7 disclosure was the bundled platform block diagram provided by Intel, clearly showing the two dual-core Airmont CPU modules each with 1MB of L2 cache, Gen 8 graphics, separate security processor and ISP, as well as USB 3.0 support.

Finally, Intel addressed the obvious lack of a high-end mobile SoC that fits into the performance smartphone category. Intel is still working on the development of such a SoC in the form of Broxton, and we'll have more news on this piece in the future.

We are lining up a chance to interview Ms. Evans about Intel’s Atom lineup later this week at MWC, so stay tuned for that.

As part of our booth tour at Lenovo during Mobile World Congress, on display was the recently announced Lenovo VIBE Shot, and we managed to get some hands-on time. The VIBE Shot is described by Lenovo as a ‘2-in-1 camera smartphone’, attempting to bridge the gap between smartphones and point-and-shoot cameras. The device attempts this by placing buttons on the sides of the smartphone similar to how a point-and-shoot would do so, as well as by having a full-frame 16:9 16MP low light sensor and a tri-color flash.

The 5-inch full HD device includes optical image stabilization, as well as providing simple and pro modes with a button adjustment on the top. Simple mode is equivalent to the auto mode on most cameras, whereas pro mode offers manual adjustments such as exposure, white balance, focus mode, saturation and more. Hardware under the hood includes an eight-core Snapdragon 615 (A53/A53) at a 1.7 GHz peak on the fast cluster, with 3GB of DRAM and 32GB of internal storage.

Battery capacity comes in at 2900 mAh, with LTE Cat-4 and Android 5.0. The device will be offered in a dual Nano-SIM arrangement, weighs 145g and comes in at 7.3mm thin. Storage is expandable, with guaranteed support of up to 128GB via a microSD.

The phone felt pretty solid in hand, and the thinness is remarkable. What wasn't remarkable was the aluminium band on the back along the camera side, as it attracted fingerprints. The display unit had seen a lot of use, and it was quite hard to clean it.

The VIBE Shot will be available in red, white and grey, and come to Lenovo’s regular markets in June starting at $349.

Mobile World Congress is in full swing today and Microsoft woke the press up early to discuss new features coming to their smartphone and tablet space. Top of the bill, presented by Stephen Elop, are the new Lumia 640 and Lumia 640 XL devices. The regular 640, with its 5-inch HD display, comes as the upgrade from the 630, whereas the 5.7-inch XL version sits as the move up from the large Lumia 1320.

Both devices will be available in 3G and LTE versions, with single and dual SIM options in both of those depending on the market. The 640 uses a quad core Snapdragon 400-series processor at 1.2 GHz, and features Gorilla Glass 3, 1 GB of DRAM and 8 GB of storage. One might expect SD card support to come as standard, although a short hands-on with the device failed to find one. The rear 8MP camera was described as having a wide angle lens (although no numbers were given), with auto-HDR and a dynamic LED flash. The battery comes in at 2500 mAh. Pricing for the 3G model starts at 139€, with the LTE version at 159€.

The 640 XL was described as ‘a slim 9mm’ with similar specifications to the smaller model but at 5.7 inches, namely a 1280x720 screen but the battery is pushed up to 3000 mAh. Internal specifications were not discussed, but the rear 13MP camera features Zeiss optics. Pricing starts at 189€ for the 3G model with LTE at 219€.

Both devices will ship with Windows Phone 8.1 but will be upgraded to Windows 10, with Microsoft going all out to encourage Windows 10 across all of its future devices when available. The 640 and 640 XL will also come with one year of Office 365, allowing installation on one PC and one other device as part of the deal. The deal also includes 1TB of OneDrive storage and 60 Skype World minutes.

An interesting element to the launch, especially with the ‘seamless feel’ push of Office across all different sizes of devices, was the Microsoft Universal Portable Keyboard. Barely bigger than a wallet, it is designed to fit into an office bag and connect seamlessly via Bluetooth. No pricing or date was announced, but the focus was more on the office environment.

One demonstration that took me (Ian) by surprise was that of the Microsoft Surface Hub. This was an 84-inch display, normally the size used by large scale demonstrations, but this featured 4K resolution at 120 Hz as well as touch screen functionality. Naturally my thoughts drifted towards a TN type panel using MST, although taking the typical wide-angle test for TN panels was confusing as color consistency remained the same – it seems like they are using some kind of IPS display, which seems odd at 120 Hz.

The combination of 4K, 120 Hz and possibly IPS is mind-boggling, which pointed me more towards an MST arrangement – either two panels or four. We managed to ask one of the product managers for the Surface Hub about this, but he was unable to give that information until launch. One of the demonstrations of the device featured a white-board scenario, as well as writing on office presentations. Anything written on the screen was recreated back on the controlling Surface tablet, and the tablet user could write as well – what Microsoft calls ‘ink-back’. In a similar vein, ‘swipe-back’, allowing both users to change slides, was demonstrated. The Surface Hub is linked to Windows 10, and we were told to expect more details at the Windows 10 launch, along with another version at 55 inches but with 1080p resolution.

Not to be outdone by Qualcomm’s SoC group, Qualcomm’s communication groups are busy at MWC 2015 as well. Though Qualcomm Technologies and Qualcomm Atheros are not announcing any major new products at this moment, the two of them are on the show floor to demonstrate the status of their various LTE initiatives that we should see in upcoming and future products, in conjunction with infrastructure partner Ericsson.

First and foremost, Qualcomm and Ericsson will be offering the first public demonstration of LTE category 11 hardware in action. LTE category 11 increases the download rate of LTE to 600Mbps through a combination of tri-band (3x20MHz) carrier aggregation and the use of QAM256 encoding, with the latter being the major addition of category 11. Due to the use of QAM256 and the higher SNR required to use it – not to mention 60MHz of spectrum – category 11 is being targeted at small scale deployments where cleaner signals and more spectrum is readily available, such as indoor deployments and carefully constructed outdoor environments.
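The 600Mbps figure itself falls out of simple arithmetic. As a sanity check (our own back-of-the-envelope math, not an official derivation): a single 20MHz 2x2 carrier is good for 150Mbps at 64-QAM, 256-QAM carries 8 bits per symbol versus 6, and category 11 aggregates three such carriers:

```cpp
// Back-of-the-envelope check on the LTE category 11 peak rate.
#include <cstdio>

int main() {
    const double per_carrier_64qam = 150.0;   // Mbps per 20MHz 2x2 carrier
    const double qam_gain = 8.0 / 6.0;        // 256-QAM vs 64-QAM bits/symbol
    const int carriers = 3;                   // 3x20MHz carrier aggregation
    std::printf("%.0f Mbps\n", per_carrier_64qam * qam_gain * carriers); // 600
}
```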

LTE Categories

Category      Max Download   Max Upload
Category 6    300Mbps        50Mbps
Category 7    300Mbps        100Mbps
Category 9    450Mbps        50Mbps
Category 10   450Mbps        100Mbps
Category 11   600Mbps        100Mbps

Qualcomm is not currently announcing the modem being used in this demonstration. However we are likely looking at the successor to Qualcomm’s current X12 LTE modem (9x45), which tops out at category 10.

Meanwhile Qualcomm will also be demonstrating the ability to use LTE with dual SIMs; Qualcomm’s forthcoming hardware will support dual standby with dual receive.

Finally, Qualcomm will also be demonstrating their current progress on implementing LTE/Wi-Fi call handoff and LTE/Wi-Fi link aggregation. With call handoff – or as Qualcomm likes to call it, Call Continuity – VoLTE calls can be seamlessly transferred between LTE and Wi-Fi, allowing phones to tap into Wi-Fi for call handling when possible, avoiding the greater network expense of using LTE. Meanwhile the first public demonstration of LTE/Wi-Fi link aggregation builds off of handoff to utilize both networks at once to take advantage of Wi-Fi speeds while allowing operators to better control a call via the normal LTE channel. Link aggregation essentially brings Wi-Fi access points under control of the LTE network itself – essentially limiting it to operator owned/controlled access points – and is being created as a solution to reliability concerns over using disparate, independent Wi-Fi networks.

In addition to a new SoC, Qualcomm is announcing a new fingerprint sensing technology for Snapdragon-based devices.

While most fingerprint sensors currently use high-resolution capacitive sensors, Qualcomm’s Snapdragon Sense ID uses ultrasonic sound waves in order to map the surface of the finger. This allows for greater resolution to recognize features such as pores and fingerprint ridges to improve security, along with reduced sensitivity to moisture and other contaminants. In addition, this technology can work through glass and sapphire cover lenses, along with metal and plastic casings.

Snapdragon Sense ID will launch first on Snapdragon 810 and 425 devices, but will be compatible with all 400, 600, and 800 series Snapdragon SoCs, and will support the FIDO authentication standard.

Today, Qualcomm is announcing the new Zeroth Platform, which is enabled by the Snapdragon 820 SoC.

While Qualcomm is avoiding any real disclosure of the SoC at this point, we do know that the Snapdragon 820 will be built on a FinFET process, which could be either TSMC’s 16nm or Samsung’s 14nm process. In addition to all of the improvements that the move to a new process brings, Qualcomm is finally introducing their custom ARMv8 CPU core, named Kryo. Unfortunately, there are no real details here either, but given that there’s only one architecture named it’s likely that Qualcomm is moving away from big.LITTLE with the Snapdragon 820.

The final detail regarding Snapdragon 820 is that it will begin sampling in the second half of 2015, which should mean that we can expect it to be in devices some time either at the end of 2015 or the beginning of 2016. Ultimately, the fact that Qualcomm has come up with a custom ARMv8 CPU architecture in such a short time continues to show just how quickly Qualcomm can respond to changing market conditions, something that we first saw with the Snapdragon 810.

With Mobile World Congress 2015 now in full swing, Imagination Technologies is taking to the show today to announce a couple of new additions to the PowerVR family of graphics and video products.

First off is a new low-end GPU in the PowerVR Series6XE family, the G6020. Intended to be Imagination’s new entry-level Series6XE part, the G6020 is aimed at entry-level mobile devices, embedded computers, and high-end wearables.

From a design perspective, the G6020 is aimed at very simple UI workloads – the Android UI, wearable interfaces, etc. Imagination has essentially built the bare minimum GPU needed to drive a 720p60 display, taking out any hardware not necessary to that goal, such as compute functionality and quite a bit of geometry throughput. What remains is enough of a ROP backend (pixel co-processor) to drive 720p, and the FP16 shading resources to go with it.

Meanwhile from a hardware perspective this is basically a significantly cut down 6XE part. G6020 drops to just a single 4 pipeline USC, versus the 8 pipeline USC found in the G6050, and 16 pipelines as found in a “complete” USC. The number of FP32 ALUs in each pipeline has also been reduced, going from the 6XE standard of 2 per pipeline to 1 for G6020, while the number of FP16 ALUs remains unchanged at 4. Along with scaling down the USCs, Imagination has also stripped down the G6020 in other ways, such as by taking out the compute data master.

PowerVR Series6/6XE "Rogue"

GPU          # of Clusters   # of FP32 Ops   # of FP16 Ops   Optimization
G6020        0.25            8               32              Area + Bandwidth
G6050        0.5             32              64              Area
G6060        0.5             32              64              Area + Bandwidth
G6100        1               64              96              Area
G6100 (XE)   1               64              128             Area
G6110        1               64              128             Area + Bandwidth
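The op counts in the table above follow directly from the USC layout just described, if we assume each ALU retires a multiply-add (two ops) per clock – an assumption on our part, but one that reproduces Imagination's published figures for the G6020:

```cpp
// Reproducing the G6020's per-clock op counts from its USC configuration,
// assuming one multiply-add (2 ops) per ALU per clock.
#include <cstdio>

int main() {
    // G6020: a quarter USC = 4 pipelines, 1 FP32 ALU + 4 FP16 ALUs per pipe.
    const int pipes = 4, fp32_alus = 1, fp16_alus = 4, ops_per_fma = 2;
    std::printf("FP32 ops/clock: %d\n", pipes * fp32_alus * ops_per_fma); // 8
    std::printf("FP16 ops/clock: %d\n", pipes * fp16_alus * ops_per_fma); // 32
}
```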

The end result is designed to be an incredibly small and incredibly low power OpenGL ES 3.0 GPU for devices that fall in the cheap/small range. The G6020 is only 2.2mm² in size on 28nm, making it similar in size to ARM’s Cortex-A7 CPU cores (a likely pairing target). And power consumption is low enough that it should be able to just fit into high-end wearables.

PowerVR Series 5 Video Encoder

Meanwhile Imagination’s second PowerVR announcement of the day is the announcement of their new PowerVR Series 5 family of video encoders. This is Imagination’s entry into the HEVC (H.265) hardware encoder market, offering scalable designs for encoding H.264 and HEVC video.

Imagination will be offering three encoder designs – the E5800, E5505, and E5300 – targeted at progressively lower-end markets. The E5800 is the largest configuration and is aimed at the prosumer market, offering 4Kp60 encoding with 10-bit color and 4:2:2 chroma sampling (twice the sampling of standard 4:2:0 video). Below that is the E5505, the mainstream/premium mobile part, with support for encoding up to 4Kp30, along with VP8 encoding and even MJPEG for certain legacy applications. Finally at the bottom of the list is the E5300, which is a small, low power encoder for 1080p30 applications (cameras/sensors/IoT and the like).

PowerVR Series 5 HEVC Encoders

Encoder   Max Resolution   Chroma Subsampling   Market
E5800     4Kp60            4:2:2                Prosumer/Pro-Cameras
E5505     4Kp30            4:2:0                Mobile
E5300     1080p30          4:2:0                Sensor/IoT/Security Cameras

From a competitive standpoint, along with the expected synergy between the PowerVR encoders and PowerVR GPUs – support for directly handing off compressed memory, in particular – Imagination is also banking on being able to win a quality war with other mobile HEVC encoders. By Imagination’s estimates they can offer equivalent quality at just 70% of the bitrate, which would give them a significant advantage. The company says that this is a result of having a newer encoder that is better tuned than competing encoders, and one that implements more HEVC features (e.g. 10-bit color), allowing them to achieve better compression and the resulting reduction in bitrates.

While Imagination’s testing methodology and resulting numbers to get here are open to interpretation – PSNR is important, though not the end-all of encoder measurements – HEVC encoders are still a fledgling field. There is still ample opportunity to improve on HEVC encoders and reach the same kind of highly tuned status that H.264 encoders have evolved to.

Wrapping things up, both new PowerVR products are now available for licensing.

While Audience is traditionally focused on voice products, today they’re making their first moves into combined voice recognition and sensor hub products that leverage sensor fusion and neural networks. The NUE N100 is the first of this line of products, and is able to do keyword recognition while keeping the main CPU from waking until a command is received and registered. Audience emphasized how their solution eliminates the need for additional waiting once the initial wakeup occurs, as it can cache the spoken command and feed it into a given system like Google voice actions. In addition, this solution is said to have a reduced false wakeup rate, which means that there is far less power wasted on unintended activation. Audience’s solution can cache up to five keywords, can accurately distinguish between different people thanks to its neural network-based approach, and can be programmed either by the end user or the OEM.

Outside of this VoiceQ system, Audience is also introducing MotionQ, a contextual motion system. In its current state, using the various sensors present on a smartphone or tablet, the motion processing is able to determine whether the device is in a pocket, on a desk or in a person’s hand; whether its user is sitting, standing, walking or running; and whether the device is in a car, train, bike, or many other contextual scenarios, relying on the neural network algorithms previously mentioned. The N100 also has OSP support, which means that OEMs can take the N100 and implement custom algorithms in addition to the work that Audience has already done. The N100 will be available for sampling in mid-2015, which means that devices shipping with this chip should appear around 2016.

Although much of the focus on Sony's work in the mobile space is on their smartphones, they have been a player in the tablet segment of the market for quite some time. In fact, Sony has been responsible for some of the more unique tablet designs, such as the Sony Tablet P which had two displays and folded much like a Nintendo DS.

When focusing on what we traditionally think of as a tablet, one will see that the flagship device in Sony's lineup has always been their 10.1" tablet offering. These devices are usually named in the same "Xperia Z" format as Sony's flagship smartphones, and although the tablet released last year was the Xperia Z2 Tablet, the release of the Xperia Z3 Compact Tablet late last year means that this year's release moves ahead to Z4. The Xperia Z4 tablet is Sony's flagship tablet, and the first of 2015's high end Android tablets. To give an overview of the Z4 Tablet on paper, I've laid out its core specifications below.

Inside the Xperia Z4 Tablet we have Qualcomm's Snapdragon 810 SoC, which has four Cortex-A57 cores and four Cortex-A53 cores running at 2.0GHz and 1.6GHz respectively, along with Qualcomm's new Adreno 430 GPU. The internal battery remains the same capacity as the Z2 Tablet at 6000mAh (22.8Wh), but the tablet has been slimmed down to 6.1mm which puts it on par with Apple's iPad Air 2.

Moving to the outside of the tablet, we see that Sony has placed an 8MP camera on the rear, and a 5.1MP camera on the front. The front of the device is also home to the 10.1" 2560x1600 LCD display. Like many of Sony's recent devices, the Z4 Tablet is both dust and water resistant. It has an IP68 rating, for which Sony specifies an immersion depth of up to 1.5 meters for up to thirty minutes, which goes a bit beyond the typical 1 meter for 30 minutes IP67 rating on many other mobile devices.

According to Sony, the Xperia Z4 Tablet will be launching globally in June. It will come in both WiFi and LTE variants, with pricing yet to be revealed.

Today Sony has announced a new mid-range device for their line of Xperia smartphones. In the past weeks we've seen the launch of the Xperia E4, and Xperia E4g, each with a focus on a particular group of users. Today's launch is similar, with the phone focusing on its protection from dust and water damage. The Xperia M4 Aqua is Sony's newest smartphone, but at the moment it's somewhat mysterious. At the time of writing, many of the specifications for it are unknown, although when this publishes you'll be able to view the specifications in the source below.

What is known about the Xperia M4 Aqua is that it sports Qualcomm's Snapdragon 615, which has two clusters of four Cortex-A53 cores clocked at 1.7GHz and 1.0GHz respectively. While one can argue the merits or lack thereof with having the same cores in each cluster, both Qualcomm and their customers are clearly aiming at specific markets like China with products that use Snapdragon 615. Beyond the SoC, the phone has a 13MP rear-facing camera with an F/2.0 aperture, and a 5MP front-facing camera.

In terms of aesthetics, the Xperia M4 Aqua actually looks fairly nice for a mid-range device. The edges appear to have a nice ergonomic curve, and the buttons appear to be obviously placed and easy to find and press. At 136g, the device is also quite light. Of course, the main appeal is its dust and water protection, with IP65/IP68 ratings to protect against submersion at up to 1.5 meters for 30 minutes, as well as against water projected at the device from somewhere like a shower head or a hose.

While there are currently no plans to bring it to the United States, the Xperia M4 Aqua will be launching this spring in eighty countries worldwide. It comes in the red, white, and black colors shown above, and it will carry a price of 299 euros.

In general, storage performance has been an area that is only discussed when it becomes a bottleneck. There was very little focus on mobile storage performance before devices like the original Nexus 7 started experiencing severe performance issues due to IO pauses. However, delivering high storage performance has generally limited high-performance storage SKUs to 32GB or less, as the cost of such storage is generally difficult to justify otherwise.

This is a problem that SanDisk hopes to solve with their new iNAND 7132, which uses a hybrid SLC/TLC architecture to deliver both high burst performance and cheaper storage for a given design. SanDisk claims that typical storage usage is extremely peaky in nature, even with seemingly contiguous data streams. In addition, relatively few cases can truly saturate modern eMMC on a smartphone even when using a TLC-based solution.

By integrating an SLC cache into the eMMC package, it’s possible to achieve peak sequential reads of up to 280 MB/s, sequential writes of up to 125 MB/s, and up to 2800 and 3300 IOPS for random writes and reads, respectively. Based upon our discussions with SanDisk, it seems that the SLC cache is generally less than a gigabyte, but is usually enough to avoid situations where the SLC cache is filled and writes must go to the TLC storage.
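To illustrate why a sub-gigabyte cache can be enough, here is a deliberately simple model of the behavior SanDisk describes – bursts land in SLC at full speed and the controller drains the cache to TLC during idle gaps. This is our own toy model, not SanDisk's controller logic; every number except the quoted 125 MB/s burst rate is a placeholder:

```cpp
// Toy model of an SLC write cache in front of TLC storage: peaky traffic is
// absorbed at SLC speed, and idle time is used to migrate data to TLC.
#include <cstdio>

int main() {
    const double slc_mb = 768;       // assumed cache size, under 1GB per SanDisk
    const double slc_speed = 125;    // MB/s burst write (quoted peak)
    const double drain_speed = 70;   // MB/s SLC -> TLC migration (placeholder)
    double used = 0;                 // MB currently held in the SLC cache

    // Alternate 1s write bursts with 2s idle, as a stand-in for "peaky" IO.
    for (int s = 0; s < 60; ++s) {
        if (s % 3 == 0) {
            used += slc_speed;       // burst second: absorbed at SLC speed
        } else {
            used -= drain_speed;     // idle second: controller drains to TLC
            if (used < 0) used = 0;
        }
        if (used > slc_mb) { std::printf("cache overflow at %ds\n", s); return 0; }
    }
    std::printf("cache never filled; writes always ran at SLC speed\n");
}
```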

SanDisk has also implemented a great deal of error correction and extensively tested this storage solution, and claims that the eMMC solution can last through 10 years of 24/7 intense use without data loss. The iNAND 7132 eMMC 5.0 solution is currently available in 16, 32, and 64 GB variants, with a 128GB variant arriving mid-year.

2015 has been a good year for laptops. We have seen some amazing new designs already, and have had the chance to review several of them so far, with more upcoming. HP, even though it is one of the largest PC makers on the planet, is slowly reinventing itself. We have seen their Stream laptops and tablets already, which are a great take on the low end of the market, and now HP has a new offering to go after the premium laptop market. The Spectre x360 is a 13.3 inch laptop with a CNC aluminum chassis and a Yoga-style hinge, to make it as versatile as we already know the Yoga laptop can be.

HP worked closely with Microsoft on the implementation of the x360, and they have included a lot of tweaks and technologies to improve battery life. First, the battery size is good. The x360 has a 56 Wh battery inside, edging out the Dell XPS 13’s 52 Wh battery and the Yoga 3 Pro’s 44 Wh power pack. The QHD (2560x1440) display also features Panel Self-Refresh technology, which lets parts of the display pipeline power down when the image on screen is not changing. And the drivers were tweaked to allow the x360 to deliver up to 12.5 hours of battery life on the FHD model, according to HP.

HP will offer two versions of the display. Both are optically bonded, to increase brightness and bring the pixels closer to the touch digitizer, much like we see on quality tablets. The first display is a Full HD 1920x1080 touch panel, and the upgrade is a 2560x1440 Quad HD model, which works out to 166 Pixels per Inch, and 221 Pixels per Inch respectively. Those who like to use a pen as an input method will be happy to see that HP is offering an active pen as an accessory as well, but at this time we do not know what kind of digitizer it will use.
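The quoted densities check out against the standard pixels-per-inch formula (diagonal pixel count divided by the 13.3-inch diagonal):

```cpp
// Verifying the quoted pixel densities for the two 13.3" panel options.
#include <cmath>
#include <cstdio>

static double ppi(double w, double h, double diag_in) {
    return std::sqrt(w * w + h * h) / diag_in;   // diagonal pixels per inch
}

int main() {
    std::printf("1920x1080: %.0f PPI\n", ppi(1920, 1080, 13.3)); // ~166
    std::printf("2560x1440: %.0f PPI\n", ppi(2560, 1440, 13.3)); // ~221
}
```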

Powering the new convertible will be the Intel Core i5-5200U and i7-5500U processors, and memory will be 4 to 8 GB. Storage options are all solid state, and options range from 128 GB to 512 GB.

The prices are quite competitive as well, with a starting price of just $900 for the Core i5 model with 4 GB of memory, a 128 GB SSD, and the Full HD touchscreen. To bump up in performance, HP will also be offering a model with a Core i7, 8 GB of memory, and a 256 GB SSD with the Full HD display for $1150, and the top end model will have the Core i7, 8 GB of memory, 512 GB SSD, and the Quad HD touchscreen for $1400.

The Spectre x360 goes on sale today at HP.com, and will be available at Best Buy starting on March 15th.

In addition to the HTC One M9, HTC is also launching the VIVE, a VR headset. While it may seem a bit strange that HTC is doing this, it makes sense once one realizes that the VIVE isn’t designed as a mobile VR solution at all. Instead, this is a product of HTC’s connected devices division, which is the same group that made the HTC RE.

I was definitely quite skeptical of HTC doing a VR headset. But the key here is that HTC has partnered with Valve to be the first OEM to ship a consumer version of SteamVR, which means that this headset is tethered to a PC rather than utilizing a phone or some other mobile device, and that it uses Valve's tracking and input technology. While they haven’t been able to discuss any real detail, they emphasized that the VR experience would be a whole-room experience rather than a sitting experience. Outside of these details, it was said that the developer kit would be available soon after launch in the spring, with consumer availability by the holidays at the end of the year.

While VR headsets are a type of wearable, the other wearable HTC is announcing today is the Grip, a sports band made in partnership with Under Armour. It seems that this is largely similar to the Microsoft Band, as it has a 1.8” PMOLED display, with an ST Micro Cortex-M3 MCU.

HTC states that they’re trying to target hardcore athletic trainers with this device, and have equipped it with a GPS tracker along with support for Bluetooth heart rate monitors for improved performance tracking. The fitness band should last up to 2.5 days on its 100mAh battery, and will come in three sizes. The band will be able to work with Under Armour Record, along with other fitness tracking applications. Outside of fitness applications the band also supports some basic features such as remote camera shutter, music controls, sleep tracking, and other similar phone companion applications. The Grip will be compatible with both iOS and Android as well.

Around the end of 2012, HTC was in dire straits. The HTC One X, S, and V were supposed to be a big change in HTC’s product lineup, doing away with the confusing nomenclature and unfocused lineups of old. However, the devices fell flat for a wide range of reasons. HTC needed to change direction in a big way to prevent profits and revenue from slipping further. At the time, it seemed quite possible that HTC would soon go out of business and/or be acquired by some other company. With the HTC One, we saw the beginning of a somewhat unprecedented revival from HTC, as they focused on truly impressive hardware innovation in the form of the first all-metal unibody phone, an Ultrapixel camera with OIS, and dual front-facing speakers with speaker protection amps. On the software side, we saw an incredibly restrained version of Sense that was a far cry from the rather overdone versions we saw with Sense 3 and 4.

The M8 was a refinement of the M7 in some ways, but distinctly different in others. Sense 6 was a continuation of Sense 5, which focused on improving design and functionality across the board. We saw a great deal of incremental improvements, such as the new Snapdragon 801 SoC, louder speakers, and a new metal wraparound design on the back cover. However, in some ways we saw a lack of improvement. Some of the key areas of weakness included the rear camera, the somewhat poor ergonomics of the power button, and the slippery finish of the device. In mass use, it was also discovered that the camera lens had a coating that was susceptible to scratching, and that these scratches significantly degraded camera quality. Overall, despite these issues, the competitive landscape meant the M8 was still a strong recommendation.

For 2015, HTC sought to fix the issues that were raised with the M8’s release. This brings us to the HTC One M9, which is best described as an evolution of the One M8. To get the basics out of the way, the spec sheet below should help with the details.

Overall, the One M9 is mostly focused on internal and functional upgrades. In order to reduce the height of the phone, we see a move away from the DuoCam system that we saw with the M8, although HTC stated that they had not completely given up on the idea. The rear camera is a 20MP Toshiba T4KA7 sensor, which has a 1.12 micron pixel pitch. This seems to suggest that HTC is moving away from the concept of Ultrapixels, although it isn’t clear if we’ll see it make a return at some later date.

As mentioned in the spec sheet, the optics do back off on aperture a bit to f/2.2, with a 27.8mm equivalent focal length. The cover lens is now made of sapphire in order to avoid scratches on its surface. Combined with the slight camera hump, it’s likely that we’re seeing the limitations of sensor size increases, as increasing sensor size dictates an increase in the thickness of the optics unless the focal length is reduced. Reducing focal length also increases distortion, as can be seen with any extremely wide angle lens.

Meanwhile there is no OIS on the rear camera, which is a bit disappointing. HTC seems to be focused on trying to make the best of this camera though, as it was mentioned that a professional mode will be available soon after launch for RAW photo capture for editing and processing in applications like Photoshop and Lightroom.

In casual testing, I saw a dramatic improvement in daytime image quality, and features like HDR as seen above are dramatically improved from the One M8. In general there seem to be fewer issues with things like field curvature and other types of optical distortion, along with drastically improved dynamic range, but in low light the results I saw were indicative of poor processing and tuning. HTC representatives stated that the camera tuning, especially in low light, was far from final, so it remains to be seen if these issues remain in the final product. It seems that the ImageChip 2 has also been dropped from the M9, although the reasons for this are not clear.

On the design side of things, the hardware itself feels like a combination of the M7 and M8. HTC has put a great deal of effort into subtly refining the design, as the corners are no longer nearly as rounded as on the One M8, and the brushed finish has been significantly changed in texture and feel. Rather than a somewhat slick and smooth feel, the brushed finish now feels matte and has far more grip to it.

The curved edge of the M8’s back cover has also been replaced with a hard edge, which does help with grip and makes for a somewhat dramatic transition on the silver/gold version, but it can be a bit uncomfortable in the hand.

The front trim has also been improved, as the One M8’s speaker grilles have been replaced by a one-piece design that alleviates issues such as uneven speaker grilles. It’s clear that this piece is still plastic, but the fit and finish overall is a good step up from the One M8. It is a bit disappointing that there is still a black bar on the bottom of the display, but HTC representatives stated that this continues to be necessary to fit the display drivers and other circuitry.

In terms of overall ergonomics though we see a massive improvement as the power button has been moved to the side of the phone. It is a bit low though, and requires deliberate effort in order to press it rather than resting around where one’s thumb might be. I found that the power button was generally easy to distinguish from the volume rocker due to the textured finish, although the volume buttons are a bit more difficult to distinguish as they have the same texture and are only separated by position.

On the software side of things, Sense 7 is a continued evolution of Sense 6, with some reorganization and new features. Areas like the weather clock and lock screen have been updated, along with app design in general. Although the design is somewhat like Lollipop in areas, the overall design still mostly resembles Sense 6. However, we see new features with the introduction of Sense 7. HTC focused on location and contextual information for this iteration of Sense, along with easy theming. On the lock screen, depending upon the time of day the lock screen may recommend a nearby restaurant or the morning news. In addition, HTC has introduced a new widget that will present commonly used applications in certain locations and times. It’s possible to also pin applications to make sure that they aren’t removed, although it doesn’t seem possible to force a certain application to avoid being on the widget in certain situations. There’s also a recommended apps folder in the system, although in practice it isn’t particularly useful for power users and in the build of software that we received it wasn’t possible to remove this folder.

On the theming side of things, HTC has put some serious thought into the theming system. It’s now possible to create a theme in one step, by simply selecting a given wallpaper, which is then processed by the phone to generate a potential theme to be edited. The system goes deep, as it’s possible to theme the status bar, icons, app colors, on-screen buttons, and other aspects of the system. In addition, it’s possible for users to manually create their own themes by using an application on the phone or a web application on a full PC. Themes will appear in an HTC application store, which is simply a UI that presents compatible themes from the Play Store.

While these changes to Sense 7 are interesting, it’s more notable that HTC is sticking with a grid view for multitasking by default (Google’s card view is available), with pages to allow more than 9 apps to be accessed. The volume controls that we saw in Sense 6 on KitKat also remain on the One M9, allowing easy access to silent mode along with all volume controls regardless of context. Priority notification mode is still present, but it’s in the settings application instead of on the volume controls.

As for the underlying SoC, it seems that a lot of the concerns with the Snapdragon 810 are unfounded, as the One M9 was quite smooth even with this non-final build of software. I did notice a few stutters in contrast to the almost perfectly smooth M8 with Lollipop, but I suspect things will be better with final software. It’s likely that while the Snapdragon 810 SoC itself is without issue, properly tuning all of the controls in Qualcomm’s big.LITTLE implementation is much more difficult than tuning a standard aSMP solution. Unfortunately, due to the non-final state of the software we are unable to present benchmarks of the device, but we will be sure to do so for the full review. In general use the phone remained cool and comfortable to hold, and it was hard to tell whether the phone got any hotter than the M8 in practice, even in benchmarks. The One M7 is definitely far hotter than either, owing to the 28LP process used for its SoC.

Overall, while it remains to be seen whether the One M9 is competitive with the other flagships on the market today, it seems to be a solid improvement over the One M8. There are still a great number of details left to cover when it comes to battery life, the final performance of the Snapdragon 810, the camera, and other major points of differentiation. The HTC One M9 will be available for sale starting mid-March and will come in two-tone silver/gold, dark gunmetal, gold, and pink.

StarTech.com specializes in gadgets performing niche yet handy functions. We have reviewed a few of their products before, such as the USB 3.0 to SATA IDE HDD docking station and the portable SATA duplicator. Technology-wise, there are plenty of similar options in the market. StarTech.com hopes to differentiate itself by acting as a one-stop shop for all these miscellaneous needs.

Since the beginning of the year, StarTech.com has launched two interesting products in the DAS (direct-attached storage) space. On the high-end side, we have the S354SMTB2R, a 4-bay Thunderbolt 2 enclosure. It comes with a hardware RAID engine (only JBOD, RAID 0, RAID 1 and RAID 10 - no RAID 5 or RAID 6) and brings with it all the advantages of Thunderbolt 2 (including daisy chaining).

On the chipset side, we have the Marvell 88SE9230 bridge chip, enabling four SATA 6 Gbps ports over two PCIe 2.0 lanes. It also enables the hardware RAID functionality. The PCIe side obviously talks to the Intel Thunderbolt 2 controller.
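
As a quick back-of-the-envelope check (our arithmetic, not a StarTech.com specification), the two-lane uplink is plenty for hard drives but could be oversubscribed by four fast SSDs:

    # Rough bandwidth check for the 88SE9230 configuration. PCIe 2.0 runs
    # at 5 GT/s per lane with 8b/10b encoding, i.e. ~500 MB/s usable each.
    pcie_usable = 2 * 500    # ~1000 MB/s from the bridge to Thunderbolt 2
    # SATA 6 Gbps also uses 8b/10b encoding, i.e. ~600 MB/s per port.
    sata_usable = 4 * 600    # ~2400 MB/s aggregate on the drive side
    print(pcie_usable, sata_usable)    # 1000 2400
    # Four fast SSDs could saturate the uplink; HDDs won't come close.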

One of the interesting aspects of the StarTech.com Thunderbolt 2 enclosure is the availability of HyperDuo (thanks to the usage of the Marvell bridge chip). This is a feature that automates SSD / HDD tiering (further details available in Marvell's technology brief - PDF). The benefits of Thunderbolt 2 in DAS units are brought out mainly when SSDs are used, and this type of transparent tiering can enable users to easily gain SSD-like performance while retaining HDD-like capacity at reasonable price points.
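
Marvell’s brief covers the actual mechanics; purely to illustrate the idea of transparent tiering – and emphatically not HyperDuo’s proprietary logic – a promotion policy can be as simple as keeping the most frequently read blocks on the fast tier:

    # Toy sketch of SSD/HDD tiering: the hottest blocks get promoted to
    # the SSD tier. Illustrative only - not Marvell's HyperDuo algorithm.
    from collections import Counter

    class TieredStore:
        def __init__(self, ssd_blocks):
            self.counts = Counter()    # read count per block
            self.ssd = set()           # blocks currently on the SSD tier
            self.ssd_blocks = ssd_blocks

        def read(self, block):
            self.counts[block] += 1
            served_from = "ssd" if block in self.ssd else "hdd"
            # Re-rank after each access: SSD holds the hottest blocks
            self.ssd = {b for b, _ in self.counts.most_common(self.ssd_blocks)}
            return served_from

    store = TieredStore(ssd_blocks=2)
    for b in [1, 1, 2, 3, 1, 2, 1]:
        print(b, store.read(b))    # block 1 quickly migrates to the SSD tier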

StarTech.com has priced the S354SMTB2R at $693 ($543 on Amazon). Other options for diskless 4-bay Thunderbolt 2 solutions are listed below.

AKiTiO Thunder2 Quad (MSRP of $499, available for $370 on Amazon): This unit doesn't come with hardware RAID or HyperDuo features, allowing for the lower price point.

OWC ThunderBay 4 (available for $419): This unit is similar to the AKiTiO Thunder2 Quad - no hardware RAID, but it does come with a special software RAID program for OS X (allowing for high-performance RAID 5 on Mac systems).

HighPoint RocketStor 6324AS 4-Bay RAID Solution with Thunderbolt 2 Adapter (available for $949): The premium for this unit is due to the presence of hardware RAID (JBOD, 0, 1, 5, 6, 10 and 50) and support for both SATA and SAS drives. In addition, this DAS has the added flexibility of being a two-component solution - the main drive bay enclosure has two mini-SAS ports (with the second port used for daisy chaining another enclosure or LTO tape drive to get support for up to 8 drives). A SFF-8088 cable connects the enclosure to the external mini-SAS port of a Thunderbolt 2 adapter (which is available for $288 separately, if needed). The adapter has two Thunderbolt 2 ports for standard daisy-chain operation. All in all, this is a very flexible configuration, but it tends to create a lot of cable clutter - a possible issue, depending on the workspace.

HighPoint RocketStor 6324LS 4-Bay JBOD Solution with Thunderbolt 2 Adapter (available for $649): This configuration uses the same Thunderbolt 2 adapter as the 6324AS described above, but the 4-bay enclosure supports only SATA drives and there is no hardware RAID.

CalDigit, G-Technology and Promise have 4-bay Thunderbolt 2 solutions too, but they don't seem to be available in diskless configurations.

A few days back, the HDD enclosures lineup was also expanded. The last time we looked at a multi-bay external enclosure was in our review of the Mediasonic Probox. A couple of years have passed since we checked out the JMicron JMB321 port multiplier (PDF) coupled with a JMS539 SATA to USB 3.0 bridge. These JMicron parts have been discontinued, and it is now time for a new platform for economical multi-bay direct-attached storage enclosures.

StarTech.com has introduced a $315 5-bay (S355BU33ERM) and a $392 8-bay (S358BU33ERM) enclosure. These units support both 3.5" and 2.5" drives. Hot-swapping is also supported. Similar to the Mediasonic Probox, they come with both eSATA and USB 3.0 host connections. UASP is now supported, thanks to the usage of the JMicron JMB575M SATA port multiplier / selector (PDF) and the JMS567 SATA to USB 3.0 bridge controller (PDF). The 5-bay unit comes with an 80 mm cooling fan, while the 8-bay unit has a 120 mm cooling fan. There is no hardware RAID support.

The units seem to be much cheaper on Amazon, with the 8-bay coming in at $300 and the 5-bay at $245. The options for 5-bay and 8-bay enclosures are far more numerous than for Thunderbolt 2 units, so we won't go to the trouble of listing everything here. The key takeaway from the announcement is that we now have high bay-count USB 3.0 enclosures with UASP support.

March is upon us, and the folks over at Microsoft have announced the upcoming Games With Gold titles that will be made available free to anyone with an Xbox Live Gold subscription. March looks to be quite the month, with some top tier games available on both the Xbox One and Xbox 360. Let’s dig in and see what is upcoming.

Xbox One

Rayman Legends

The popular Rayman franchise is making an appearance in Games with Gold. Rayman Legends is from Ubisoft, and was originally released in August 2013 before coming to the PS4 and Xbox One in February 2014. Legends is a sequel to the 2011 Rayman Origins title, and keeps with the tradition of the series. It is a platformer which can be played single-player or co-operatively. Rayman Legends on the Xbox One scored a very high 91 Metascore and 7.6 User Score on Metacritic. It normally retails for $39.99 on the Xbox Store.

“The Glade of Dreams is in trouble once again! During a 100-year nap, the nightmares multiplied and spread, creating new monsters even more terrifying than before! These creatures are the stuff of legend… Dragons, giant toads, sea monsters, and even evil luchadores. With the help of Murfy, Rayman and Globox awake and must now help fight these nightmares and save the Teensies!”

Xbox 360

Tomb Raider

The first game up for the Xbox 360 is the amazing Tomb Raider, developed by Crystal Dynamics. This is a reboot of the Tomb Raider franchise, released in March 2013, and has players take on the role of Lara Croft once again. It is a third-person action-adventure game set on the island of Yamatai, where Lara must battle the terrain and the island's inhabitants as she transforms from an archaeology graduate into the star of the show. Tomb Raider scored an 86 Metascore and 8.5 User Score on Metacritic, and normally retails for $19.99. Tomb Raider will be available from March 1st to March 15th for the Xbox 360.

“Armed with only the raw instincts and physical ability to push beyond the limits of human endurance, Tomb Raider delivers an intense and gritty story into the origins of Lara Croft and her ascent from a frightened young woman to a hardened survivor.”

Bioshock Infinite

The next game up on the Xbox 360 is another amazing title. Bioshock Infinite was developed by the now defunct Irrational Games, and released in March 2013. This is a first-person shooter set in 1912 in the fictional city of Columbia. Columbia is suspended in the air, and contains all sorts of amazing technology. Players take control of the main character, Booker DeWitt, who is sent to Columbia to rescue Elizabeth. The plot, characters, and gameplay are all first rate in this game, which scored a 93 Metascore and 8.5 User Score on Metacritic. Bioshock Infinite normally retails for $29.99, and will be available from March 16th to the 31st.

“Indebted to the wrong people and with his life on the line, veteran of the U.S. Cavalry and hired gun Booker DeWitt has only one opportunity to wipe his slate clean. He must rescue Elizabeth, a mysterious girl imprisoned since childhood and locked up in the flying city of Columbia. Forced to trust one another, Booker and Elizabeth form a powerful bond during their daring escape. Together, they learn to harness an expanding arsenal of weapons and abilities, as they fight on zeppelins in the clouds, along high-speed Sky-Lines, and down in the streets of Columbia, all while surviving the threats of the sky-city and uncovering its dark secret.”

This is by far the best Xbox Games with Gold lineup that I can recall, and certainly since the Xbox One was added. All of these games are very good, and are near the top of their genres. The people over at Xbox also made note that since the program’s inception, more than 100,000,000 Games with Gold games have been downloaded, and they have also announced that for April they will be doubling up on the games available – so there will be four for the Xbox 360, and two for the Xbox One next month.

Today LG pre-announced significant additions to their high-end wearable, the LG Watch Urbane, via a new edition called the LG Watch Urbane LTE. Both devices will officially launch at Mobile World Congress next week. From a feature standpoint, the LG Watch Urbane LTE adds more wireless functionality via the inclusion of LTE, VoLTE (not 3G voice), GPS, and NFC.

These additions dramatically expand LG's ability to cover the on-the-go use case of wearables and place the Watch Urbane LTE alongside the Samsung Gear S as the only devices to include cellular functionality. This provides a safety net when making a fitness excursion, as emergency calls are now possible. LG had this use case in mind specifically, as they included a single key press to initiate an emergency call. Additionally, the inclusion of NFC enables mobile payments, although LG has not yet provided details on how this works. Finally, LG has dramatically increased the battery size from 410mAh to 700mAh, which will help immensely with powering the LTE radio. I should note this is the largest battery I have seen to date in a wearable.

From an industry perspective, the most interesting part of this announcement is that LG has ditched Android Wear, which was used for the non-LTE edition of the Watch Urbane. As Android Wear does not support NFC payments or cellular connectivity, this was a necessity to bring the Watch Urbane LTE to market, but it highlights that device makers like LG and Samsung are not waiting for Google to add functionality. Google needs to improve the pace of Android Wear updates if they want to keep their partners on the platform.

| | LG Watch Urbane LTE | LG Watch Urbane |
|---|---|---|
| SoC | Qualcomm Snapdragon 400 1.2GHz | Qualcomm Snapdragon 400 1.2GHz |
| Memory | 1GB LPDDR3 | 512MB LPDDR3 |
| Display | 1.3" plastic OLED (320 x 320, 245ppi) | 1.3" plastic OLED (320 x 320, 245ppi) |
| Storage | 4GB eMMC | 4GB eMMC |
| Wireless | LTE, NFC, Bluetooth 4.0 | Bluetooth 4.0 |
| Ingress protection | IP67 | IP67 |
| Battery | 700mAh | 410mAh |
| Sensors | Gyro, accelerometer, compass, barometer, heart rate, GPS | Gyro, accelerometer, compass, barometer, heart rate |
| I/O | Touch screen, buttons, speaker, microphone | Touch screen, buttons, microphone |
| OS | "LG Wearable Platform Operating System" (see update) | Android Wear |

Update: It appears the watch might not run a customized Android distribution but rather something more bespoke. LG describes it as the "LG Wearable Platform Operating System". Other outlets are reporting this as webOS-derived, but nothing has been confirmed by LG. webOS would be impressive, considering we haven't seen a version of it including VoLTE.

We’re back once again for the 3rd and likely final part to our evolving series previewing the performance of DirectX 12. After taking an initial look at discrete GPUs from NVIDIA and AMD in part 1, and then looking at AMD’s integrated GPUs in part 2, today we’ll be taking a much requested look at the performance of Intel’s integrated GPUs. Does Intel benefit from DirectX 12 in the same way the dGPUs and AMD’s iGPUs have? And where does Intel’s most powerful Haswell GPU configuration, Iris Pro (GT3e), stack up? Let’s find out.

As our regular readers may recall, when we were initially given early access to WDDM 2.0 drivers and a DirectX 12 version of Star Swarm, it only included drivers for AMD and NVIDIA GPUs. Those drivers in turn only supported Kepler and newer on the NVIDIA side and GCN 1.1 and newer on the AMD side, which is why we haven’t yet been able to look at older AMD or NVIDIA cards, or for that matter any Intel iGPUs. However as of late last week that changed when Microsoft began releasing WDDM 2.0 drivers for all 3 vendors through Windows Update on Windows 10, enabling early DirectX 12 functionality on many supported products.

Today we’ll be looking at all 3 Haswell GPU tiers, GT1, GT2, and GT3e. We also have our AMD A10 and A8 results from earlier this month to use as a point of comparison (though please note that this combination of Mantle + SS is still non-functional on AMD APUs). With that said, before starting we’d like to once again remind everyone that this is an early driver on an early OS running an early DirectX 12 application, so everything here is subject to change. Furthermore Star Swarm itself is a very directed benchmark designed primarily to showcase batch counts, so what we see here should not be considered a well-rounded look at the benefits of DirectX 12. At the end of the day this is a test that more closely measures potential than real-world performance.

Since we’re looking at fully integrated products this time around, we’ll invert our usual order and start with our GPU-centric view first before taking a CPU-centric look.

As Star Swarm was originally created to demonstrate performance on discrete GPUs, these integrated GPUs do not perform well. Even at low settings nothing cracks 30fps on DirectX 12. None the less there are a few patterns here that can help us understand what’s going on.

Right off the bat then there are two very apparent patterns, one of which is expected and one which caught us by surprise. At a high level, both AMD APUs outperform our collection of Intel processors here, and this is to be expected. AMD has invested heavily in iGPU performance across their entire lineup, where most Intel desktop SKUs come with the mid-tier GT2 GPU.

However what’s very much not expected is the ranking of the various Intel processors. Despite having all 3 Intel GPU tiers represented here, the performance between the Intel GPUs is relatively close, and this includes the Core i7-4770R and its GT3e GPU. GT3e’s performance here immediately raises some red flags – under normal circumstances it substantially outperforms GT2 – and we need to tackle this issue first before we can discuss any other aspects of Intel’s performance.

As long-time readers may recall from our look at Intel’s Gen 7.5 GPU architecture, Intel scales up from GT1 through GT3 by both duplicating the EU/texture unit blocks (the subslice) and the ROP/L3 blocks (the slice common). In the case of GT3/GT3e, it has twice as many slices as GT2 and consequently by most metrics is twice the GPU that GT2 is, with GT3e’s Crystal Well eDRAM providing an extra bandwidth kick. Immediately then there is an issue, since in none of our benchmarks does the GT3e equipped 4770R surpass any of the GT2 equipped SKUs.

The explanation, we believe, lies in the one part of an Intel GPU that doesn’t get duplicated in GT3e, which is the front-end, or as Intel calls it the Global Assets. Regardless of which GPU configuration we’re looking at – GT1, GT2, or GT3e – all Gen 7.5 configurations share what’s essentially the same front-end, which means front-end performance doesn’t scale up with the larger GPUs beyond any minor differences in GPU clockspeed.

Star Swarm for its part is no average workload, as it emphasizes batch counts (draw calls) above all else. Even though the low quality setting has much smaller batch counts than the extreme setting we use on the dGPUs, it’s still over 20K batches per frame, a far higher number than any game would use if it was trying to be playable on an iGPU. Consequently based on our GT2 results and especially our GT3e result, we believe that Star Swarm is actually exposing the batch processing limits of Gen 7.5’s front-end, with the front-end bottlenecking performance once the CPU bottleneck is scaled back by the introduction of DirectX 12.
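
To make the bottleneck argument concrete, here is a toy model of a front-end limited GPU. The per-batch cost and shading time are entirely hypothetical numbers of our own choosing (the real figures aren’t public); the point is only to show why doubling the slices doesn’t help once batch processing dominates:

    # Toy bottleneck model: frame time is set by whichever unit saturates.
    # The per-batch cost (2 us) and shading time (30 ms) are hypothetical.
    def frame_time_ms(batches, shading_ms, slices, us_per_batch=2.0):
        front_end_ms = batches * us_per_batch / 1000.0   # shared front-end
        return max(front_end_ms, shading_ms / slices)    # slices scale shading only

    for slices in (1, 2):    # GT2-like vs GT3e-like (double the slices)
        print(slices, frame_time_ms(batches=20_000, shading_ms=30.0, slices=slices))
    # Both print ~40 ms: the shared front-end sets the floor, so the
    # extra slice in GT3e goes to waste under a batch-heavy workload.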

The result of this is that while the Intel iGPUs are technically GPU limited under DirectX 12, they're not GPU limited in the traditional sense; they're not limited by shading performance, memory bandwidth, or ROP throughput. This means that although Intel’s iGPUs benefit from DirectX 12, it’s not by nearly as much as AMD’s iGPUs did, never mind the dGPUs.

Update: Between when this story was written and when it was published, we heard back from Intel on our results. We are publishing our results as-is, but Intel believes that the lack of scaling with GT3e stems in part from a lack of optimizations for lower performance GPUs in our build of Star Swarm, which is from an October branch of Oxide's code base. Intel tells us that newer builds do show much better overall performance and more consistent gains for the GT3e, all the while the Oxide engine itself is in flux with its continued development. In any case this reiterates the fact that we're still looking at early code here from all parties and performance is subject to change, especially on a test as directed/non-standard as Star Swarm.

So how much does Intel actually benefit from DirectX 12 under Star Swarm? As one would reasonably expect, with their desktop processors configured for very high CPU performance and much more limited GPU performance, Intel is the least CPU bottlenecked in the first place. That said, if we take a look at the mid quality results in particular, what we find is that Intel still benefits from DX12. The 4770R is especially important here, as it’s a relatively weaker CPU (base frequency 3.2GHz) coupled with a more powerful GPU. It starts out trailing the other Core processors in DX11, only to reach parity with them under DX12 when the bottleneck shifts from the CPU to the GPU front-end. The performance gain is only 25% – and at framerates in the single digits – but conceptually it shows that even Intel can benefit from DX12. Meanwhile the other Intel processors see much smaller, but none the less consistent gains, indicating that there’s at least a trivial benefit from DX12.

Taking a look under the hood at our batch submission times, we can much more clearly see the CPU usage benefits of DX12. The Intel CPUs actually start at a notable deficit here under DX11, with batch submission times worse than the AMD APUs and their relatively weaker CPUs, and the 4770R in particular taking nearly 200ms to submit a frame's batches. Enabling DX12 in turn causes the same dramatic reduction in batch submission times we’ve seen elsewhere, with Intel’s batch submission times dropping to below 20ms. Somewhat surprisingly Intel’s times are still worse than AMD’s, though at this point we’re so badly GPU limited on all platforms that it’s largely academic. None the less it shows that Intel may have room for future improvements.
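
Dividing those submission times by Star Swarm’s batch count gives a rough per-draw CPU cost. This is back-of-the-envelope math on our own numbers – batch counts vary from frame to frame – but the order of magnitude is telling:

    # Approximate per-draw CPU cost implied by the 4770R's numbers,
    # assuming on the order of 20K batches per frame in this test.
    batches = 20_000
    dx11_ms, dx12_ms = 200.0, 20.0              # observed submission times
    print(dx11_ms / batches * 1000.0)           # ~10 us of CPU work per draw
    print(dx12_ms / batches * 1000.0)           # ~1 us per draw under DX12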

With this data in hand, we can finally make better sense of the results we’re seeing today. Just as with AMD and NVIDIA, using DirectX 12 produces a noticeable and dramatic reduction in batch submission times for Intel’s iGPUs. However in the case of Star Swarm the batch counts are so high that it appears GT2 and GT3e are bottlenecked by their GPU front-ends, and as a result the gains from enabling DX12 are very limited. In fact at this point we’re probably at the limits of Star Swarm’s usefulness, since it’s meant more for discrete GPUs.

The end result though is that one way or another Intel ends up shifting from being CPU limited to GPU limited under DX12. And with a weaker GPU than similar AMD parts, performance tops out much sooner. That said, it’s worth pointing out that we are looking at desktop parts here, where Intel goes heavy on the CPU and light on the GPU; in mobile parts where Intel’s CPU and GPU configurations are less lopsided, it’s likely that Intel would benefit more than they do on the desktop, though again probably not as much as AMD has.

As for real world games, just as with our other GPUs we’re in a wait-and-see situation. An actual game designed to be playable on Intel’s iGPUs is very unlikely to push as many batch calls as Star Swarm, so the front-end bottleneck and GT3e’s poor performance are similarly unlikely to recur. But at the same time with Intel generally being the least CPU bottlenecked in the first place, their overall gains under DX12 may be the smallest, particularly when exploiting the API’s vastly improved draw call performance.

In the meantime GDC 2015 will be taking place next week, where we will be hearing more from Microsoft and its GPU partners about DirectX 12. With last year’s unveiling being an early teaser of the API, the sessions this year will be focusing on helping programmers ramp up for its formal launch later this year, and with any luck we’ll find the final details on feature level 12_0 and whether any current GPUs are 12_0 compliant. Along with more on OpenGL Next (aka glNext), it should make for an exciting show for GPU events.

Western Digital is no stranger to the NAS market. Their Sentinel series units (based on Windows Storage Server) have targeted business users for quite some time now. The My Cloud consumer series (1- and 2-bay NAS units based on a custom embedded Linux platform) introduced a few years back targets home users. These two product lines cover the two extreme ends of the market for NAS units costing up to $5000. In late 2013, Western Digital launched the My Cloud Expert series with the introduction of the 4-bay WD My Cloud EX4. This was followed by a 2-bay version in March 2014.

It has been almost a year since Western Digital last updated their hardware offerings, but the firmware and user-experience improvements have been coming in periodically (indicating long-term commitment to this market segment). Today, two sets of products are being introduced to cover the whole range of this NAS market segment:

Updated Expert Series (EX2100 and EX4100)

New Business Series (DL2100 and DL4100)

From an external viewpoint, all the NAS units being introduced today come with dual GbE ports and a couple of USB 3.0 ports. Similar to previous generation EX units, the new ones also come with two power adapter inputs.

The EX2100 and EX4100 are among the first NAS units based on the new Marvell Riverwood platform (ARMADA 385 / 388). These are dual-core Cortex-A9-based SoCs running at up to 1.6 GHz. The 2-bay unit comes with the ARMADA 385 and has 1 GB of RAM, while the 4-bay unit sports the ARMADA 388 and has 2 GB of RAM. The main difference between the ARMADA 385 and 388 is the presence of two vs. four native SATA ports. We will look more into the SoC platform in our dedicated review.

The DL2100 and DL4100 are based on Intel's Rangeley SoCs. Built around Silvermont Atom cores, these SoCs have been quite popular with COTS NAS vendors over the last year (with Seagate's NAS Pro lineup as well as the Synology DSx15(+) series utilizing them). The 2-bay DL2100 is based on the 2C/2T Atom C2350 running at 1.7 GHz and sports 1 GB of RAM. The 4-bay DL4100 is based on the Atom C2338 and has 2 GB of RAM. The clock speeds and features are similar for both SoCs, though the C2350 has a slightly lower TDP (6W vs. 7W). On the software front, the DL series comes with extensive Active Directory support, stressing its business focus.

The updated EX models and the new DL models round out Western Digital's offerings in this market segment. They now have units available for different needs and performance levels. The addition of Linux-based business NAS models helps in reducing costs for the small business market segment.

Western Digital has a number of features (both in hardware and the My Cloud OS) that make it stand out amongst the multitude of offerings from various vendors in this market space:

Pre-installed OS / pre-configured NAS units, with OS on embedded flash: The pre-configuration is similar to Synology's Beyond Cloud series, but valid for all models in the EX and DL series. In addition, the OS is not spread in a replicated manner across all installed disks, but resides along with the settings in flash memory on the board. One downside is that system migration is not possible (RAID roaming is allowed, though), but the approach does have its advantages in terms of fast setup.

Storage scalability using dual NICs: This is a unique feature, allowing units to be daisy chained using the network links. The volumes in the daisy-chained NAS are present / visible through the primary unit's interface. Backups / replication can be easily configured, even though it is not a true high-availability system. The daisy-chained units don't even need to be of the same model.

Redundant power-supply support: This was one of the unique features in the WD EX2 and EX4 that we reviewed last year. It allows for the NAS to remain in operation even if one of the power adapters were to fail.

Expandable memory for the business series: The DL series comes with 1 GB and 2 GB of RAM for the 2-bay and 4-bay units respectively. However, end-users can add their own SO-DIMM modules to increase the memory in these units (up to 6 GB for the DL4100).

Models with pre-configured disks come with the WD Red drives (6 TB variants included) - this provides consumers with a single point-of-contact for both the NAS unit and the storage media when it comes to support purposes.

The pricing for the various models / capacities is provided in the table below:

Western Digital My Cloud NAS Introductory MSRPs [ Q1 2015 ]

| Capacity | EX2100 | EX4100 | DL2100 | DL4100 |
|---|---|---|---|---|
| Diskless | $250 | $400 | $350 | $530 |
| 4 TB | $430 | - | $530 | - |
| 8 TB | $560 | $750 | $650 | $880 |
| 12 TB | $750 | - | $850 | - |
| 16 TB | - | $1050 | - | $1170 |
| 24 TB | - | $1450 | - | $1530 |

Similar to Seagate's NAS and NAS Pro offerings, the updated hardware platforms and the tying together of the NAS and the storage media will help Western Digital expand their already growing presence in this market segment. The existing channel presence will also provide an additional advantage. Performance evaluation of the EX4100 as well as the DL4100 and comparison with other models in this market segment will be available in the reviews slated to go out over the next few days.

Gigabyte has an interesting line of gaming notebooks these days, including their own brand of P-series laptops as well as the AORUS brand. We’re in the process of reviewing the P35X v3, which packs a GTX 980M into a 0.82” thick 15.6” chassis, and now Gigabyte sends word that they have officially launched the big brother P37X with a 17.3” chassis in the North American market. It’s actually slightly thicker than the P35X, and the design language is very similar as well. That’s either good or bad depending on what you’re looking for in a gaming notebook.

On the one hand it’s generally slimmer (0.9”) and lighter (6.17 lbs.) than competing notebooks from Alienware, ASUS, Clevo, and MSI; however, keeping things cool in a thinner chassis generally means either more noise from the fans, higher temperatures, or both. It’s also either a conservative and subdued looking design, or it’s boring – I tend to like less bling on my laptops, but others are happier with multi-colored keyboard backlighting and a more aggressive industrial design.

In terms of features, all the core elements are essentially the same as the 15.6” model, but the keyboard adds a column of six dedicated macro keys. The top key switches between five banks of macros, so all told that gives you access to 25 macros. Besides the GTX 980M GPU, the system also supports Core i7 processors (still Haswell series), up to two 512GB mSATA drives in RAID 0, and two 2.5” drive bays as well. As with most other 17.3” laptops, the display remains a 1080p panel – there just aren’t many other options yet, though we’ve heard 3K/4K may be coming later this year (hopefully?) for 17.3” panels. At least the display is anti-glare and wide viewing angle (IPS most likely, though AHVA is also a possibility).

Amazon and other retailers are carrying the Gigabyte P37X, and the base model comes with i7-4720HQ, GTX 980M 8GB, 8GB system RAM, and a 1TB HDD (no SSDs in the base model, though you can always add your own) for $1999. If you prefer a slightly upgraded build, the Gigabyte P37X-CF2 also has 8GB RAM and an i7-4720HQ, but it includes a 256GB mSATA SSD and a Blu-ray burner for $2499. So yeah, just buy the base model and pick up a pair of 512GB mSATA MX200 SSDs for $440 instead – and if you really want a Blu-ray burner, that can be arranged for the remaining $60. You’ll probably want to upgrade the RAM as well, as 8GB is a bit chintzy on a high-end gaming rig these days.

Despite the odd pricing on the “upgraded” build, it’s good to see additional gaming notebook options, and for those that prefer a more subdued aesthetic the Gigabyte line might be exactly what you’re after. We’ll have the full review of the P35X v3 in the next week or two, so stay tuned.

We first heard about plans to adopt UFS (Universal Flash Storage) with announcements from Toshiba and Qualcomm over a year ago. While the promised late 2014 schedule seems to have been missed, and we still haven't seen any major product with the technology, it looks like UFS is finally gaining some traction, as today Samsung is announcing the mass production of in-house solutions based on the UFS 2.0 standard.

Samsung will provide the new embedded memory in 32GB, 64GB and 128GB capacities. The 128GB model doubles the amount of storage that even their biggest eMMC solution is able to deliver. It was only last week that Samsung released a new eMMC 5.1 based NAND lineup which promised major gains over today's deployed eMMC products.

The UFS solution claims to achieve 19K IOPS (input/output operations per second) in reads, almost double the 11K IOPS their eMMC 5.1 solution is capable of, and 2.7x what common embedded memory is capable of today. There is also a purported boost of sequential read and write performance to SSD levels, although Samsung doesn't provide any actual figures, so we'll have to wait until we review a device to see what the actual gains are. What should be very interesting is a promised 50% decrease in energy consumption. We're still not very sure of the impact of eMMC power on a smartphone's battery life, but scenarios such as video recording are certain use-cases where a decrease in NAND power could be very beneficial to battery life.
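
For a sense of scale, the quoted IOPS figures can be converted to throughput if we assume the customary 4KB random transfer size used in such ratings (our assumption; Samsung doesn't state a block size):

    # Quoted IOPS converted to throughput, assuming 4KB random reads.
    # The 4KB block size is our assumption, not Samsung's specification.
    block_kib = 4
    for label, iops in (("UFS 2.0", 19_000),
                        ("eMMC 5.1", 11_000),
                        ("common eMMC", 19_000 / 2.7)):
        print(f"{label}: ~{iops * block_kib / 1024:.0f} MB/s random read")
    # UFS 2.0: ~74 MB/s, eMMC 5.1: ~43 MB/s, common eMMC: ~27 MB/s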

UFS is based on a serial interface as opposed to eMMC's parallel architecture, enabling Full-Duplex data transfer and achieving twice to four times the peak bandwidth (depending on implementation) over the existing eMMC 8-bit interface.

Samsung also offers the solution in an ePoP package, meaning the NAND die is packaged with the RAM on top of the SoC in a package-on-package configuration, a solution already employed in the Galaxy Alpha and Galaxy Note 4. The goal here is to save precious PCB space in small form factors such as smartphones.

We're looking forward to seeing what kind of devices Samsung implements the technology in, and how it affects their performance and responsiveness.

Today Motorola has announced the launch and immediate availability of the 2015 version of the Moto E, the latest member of the company’s line of low-end smartphones.

The 2015 edition of the Moto E is a pretty hefty upgrade for a phone launched just 9 months ago. In terms of design the new Moto E is generally a bigger, more powerful version of its predecessor, retaining the same rounded plastic design while enlarging the overall body slightly to house the larger 4.5" screen. Meanwhile Motorola has iterated on 2014’s swappable back covers: the 2015 model can take one of the company’s newer grip shells, or have its colored bands swapped out separately.

The phone is being released in multiple versions. The first is the LTE model, which primarily targets the US market with its LTE frequency bands. There are also a pair of 3G versions, with one again targeted at the US and the other aimed more globally, the big difference being the US 3G version's support of the 1700MHz AWS frequency bands for HSPA+.

Moto E (2015) Model Breakdown

| Region | GSM | UMTS/HSPA+ | LTE |
|---|---|---|---|
| 4G LTE - US GSM (XT1527) | 850, 900, 1800, 1900 | 850, 1700, 1900 | 2, 4, 5, 7, 12, 17 |
| US GSM (XT1511) | 850, 900, 1800, 1900 | 850, 1700, 1900 | - |
| Global GSM (XT1505) | 850, 900, 1800, 1900 | 850, 900, 1900, 2100 | - |

Priced at $149, the LTE version features a Qualcomm Snapdragon 410 processor, which supplies both the quad-core Cortex-A53 CPU and the Category 4 LTE modem. The 3G versions will be launching later at $119 – the biggest difference here is the use of a Snapdragon 200 series SoC with a quad-core Cortex-A7 CPU instead of the newer Cortex-A53 based Snapdragon 410. Having received the LTE version from Motorola, that is the model we’ll be focusing on.

Likely the single biggest draw for the 2015 Moto E over the 2014 is the inclusion of LTE support, which is a first for a low-end Motorola phone, and in fact is something even the higher-tier 2014 Moto G did not include. Driven by the 9x25 modem integrated into the Snapdragon 410, this gives the Moto E Cat 4 LTE capabilities across the most common North American bands.

Meanwhile users of either version will also quickly notice the larger screen, which sees a slight bump to 4.5”, up from 4.3” in the 2014 version. Though larger, the resolution remains entry-level at qHD (960x540), so pixel density has decreased some compared to the 2014 version. Helping to drive this larger display and to take advantage of the larger phone body is a 2390mAh battery, 410mAh more than in last year's model. Even accounting for the larger screen, battery life should be improved over the 2014 version – particularly stand-by time – however we’ll have to give the phone a complete rundown to see what the real-world gains are.

Next to the screen and new to this year’s version is a front-facing VGA (640x480, 0.3MP) camera. The 2014 model skipped a front camera entirely for cost reasons, and while this camera is of limited use, it should be reasonable enough for selfies and video chat on an entry-level phone. Meanwhile the rear-facing camera is still 5MP, however it’s now capable of auto focus versus last year’s fixed-focus camera. As for video recording, the rear camera can now record at 720p30, a significant step up from the 2014 model’s FWVGA (854x480) recording capabilities.

Storage has also seen a bump up, going from 4GB on-board to 8GB on-board, and users still looking for more can add more storage via microSD. The accompanying RAM on the other hand remains at 1GB, though it’s now LPDDR3 as opposed to LPDDR2.

Finally, the phone is shipping with Android 5.0 Lollipop, making it the first Motorola-branded phone to ship with Android 5.0 out of the factory, and joining Motorola’s other phones which recently received the OS as an update. Motorola doesn’t specify whether they’re using a 32-bit or 64-bit version of the OS, however a quick check of the phone finds that it's running the 32-bit version of Android. Given that the 2015 Moto E is available with both Cortex-A53 and Cortex-A7 based SoCs, it makes sense that the company is sticking with 32-bit throughout.

We will be putting the new Moto E through its paces in the coming weeks, but so far it looks like a solid update to the Moto E lineup. At $149 for the LTE model, it does end up debuting at $20 more than the previous version in what’s a very price-sensitive market, so it will be interesting to see how consumers respond to the higher price. But LTE tends to be a big draw.

Shipping today, US customers can order the LTE version of the phone from Motorola’s website. Meanwhile international customers can look forward to Motorola rolling out the phone to more than 50 countries in the Americas, Europe, and Asia.

ZyXEL has a track record of making affordable networking equipment for both home users and service providers. Post-CES, the company has made a couple of product line announcements that warrant perusal from those keeping track of developments in the wired networking space.

Affordable 10G Switches

The first product line targets enterprise users thinking about shifting to 10G. With platform advancements bringing down the price and power consumption for 10GBASE-T switches, we have seen a host of affordable switches enter the market from various manufacturers. Netgear took the lead a couple of years back with a number of ProSafe 10GBASE-T switches starting at $1400 for the 8-port model. A couple of years down the road, the prices have come down considerably (slightly more than $800 for the 8-port model).

ZyXEL is now entering the affordable 10GBASE-T market with two switches, the Web Smart XS1920-12 and the L2 Managed XS3700-24. The two models are compared in the table below.

ZyXEL is also touting their ZON management platform which enables IT administrators to have a unified view and streamlined control of various devices in the network. The new 10G switches are obviously compatible with the ZON platform.

UTM for Home Consumers

Towards the middle of last year, ZyXEL updated their UTM (Unified Threat Management) solutions for SMBs. In what we believe is a first from any home networking equipment vendor, ZyXEL is marketing the 4-port solution in the home consumer market too. Security is becoming an important aspect of home networks (with the rise in popularity of home automation devices and other online activities making home consumers vulnerable to cyberattacks) and ZyXEL is hoping to latch on to this opportunity with the USG40HE.

The USG40HE has a WAN port and 3 LAN/DMZ ports. There is an additional port that can be configured as a secondary WAN or another LAN port. Claimed firewall and VPN throughputs are 400 Mbps and 100 Mbps respectively.

This UTM device / home network security product provides firewall capabilities, content filtering, traffic prioritization based on application recognition, intrusion detection and prevention, and optional anti-virus / anti-spam capabilities. Similar to the tradition in the SMB market, ZyXEL is bundling a 1-year license for the UTM services. The street price seems to be around $250, while the business edition is closer to $300. The latter comes with anti-virus and anti-spam licenses for 1 year, while the home edition makes them optional.

As home networks become more and more powerful, we believe the trend in the market (at least for power users) will be to move from an advanced Wi-Fi router to a gateway / wired router + Wi-Fi access point. The USG40HE does fit into that scenario. That said, the 1-year licensing for UTM capabilities works well in business use-cases, but it might create a negative mindset among home consumers who are not used to such business models. It will be interesting to see how this product fares in the market.

Even with Broadwell not completely out of the door, a lot of attention is being paid to Skylake, the 14nm architecture update from Intel. Current information out in the wild seems to contain a lot of hearsay and supposed leaks, but now we actually have at least some indication that Skylake is coming, thanks to ComputerBase.de, who spotted an ASRock industrial motherboard with the LGA1151 socket for Skylake processors at Embedded World.

Given how far away Skylake is from market, chances are that this motherboard is a mock-up rather than a working unit, as we would imagine Intel is still working on the first round or two of CPU steppings at this point. The motherboard does show some interesting differences from Haswell, such as the socket, which moves the notches higher up towards the corners:

One of the big talking points of Skylake is DDR4 compatibility, but this board throws a spanner in the works by supporting two DDR3L-1600 SO-DIMM slots for up to 16GB of memory. It is also worth noting the separate chipset (most likely a server grade C236 for the next Xeon E3 CPUs) and support for three HDMI ports on the board.

ComputerBase.de also photographed a roadmap showing the boards on offer along with chipsets and some specifications:

Here we see C236 for workstations, Q170/H110 for desktops (presumably mirroring the Q and H chipsets we have now), QM170 for mobile, and at the end the Atom SoCs. The specifications show desktop CPUs at 35, 65 and 95W, with the 95W being slightly up from Haswell. Mobile CPUs fall in at 15-45W, while the Atom details are thin on the ground. All the boards with memory listed have DDR3L as the main memory type, and in most cases the boards have Q3/Q4 2015 sampling availability with Q1 2016 as the mass production date.

Can we take much information away from this? Aside from TDP numbers and chipset naming, the remarkable thing is the DDR3L support, especially with the expectation that Skylake will be DDR4. One thing that is certain is that the motherboard companies are in a situation where designing and building boards for Skylake is on the agenda. It makes me wonder if Embedded World has any more similar motherboards to be seen, and how many we will see at Computex in June.

The original Pebble watch was arguably the first device in what is now a rapidly growing smartwatch segment of the wearables market. Since its release, the software of the Pebble has steadily improved, and Pebble has introduced various new color options as well as a more premium version of the Pebble called the Pebble Steel. But even with all those changes, the fundamental hardware of the Pebble remained the same. Today Pebble has announced a brand new smartwatch called the Pebble Time, and it is what one could call a true successor to the original Pebble.

The Pebble Time retains many of the software features that users enjoy from the original Pebble. It's compatible with every existing Pebble application and watchface, and it has the same battery life of up to seven days. But the hardware of the Pebble Time is significantly improved from the original Pebble. The area that most users will notice first is the new display. While the original Pebble used a black and white memory LCD, the Pebble Time uses a color e-paper display. The design and size of the watch have also been improved, with the thickness of the case reduced to 9.5mm, which is 20% thinner than the original Pebble. The bottom of the case is also curved to fit more comfortably on the user's wrist.

On the software side of things, the Pebble Time uses a new interface that Pebble are calling Timeline. Essentially, the interface is a sequential list of the events that you have planned throughout the day, and the three buttons on the right side of the watch allow you to move to the past, present, or future of your daily timeline. Like the original Pebble, the Pebble Time works with both Android and iOS devices, but features like sending voice replies to incoming notifications are more limited on iOS.

The Pebble Time will initially come in black, white, and red. For their initial sales run Pebble has gone back to Kickstarter, the website where they originally began. The Pebble Time will retail for $199 when the Kickstarter campaign is over, but users who want one now can get it for $179 through Pebble's Kickstarter campaign.