MediaTek has announced two more Helio X20 series products, the Helio X27 and the Helio X23. As you can figure out from the names, the Helio X27 is faster than the X25, while the X23 is a bit slower.

The Helio X25 was MediaTek's fastest deca-core 20nm SoC, with a three-cluster design, and it ended up in quite a few prominent higher-end Chinese phones, including a few Meizu devices. But it looks like customers wanted a bit more camera, CPU and GPU performance for their late-2016 and early-2017 phones, the ones that will launch before the Helio X30 comes to market.

Jeffrey Ju, Executive Vice President and Co-Chief Operating Officer at MediaTek, said: “The MediaTek Helio platform fulfills the diverse needs of device makers. Based on the success of the MediaTek Helio X20 and X25, we are introducing the upgraded MediaTek Helio X23 and X27. The new SoCs support premium dual camera photography and provide best-in-class performance and power consumption.”

The Helio X25 has two Cortex-A72 cores clocked at 2.5 GHz, four Cortex-A53 cores clocked at 2.0 GHz and the last four Cortex-A53 cores clocked at 1.55 GHz. The Mali-T880 graphics is clocked at 850 MHz.

The Helio X20 has two Cortex-A72 cores clocked at 2.1 GHz, four Cortex-A53 cores clocked at 1.85 GHz and the last four Cortex-A53 cores clocked at 1.4 GHz. The Mali-T880 graphics is clocked at 780 MHz.

The newcomer, the Helio X27, has two Cortex-A72 cores clocked at 2.6 GHz, four Cortex-A53 cores clocked at 2.0 GHz and the last four Cortex-A53 cores clocked at 1.6 GHz. The Mali-T880 graphics is clocked at 875 MHz. The rest of the specification is identical to the Helio X25.

The Helio X23 has two Cortex-A72 cores clocked at 2.3 GHz, four Cortex-A53 cores clocked at 1.85 GHz and the last four Cortex-A53 cores clocked at 1.4 GHz. The Mali-T880 graphics is clocked at 780 MHz. As you can see, this is just a slightly faster version of the Helio X20, and its specs place it just below the Helio X25.

Thanks to MediaTek-engineered advancements in the CPU/GPU heterogeneous computing scheduling algorithm, both products deliver more than a 20 percent overall processing improvement and significant increases in web browsing and application launch speeds. This sounds promising, but bear in mind that MediaTek has had plenty of time to optimize these updated designs.

IBM has developed some tech it calls a lab-on-a-chip which can be used to detect cancer or other diseases before any symptoms have appeared.

The tech allows for the separation of bioparticles down to a size of 20nm diameter which means that particles such as DNA, exosomes and viruses can be separated for analysis.

This means that diseases can be detected before any outward signs are visible, which gives patients a far better chance of recovery thanks to early treatment.

The company notes that it's developing this technology in conjunction with the Icahn School of Medicine in New York, and the first trial will test its ability to detect prostate cancer in the US.

The lab-on-a-chip can analyse liquid biopsies from patients and is capable of detecting exosomes with cancer-specific biomarkers. Exosomes, by the way, are vesicles – a small structure within a cell – which are present in bodily fluids such as saliva and urine, and they're being seen as increasingly important in the diagnosis of malignant tumours.

The big idea is to reach a situation where a simple home diagnostic chip could allow people to regularly monitor their health via urine samples.

Gustavo Stolovitzky, Program Director of Translational Systems Biology and Nanobiotechnology at IBM Research, commented: "The ability to sort and enrich biomarkers at the nanoscale in chip-based technologies opens the door to understanding diseases such as cancer as well as viruses like the flu or Zika. Our lab-on-a-chip device could offer a simple, noninvasive and affordable option to potentially detect and monitor a disease even at its earliest stages, long before physical symptoms manifest."

TSMC is thinking about dropping contract prices for its handset-IC clients to help ease the pressure of their declining ASPs and gross margins.

According to Digitimes, the move is likely to help out its handset-chip customers including MediaTek and other firms supplying chips mainly for Android devices.

The handset-chip suppliers are under pressure to cut costs amid particularly fierce competition. Major handset-IC vendors including Qualcomm, MediaTek and Spreadtrum have all suffered gross margin decreases.

MediaTek said at its most recent investor meeting that the company would see its gross margin fall further in the second quarter of 2016, despite higher shipments and revenues, and cut its gross margin outlook for 2016 to 35-38 per cent.

Clearly, if Digitimes is right, TSMC wants to keep its business partners healthy until this rough patch is over. While it might lose a bit of cash in the short term, if its partners do well it will make more in the long run. Although, to be fair, there are no indications that MediaTek or Qualcomm are going anywhere soon. In fact we predict that both will have good years on the back of some rather nice chip technology.

Roughly six months ago, AMD launched the first graphics cards packed with first-generation High Bandwidth Memory (HBM). Today, Samsung has just announced production of the industry’s first 20nm 4GB DRAM package based on second-generation High Bandwidth Memory 2 (HBM2).

The new standard is expected to run circles around GDDR5, offer up to 95-percent surface area savings on GPUs and allow faster responsiveness for High Performance Computing (HPC) tasks and deep learning supercomputing clusters.

As we mentioned last week, AMD’s 28nm Fiji GPU lineup (Radeon Fury Series) was one of the first to use JEDEC’s High Bandwidth Memory (HBM) DRAM standard. This first-generation HBM standard was limited to 128GB/s (1Gbps per pin) using 4-high TSV stacks across a 1024-bit interface at 1.2v. Now, JEDEC’s updated “JESD235A” HBM2 standard runs at 256GB/s (2Gbps per pin) and supports 2-high, 4-high and 8-high TSV stacks across a 1024-bit interface at 1.2v, divided into eight channels on each DRAM stack.
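A quick sanity check on those figures: peak bandwidth per stack is simply bus width times per-pin data rate. A minimal Python sketch of that arithmetic, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the HBM figures quoted above.
# Bus widths and per-pin rates come from the article; this is
# illustrative arithmetic, not a JEDEC reference.

def stack_bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    """Peak bandwidth of one stack in GB/s: width x per-pin rate / 8 bits."""
    return bus_width_bits * pin_rate_gbps / 8

hbm1 = stack_bandwidth_gbs(1024, 1.0)  # first-gen HBM, 1Gbps per pin
hbm2 = stack_bandwidth_gbs(1024, 2.0)  # JESD235A HBM2, 2Gbps per pin

print(hbm1, hbm2)  # 128.0 and 256.0 GB/s, matching the quoted figures
```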

Source: JEDEC (via WCCFTech)

Now, Samsung has created a 20nm 4GB High Bandwidth Memory 2 (HBM2) package with four 8Gb core dies stacked on top of a buffer die at the bottom. These five layers are vertically interconnected by over 5,000 through-silicon via (TSV) holes and microbumps, more than 36 times as many holes as an 8Gb TSV DDR4 package. Samsung claims this design offers “a dramatic improvement in data transmission performance compared to typical wire-bonding based packages.”

Source: ComputerBase

Like its predecessor, HBM2 uses an interposer to route connections from the GPU to memory. In short, this 4GB HBM2 package can deliver as much as 256GB/s of bandwidth, double that of first-generation HBM and a seven-fold increase over a 36GB/s 4GB GDDR5 chip, which currently has the fastest data speed-per-pin at 9Gbps. The main difference from HBM1 is that HBM2 offers more DRAM per stack and higher throughput. Samsung’s 4GB HBM2 also doubles bandwidth-per-watt over a 4Gb GDDR5 chip and uses Error-Correcting Code (ECC) for high reliability.
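The "seven-fold" claim checks out with the same arithmetic. A rough sketch, assuming the standard 1024-bit stack interface and 32-bit GDDR5 chip interface (the pin rates are the ones quoted above):

```python
# Comparing one 4GB HBM2 package against one GDDR5 chip, using the
# pin rates quoted in the article (illustrative arithmetic only).

def bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    return bus_width_bits * pin_rate_gbps / 8

hbm2_package = bandwidth_gbs(1024, 2.0)  # 256.0 GB/s per 4GB stack
gddr5_chip = bandwidth_gbs(32, 9.0)      # 36.0 GB/s at 9Gbps per pin

print(hbm2_package / gddr5_chip)  # ~7.1, the "seven-fold increase"
print(4 * hbm2_package)           # four packages: 1024 GB/s, i.e. the
                                  # ~1TB/s quoted for 16GB configurations
```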

Source: AMD

As AMD mentioned in May 2015, GDDR5 is entering an inefficient region on the “power-to-performance” curve, and the company has called the interposer architecture found in High Bandwidth Memory a “revolution in chip design.” In 2015, it partnered with SK Hynix on HBM1 for its 28nm Radeon Fury Series lineup, and it has recently confirmed that its upcoming 16nm FinFET Polaris GPU lineup will feature both GDDR5 and HBM2 memory technologies. AMD has decided to go with both DRAM configurations because the current HBM cost structure is still too high to implement in all of its next-generation GPUs. Nevertheless, its high-end Polaris models are expected to ship with up to 16GB of HBM2, offering peak bandwidth of 1TB/s.

As it looks now, Nvidia can go with either Samsung HBM2 or SK Hynix HBM2 for its Pascal GPU lineup arriving later this year. The consumer GPU series is expected to ship with up to 16GB of 20nm HBM2 memory (four 4GB packages), offering effective memory bandwidth of 1TB/s. Samsung is committed to producing an 8GB HBM2 package this year for use in next-generation enthusiast and workstation graphics cards. On the other hand, AMD will likely stick with either first-gen or second-gen SK Hynix HBM, as SK Hynix helped AMD develop the new memory architecture in the first place. On Nvidia's end, we hope to have more information around GTC 2016 in April.

“By mass producing next-generation HBM2 DRAM, we can contribute much more to the rapid adoption of next-generation HPC systems by global IT companies,” said Sewon Chun, senior vice president, Memory Marketing, Samsung Electronics. “Also, in using our 3D memory technology here, we can more proactively cope with the multifaceted needs of global IT, while at the same time strengthening the foundation for future growth of the DRAM market.”

This density improvement with HBM2 also translates to much smaller GPUs, with reference designs offering as much as 95-percent space savings compared to current GDDR5-based GPUs. At the very least, we can expect to see Samsung’s upcoming 20nm 8GB HBM2 packages in next-generation Quadro and FirePro workstation GPUs.

Micron has promised to launch a new kind of memory that is twice the speed of mainstream GDDR5. It will be released during 2016 and will be the company's answer to HBM.

Update: Micron got back to us, claiming that the new memory coming in 2016 is called GDDR5X, not GDDR6. The company also said it will detail its plans to fight HBM at a later date.

With speeds of 10 to 14 Gb/s, the memory will outpace the existing 7.0 Gb/s of Micron's 4Gb GDDR5 memory chips. Even the new, larger 8Gb GDDR5 chips will top out at an 8.0 Gb/s data rate. The information was confirmed by Kristopher Kido, Director of Micron’s global Graphics Memory Business.

With speeds from 10 to 14 Gb/s, the next-generation memory will be much faster and provide much needed bandwidth. Micron could easily call this memory GDDR6, as we have heard people in the graphics industry already using the term.
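To put those pin rates in per-chip terms, here is a quick sketch. The 32-bit chip interface is our assumption for illustration, carried over from GDDR5; the pin rates are the ones quoted above:

```python
# Per-chip bandwidth at the quoted pin rates, assuming a 32-bit
# GDDR5-style chip interface (our assumption, for illustration).

def chip_bandwidth_gbs(pin_rate_gbps, bus_width_bits=32):
    return pin_rate_gbps * bus_width_bits / 8

current_gddr5 = chip_bandwidth_gbs(7.0)  # 28.0 GB/s per chip today
gddr5x_low = chip_bandwidth_gbs(10.0)    # 40.0 GB/s at the low end
gddr5x_high = chip_bandwidth_gbs(14.0)   # 56.0 GB/s at the top end

print(gddr5x_high / current_gddr5)  # 2.0 -- "twice the speed" of GDDR5
```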

The new memory will continue to use a traditional component form factor, similar to GDDR5, reducing the burden and complexity of design and manufacturing.

Many might ask why Nvidia and AMD don't go after HBM 2.0, as the new memory standard offers great speeds and incredible bandwidth. But HBM 2.0 requires interposers that complicate the design of the GPU, and HBM 1.0 still suffers from severe shortages, although we hear that things are getting better.

AMD over-hyped the new High Bandwidth Memory standard, and the second-generation HBM 2.0 is coming in 2016. However, it looks like most GPUs shipped this year will still rely on the older GDDR5.

Most of the entry-level, mainstream and even performance graphics cards from both Nvidia and AMD will rely on GDDR5. This memory has been with us since 2007 but has dramatically increased in speed, and the chips have shrunk from 60nm in 2007 to 20nm in 2015, making higher clocks and lower voltage possible.

Some of the big boys, including Samsung and Micron, have started producing 8Gb GDDR5 chips that will enable cards with 1GB of memory per chip. The GTX 980 Ti has 12 chips at 4Gb density (512MB per chip), while the Radeon Fury X comes with four HBM 1.0 stacks of 1GB each at much higher bandwidth. The Geforce Titan X has 24 chips of 512MB each, for a total of 12GB.
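The capacity arithmetic above is just chip count times per-chip density. A quick sketch, with densities in gigabits (8Gb = 1GB), using the configurations named in the paragraph:

```python
# Card capacities from chip count x per-chip density (figures from the
# paragraph above; densities in gigabits, 8 Gb = 1 GB).

def total_memory_gb(chip_count, density_gbit):
    return chip_count * density_gbit / 8

gtx_980_ti = total_memory_gb(12, 4)  # 12 x 4Gb chips     -> 6.0 GB
titan_x = total_memory_gb(24, 4)     # 24 x 4Gb chips     -> 12.0 GB
fury_x = total_memory_gb(4, 8)       # 4 x 1GB HBM stacks -> 4.0 GB
next_gen = total_memory_gb(12, 8)    # 12 x 8Gb chips     -> 12.0 GB

print(gtx_980_ti, titan_x, fury_x, next_gen)
```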

The next-generation cards will get 12GB of memory from 12 GDDR5 chips, or 24GB from 24 chips. Most of the mainstream and performance cards will come with much less memory.

Only a few high-end cards, such as AMD's Greenland FinFET solution and a Geforce version of Pascal, will come with the more expensive and much faster HBM 2.0 memory.

GDDR6 (Micron insists on the GDDR5X name) is arriving in 2016, at least at Micron, and the company promises much higher bandwidth compared to GDDR5. So there will be a few choices.

The Galaxy S7 phones are expected to launch early next year, with the units sold in the United States and China using Qualcomm silicon, presumably because the Qualcomm modem works better in those regions. The Galaxy S7 phones sold in other markets will be powered by Samsung's own Exynos processors.

This is all based on rumour and speculation, and we don't expect an announcement until December, but it would mean that Samsung might be considering a shift back to Qualcomm after the debacle of the overheating Snapdragon 810. Some Galaxys shipped with the 20nm 810, but most went out with the 14nm Exynos processor instead.

All this was a swift kick to Qualcomm's bottom line. To be fair, Samsung did say it could opt to use Qualcomm chips in the future; it might just have been unimpressed by the 810.

Samsung has previously sourced mobile processors both from Qualcomm and from its own chips division for its premium devices, including the Galaxy S and Galaxy Note series.

The rumoured Helio X30 is real, and if you thought the X20 was not enough to see off the Snapdragon 820, the Helio X30 has a much better chance.

The all-new deca-core Helio X20 has two A72 cores at 2.5GHz, four A53 cores at 2.0GHz and four A53 cores at 1.4GHz. It has CorePilot 3.0, a smart scheduler that decides which core gets which task.

This processor has every chance of being faster than Qualcomm's Snapdragon 620. The Snapdragon 620 comes with four A72 cores at 1.8GHz and four A53 cores at 1.4GHz, but we are unsure how the Helio X20 will match up against the Snapdragon 820 with its quad custom Kryo cores.

But the Helio X30 has four A72 cores at 2.5GHz, two A72 cores clocked at 2GHz, two Cortex-A53 cores clocked at 1.5GHz and two low-power A53 cores at 1GHz. A senior executive from MediaTek told us that not all cores are created equal.

Despite the fact that "A53" on one box looks like "A53" on the other, one is optimised for performance and the other for low power. It is unclear whether the A53-based cluster from MediaTek is the same as the A53 cluster from Qualcomm.

As you can read at Fudzilla, we spent quite some time learning about the potential gains of having three clusters. The X20 can consume 30 to 40 percent less power simply by being smart about how it uses all ten cores across three clusters.

With the Helio X30 you will gain more performance, as six of its ten cores are based on the A72. Having ten cores in four clusters raises another question: how efficient will the four-cluster approach be versus the three-cluster approach?

MediaTek has not officially confirmed or launched the Helio X30, but we expect that to happen soon. The X30 should be shipping in devices in early 2016, or at least that is what we would expect if it is to line up well against the Snapdragon 820.

If you thought the Helio X20 with its ten cores was the best MediaTek could offer, the company has now come up with the X30, a faster deca-core with four A72 cores.

The Helio X20 has two Cortex-A72 cores at 2.5GHz, four A53 cores at 2GHz in a middle cluster and four A53 cores in a low-power cluster clocked at 1.4GHz, which is all pretty good.

But the X30 has four Cortex-A72 cores at 2.5GHz, two A72 cores clocked at 2GHz, two Cortex-A53 cores clocked at 1.5GHz and two low-power A53 cores at 1GHz. This means the X30 will gain significant performance over the X20 and give MediaTek a huge advantage in the fight against Qualcomm and Samsung.

TSMC president and co-CEO Mark Liu has announced his outfit has begun volume shipment of chips based on its 16-nm FinFET manufacturing process.

He added that the ramping of the 16-nm process will be even more aggressive than that of its 20-nm process and he wants to gain foundry market share over the remainder of 2015 and well into 2016 on the back of the technology.

The foundry expects 16-nm processor shipments to begin contributing to revenue in the fourth quarter of 2015, since the process will ramp up during much of the third quarter.

TSMC moved quickly from 20-nm to 16-nm manufacturing, claiming that 16 nanometer shares a similar metal backend process with 20 nanometer. In other words, its 16-nm FinFET benefited from what it learned doing 20-nanometer.

Liu also talked about the foundry's 10-nm and 7-nm processes, saying that the recent product-like validation vehicle milestone was encouraging and that its plans are on-track.

TSMC plans to make 7-nm validation samples in the second quarter of 2017, just fifteen months after 10-nm validation.