IBM today unveiled its next-generation Power Systems servers incorporating its newly designed POWER9 processor. Built specifically for compute-intensive AI workloads, the new POWER9 systems can improve the training times of deep learning frameworks by nearly 4x, allowing enterprises to build more accurate AI applications, faster.

The new POWER9-based AC922 Power Systems are the first to embed PCI-Express 4.0, next-generation NVIDIA NVLink and OpenCAPI, which combined can accelerate data movement, calculated at 9.5x faster than PCI-E 3.0 based x86 systems. The system was designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow and Caffe, as well as accelerated databases such as Kinetica. As a result, data scientists can build applications faster, ranging from deep learning insights in scientific research to real-time fraud detection and credit risk analysis.
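That 9.5x figure can be sanity-checked with quick arithmetic; the per-link numbers below are assumptions based on published nominal specs (NVLink 2.0 at 150 GB/s per GPU, PCIe 3.0 x16 at roughly 15.75 GB/s usable), not measurements taken on the AC922 itself:

```python
# Back-of-envelope check of the claimed 9.5x data-movement advantage.
# Figures are nominal per-direction bandwidths, not measured values.
pcie3_x16_gbps = 15.75       # PCIe 3.0 x16: ~15.75 GB/s usable per direction
nvlink2_per_gpu_gbps = 150   # NVLink 2.0: 3 links x 50 GB/s per GPU (assumed config)

speedup = nvlink2_per_gpu_gbps / pcie3_x16_gbps
print(f"NVLink 2.0 vs PCIe 3.0 x16: ~{speedup:.1f}x")
```

With these assumptions the ratio lands almost exactly on IBM's quoted 9.5x.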

China has been increasingly - and steadily - gaining relevance in the supercomputing world, claiming more TOP500 entries than any other country. In fact, China can boast the number one supercomputer in the world, the Sunway TaihuLight, which delivers 93 PetaFLOPS of computing power - nearly 3x the computational power of the second most powerful machine, China's own Tianhe-2 (33.9 PetaFLOPS). Supercomputing, and the money earned by selling processing slices of these machines to private or state contractors, is a very attractive pull - especially considering the increasingly expensive computational needs of the modern world.

Summit is to be the United States' claim to fame in that regard, bringing the country back to number one in raw, top-of-the-line single-machine supercomputing power. Summit promises to more than double the PetaFLOPS of China's TaihuLight, to over 200 PetaFLOPS. That amounts to around 11x more processing grunt than its predecessor, Titan, in a much smaller footprint - Titan's 18,688 processing nodes will be condensed to just ~4,600 nodes on Summit, with each node achieving around 40 TeraFLOPS of computing power. The hardware? IBM and NVIDIA, married in water-cooled nodes with the powerful GV100 accelerator that's still eluding us enthusiasts - but that's a question for another day.
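The quoted node counts and per-node throughput can be cross-checked with simple arithmetic; Titan's ~27 PFLOPS peak used below is an assumed figure, not stated in the text:

```python
# Rough cross-check of the Summit-vs-Titan numbers quoted above.
titan_nodes, titan_pflops = 18688, 27    # Titan: ~27 PFLOPS peak (assumed figure)
summit_nodes, node_tflops = 4600, 40     # Summit: ~4,600 nodes at ~40 TFLOPS each

summit_pflops = summit_nodes * node_tflops / 1000
per_node_gain = (summit_pflops / summit_nodes) / (titan_pflops / titan_nodes)
print(f"Summit: ~{summit_pflops:.0f} PFLOPS; per-node gain over Titan: ~{per_node_gain:.0f}x")
```

The quoted figures multiply out to roughly 184 PFLOPS, in the ballpark of the "over 200 PetaFLOPS" claim, with each node doing roughly the work of two dozen Titan nodes.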

NVIDIA at the Supercomputing 2017 conference announced a major upgrade of its SaturnV AI supercomputer, which when complete, the company claims, will not just be one of the world's top-10 AI supercomputers in terms of raw compute power, but will also be the world's most energy-efficient. The SaturnV will be a cluster supercomputer with 660 NVIDIA DGX-1 nodes. Each such node packs eight NVIDIA GV100 GPUs, which takes the machine's total GPU count to a staggering 5,280 (that's GPUs, not CUDA cores). They add up to an FP16 performance that's scraping the ExaFLOP (1,000-petaFLOP or 10^18 FLOP/s) barrier, while its FP64 (double-precision) compute performance nears 40 petaFLOP/s (40,000 TFLOP/s).
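A quick back-of-envelope check of those numbers; the per-GPU figures are assumed nominal Tesla V100 specs (roughly 120 TFLOP/s FP16 on tensor cores, 7.5 TFLOP/s FP64), not values from the announcement:

```python
# Verifying the GPU count and FLOP/s figures quoted for the upgraded SaturnV.
nodes = 660
gpus_per_node = 8            # NVIDIA DGX-1: 8 GV100-based GPUs per node
fp16_tflops_per_gpu = 120    # assumed Tesla V100 tensor-core FP16 peak
fp64_tflops_per_gpu = 7.5    # assumed Tesla V100 FP64 peak

total_gpus = nodes * gpus_per_node                      # 5,280 GPUs
fp16_pflops = total_gpus * fp16_tflops_per_gpu / 1000   # ~0.6 EFLOP/s
fp64_pflops = total_gpus * fp64_tflops_per_gpu / 1000   # ~40 PFLOP/s
print(f"{total_gpus} GPUs, ~{fp16_pflops:.0f} PFLOP/s FP16, ~{fp64_pflops:.0f} PFLOP/s FP64")
```

With these assumptions the FP64 total lands right on the quoted ~40 petaFLOP/s, while FP16 comes out around 0.6 ExaFLOP/s - "scraping the barrier" with some generous rounding.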

SaturnV should beat Summit, a supercomputer being co-developed by NVIDIA and IBM, which in turn should unseat Sunway TaihuLight, currently the world's fastest supercomputer. This feat gains prominence as NVIDIA's SaturnV and the NVIDIA+IBM Summit are both machines built by the American private sector, trying to beat a supercomputing leader backed by the mighty Chinese exchequer. The other claim to fame of SaturnV is its energy efficiency. Before its upgrade, SaturnV achieved a staggering energy efficiency of 15.1 GFLOP/s per Watt, which already made it the fourth "greenest" supercomputer. NVIDIA expects the upgraded SaturnV to take the number-one spot.

The fiftieth TOP500 list of the fastest supercomputers in the world has China overtaking the US in the total number of ranked systems by a margin of 202 to 143. It is the largest number of supercomputers China has ever claimed on the TOP500 ranking, with the US presence shrinking to its lowest level since the list's inception 25 years ago.

Just six months ago, the US led with 169 systems, with China coming in at 160. Despite the reversal of fortunes, the 143 systems claimed by the US give it a solid second-place finish, with Japan in third place with 35, followed by Germany with 20, France with 18, and the UK with 15.

Intel has been steadily increasing its portfolio of products in the AI space through the acquisition of multiple AI-focused companies such as Nervana, Mobileye, and others. Through its increased portfolio of AI-related IP, the company is looking to carve itself a slice of the AI computing market, and this sometimes means thinking inside the box more than outside of it. No matter how many cores and threads you can cram into your HEDT system, the human brain's wetware remains one of the most impressive computing machines known to man.

That idea is what's behind neuromorphic computing, where chips are designed to mimic the overall architecture of the human brain, neurons, synapses and all. It marries the fields of biology, physics, mathematics, computer science, and electronic engineering to design artificial neural systems, mimicking the morphology of individual neurons, circuits, applications, and overall architectures. This, in turn, affects how information is represented, improves robustness to damage through the distribution of workload across a "many cores" design, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change.
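As a loose illustration of the "neurons and synapses" idea, here is a toy leaky integrate-and-fire neuron - a textbook simplification, not the design of any particular neuromorphic chip; the weight, leak, and threshold values are arbitrary:

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential integrates
# weighted input spikes, decays ("leaks") each time step, and fires when it
# crosses a threshold - the basic event-driven unit neuromorphic chips mimic.
def lif_run(inputs, weight=0.6, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for spike_in in inputs:
        v = v * leak + weight * spike_in   # leak, then integrate the input spike
        if v >= threshold:                 # threshold crossed: emit a spike
            spikes.append(1)
            v = 0.0                        # reset membrane potential after firing
        else:
            spikes.append(0)
    return spikes

out = lif_run([1, 1, 0, 1, 1, 1, 0, 0])
print(out)
```

Note how output spikes depend on the recent history of inputs, not just the current one - that temporal, event-driven behavior is what distinguishes this style of computation from a conventional feed-forward multiply-accumulate.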

IBM Research scientists have achieved a new world record in tape storage - their fifth since 2006. The new record of 201 Gb/in2 (gigabits per square inch) in areal density was achieved on a prototype sputtered magnetic tape developed by Sony Storage Media Solutions. The scientists presented the achievement today at the 28th Magnetic Recording Conference (TMRC 2017).

Tape storage is currently the most secure, energy-efficient and cost-effective solution for storing enormous amounts of back-up and archival data, as well as for new applications such as Big Data and cloud computing. This new record areal recording density is more than 20 times the areal density used in current state-of-the-art commercial tape drives such as the IBM TS1155 enterprise tape drive, and it makes it possible to record up to about 330 terabytes (TB) of uncompressed data on a single tape cartridge that would fit in the palm of your hand. 330 terabytes of data is comparable to the text of 330 million books, which would fill a bookshelf stretching slightly beyond the distance from the northeastern to the southwestern tip of Japan.
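To see how 330 TB fits in a palm-sized cartridge, one can back-calculate the tape length that capacity implies; this sketch assumes standard half-inch-wide tape and ignores formatting and error-correction overhead:

```python
# How much half-inch tape does 330 TB need at 201 Gbit per square inch?
capacity_bits = 330e12 * 8   # 330 TB (decimal terabytes) in bits
areal_density = 201e9        # record areal density: bits per square inch
tape_width_in = 0.5          # half-inch tape width (assumption)

area_in2 = capacity_bits / areal_density        # total recorded area needed
length_m = area_in2 / tape_width_in * 0.0254    # tape length, inches -> meters
print(f"~{length_m:.0f} m of tape")
```

That works out to roughly 670 meters of tape, comfortably within the length of tape that can be wound into a single modern cartridge.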

The United States has been pushed down the TOP500 standings for some time courtesy of China, which took the 1st and 2nd place seats from the US with its Sunway TaihuLight and Tianhe-2 supercomputers (at a Linpack performance of 93 and 33.9 PetaFLOPS, respectively). It seemed that, though the crown was stolen from America, 3rd place was relatively safe for the former champs. Not so. America has been pushed right off the podium in the latest TOP500 refresh - not by China, though, but by Switzerland.

Developing supercomputers isn't for the faint of heart, much less for those looking for fast development and deployment time-frames. As the world's supercomputers get increasingly faster and exorbitantly more expensive to develop and deploy, players who want to stay ahead have to think ahead as well. To this end, the US Department of Energy has awarded a total of $258M in research contracts to six of the US's foremost tech companies (AMD, Cray, Hewlett Packard Enterprise, IBM, Intel, and NVIDIA) to accelerate the development of exascale supercomputer technologies. These companies will work over a three-year contract period and will have to fund at least 40% of the project cost, helping develop the technologies needed to build an exascale computer by 2021. It isn't strange that the companies accepted the grant and jumped at the opportunity: a 60% saving on research and development they'd otherwise have to fund themselves is nothing to scoff at.

Supercomputers born from the project are expected to reach the exaFLOPS scale of computing performance, around 50 times more processing power than the generation of supercomputers being installed now. Since traditional supercomputing knowledge and materials are known to falter at exaFLOPS-level performance, the PathForward program - which looks to ensure such systems arrive in a timely fashion and that the US retains leadership in the field of supercomputing - will need spurred research and development, which the $258M grant aims to provide.

AMD co-founder Jerry Sanders was famously quoted as stating that "real men have fabs," a jibe probably targeted at the budding fabless CPU designers of the time. In 2009, AMD spun off its silicon fabrication business, which, with a substantial investment by the Abu Dhabi government through its state-owned Advanced Technology Investment Company (ATIC), became GlobalFoundries (or GloFo in some vernacular). The company built strategic partnerships with the right players in the industry, made acquisitions such as IBM's fabs, and is now at the forefront of sub-10 nm fab development. It remains one of AMD's biggest foundry partners alongside TSMC and Samsung, and is manufacturing AMD processors at a brand-new facility in Upstate New York, USA.

AMD, on the other hand, doesn't regret spinning off GloFo. Speaking at the Merrill Lynch Global Technology and Investment Conference, CTO Mark Papermaster said that going fabless has helped AMD focus on chip design without worrying about manufacturing. Production is no longer a bottleneck for AMD, as it can now put out manufacturing contracts to a wider variety of foundry partners. Its chip designers aren't limited by the constraints of an in-house fab, and can instead ask external fabs to optimize their nodes for their chip designs, Papermaster said. 14 nm FinFET has added a level of standardization to the foundry industry.

IBM, its Research Alliance partners GLOBALFOUNDRIES and Samsung, and equipment suppliers have developed an industry-first process to build silicon nanosheet transistors that will enable 5 nanometer (nm) chips. The details of the process will be presented at the 2017 Symposia on VLSI Technology and Circuits conference in Kyoto, Japan. In less than two years since developing a 7 nm test node chip with 20 billion transistors, scientists have paved the way for 30 billion switches on a fingernail-sized chip.

The resulting increase in performance will help accelerate cognitive computing, the Internet of Things (IoT), and other data-intensive applications delivered in the cloud. The power savings could also mean that the batteries in smartphones and other mobile products could last two to three times longer than those in today's devices before needing to be charged.

It would seem business is not as usual for GLOBALFOUNDRIES, which started as a spin-off of AMD's manufacturing arm way back on March 2, 2009. Blaming the capricious chip market's fluctuations, the company is looking to shed longtime employees at all three of its U.S. semiconductor manufacturing plants, including Essex Junction, which it acquired from IBM in 2015 by... receiving a $1.5 billion payment from the company. As part of that deal, GLOBALFOUNDRIES agreed to be IBM's exclusive provider of semiconductor chips through 2025.

"We go through these ebbs and flows," spokesman Jim Keller said Wednesday. "Right now we're at a point where some customers delayed their orders. We're in a period where we don't have as much business." The "voluntary separation" program is part of a larger cost-cutting initiative that will look for other efficiency savings as well, though "layoffs are also a possibility". Keller would not say how many of GLOBALFOUNDRIES' 2,800 employees at Essex Junction are eligible for the early retirement program. Most of the workers eligible are in "support roles," such as administrative, sales or finance.

IBM and NVIDIA today announced collaboration on a new deep learning tool optimized for the latest IBM and NVIDIA technologies to help train computers to think and learn in more human-like ways at a faster pace. Deep learning is a fast-growing machine learning method that extracts information by crunching through millions of pieces of data to detect and rank the most important aspects of the data. Already embraced by leading consumer web and mobile application companies, deep learning is quickly being adopted by more traditional business enterprises.

Deep learning and other artificial intelligence capabilities are being used across a wide range of industry sectors: in banking, to advance fraud detection through facial recognition; in automotive, for self-driving automobiles; and in retail, for fully automated call centers with computers that can better understand speech and answer questions.

On the heels of the recent Gen-Z interconnect announcement, a group of some of the most recognizable names in the tech industry has once again banded together. This time, it's an effort towards the implementation of a fast, coherent and widely compatible interconnect technology that will pave the way towards tighter integration of ever-more heterogeneous systems.

Technology leaders AMD, Dell EMC, Google, Hewlett Packard Enterprise, IBM, Mellanox Technologies, Micron, NVIDIA and Xilinx announced the new open standard to appropriate fanfare, considering its promise of an up-to-10x performance uplift in datacenter server environments, thus accelerating big-data, machine learning, analytics, and other emerging workloads. The interconnect promises to provide a high-speed pathway towards tighter integration between the different types of technology that make up today's heterogeneous servers, ranging from fixed-purpose accelerators through current and future system memory subsystems to coherent storage and network controllers.

Modern computer systems have been built around the assumption that storage is slow, persistent and reliable, while data in memory is fast but volatile. As new storage class memory technologies emerge that drive the convergence of storage and memory attributes, the programmatic and architectural assumptions that have worked in the past are no longer optimal. The challenges associated with explosive data growth, real-time application demands, the emergence of low latency storage class memory, and demand for rack scale resource pools require a new approach to data access.

GLOBALFOUNDRIES today announced plans to deliver a new leading-edge 7nm FinFET semiconductor technology that will offer the ultimate in performance for the next era of computing applications. This technology provides more processing power for data centers, networking, premium mobile processors, and deep learning applications.

GLOBALFOUNDRIES' new 7nm FinFET technology is expected to deliver more than twice the logic density and a 30 percent performance boost compared to today's 16/14nm foundry FinFET offerings. The platform is based on an industry-standard FinFET transistor architecture and optical lithography, with EUV compatibility at key levels. This approach will accelerate the production ramp through significant re-use of tools and processes from the company's 14nm FinFET technology, which is currently in volume production at its Fab 8 campus in Saratoga County, N.Y. GLOBALFOUNDRIES plans to make an additional multi-billion dollar investment in Fab 8 to enable development and production for 7nm FinFET.

"The industry is converging on 7nm FinFET as the next long-lived node, which represents a unique opportunity for GLOBALFOUNDRIES to compete at the leading edge," said GLOBALFOUNDRIES CEO Sanjay Jha. "We are well positioned to deliver a differentiated 7nm FinFET technology by tapping our years of experience manufacturing high-performance chips, the talent and know-how of our former IBM Microelectronics colleagues and the world-class R&D pipeline from our research alliance. No other foundry can match this legacy of manufacturing high-performance chips."

Silicon fabrication company GlobalFoundries is reportedly planning to skip development of the 10 nanometer (nm) process, and is aiming to jump straight to 7 nm. The company currently operates a 14 nm FinFET node. In 2015 the company acquired semiconductor manufacturing assets from IBM, and is using them to fast-track its development. When it's ready, the 7 nm node will offer both optical and EUV (extreme ultraviolet) lithography. Driving the EUV effort is an ASML NXE:3300 EUV scanner at the company's advanced patterning center in its Albany, New York fab.

Western Digital today announced that it has acquired more than 100 patent assets from IBM (NYSE: IBM). The parties also entered into a patent cross-license agreement. Terms of the transaction were not disclosed.

Patents acquired by Western Digital are in distributed storage, object storage, and emerging non-volatile memory. Western Digital expects the IP to further strengthen its technology leadership position and drive value creation for the company and its customers. The patents will augment Western Digital's existing portfolio of more than 10,000 patents and patent applications.

"This agreement reflects our continued focus on innovation and sets the stage for even more rapid advancement and commercialization of new data storage solutions," said Mike Cordano, president and chief operating officer, Western Digital. "We are building on Western Digital and IBM's long-standing relationship and look forward to future collaborations and business opportunities."

The A200 was designed to complement the rest of Seagate's ClusterStor family of scale-out storage systems. It allows customers to non-disruptively migrate designated data off the performance-optimized primary storage tiers while keeping it online for fast retrieval. This avoids a common problem in shared HPC environments where the organization is forced to choose between having all of the data available for the best analysis and the time required to retrieve data from tape. Performance of the primary storage is often improved by migrating data and freeing up space for more efficient data layout. The pre-configured ClusterStor A200 solution includes an automatic policy-driven hierarchical storage management (HSM) system and near-limitless scale-out capacity.

Intel co-founder Gordon Moore's claim that transistor counts in microprocessors can be doubled roughly every two years by means of miniaturizing silicon lithography is beginning to buckle. In its latest earnings release, CEO Brian Krzanich said that the company's recent product cycles marked a slowing down of its "tick-tock" product development from 2 years to close to 2.5 years. With the company approaching sub-10 nm scales, it's bound to stay that way.

To keep Moore's Law alive, Intel adopted a product development strategy it calls tick-tock. Think of it as a metronome that gives rhythm to the company. Each "tock" marks the arrival of a new micro-architecture, and each "tick" marks its miniaturization to a smaller silicon fab process. Normally, each year brings one of the two, in alternation.
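The cost of that half-year cadence slip compounds over time; a quick sketch of the arithmetic (doubling every `cadence` years gives 2^(years/cadence) growth):

```python
# What a slip from a 2-year to a 2.5-year doubling cadence means for
# transistor counts over a decade.
def growth(years, cadence):
    return 2 ** (years / cadence)

decade_at_2_0 = growth(10, 2.0)   # five doublings in ten years
decade_at_2_5 = growth(10, 2.5)   # four doublings in ten years
print(f"10 years at 2.0y cadence: {decade_at_2_0:.0f}x; at 2.5y: {decade_at_2_5:.0f}x")
```

Over ten years, the slower cadence halves the cumulative gain: four doublings (16x) instead of five (32x).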

IBM, in collaboration with NVIDIA and Mellanox, today announced the establishment of a POWER Acceleration and Design Center in Montpellier, France to advance the development of data-intensive research, industrial, and commercial applications. Born out of the collaborative spirit fostered by the OpenPOWER Foundation - a community co-founded in part by IBM, NVIDIA and Mellanox supporting open development on top of the POWER architecture - the new Center provides commercial and open-source software developers with technical assistance to enable them to develop high performance computing (HPC) applications.

Technical experts from IBM, NVIDIA and Mellanox will help developers take advantage of OpenPOWER systems leveraging IBM's open and licensable POWER architecture with the NVIDIA Tesla Accelerated Computing Platform and Mellanox InfiniBand networking solutions. These are the class of systems developed collaboratively with the U.S. Department of Energy for the next generation Sierra and Summit supercomputers and to be used by the United Kingdom's Science and Technology Facilities Council's Hartree Centre for big data research.

GLOBALFOUNDRIES today announced that it has completed its acquisition of IBM's Microelectronics business. With the acquisition, GLOBALFOUNDRIES gains differentiated technologies to enhance its product offerings in key growth markets, from mobility and Internet of Things (IoT) to Big Data and high-performance computing. The deal strengthens the company's workforce, adding decades of experience and expertise in semiconductor development, device expertise, design, and manufacturing. And the addition of more than 16,000 patents and applications makes GLOBALFOUNDRIES the holder of one of the largest semiconductor patent portfolios in the world.

"Today we have significantly enhanced our technology development capabilities and reinforce our long-term commitment to investing in R&D for technology leadership," said Sanjay Jha, chief executive officer of GLOBALFOUNDRIES. "We have added world-class technologists and differentiated technologies, such as RF and ASIC, to meet our customers' needs and accelerate our progress toward becoming a foundry powerhouse." Through the addition of some of the brightest and most innovative scientists and engineers in the semiconductor industry, GLOBALFOUNDRIES solidifies its path to advanced process technologies at 10 nm, 7 nm, and beyond.

Nintendo is working on a next-generation gaming console to succeed even the fairly recent Wii U. The company is reacting to the plummeting competitiveness of its current console against the likes of the PlayStation 4 and the Xbox One. Reports suggest that Nintendo could make a course-correction on the direction in which it took its game console business with the Wii, and could come up with a system that's focused on serious gaming while retaining its original "fun" quotient. In that sense, the console could be more NES-like than Wii-like.

Nintendo could ring up AMD for the chip that will drive its next console. It's not clear if AMD will supply a fully-integrated SoC that combines its own x86 CPU cores with its GCN graphics processor, or simply supply the GPU component for an SoC that combines components from various other manufacturers. The Wii U uses IBM's CPU cores, with AMD's GPU, combined onto a single chip. There's no word on when Nintendo plans to announce the new console, but one can expect a lot more news in 2015-16.
Source: Expreview

IBM and GLOBALFOUNDRIES today announced that they have signed a Definitive Agreement under which GLOBALFOUNDRIES plans to acquire IBM's global commercial semiconductor technology business, including intellectual property, world-class technologists and technologies related to IBM Microelectronics, subject to completion of applicable regulatory reviews. GLOBALFOUNDRIES will also become IBM's exclusive server processor semiconductor technology provider for 22 nanometer (nm), 14 nm and 10 nm semiconductors for the next 10 years.

The Agreement, once closed, enables IBM to further focus on fundamental semiconductor research and the development of future cloud, mobile, big data analytics, and secure transaction-optimized systems. IBM continues its previously announced $3 billion investment over five years for semiconductor technology research to lead in the next generation of computing. GLOBALFOUNDRIES will have primary access to the research that results from this investment through joint collaboration at the Colleges of Nanoscale Science and Engineering (CNSE), SUNY Polytechnic Institute, in Albany, N.Y.

Lenovo and IBM today announced that they have completed the initial closing for Lenovo's acquisition of IBM's x86 server business under the terms described in their announcement on Monday, September 29, 2014. Lenovo is acquiring System x, BladeCenter and Flex System blade servers and switches, x86-based Flex integrated systems, NeXtScale and iDataPlex servers and associated software, blade networking and maintenance operations. IBM retains its System z mainframes, Power Systems, Storage Systems, Power-based Flex servers, and PureApplication and PureData appliances.

As part of the agreement, Lenovo and IBM have also established a strategic alliance where Lenovo will serve as an Original Equipment Manufacturer (OEM) to IBM and will resell select products from IBM's industry-leading storage and software portfolio. These include IBM's entry and midrange Storwize storage product family, Linear Tape Open (LTO) products, and elements of IBM's system software portfolio, including Smart Cloud software, General Parallel File System and Platform Computing solutions.