
I just returned from a very busy 2016 OCP Summit in San Jose, CA. Micron was an exhibitor and sponsor of the summit, and Micron’s booth saw plenty of traffic from attendees wanting to discuss storage and DRAM. Micron is fortunate to work with several partners on Open Compute Project (OCP) solutions. One of those solutions was on display in our booth: Penguin Computing’s OCP Tundra server, complete with Micron’s M510DC SSDs and Micron DRAM. Micron goodness aside, here are my takeaways from the OCP Summit:

OCP Momentum

OCP continues to pick up momentum, as evidenced by the addition of Google. Three of the four largest hyperscale providers (Google, Facebook, and Microsoft) are now part of OCP, signaling an increased commitment to Open Compute. Beyond the hyperscalers, OCP is also making its way into the traditional enterprise. I met with several members of the financial community who are not only actively exploring OCP but already have significant OCP deployments and are committed to their continued growth.

Dense Computing

OCP has always been about driving down the cost of hardware, but OCP participants are also looking at other cost vectors, and one of them is density. Density is being driven by three main components: server design, CPU capability, and flash memory for storage. Intel’s contribution to denser OCP designs was announced during the Summit and included a new Xeon-D CPU with 16 cores per processor and the Decathlete 2.1 OCP server board standard, which increases the number of DRAM slots from 16 to 24 DIMMs.

Google also recognizes that a denser compute/storage model demands more power, and is collaborating with Facebook on a rack architecture that distributes 48V power instead of the 12V standard that exists today. The 48V power standard was Google’s primary motivation for joining OCP.
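
The efficiency argument behind a higher-voltage bus can be sketched with basic circuit math: at a fixed delivered power, current scales inversely with voltage, and resistive distribution loss scales with the square of the current. The power and resistance figures below are illustrative assumptions, not numbers from the Summit.

```python
# Sketch of why a 48V rack bus loses less power than a 12V bus.
# P (rack power) and R (bus resistance) are assumed example values.

def distribution_loss(power_w, bus_v, resistance_ohm):
    """Resistive loss in the distribution path: I^2 * R, with I = P / V."""
    current_a = power_w / bus_v
    return current_a ** 2 * resistance_ohm

P = 12_000   # assumed rack power draw, watts
R = 0.002    # assumed bus resistance, ohms

loss_12v = distribution_loss(P, 12, R)   # 1000 A -> 2000 W lost
loss_48v = distribution_loss(P, 48, R)   # 250 A  -> 125 W lost

print(loss_12v / loss_48v)  # 16.0, i.e. (48/12)^2
```

Whatever the exact rack numbers, the (V2/V1)² relationship is why quadrupling the bus voltage cuts conduction losses sixteenfold for the same wiring.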

Flash and Compute Density

Flash storage has become a critical factor in increasing compute density. Flash provides an orders-of-magnitude increase in IOPS per square millimeter of server space compared to HDDs. Facebook introduced several new OCP components, including a flash sled called Lightning, which will support 120TB of flash in a 2U form factor.
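
To make the density claim concrete, here is a rough back-of-envelope comparison of random-I/O throughput per 2U of rack space. All per-device figures are generic assumptions for the sketch, not Micron or Facebook specifications.

```python
# Illustrative IOPS-per-rack-space comparison, HDD vs. NVMe flash.
# Device counts and IOPS figures are assumed round numbers.

hdd = {"iops": 200,     "per_2u": 24}  # assumed 3.5" HDDs in a 2U sled
ssd = {"iops": 100_000, "per_2u": 30}  # assumed 2.5" NVMe SSDs in a 2U sled

hdd_iops_per_2u = hdd["iops"] * hdd["per_2u"]   # 4,800 IOPS
ssd_iops_per_2u = ssd["iops"] * ssd["per_2u"]   # 3,000,000 IOPS

print(ssd_iops_per_2u / hdd_iops_per_2u)  # 625.0
```

Even with conservative assumptions, the flash sled delivers hundreds of times the random-I/O capability of spinning disks in the same rack space, which is the gap driving designs like Lightning.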

Facebook is calling Lightning a JBOF (just a bunch of flash). Part of the motivation behind it was to “maximize the amount of flash available to applications,” according to Chris Petersen, a hardware designer at Facebook. Lightning leverages the existing Knox design and adds significant SSD capabilities, including support for NVMe PCIe drives.

Micron’s Role in OCP

Since flash is becoming the cornerstone of server technologies for the future data center, it is no surprise that solutions from Micron are positioned to enable these applications – even those you have yet to think of!

Eric Endebrock, VP of Storage Solutions Marketing, and Mark Glasgow, VP of Worldwide Enterprise Sales, sat down for an interview with The Cube while at the OCP Summit. You can see their interview here. In it, Mark Glasgow comments on how the world of big data has shifted from doing everything in batch to doing everything in real time. The storage and compute technologies Micron provides will allow organizations to achieve this real-time analysis of data. He also alludes to ‘crazy interesting’ stuff coming down the pipe from Micron in the near future. Be sure to stay tuned!
