Wuerthwein teaches at the University of California, San Diego, and is currently “developing, deploying, and now operating a worldwide distributed computing system for high throughput computing with large data volumes” for the Large Hadron Collider at CERN. Today, “large” data volumes are measured in petabytes. By 2020, he expects this to grow to exabytes.

I want to have maximally broad exposure, so I have the maximum number of avenues through which people can engage with me. If I talk only about Dynamic Data Centers, I can miss out on ten other conversations that are worth having. For example, we have this dichotomy between structured and unstructured data. On one side you have Oracle-like structured data, and at the other extreme you have unstructured data, where you don’t even know what to look for until you actually look for a specific purpose. I want to position my hundreds of petabytes of data from particle physics on this continuum; I don’t see it as an either/or. There is a lot of grey in between.
