How will enormous data sets and an endless stream of ever-more granular variables drive supercomputing in the coming years? Will it be like a dust storm that buries us, or flood waters we can redirect and manage? How will it alter the evolution of architectures and subsystems? How will it change computer science education, development tools and job descriptions? And will gargantuan data become a barrier on the road to exascale and beyond, sapping already-shrinking reserves of funding and creativity?