News

Climate change is a global process that affects everyone on Earth. However, until recently, the tools used for climate change research and the data driving and produced by those studies were accessible to only a small portion of the population. Last week, as part of RDCEP’s Discovery Engines: Under the Hood workshop series, David Kelly and Michael Glotter taught an audience of researchers and students how they can use newly available climate data and tools.

An article from Benjamin Recchie of the University of Chicago Research Computing Center looks at how CI Senior Fellow John Goldsmith and graduate student Jackson Lee use high-performance computing to better understand how computers -- and by extension, humans -- learn the rules of language.

A new model of electric shock injury, run on the CI’s Beagle supercomputer by the research group of University of Chicago Medicine’s Raphael Lee, aims to explain the harmful effects of electricity and inspire new treatments and protective gear.

When something goes wrong while you're running a program on your personal computer, the worst outcome is typically a reboot and the loss of any unsaved work. But when an application crashes on a supercomputer, the consequences can be much more dramatic. In his talk at the Computation Institute, Argonne's Franck Cappello discussed new resilience strategies for the next era of supercomputing.

To encourage building owners to assess and reduce their energy usage, the City of Chicago passed the Building Energy Use Benchmarking Ordinance in 2013, requiring certain properties to report energy data. Late last year, that mandate produced the first Building Energy Benchmarking Report, containing insight and visualizations produced in part with the CI’s Urban Center for Computation and Data (UrbanCCD).

Science appears to be slowing down. In his research into how scientific publications influence one another, Knowledge Lab postdoc Aaron Gerow found that as fields develop bodies of research, their citations tend to lag several years behind the present day.

When people talk about the current tech boom, it usually conjures up images of phone apps, social media networks, and startups with one-word names. But inside the public sector, a quieter tech revolution stirs, as governments increasingly recognize the power of data to help them serve their constituents more effectively. This trend creates a new kind of skills gap, as governments look for people with both the technical skills and civic motivation to analyze data and build tools for internal and external use.

More and more industries now use modeling and simulation as critical tools for engineering and design. But as the detail and scale of these simulations grow, many companies hit a computational ceiling, unable to perform these advanced calculations as quickly as needed. With a boost from the Chicago Innovation Exchange, Parallel.Works, a new startup company from CI scientists, hopes to provide industries with parallel computing solutions that break through this barrier with a minimum of fuss.

When the metagenomics platform MG-RAST was launched in 2007, data was scarce. Created as a public resource for annotating microbial genomes from environmental samples, MG-RAST originally served a small community of scientists with relatively small datasets, due to the great expense of gene sequencing.