To maintain its position at the forefront of international research, the Institute for Computational Cosmology at Durham University wanted to develop a new high-performance computing cluster. Find out how they did it.

Deep learning opens up new worlds of possibility in artificial intelligence, enabled by advances in computational capacity, the explosion in data, and the advent of deep neural networks. But data is evolving quickly, and legacy storage systems are not keeping up. Advanced AI applications require a modern all-flash storage infrastructure that is built specifically to work with high-powered analytics.

Since SAP introduced its in-memory database, SAP HANA, customers have significantly accelerated everything from their core business operations to big data analytics. But capitalizing on SAP HANA’s full potential requires computational power and memory capacity beyond the capabilities of many existing data center platforms.
To ensure that deployments in the AWS Cloud could meet the most stringent SAP HANA demands, AWS collaborated with SAP and Intel to deliver the Amazon EC2 X1 and X1e instances, part of the Amazon EC2 Memory-Optimized instance family. With four Intel® Xeon® E7-8880 v3 processors (which can power 128 virtual CPUs), X1 offers more memory than any other SAP-certified cloud-native instance available today.
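As a rough illustration of what provisioning such an instance looks like, the sketch below uses the AWS SDK for Python (boto3) to launch a memory-optimized X1 instance. The AMI ID, key pair, and subnet are placeholders, and a production SAP HANA deployment would follow the SAP-certified reference configurations rather than this minimal call.

    import boto3

    # Minimal sketch: launch a memory-optimized X1 instance for SAP HANA.
    # The AMI ID, key pair, and subnet below are placeholders, not real values.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",        # placeholder: a SAP HANA-certified AMI
        InstanceType="x1.32xlarge",    # 128 vCPUs, close to 2 TB of RAM
        KeyName="my-key-pair",         # placeholder key pair
        SubnetId="subnet-xxxxxxxx",    # placeholder subnet
        MinCount=1,
        MaxCount=1,
        EbsOptimized=True,             # recommended for HANA data and log volumes
    )
    print(response["Instances"][0]["InstanceId"])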

Teachers have always experimented with new technology and how it can be integrated to augment the lessons and content given to students. Classroom sets of books afforded teachers the opportunity to give homework, movie projectors and televisions offered an opportunity to display new content, and calculators transformed computational mathematics. Augmented and virtual reality are new tools that can extend pedagogy with new materials and content. Students can travel to historical landmarks, world heritage sites, and past events from the safety of their classroom. Books can be scanned to reveal videos and three-dimensional material selected by the teacher, enriching what is available to the student.
Download this whitepaper to learn more.

Deep learning opens up new worlds of possibility in artificial intelligence, enabled by advances in computational capacity, the explosion in data, and the advent of deep neural networks. But data is evolving quickly and legacy storage systems are not keeping up. Read this MIT Technology Review custom paper to learn how advanced AI applications require a modern all-flash storage infrastructure that is built specifically to work with high-powered analytics, helping to accelerate business outcomes for data-driven organizations.

This paper provides CIMdata's perspective on Computational Fluid Dynamics (CFD) analysis: the motivations for its use, its value and future, and the importance of making CFD available to all engineers earlier in the product design/development lifecycle.

Data movement and management is a major pain point for organizations operating HPC environments. Whether you are deploying a single cluster or managing a diverse research facility, you should be taking a data-centric approach. As data volumes grow and the cost of compute drops, managing data consumes more of the HPC budget and computational time. The need for data-centric HPC architectures grows dramatically as research teams pool their budgets to purchase shared systems and improve overall utilization. Learn more in this white paper about the key considerations when expanding from traditional compute-centric to data-centric HPC.

The IBM Platform LSF family provides a complete set of workload management capabilities for demanding, distributed HPC environments. In this video, learn how a genomics workflow can be managed in a multi-architecture, hybrid-cloud environment with the IBM Platform LSF family. Featuring IBM Platform Application Center and IBM Process Manager, the video shows how these add-on products can help drive productivity through easy-to-use interfaces for managing complex computational workflows.
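As a loose sketch of the kind of dependency-driven genomics pipeline such tooling manages, the Python snippet below submits two stages to LSF with bsub, holding variant calling until alignment finishes. The queue name, resource requests, and tool commands are purely illustrative; the add-on products above provide graphical interfaces in place of scripting like this.

    import subprocess

    # Stage 1: alignment job, named "align" so later jobs can depend on it.
    subprocess.run(
        ["bsub", "-J", "align", "-n", "8", "-q", "normal", "-o", "align.%J.out",
         "bwa mem ref.fa sample.fastq > sample.sam"],   # illustrative command
        check=True,
    )

    # Stage 2: variant calling, released only after the "align" job completes successfully.
    subprocess.run(
        ["bsub", "-J", "call", "-w", "done(align)", "-n", "4", "-q", "normal",
         "-o", "call.%J.out",
         "gatk HaplotypeCaller -I sample.bam -O sample.vcf"],   # illustrative command
        check=True,
    )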

This paper, nominated for the DesignCon 2016 Best Paper Award, analyzes the computational procedure specified for Channel Operation Margin (COM) and compares it to traditional statistical eye/BER analysis.
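For context (a paraphrase of the standard definition, not the paper's own analysis), COM as specified in IEEE 802.3 Annex 93A is a signal-to-noise-style figure of merit expressed in decibels:

    \mathrm{COM} = 20 \log_{10}\!\left(\frac{A_s}{A_{ni}}\right)

where A_s is the available signal amplitude after equalization and A_{ni} is the aggregate noise and interference amplitude at the target detector error ratio, which is what makes COM directly comparable to statistical eye/BER margins.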

The data center is central to IT strategy and houses the computational power, storage resources, and applications necessary to support an enterprise business. A flexible data center infrastructure that can support and quickly deploy new applications can result in significant competitive advantage, but designing such a data center requires solid initial planning and thoughtful consideration of port density, access-layer uplink bandwidth, true server capacity, oversubscription, mobility, and other details.
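As a simple worked example of one of those details, the sketch below computes the access-layer oversubscription ratio for a hypothetical top-of-rack switch; the port counts and speeds are illustrative only.

    # Hypothetical top-of-rack switch: 48 x 10 GbE server-facing ports,
    # 4 x 40 GbE uplinks to the aggregation layer.
    server_ports = 48
    server_port_gbps = 10
    uplinks = 4
    uplink_gbps = 40

    downstream_capacity = server_ports * server_port_gbps   # 480 Gbps
    upstream_capacity = uplinks * uplink_gbps                # 160 Gbps

    oversubscription = downstream_capacity / upstream_capacity
    print(f"Oversubscription ratio: {oversubscription:.0f}:1")   # 3:1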

Every ten to fifteen years, the types of workloads servers host swiftly shift. It happened with the first single-mission mainframes, and it is happening today as disruptive technologies appear in the form of big data, cloud, mobility and security. When such a shift occurs, legacy servers rapidly become obsolete, dragging down enterprise productivity and agility. Fortunately, each new server shift also brings its own suite of enabling technologies, which deliver new economies of scale and entirely new computational approaches.
In this interview, long-time IT technologist Mel Beckman talks to HP Server CTO for ISS Americas Tim Golden about his take on the latest server shift, innovative enabling technologies such as software-defined everything, and the benefit of a unified management architecture. Tim discusses key new compute technologies such as HP Moonshot, HP BladeSystem, HP OneView and HP Apollo, as well as the superiority of open standards over proprietary architectures for scalable, cost-effective infrastructure.

About us

DatacenterDynamics is a brand of DCD Group, a global B2B media and publishing company that develops products to help senior professionals in the world's most ICT-dependent organizations make risk-based infrastructure and capacity decisions.

Our portfolio of live events, online and print publishing, business intelligence and professional development brands is centred on the complexities of technology convergence. Operating in 42 different countries, we have developed a unique global knowledge and networking platform, which is trusted by over 30,000 ICT, engineering and technology professionals.