
Data Deduplication

In storage technology, data deduplication refers to the elimination of redundant data. In the deduplication process, duplicate copies are deleted, leaving only one copy of the data to be stored, while an index of all data is retained so that any copy can still be retrieved. Because only unique data is stored, deduplication reduces the required storage capacity.
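The process described above can be sketched as a toy content-addressed chunk store: identical chunks are kept once, and each file is recorded as an ordered list of chunk hashes (the retained "index"). The class name, chunk size, and method names here are illustrative assumptions, not any vendor's actual implementation.

```python
import hashlib

class DedupStore:
    """Toy dedup store: unique chunks are stored once; files are
    recorded as ordered lists of chunk hashes (the retained index)."""

    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}   # chunk hash -> unique chunk bytes
        self.index = {}    # filename -> ordered list of chunk hashes

    def store(self, name, data):
        hashes = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            # Duplicate chunks are not stored again; only the hash is recorded.
            self.chunks.setdefault(h, chunk)
            hashes.append(h)
        self.index[name] = hashes

    def restore(self, name):
        # Rebuild the original data from the retained index.
        return b"".join(self.chunks[h] for h in self.index[name])

store = DedupStore()
store.store("a.txt", b"ABCDABCDABCD")  # three identical 4-byte chunks
store.store("b.txt", b"ABCDEFGH")      # shares the "ABCD" chunk with a.txt
print(len(store.chunks))               # unique chunks actually stored
```

Although 20 bytes were written across two files, only two unique 4-byte chunks are stored; the index still lets either file be restored in full.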

The Vatican Apostolic Library implemented the Panduit Integrated Data Center Solution to create a robust and highly available network infrastructure to support the conservation of its literary treasures.

Big data is fueling a new economy—one based on insight. How can you create the valuable insights that are the currency for the new economy while controlling complexity? Apache Spark might be the answer.

Learn more about these trends and how Data Center Infrastructure Management (DCIM) software can help your staff improve productivity, improve awareness of potential issues, and enhance forecasting and decision making.

Now that the technology sector as a whole is becoming increasingly user-friendly, transparent, and hands-on, it makes sense for colocation data centers to offer a higher level of insight and transparency into their clients’ individual environments.

SimpliVity’s Data Virtualization Platform (DVP) is the market-leading hyperconverged infrastructure that delivers triple-digit data efficiency rates. The DVP was designed from the ground up to simplify IT by solving the data problem and dramatically improving overall data efficiency.

The University of East Anglia wished to create a “green” HPC resource, increase compute power, and support research across multiple operating systems. Platform HPC increased compute power from 9 to 21.5 teraflops, cut power consumption rates and costs, and provided flexible, responsive support.

The term “Big Data” has become virtually synonymous with “schema on read” unstructured data analysis and handling techniques like Hadoop. These “schema on read” techniques have been most famously exploited on relatively ephemeral human-readable data such as retail trends, Twitter sentiment, social network mining, and log files.
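The schema-on-read idea can be sketched in a few lines: raw records land in storage as-is, with no schema enforced at write time, and structure is imposed only when the data is read. The field names and sample records below are illustrative assumptions, not drawn from any particular Hadoop workload.

```python
import json

# Raw, unstructured records are stored as-is; nothing validates them on write.
raw_events = [
    '{"user": "ann", "action": "view", "item": "p1"}',
    '{"user": "bob", "action": "buy", "item": "p2", "price": 9.99}',
    'not even json',  # schema-on-read tolerates malformed input
]

def read_with_schema(lines):
    """Apply structure only at read time: parse each record, project the
    fields the query needs, and skip rows that do not fit the schema."""
    for line in lines:
        try:
            rec = json.loads(line)
        except ValueError:
            continue  # malformed rows are simply dropped at read time
        yield {"user": rec.get("user"), "action": rec.get("action")}

rows = list(read_with_schema(raw_events))
print(rows)
```

The trade-off this illustrates: writes are cheap and flexible, but every read pays the parsing cost and must decide how to handle records that do not match the expected shape.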