Lossy compression (i.e., compression using a technique in which a portion of the original information is lost) is acceptable for some forms of data (e.g., digital images) in some applications, but for most IT applications, lossless compression (i.e., compression using a technique that preserves the entire content of the original data, and from which the original data can be reconstructed exactly) is required.
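
As an illustration, the following minimal Python sketch uses the standard-library zlib module to demonstrate the defining property of lossless compression: the original data can be reconstructed exactly.

    import zlib

    original = b"Glossary text compresses well: repeated phrases, repeated phrases."

    compressed = zlib.compress(original)    # lossless DEFLATE compression
    restored = zlib.decompress(compressed)  # exact reconstruction

    assert restored == original             # byte-for-byte identical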

The policies, processes, practices, services and tools used to align the business value of data with the most appropriate and cost-effective storage infrastructure from the time data is created through its final disposition.

Data is aligned with business requirements through management policies and service levels associated with performance, availability, recoverability, cost, etc. DLM is a subset of ILM.

A set of services that control data from the time it is created until it no longer exists.

Data Management Services are not in the data path; rather, they provide control of, or utilize, data in the delivery of their services. This includes services such as data movement, data redundancy, and data deletion.

The length of the statistically expected continuous span of time over which data stored by a population of identical storage subsystems can be correctly retrieved, expressed as Mean Time to Data Loss (MTDL).
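
As a hedged illustration, the classic first-order estimate for a two-disk mirror (ignoring correlated failures and unrecoverable read errors; the MTBF and MTTR figures below are assumptions, not vendor data) can be computed as follows:

    mtbf_hours = 1_000_000   # assumed per-disk mean time between failures
    mttr_hours = 24          # assumed mean time to repair/rebuild

    # Data is lost only if the second disk fails during the first disk's
    # repair window; hence MTDL ~ MTBF^2 / (2 * MTTR) for a mirrored pair.
    mtdl_hours = mtbf_hours ** 2 / (2 * mttr_hours)
    print(f"MTDL ~ {mtdl_hours / 8760:.0f} years")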

Preserving the existence and integrity of data for some period of time or until certain events have transpired, or any combination of the two.

Retention requirements are expressed either as a time period, an event (e.g., the death of a patient), or a combination (e.g., 3 years after said death). Multiple requirements may be active, and some (e.g., judicial holds) may trump others.
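
A minimal sketch of how such requirements might be combined, assuming a hypothetical rule format (time periods, events, and holds, where a hold trumps any dated requirement):

    from datetime import timedelta

    def retain_until(created, rules):
        """Return the latest retention deadline among all active rules,
        or None for an indefinite hold (e.g., a judicial hold), which
        trumps any dated requirement. Rule format is illustrative."""
        deadlines = [created]
        for rule in rules:
            if rule["kind"] == "hold":
                return None
            if rule["kind"] == "period":
                deadlines.append(created + timedelta(days=rule["days"]))
            elif rule["kind"] == "event" and rule.get("occurred"):
                # e.g., 3 years (after_days=1095) after the death of a patient
                deadlines.append(rule["occurred"] + timedelta(days=rule["after_days"]))
        return max(deadlines)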

A process for deleting data that is intended to make the data unrecoverable.

One such process consists of repeated overwrites of data on disk. Data shredding is not generally held to make data completely unrecoverable in the face of modern forensic techniques; that requires shredding of the disks themselves. Forensic techniques, however, do require physical access to the storage media.
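
A minimal sketch of the overwrite approach (the helper name is illustrative; wear leveling, remapped sectors, and flash translation layers mean this does not guarantee unrecoverability on modern devices):

    import os

    def shred_file(path, passes=3):
        """Overwrite a file's contents with random data several times,
        then delete it. Not a guarantee against forensic recovery."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())  # push each pass to the device
        os.remove(path)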

Disk striping is commonly called RAID Level 0 or RAID 0 because of its similarity to common RAID data mapping techniques. It includes no redundancy, however, so strictly speaking, the appellation RAID is a misnomer.
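
The data mapping itself is straightforward; a sketch of a typical striped address calculation (the function name and parameters are illustrative):

    def raid0_map(vblock, n_disks, stripe_depth):
        """Map a virtual block number to (member disk, block on that disk)
        for a striped layout with stripe_depth blocks per strip."""
        strip, offset = divmod(vblock, stripe_depth)
        disk = strip % n_disks                 # strips rotate across disks
        row = strip // n_disks                 # full stripes completed
        return disk, row * stripe_depth + offset

    assert raid0_map(0, 4, 16) == (0, 0)
    assert raid0_map(16, 4, 16) == (1, 0)      # next strip lands on next disk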

The data transfer capacity of an I/O subsystem is an upper bound on its data transfer rate for any I/O load. For disk subsystem I/O, data transfer rate is usually expressed in MBytes/second (millions of bytes per second, where 1 million = 10^6) or Gbits/second (billions of bits per second, where 1 billion = 10^9). See data transfer capacity.
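
A worked example of the decimal-prefix arithmetic (the link rate is chosen for illustration):

    gbits_per_s = 4.0                               # nominal link rate
    mbytes_per_s = gbits_per_s * 1e9 / 8 / 1e6      # bits -> bytes, 10^9 -> 10^6
    print(mbytes_per_s)                             # 500.0 MBytes/second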

A set of computer programs with a user and/or programming interface that supports the definition of the format of a database and the creation of and access to its data.

A database management system removes the need for a user or program to manage low-level database storage. It also provides security for, and assures the integrity of, the data it contains. Types of database management systems include relational (table-oriented), network, hierarchical, and object-oriented.
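
As a concrete relational illustration, the Python standard library's sqlite3 module exposes exactly these functions: defining a database's format, then creating and accessing its data.

    import sqlite3

    con = sqlite3.connect(":memory:")   # a small relational DBMS
    # Definition of the database's format:
    con.execute("CREATE TABLE volume (name TEXT PRIMARY KEY, size_gb INTEGER)")
    # Creation of and access to its data:
    con.execute("INSERT INTO volume VALUES (?, ?)", ("vol0", 512))
    for row in con.execute("SELECT name, size_gb FROM volume"):
        print(row)                      # ('vol0', 512)
    con.close()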

1. A procedure that renders data unreadable by applying a strong magnetic field to the media.

2. Applying a degaussing procedure.

Degaussing is also called demagnetizing and erasure. Both of these terms are misleading, because in magnetic digital media the individual magnetic domains are not erased or demagnetized, but simply made to line up in the same direction, which eliminates any previous digital structure.

A protocol defined by the IETF for managing network traffic based on the type of packet or message being transmitted.

The Differentiated Services protocol is often abbreviated as DiffServ. DiffServ rules define how a packet flows through a network based on a 6-bit field (the Differentiated Services Code Point) in the IP header. The Differentiated Services Code Point specifies the "per hop behavior" (bandwidth, queuing and forward/drop status) for the packet or message.
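
A minimal sketch of how the 6-bit field is carried: the DSCP occupies the high six bits of the former IPv4 ToS byte (the Expedited Forwarding value below is the standard DSCP 46).

    def dscp_from_tos(tos_byte):
        """The DSCP is the high 6 bits of the IPv4 ToS / IPv6 Traffic
        Class byte; the low 2 bits are used for ECN."""
        return tos_byte >> 2

    # Expedited Forwarding (EF) is DSCP 46, i.e., a ToS byte of 0xB8.
    assert dscp_from_tos(0xB8) == 46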

Digital object auditing is the routine, periodic testing of stored digital objects, usually using cryptographic techniques, in which each object's previously recorded signature and time stamp are compared to its current values to verify that no change, loss of access, or data loss has occurred.
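
A minimal sketch of the comparison step, assuming a previously recorded SHA-256 digest for each object (the record format and helper name are hypothetical):

    import hashlib

    def audit_object(path, recorded_sha256):
        """Recompute a stored object's digest and compare it to the
        value recorded at ingest; a mismatch signals change or loss."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest() == recorded_sha256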

A preservation object provides the functionality required to assure the future ability to use, secure, interpret, and verify the authenticity of the metadata, information, and data in its container; it is the foundational element for digital preservation of information and data.

A digital preservation service includes a comprehensive management and curation function that controls its supporting infrastructure, information, data, and storage services in accordance with the requirements of the information objects it manages, so as to accomplish the goals of digital preservation.

Digital signatures can generally be externally verified by entities not in possession of the key used to sign the information. For example, when an asymmetric cryptosystem is used, a digital signature may be a secure hash of the information encrypted with the originator's private key. Some algorithms used in digital signatures (e.g., DSA) cannot be used to encrypt data.

The secret key used in DSA operates on the message hash generated by SHA-1; to verify a signature, one recomputes the hash of the message and uses the public key to check that the signature is valid for that hash (DSA verification is a comparison of computed values, not a decryption).
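
A sketch of hash-then-sign and verification using the third-party Python cryptography package (an assumption about tooling, not part of the DSA standard; modern deployments use SHA-2 rather than SHA-1):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import dsa

    private_key = dsa.generate_private_key(key_size=2048)
    message = b"record to be signed"

    signature = private_key.sign(message, hashes.SHA256())   # hash, then sign

    # Anyone holding only the public key can verify; verify() raises
    # InvalidSignature if the message or signature has been altered.
    private_key.public_key().verify(signature, message, hashes.SHA256())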

Directories are usually organized hierarchically; i.e., a directory may contain both information about files and objects and other directories. Directories are used to organize collections of files and other objects for application or human convenience.

2. A file or other persistent data structure in a file system that contains information about other files.

3. An LDAP-based repository consisting of class definitions and instances of those classes.

DEN's goals are to provide a consistent and standard data model to describe a network, its elements, and its policies and rules. Policies are defined to provide quality of service or to manage traffic to a specified class of service.

The recovery of data, access to data, and associated processing through a comprehensive process of setting up a redundant site (equipment and work space), with recovery of operational data, to continue business operations after a loss of use of all or part of a data center.

This involves not only an essential set of data, but also an essential set of all the hardware and software needed to continue processing that data and conducting business. Any disaster recovery may involve some amount of downtime.

1. Process by which each party obtains information held by another party or non-party concerning a matter. [ISO/IEC 27050-1]

Discovery is applicable more broadly than to parties in adversarial disputes. Discovery is also the disclosure of hardcopy documents, Electronically Stored Information and tangible objects by an adverse party. In some jurisdictions the term disclosure is used interchangeably with discovery.

2. The process of finding devices attached to a storage infrastructure.

3. The process of finding network interfaces in a networking infrastructure.

A set of disks from one or more commonly accessible disk subsystems, combined with a body of control software.

The control software presents the disks' storage capacity to hosts as one or more virtual disks. Control software is often called firmware or microcode when it runs in a disk controller. Control software that runs in a host computer is usually called a volume manager.

Disk blocks are of fixed usable size (with the most common being 512 bytes), and are usually numbered consecutively. Disk blocks are also the unit of on-disk protection against errors; whatever mechanism a disk employs to protect against data errors (e.g., ECC) protects individual blocks of data. See sector.
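
The consecutive numbering makes address arithmetic trivial; a sketch assuming the common 512-byte block size:

    BLOCK_SIZE = 512                 # most common fixed usable size

    def lba_to_byte_offset(lba):
        """Block n starts n * 512 bytes from the start of the disk."""
        return lba * BLOCK_SIZE

    assert lba_to_byte_offset(2048) == 1 << 20   # block 2048 = 1 MiB in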

This definition includes rotating magnetic and optical disks and solid-state disks, or non-volatile electronic storage elements. It does not include specialized devices such as write-once-read-many (WORM) optical disks, nor does it include so-called RAM disks implemented using software to control a dedicated portion of a host computer's volatile random access memory.

1. A Windows server that contains a copy of a user account database. A Windows domain may contain both primary and backup domain controllers.

2. The control function accessible directly by an N-Port attached to a switch and also addressable in other domains using the Domain Controller address identifier of "FF FC nn" hex, where nn is the remote Domain Controller being accessed.

A computer program that converts between IP addresses and symbolic names for nodes on a network in a standard way.

Most operating systems include a version of DNS. The service is defined by the IETF Standard RFCs 974, 1034, 1035, 1122, and 1123, and over a hundred subsequent RFCs that have not yet achieved full standard status.
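
Most programs reach DNS through the operating system's resolver; in Python, for example, the standard socket module performs the conversion in both the simple and the general form:

    import socket

    # Symbolic name -> IP address via the OS resolver (which consults DNS):
    print(socket.gethostbyname("example.com"))

    # getaddrinfo also returns IPv6 addresses and per-service port data:
    for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", "http"):
        print(family, sockaddr)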

A technique used to increase data transfer rate by constantly keeping two I/O requests for consecutively addressed data outstanding.

A software component begins a double-buffered I/O stream by making two I/O requests in rapid sequence. Thereafter, each time an I/O request completes, another is immediately made, leaving two outstanding. If a disk subsystem can process requests fast enough, double buffering allows data to be transferred at a disk or disk array's full volume transfer rate.
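
A minimal user-space sketch of the technique, using two worker threads to keep two requests outstanding (illustrative only; os.pread is POSIX-specific, and real implementations typically use asynchronous I/O):

    import os
    from concurrent.futures import ThreadPoolExecutor

    CHUNK = 1 << 20   # 1 MiB per request

    def double_buffered_read(path):
        """Yield a file's contents while always keeping two consecutive
        read requests outstanding."""
        fd = os.open(path, os.O_RDONLY)
        pool = ThreadPoolExecutor(max_workers=2)
        try:
            pending = [pool.submit(os.pread, fd, CHUNK, 0),
                       pool.submit(os.pread, fd, CHUNK, CHUNK)]
            offset = 2 * CHUNK
            while True:
                data = pending.pop(0).result()          # oldest request completes
                if not data:
                    break
                pending.append(pool.submit(os.pread, fd, CHUNK, offset))
                offset += CHUNK                         # immediately issue the next
                yield data
        finally:
            pool.shutdown(wait=True)
            os.close(fd)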

A pair of components, such as the controllers in a failure-tolerant storage subsystem, that share a task or class of tasks when both are functioning normally, but take on the entire task or class of tasks when one of the components fails.

Dual active controllers are connected to the same set of storage devices, and improve both I/O performance and failure tolerance compared to a single controller. Dual active components are also called active-active components.

The responsibility of managers and their organizations to provide for information security, ensuring that the type of control, the cost of control, and the deployment of control are appropriate for the system being managed. [NIST SP 800-30]