The sector is the minimum storage unit of a hard drive.[1] Most disk partitioning schemes are designed to have files occupy an integral number of sectors regardless of the file's actual size. Files that do not fill a whole sector will have the remainder of their last sector filled with zeroes; in practice, operating systems typically operate on blocks of data, which may span multiple sectors.[2]
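Since files occupy whole sectors, the on-disk footprint of a file can be computed by rounding its size up to the next sector boundary. The following minimal Python sketch (the 512-byte sector size and the example file size are illustrative values) shows the calculation and the resulting zero-filled slack:

```python
import math

def sectors_needed(file_size: int, sector_size: int = 512) -> int:
    """Number of whole sectors a file of file_size bytes occupies."""
    return math.ceil(file_size / sector_size)

def slack_bytes(file_size: int, sector_size: int = 512) -> int:
    """Bytes of the last sector left unused (zero-filled on disk)."""
    return sectors_needed(file_size, sector_size) * sector_size - file_size

# Example: a 1300-byte file on a drive with 512-byte sectors
print(sectors_needed(1300))  # 3 sectors
print(slack_bytes(1300))     # 236 bytes of zero fill in the last sector
```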

Geometrically, the word sector means a portion of a disk between a center, two radii and a corresponding arc (see Figure 1, item B), which is shaped like a slice of a pie. Thus, the disk sector (Figure 1, item C) refers to the intersection of a track and a geometrical sector.

In disk drives, each physical sector is made up of three basic parts: the sector header, the data area and the error-correcting code (ECC). The sector header contains information used by the drive and controller; this information includes sync bytes, address identification, a flaw flag and header parity bytes. The header may also include an alternate address to be used if the data area is undependable. The address identification is used to ensure that the mechanics of the drive have positioned the read/write head over the correct location. The data area contains the recorded user data, while the ECC field contains codes based on the data field, which are used to check and possibly correct errors that may have been introduced into the data.
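To make the layout concrete, here is a small Python sketch that models a physical sector as header, data and ECC fields. The field names and the use of CRC-32 as a stand-in for the drive's real error-correcting code are illustrative assumptions; actual on-platter formats are proprietary and use far stronger codes that can also correct, not just detect, errors:

```python
import zlib
from dataclasses import dataclass

@dataclass
class Sector:
    """Illustrative model of a physical sector (not a real drive format)."""
    address_id: int   # header: which sector this is, checked on each access
    flaw_flag: bool   # header: marks the sector as defective
    data: bytes       # data area: the recorded user data
    ecc: int          # stand-in "ECC": CRC-32 of the data area

    @classmethod
    def write(cls, address_id: int, data: bytes) -> "Sector":
        return cls(address_id, False, data, zlib.crc32(data))

    def read(self, expected_address: int) -> bytes:
        # The drive verifies the header address before trusting the data,
        # then checks the data area against the stored code.
        assert self.address_id == expected_address, "head over wrong sector"
        assert zlib.crc32(self.data) == self.ecc, "data error detected"
        return self.data

sector = Sector.write(address_id=42, data=b"\x00" * 512)
payload = sector.read(expected_address=42)
```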

Prior to the 1980s there was little standardization of sector sizes; disk drives had a maximum number of bytes per track, and various system manufacturers subdivided the track into different sector sizes to suit their OSes and applications. The popularity of the PC beginning in the 1980s and the advent of the IDE interface in the late 1980s led to the 512-byte sector becoming the industry-standard sector size for HDDs and similar storage devices.

In 2000 the industry trade organization International Disk Drive Equipment and Materials Association (IDEMA) began work to define the implementation and standards that would govern sector size formats exceeding 512 bytes, to accommodate future increases in data storage capacities.[3] By the end of 2007, in anticipation of a future IDEMA standard, Samsung and Toshiba began shipments of 1.8-inch hard disk drives with 4096-byte sectors. In 2010 IDEMA completed the Advanced Format standard for 4096-byte-sector drives,[3] setting January 2011 as the date for all manufacturers to transition from 512-byte to 4096-byte sectors, and Advanced Format drives soon became prevalent.

While sector specifically means the physical disk area, the term block has been used loosely to refer to a small chunk of data. Block has multiple meanings depending on the context: in the context of data storage, a filesystem block is an abstraction over disk sectors, possibly encompassing multiple sectors. In other contexts, it may be a unit of a data stream or a unit of operation for a utility.[4] For example, the Unix program dd allows one to set the block size to be used during execution with the parameter bs=bytes. This specifies the size of the chunks of data as delivered by dd, and is unrelated to sectors or filesystem blocks.
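The dd block size is purely a unit of I/O operation, as the following minimal Python sketch of a dd-style copy loop illustrates (the function and its parameters are hypothetical, written only to mirror dd's bs= behaviour):

```python
def dd_copy(src_path: str, dst_path: str, bs: int = 512) -> int:
    """Copy src to dst in bs-byte chunks, like `dd if=src of=dst bs=...`.
    bs only sets the transfer chunk size; it need not match the disk's
    sector size or the filesystem's block size."""
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(bs):
            dst.write(chunk)
            copied += len(chunk)
    return copied

# A larger bs means fewer, larger read/write calls,
# not a different on-disk layout:
# dd_copy("input.bin", "output.bin", bs=4096)
```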

In Linux, disk sector size can be determined with fdisk -l | grep "Sector size" and block size can be determined with blockdev --getbsz /dev/sda.[5]
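The Linux kernel exposes the same information under sysfs; a minimal sketch, assuming a Linux system with a /dev/sda device, that reads the logical and physical sector sizes directly:

```python
from pathlib import Path

def sector_sizes(device: str = "sda") -> tuple[int, int]:
    """Read logical and physical sector sizes from sysfs (Linux only)."""
    queue = Path("/sys/block") / device / "queue"
    logical = int((queue / "logical_block_size").read_text())
    physical = int((queue / "physical_block_size").read_text())
    return logical, physical

# On a 512e Advanced Format drive this typically returns (512, 4096).
# print(sector_sizes("sda"))
```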

If a sector is defined as the intersection between a radius and a track, as was the case with early hard drives and most floppy disks, the sectors towards the outside of the disk are physically longer than those nearer the spindle. Because each sector still contains the same number of bytes, the outer sectors have lower bit density than the inner ones, which is an inefficient use of the magnetic surface. The solution is zone bit recording, wherein the disk is divided into zones, each encompassing a small number of contiguous tracks; each zone is then divided into sectors such that each sector has a similar physical size. Because outer zones have a greater circumference than inner zones, they are allocated more sectors; this is known as zoned bit rate.[6]
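A rough model shows why outer zones hold more sectors: if bits are recorded at an approximately constant linear density, the number of sectors that fit on a track scales with its circumference. The density and radii below are invented for illustration only:

```python
import math

BITS_PER_MM = 10_000   # assumed constant linear recording density
SECTOR_BITS = 512 * 8  # user-data bits per sector (overhead ignored)

def sectors_per_track(radius_mm: float) -> int:
    """Sectors that fit on one track at the given radius."""
    circumference_mm = 2 * math.pi * radius_mm
    return int(circumference_mm * BITS_PER_MM / SECTOR_BITS)

# Representative radii from the innermost to the outermost zone:
for r in (15, 25, 35, 45):
    print(f"radius {r} mm: {sectors_per_track(r)} sectors/track")
# Outer tracks hold roughly three times the sectors of the innermost ones
# here, which is also why sequential transfers are fastest on outer tracks.
```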

A consequence of zone bit recording is that contiguous reads and writes are noticeably faster on outer tracks (corresponding to lower block addresses) than on inner tracks, as more bits pass under the head with each rotation; this difference can be 25% or more.

Hard disk drive manufacturers identified the need for larger sector sizes to deliver higher capacity points along with improved error correction capabilities. The traditional means of achieving storage capacity increases, which averaged 44 percent per year from 2000 to 2009,[citation needed] were projected to stagnate without revolutionary breakthroughs in magnetic recording system technologies.[citation needed] However, by lengthening the data field through the implementation of Advanced Format using 4096-byte sectors, hard disk drive manufacturers could increase the efficiency of the data surface area by five to thirteen percent while increasing the strength of the ECC.
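The efficiency gain comes from amortizing per-sector overhead over eight times as much data. The overhead figures below (15 bytes of gap/sync/address mark; ECC growing from 50 to 100 bytes, as described in the Advanced Format entry later in this article) are illustrative assumptions, not values taken from this article's sources:

```python
# Assumed per-sector overhead: gap/sync/address mark bytes + ECC bytes.
OVERHEAD_512 = 15 + 50   # legacy 512-byte sector
OVERHEAD_4K = 15 + 100   # Advanced Format 4096-byte sector (stronger ECC)

def efficiency(data_bytes: int, overhead: int) -> float:
    """Fraction of the on-platter sector devoted to user data."""
    return data_bytes / (data_bytes + overhead)

legacy = efficiency(512, OVERHEAD_512)         # ~0.887
advanced = efficiency(4096, OVERHEAD_4K)       # ~0.973
print(f"gain: {(advanced / legacy - 1):.1%}")  # ~9.6%, within the cited 5-13%
```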

1.
Data cluster
–
In computer file systems, a cluster or allocation unit is a unit of disk space allocation for files and directories. To reduce the overhead of managing on-disk data structures, the filesystem does not allocate individual disk sectors by default: on a disk that uses 512-byte sectors, a 512-byte cluster contains one sector, whereas a 4-kibibyte cluster contains eight sectors. A cluster is the smallest logical amount of space that can be allocated to hold a file. Storing small files on a filesystem with large clusters will therefore waste disk space; however, a larger cluster size reduces bookkeeping overhead and fragmentation, which may improve reading and writing speed overall. Typical cluster sizes range from 1 sector to 128 sectors. A cluster need not be physically contiguous on the disk; it may span more than one track or, if sector interleaving is used, may even be discontiguous within a track. This should not be confused with fragmentation, as the sectors are still logically contiguous. The term cluster was changed to allocation unit in DOS 4.0; however, the term cluster is still widely used.
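Cluster-level slack can be quantified the same way as sector slack, just at a coarser granularity; a minimal sketch with illustrative sizes:

```python
import math

def cluster_slack(file_size: int, cluster_size: int) -> int:
    """Bytes wasted in the last cluster when storing a file."""
    clusters = math.ceil(file_size / cluster_size)
    return clusters * cluster_size - file_size

# A 1-byte file still consumes a whole cluster:
print(cluster_slack(1, 4096))   # 4095 bytes wasted with 4 KiB clusters
print(cluster_slack(1, 65536))  # 65535 bytes wasted with 64 KiB clusters
```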

2.
Track (disk drive)
–
A disk drive track is a circular path on the surface of a disk or diskette on which information is magnetically recorded and from which recorded information is read. A track is a division of data in a disk drive. Tracks are concentric: through the physical platters, there is one data circle per cylinder of the disk drive. In other words, the number of tracks on a surface in the drive exactly equals the number of cylinders of the drive. The term track is sometimes prefaced with the word logical to emphasize that it is being used as an abstract concept.

3.
Magnetic disk
–
Magnetic storage or magnetic recording is the storage of data on a magnetised medium. Magnetic storage uses different patterns of magnetisation in a material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. As of 2017, magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, magnetic storage is preferred, as it is in the field of audio and video production; the distinction is less technical and more a matter of preference. Other examples of magnetic storage media include floppy disks, magnetic recording tape, and magnetic stripes on credit cards. Magnetic storage in the form of wire recording (audio recording on a wire) was publicized by Oberlin Smith in the September 8, 1888 issue of Electrical World. Smith had previously filed a patent in September 1878 but found no opportunity to pursue the idea, as his business was machine tools. The first publicly demonstrated magnetic recorder was invented by Valdemar Poulsen in 1898; Poulsen's device recorded a signal on a wire wrapped around a drum. In 1928, Fritz Pfleumer developed the first magnetic tape recorder. Early magnetic storage devices were designed to record analog audio signals; computers and now most audio and video magnetic storage devices record digital data. In old computers, magnetic storage was used for primary storage in the form of magnetic drum, core memory, core rope memory or thin-film memory, while magnetic tape was often used for secondary storage. Information is written to and read from the medium as it moves past devices called read-and-write heads that operate very close over the magnetic surface. The read-and-write head is used to detect and modify the magnetisation of the material immediately under it; there are two magnetic polarities, each of which is used to represent either 0 or 1. The magnetic surface is divided into many small sub-micrometer-sized magnetic regions, referred to as magnetic domains. Due to the nature of the magnetic material, each of these magnetic regions is composed of a few hundred magnetic grains. Magnetic grains are typically 10 nm in size and each forms a single magnetic domain. Each magnetic region in total forms a magnetic dipole which generates a magnetic field. For reliable storage of data, the recording material needs to resist self-demagnetisation, which occurs when the magnetic domains repel each other. Magnetic domains written too densely together onto a weakly magnetisable material will degrade over time as the magnetic moments of one or more domains rotate to cancel out these forces.

4.
Optical disc
–
The encoding material sits atop a thicker substrate which makes up the bulk of the disc and forms a dust defocusing layer. The encoding pattern follows a continuous, spiral path covering the disc surface. Most optical discs exhibit a characteristic iridescence as a result of the diffraction grating formed by their grooves. This side of the disc contains the actual data and is typically coated with a transparent material, usually lacquer. The reverse side of a disc usually has a printed label, sometimes made of paper. Optical discs are usually between 7.6 and 30 cm in diameter, with 12 cm being the most common size; a typical disc is about 1.2 mm thick, while the track pitch ranges from 1.6 µm down to 320 nm. An optical disc is designed to support one of three recording types: read-only, recordable, or re-recordable. Write-once optical discs commonly have an organic dye recording layer between the substrate and the reflective layer, while rewritable discs typically contain an alloy recording layer composed of a phase-change material, most often AgInSbTe, an alloy of silver, indium, antimony and tellurium. Optical discs are most commonly used for storing music, video, or data and programs for personal computers; the Optical Storage Technology Association promotes standardized optical storage formats. Although optical discs are more durable than earlier audio-visual and data storage formats, they are susceptible to environmental and daily-use damage; libraries and archives enact optical media preservation procedures to ensure continued usability in the optical disc drive or corresponding disc player. For computer data backup and physical data transfer, optical discs such as CDs and DVDs are gradually being replaced with faster, smaller solid-state devices, a trend expected to continue as USB flash drives continue to increase in capacity and drop in price. Additionally, music purchased or shared over the Internet has significantly reduced the number of audio CDs sold annually. The first recorded use of an optical disc was in 1884, when Alexander Graham Bell, Chichester Bell and Charles Sumner Tainter recorded sound on a glass disc using a beam of light. An early optical disc system existed in 1935, named Lichttonorgel. An early analog optical disc used for video recording was invented by David Paul Gregg in 1958 and patented in the US in 1961 and 1969; this form of optical disc was an early form of the DVD. U.S. patent 4,893,297, filed 1989 and issued 1990, generated royalty income for Pioneer Corporation's DVA until 2007, by then encompassing the CD, DVD, and Blu-ray systems. In the early 1960s, the Music Corporation of America bought Gregg's patents and his company. American inventor James T. Russell has been credited with inventing the first system to record a digital signal on an optical transparent foil, lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966 and he was granted a patent in 1970; following litigation, Sony and Philips licensed Russell's patents in the 1980s. Both Gregg's and Russell's discs are floppy media read in transparent mode. In the Netherlands in 1969, Philips Research physicist Pieter Kramer invented an optical videodisc in reflective mode with a protective layer, read by a focused laser beam.

5.
Byte
–
The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer. The size of the byte has historically been hardware-dependent, and no standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte; the international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits, and the popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size. The unit symbol for the byte was designated as the upper-case letter B by the IEC and IEEE, in contrast to the bit. Internationally, the unit octet, symbol o, explicitly denotes a sequence of eight bits, eliminating the ambiguity of the byte. The word is a respelling of bite to avoid accidental mutation to bit. Early computers used a variety of four-bit binary-coded decimal representations; these representations included alphanumeric characters and special graphical symbols. The prominence of the System/360, widely adopted by the U.S. government and universities during the 1960s, led to the ubiquitous adoption of the eight-bit storage size, even though in detail the EBCDIC and ASCII encoding schemes are different. In the early 1960s, AT&T introduced digital telephony, first on long-distance trunk lines, which used eight-bit µ-law encoding; this large investment promised to reduce costs for eight-bit data. The development of microprocessors in the 1970s popularized this storage size. A four-bit quantity is called a nibble, also nybble. The term octet is used to specify a size of eight bits and is used extensively in protocol definitions. Historically, the term octad or octade was used to denote eight bits as well, at least in Western Europe; however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers. The unit symbol for the byte is specified in IEC 80000-13 and IEEE 1541. In the International System of Quantities, B is the symbol of the bel, a unit of logarithmic power ratios named after Alexander Graham Bell, creating a conflict with the IEC specification; however, little danger of confusion exists, because the bel is a rarely used unit.

6.
Hard disk drive
–
The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order. HDDs are a type of non-volatile storage, retaining stored data even when powered off. Introduced by IBM in 1956, HDDs became the dominant secondary storage device for computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDDs historically, though after extensive industry consolidation most current units are manufactured by Seagate, Toshiba, and Western Digital. As of 2016, HDD production is growing, although unit shipments and sales revenues are declining. While SSDs have higher cost per bit, SSDs are replacing HDDs where speed, power consumption, small size, and durability are important. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000. The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as PATA, SATA, USB or SAS cables. Hard disk drives were introduced in 1956, as data storage for an IBM real-time transaction processing computer, and were developed for use with general-purpose mainframes and minicomputers. The first IBM drive, the 350 RAMAC in 1956, was approximately the size of two medium-sized refrigerators and stored five million six-bit characters on a stack of 50 disks. In 1962 the IBM 350 RAMAC disk storage unit was superseded by the IBM 1301 disk storage unit; cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches above the platter surface. Motion of the head array depended upon a binary system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes; access time was about a quarter of a second. Also in 1962, IBM introduced the model 1311 disk drive, which used removable disk packs; users could buy additional packs and interchange them as needed, much like reels of magnetic tape. Later models of removable-pack drives, from IBM and others, became the norm in most computer installations, and non-removable HDDs were called fixed disk drives. Some high-performance HDDs were manufactured with one head per track so that no time was lost physically moving the heads to a track; known as fixed-head or head-per-track disk drives, they were very expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named Winchester; its primary distinguishing feature was that the disk heads were not withdrawn completely from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to land on an area of the disk surface upon spin-down.

7.
CD-ROM
–
A CD-ROM /ˌsiːˌdiːˈrɒm/ is a pre-pressed optical compact disc which contains data. The name is an acronym which stands for Compact Disc Read-Only Memory: computers can read CD-ROMs, but cannot write to CD-ROMs which are not writable or erasable. From the mid-1990s until the mid-2000s, CD-ROMs were popularly used to distribute software for computers. Some CDs, called enhanced CDs, hold both computer data and audio, with the latter capable of being played on a CD player, while the data is only usable on a computer. An early CD-ROM format was developed by Sony and Denon and introduced at a Japanese computer show in 1984; it was an extension of Compact Disc Digital Audio that adapted the format to hold any form of digital data, with a capacity of 540 MiB. The Yellow Book is the standard that defines the format of CD-ROMs, one of a set of books that contain the technical specifications for all CD formats. CD-ROMs are identical in appearance to audio CDs, and data are stored and retrieved in a similar manner. Discs are made from a 1.2 mm thick disc of polycarbonate plastic, and data is stored on the disc as a series of microscopic indentations. A laser is shone onto the surface of the disc to read the pattern of pits, and the resulting pattern of changing intensity of the reflected beam is converted into binary data. Several formats are used for data stored on compact discs, known as the Rainbow Books. The Yellow Book, published in 1988, defines the specifications for CD-ROMs; the CD-ROM standard builds on top of the original Red Book CD-DA standard for CD audio, as do other standards such as the White Book for Video CDs. The Yellow Book itself is not freely available, but the standards with the corresponding content can be downloaded for free from ISO or ECMA. There are several standards that define how to structure data files on a CD-ROM: ISO 9660 defines the standard file system for a CD-ROM, while ISO 13490 is an improvement on this standard which adds support for non-sequential write-once and re-writeable discs such as CD-R and CD-RW, as well as multiple sessions. The ISO 13346 standard was designed to address most of the shortcomings of ISO 9660, and a subset of it evolved into the UDF format, which was adopted for DVDs. The bootable CD specification, issued in January 1995 to make a CD emulate a hard disk or floppy disk, is called El Torito. Data stored on CD-ROMs follows the standard CD data encoding techniques described in the Red Book specification. This includes cross-interleaved Reed–Solomon coding, eight-to-fourteen modulation, and the use of pits and lands; the structures used to group data on a CD-ROM are also derived from the Red Book.

8.
DVD
–
DVD is a digital optical disc storage format invented and developed by Philips, Sony, Toshiba, and Panasonic in 1995. The medium can store any kind of digital data and is widely used for software and video. DVDs offer higher capacity than compact discs while having the same dimensions. Pre-recorded DVDs are mass-produced using molding machines that physically stamp data onto the DVD; such discs are a form of DVD-ROM, because data can only be read and not written or erased. Blank recordable DVD discs can be recorded using a DVD recorder, and rewritable DVDs can be recorded and erased many times. DVDs containing other types of information may be referred to as DVD data discs. The OED also states that in 1995, "The companies said the name of the format will simply be DVD". Toshiba had been using the name "digital video disk", but that was switched to "digital versatile disk" after computer companies complained that it left out their applications. "Digital versatile disc" is the explanation provided in a DVD Forum Primer from 2000 and in the DVD Forum's mission statement. There were several formats developed for recording video on optical discs before the DVD. Optical recording technology was invented by David Paul Gregg and James Russell in 1958 and first patented in 1961. A consumer optical disc data format known as LaserDisc was developed in the United States; it used much larger discs than the later formats. CD Video used analog video encoding on optical discs matching the established standard 120 mm size of audio CDs, and Video CD became one of the first formats for distributing digitally encoded films in this format, in 1993. In the same year, two new optical disc formats were being developed. By the time of the launches of both formats in January 1995, the MMCD nomenclature had been dropped, and Philips and Sony were referring to their format as Digital Video Disc. Representatives from the SD camp asked IBM for advice on the file system to use for their disc. Alan E. Bell, a researcher from IBM's Almaden Research Center, got that request, and the group he worked with came to be referred to as the Technical Working Group, or TWG. On August 14, 1995, an ad hoc group formed from five computer companies issued a press release stating that they would only accept a single format. The TWG voted to boycott both formats unless the two camps agreed on a single, converged standard. They recruited Lou Gerstner, president of IBM, to pressure the executives of the warring factions. As a result, the DVD specification provided a storage capacity of 4.7 GB for a single-layered, single-sided disc and 8.5 GB for a dual-layered, single-sided disc.

9.
Advanced Format
–
Larger sectors enable the integration of stronger error correction algorithms to maintain data integrity at higher storage densities; without revolutionary breakthroughs in magnetic recording system technologies, areal densities, and with them storage capacities, were projected to stagnate. In August 2005, Seagate shipped test drives with 1K physical sectors to industry partners for testing. In 2010, industry standards for the first official generation of data sectors using a configuration of 4096 bytes per sector, or 4K, were completed, and all hard drive manufacturers committed to shipping new hard drive platforms in Advanced Format by January 2011. In the 4K format, eight 512-byte chunks of data are combined between the sector header and ECC areas, eliminating the need for redundant header areas between each individual 512-byte chunk. Format efficiency gains resulting from the 4K sector structure range from seven to eleven percent of physical platter space, and the 4K format provides enough space to expand the ECC field from 50 to 100 bytes to accommodate new ECC algorithms. The enhanced ECC coverage improves the ability to detect and correct processed data errors beyond the 50-byte defect length associated with the legacy 512-byte sector format. The Advanced Format standard employs the same gap, sync and address mark configuration as the traditional 512-byte sector layout. Early engagement with the Long Data Sector Committee provided the opportunity for component and software suppliers to prepare for the transition to Advanced Format. For example, Windows Vista, Windows 7, Windows Server 2008 and Mac OS X Tiger onwards can use Advanced Format drives, and OS X Mountain Lion 10.8.2 additionally supports encrypting them. Windows 8 and Windows Server 2012 also support 4Kn Advanced Format, and Oracle Solaris 10 and 11 support 4Kn and 512e hard disk drives for non-root ZFS file systems, while version 11.1 provides installation and boot support for 512e devices. Among the Advanced Format initiatives undertaken by the Long Data Sector Committee, maintaining backward compatibility with legacy computing components was a priority; for this purpose, several categories of Advanced Format devices were created. Many host computer hardware and software components assume the hard drive is configured around 512-byte sector boundaries; hard drives configured with 4096-byte physical sectors and 512-byte firmware are referred to as Advanced Format 512e, or 512 emulation drives. The translation of the 4096-byte physical format to a virtual 512-byte increment is transparent to the entity accessing the disk drive: read and write commands are issued to Advanced Format drives in the same 512-byte increments as to legacy drives. However, during the read process, the Advanced Format hard drive loads the entire 4096-byte sector containing the requested 512-byte data into memory located on the drive; the emulation firmware extracts and re-formats the specific data into a 512-byte chunk before sending the data to the host. The entire process typically occurs with little or no degradation in performance. The translation process is more complicated when writing data that is either not a multiple of 4K or not aligned to a 4K boundary: this operation, known as read-modify-write, can require an additional revolution of the magnetic disks. When using Advanced Format drives with legacy operating systems, it is important to realign the disk drive using software provided by the hard disk manufacturer.
Disk realignment is necessary to avoid a performance-degrading condition known as cluster straddling, where a shifted partition causes filesystem clusters to span partial physical disk sectors. Since cluster-to-sector alignment is determined when creating hard drive partitions, the realignment software is used after partitioning the disk.
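A simplified sketch of the read-modify-write cycle a 512e drive performs for a sub-sector write; the in-memory list standing in for the platter and the function names are illustrative only:

```python
PHYSICAL = 4096  # bytes per physical sector
LOGICAL = 512    # bytes per emulated logical sector

platter = [bytearray(PHYSICAL) for _ in range(1024)]  # stand-in for the media

def write_logical(lba: int, data: bytes) -> None:
    """Emulate a 512-byte write on a 4096-byte-sector drive."""
    assert len(data) == LOGICAL
    phys, offset = divmod(lba * LOGICAL, PHYSICAL)
    sector = bytearray(platter[phys])        # READ the whole 4K sector
    sector[offset:offset + LOGICAL] = data   # MODIFY the 512-byte slice
    platter[phys] = sector                   # WRITE the whole 4K sector back

# An aligned 4 KiB write (8 consecutive LBAs starting at a multiple of 8)
# replaces one physical sector outright; a misaligned one touches two
# physical sectors and costs two such cycles, hence the alignment advice.
write_logical(3, b"\xff" * LOGICAL)
```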

10.
Circular sector
–
A circular sector or circle sector is the portion of a disk enclosed by two radii and an arc, where the smaller area is known as the minor sector and the larger as the major sector. In the diagram, θ is the central angle in radians and r the radius of the circle. A sector with a central angle of 180° is called a half-disk and is bounded by a diameter. The angle formed by connecting the endpoints of the arc to any point on the circumference that is not in the sector is equal to half the central angle. The total area of a circle is πr². A circular segment is the part of the sector which remains after removing the triangle formed by the center of the circle and the two endpoints of the arc.
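The sector's area follows directly from the proportion of the central angle to the full circle; in the entry's notation (θ in radians, α in degrees, r the radius):

```latex
% Sector area as the fraction theta/(2 pi) of the full circle's area:
A_{\text{sector}} = \pi r^2 \cdot \frac{\theta}{2\pi} = \frac{1}{2} r^2 \theta,
\qquad
A_{\text{sector}} = \frac{\alpha}{360^{\circ}}\,\pi r^2
\quad \text{for a central angle } \alpha \text{ in degrees.}
```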

11.
Disk (mathematics)
–
In geometry, a disk is the region in a plane bounded by a circle. A disk is said to be closed if it contains the circle that constitutes its boundary, and open if it does not. In Cartesian coordinates, the open disk of center (a, b) and radius R is given by the formula D = {(x, y) : (x − a)² + (y − b)² < R²}, while the closed disk of the same center and radius replaces the strict inequality with ≤. The area of a closed or open disk of radius R is πR². The open disk and the closed disk are not topologically equivalent, as they have different topological properties from each other; for instance, every closed disk is compact whereas every open disk is not compact. However, from the viewpoint of algebraic topology they share many properties: both of them are contractible and so are homotopy equivalent to a single point. This implies that their homotopy groups are trivial, and all homology groups are trivial except the 0th one. The Euler characteristic of a point, and therefore of any disk, is 1. Every continuous map from the closed disk to itself has at least one fixed point; this is the case n = 2 of the Brouwer fixed point theorem. The statement is false for the open disk: consider, for example, a function that maps every point of the open unit disk to another point of the open unit disk slightly to the right of the given one; such a map has no fixed point on the open disk, although on the closed unit disk it fixes every point on the half circle x² + y² = 1, x > 0.

12.
Arc (geometry)
–
In Euclidean geometry, an arc is a closed segment of a differentiable curve. A common example in the plane is a segment of a circle, called a circular arc; in space, if the arc is part of a great circle, it is called a great arc. Every pair of points on a circle determines two arcs. The length L of an arc of a circle with radius r subtending a central angle θ (in radians) satisfies L / circumference = θ / 2π. Substituting the circumference 2πr gives L / 2πr = θ / 2π, so L = rθ; with α being the angle measured in degrees, L = (α/180)πr, since θ = (α/180)π. For example, if the angle is 60 degrees, the arc is one sixth of the circumference; this is so because the circumference of a circle and the degrees of a circle, of which there are always 360, are directly proportional. The area of the sector formed by an arc and the center of a circle is A = ½r²θ. The area A has the same proportion to the circle's area as the angle θ has to a full circle: A / πr² = θ / 2π. We can cancel π on both sides, giving A / r² = θ / 2, and multiplying both sides by r² gives the final result A = ½r²θ. Using the conversion described above, the area of the sector for an angle measured in degrees is A = (α/360)πr². The area bounded by the arc and the straight line between its two end points is ½r²(θ − sin θ); to get the area of this arc segment, we subtract the area of the triangle, determined by the circle's center and the two end points of the arc, from the sector area A. Using the intersecting chords theorem it is possible to calculate the radius r of a circle given the height H and width W of an arc: the perpendicular bisector of the arc's chord is another chord, which is a diameter of the circle. The length of the first chord is W, and it is divided by the bisector into two halves, each with length W/2. The total length of the diameter is 2r, and it is divided into two parts by the first chord: the length of one part is the sagitta of the arc, H, and the other part is the remainder of the diameter, with length 2r − H. Applying the intersecting chords theorem to these two chords produces (W/2)² = H(2r − H), whence 2r − H = W²/4H, so r = W²/8H + H/2.
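A quick numerical check of the radius formula, with illustrative values W = 8 and H = 2:

```latex
r = \frac{W^2}{8H} + \frac{H}{2} = \frac{64}{16} + 1 = 5,
% consistent with Pythagoras: r^2 = (W/2)^2 + (r - H)^2 = 16 + 9 = 25.
```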

13.
IBM Personal Computer
–
The IBM Personal Computer, commonly known as the IBM PC, is the original version and progenitor of the IBM PC compatible hardware platform. It is IBM model number 5150, and was introduced on August 12, 1981. It was created by a team of engineers and designers under the direction of Don Estridge of the IBM Entry Systems Division in Boca Raton, Florida. "IBM compatible" became an important criterion for sales growth; only the Apple Macintosh family kept significant market share without compatibility with the IBM personal computer. International Business Machines, one of the world's largest companies, had a 62% share of the mainframe computer market in 1981. Its share of the overall computer market, however, had declined from 60% in 1970 to 32% in 1980. In 1979 BusinessWeek asked, "Is IBM just another stodgy, mature company?", and by 1981 its stock price had declined by 22%. IBM's earnings for the first half of the year grew by 5.3%, one third of the inflation rate, while those of minicomputer maker Digital Equipment Corporation grew by more than 35%; IBM, it seemed, no longer dominated the computer business. IBM wished to avoid the same outcome with the new personal computer industry, dominated by the Commodore PET, Atari 8-bit family, Apple II and Tandy Corporation's TRS-80, a market with $150 million in sales by 1979 and projected growth of more than 40% in the early 1980s. The Japanese project, codenamed Go, ended before the 1981 release of the American-designed IBM PC, codenamed Chess. Whether IBM had waited too long to enter an industry in which Apple and others were already successful was unclear; an observer stated that IBM bringing out a personal computer would be like teaching an elephant to tap dance. By comparison, the successful microcomputer company Vector Graphic's fiscal 1980 revenue was $12 million. IBM only sold through its internal sales force, had no experience with resellers or retail stores, and did not introduce the first product designed to work with non-IBM equipment until 1980. Another observer claimed that IBM made decisions so slowly that, as with other large computer companies, its new products typically required about four to five years for development; IBM had to learn how to quickly develop, mass-produce and market new computers. The potential importance to microcomputers of a company so prestigious that a saying in American companies stated "No one ever got fired for buying IBM" was nonetheless clear. InfoWorld, which described itself as "The Newsweekly for Microcomputer Users", stated that the IBM PC "is far and away the media star, not because of its features, but because it exists at all. When the number eight company in the Fortune 500 enters the field, the influence of a personal computer made by a company whose name has literally come to mean computer to most of the world is hard to contemplate." The editorial acknowledged that some factions in the industry had looked upon IBM as the enemy. Desktop-sized programmable calculators by Hewlett-Packard had evolved into the HP 9830 BASIC-language computer by 1972. In 1972–1973 an IBM team developed SCAMP, a prototype that emulated an IBM 1130 minicomputer to run APL\1130; in 1973 APL was generally available only on mainframe computers, and most desktop-sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC.

14.
Parallel ATA
–
Parallel ATA, originally AT Attachment, is an interface standard for the connection of storage devices such as hard disk drives, floppy disk drives, and optical disc drives in computers. The standard is maintained by the X3/INCITS committee and uses the underlying AT Attachment and AT Attachment Packet Interface standards. The Parallel ATA standard is the result of a long history of incremental technical development; the ATA interface itself evolved in stages from Western Digital's original Integrated Drive Electronics (IDE) interface. As a result, many near-synonyms for ATA/ATAPI and its previous incarnations are still in informal use, in particular Extended IDE (EIDE). After the introduction of Serial ATA in 2003, the original ATA was renamed Parallel ATA. Parallel ATA cables have a maximum allowable length of only 18 in; because of this limit, the technology normally appears as an internal computer storage interface. For many years, ATA provided the most common and least expensive interface for this application; it has largely been replaced by SATA in newer systems. The PATA standard was originally conceived as the PC/AT Attachment, because its primary feature was a direct connection to the 16-bit ISA bus introduced with the IBM PC/AT. The AT in IBM PC/AT refers to Advanced Technology, but the ATA specifications simply use the name AT Attachment. The first version of what is now called the ATA/ATAPI interface was developed by Western Digital under the name Integrated Drive Electronics, and the first such drives appeared in Compaq PCs in 1986. The interface cards used to connect a parallel ATA drive to, for example, a PCI slot are not drive controllers; they are merely bridges between the host bus and the ATA interface, since the original ATA interface is essentially just a 16-bit ISA bus in disguise. The integrated controller presented the drive to the host computer as an array of 512-byte blocks with a relatively simple command interface; all of the details of the mechanical operation of the drive were now handled by the controller on the drive itself. This also eliminated the need to design a single controller that could handle many different types of drives: the host need only ask for a sector, or block, to be read or written. The interface used by these drives was standardized in 1994 as ANSI standard X3.221-1994; after later versions of the standard were developed, this became known as ATA-1. A short-lived, seldom-used implementation of ATA was created for the IBM XT; it has been referred to as XT-IDE, XTA or XT Attachment. At the time, in combination with the drive, this was sufficient for most people.

15.
Count key data
–
Count key data is a direct access storage device (DASD) data recording format introduced in 1964 by IBM with its IBM System/360 and still being emulated on IBM mainframes. It is a format in which each data record is represented by a Count Area that identifies the record and provides the number of bytes in an optional Key Area and in the Data Area. This is in contrast to devices using a fixed sector size or a separate format track. Count key data also refers to the set of channel commands that are generated by an IBM mainframe for execution by a DASD subsystem employing the CKD recording format. The initial set of CKD CCWs, introduced in 1964, was substantially enhanced and improved into the 1990s. The count field holds the identification of the record in cylinder-head-record format, the length of the key, and the length of the data. The key may be omitted or consist of a string of characters; most often the key is omitted and the record is located sequentially or by direct cylinder-head-record addressing, but when present, the key is locatable via hardware commands. Each CKD record consists of a count field, an optional key field, and an optional user data field, with error correction/detection information appended to each field and gaps separating each field. Because of the gaps and other information, the recorded space is larger than that required for just the count, key and data. Since the introduction of IBM's System/360 in 1964, nearly all IBM large and intermediate systems have used the CKD format. The advantages of the count key data record format are that the record size can be exactly matched to the application block size, and that CPU and memory requirements can be reduced by exploiting search-key commands; both synchronous and asynchronous operations are supported on later subsystems. Access to specific classes of I/O devices by an IBM mainframe is under the control of Channel Command Words (CCWs), some of which are generic but many of which are specific to the type of I/O device, e.g. Read, which reads information from the DASD. CKD CCWs are the set of CCWs used to access CKD DASD subsystems, in contrast to fixed block architecture CCWs, which are used to access FBA DASD subsystems. Initially there was a high degree of correspondence between the logical view of DASD accesses and the actual hardware: three-digit labels were affixed to identify the address of channel, control unit and device. On low-end systems the Channel and the Control Unit were frequently physically integrated but remained logically separate. IBM's New Attachment Strategy, beginning with the 3830 Model 2 in 1972, physically separated the storage control unit (SCU) into two physical entities, a director and a controller, while keeping them logically the same. DASD without a controller, that is, B-Units, have a B prefix in their model number; plug-compatible manufacturers offered many of the same DASD, including four CKD subsystems featuring unique DASD. The initial feature set provided by IBM with its 1964 introduction of the CKD track format included Defective/Alternate Track support, which enables an alternate track to replace a defective track transparently to the access method in use, and search commands: channel programs can search a large amount of information stored on a DASD, upon successful completion returning only the desired data. This mode of operating synchronous to the gap was later enhanced by additional CCWs enabling a nonsynchronous mode of operation. Channel switching allows an SCU to be shared between channels; initially two-channel switching was provided, and it was expanded to up to eight channels in later SCUs, where the channels can be on the same or different CPUs.
A Scan feature set was provided but not continued into future CKD subsystems beyond the 2314.
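A rough Python sketch of the variable-length CKD record structure described above; the field names follow the count-key-data description in the text, but the layout is schematic, not IBM's actual track format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CKDRecord:
    """Schematic count-key-data record (illustrative, not the real format)."""
    cylinder: int         # count area: record identification (CCHHR)
    head: int
    record: int
    key: Optional[bytes]  # optional key area, searchable by the hardware
    data: bytes           # data area, sized to the application's block

    @property
    def count_area(self) -> tuple[int, int, int, int, int]:
        """The count area: record ID plus key and data lengths."""
        key_len = len(self.key) if self.key else 0
        return (self.cylinder, self.head, self.record, key_len, len(self.data))

# Unlike fixed-sector formats, every record declares its own key/data lengths:
rec = CKDRecord(cylinder=5, head=2, record=1,
                key=b"CUSTOMER01", data=b"\x00" * 800)
print(rec.count_area)  # (5, 2, 1, 10, 800)
```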

16.
IDEMA
–
The International Disk Drive Equipment and Materials Association (IDEMA) is the trade association of the hard disk drive industry and its suppliers. As described above, IDEMA coordinated the industry's definition of sector formats larger than 512 bytes and completed the Advanced Format standard for 4096-byte sectors in 2010, setting January 2011 as the transition date for all manufacturers.

17.
Unix
–
Among these is Apple's macOS, which is the Unix version with the largest installed base as of 2014. Many Unix-like operating systems have arisen over the years, of which Linux is the most popular. Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmer users; the system grew larger as it started spreading in academic circles and users added their own tools to it. Unix was designed to be portable, multi-tasking and multi-user in a time-sharing configuration; these concepts are collectively known as the Unix philosophy. By the early 1980s users began seeing Unix as a universal operating system. Under Unix, the operating system consists of many utilities along with the master control program, the kernel. To mediate access to the hardware, the kernel has special rights, reflected in the division between user space and kernel space; the microkernel concept was introduced in an effort to reverse the trend towards larger kernels and return to a system in which most tasks were completed by smaller utilities. In an era when a standard computer consisted of a disk for storage and a data terminal for input and output, the Unix file model worked well. However, modern systems include networking and other new devices, and as graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores; in microkernel implementations, functions such as network protocols could be moved out of the kernel. Multics introduced many innovations, but had many problems. Frustrated by the size and complexity of Multics but not by its aims, the last Bell Labs researchers to leave the Multics project, Ken Thompson, Dennis Ritchie, M. D. McIlroy, and J. F. Ossanna, decided to redo the work on a much smaller scale. The name Unics, a pun on Multics, was suggested for the project in 1970; Peter H. Salus credits Peter Neumann with the pun, while Brian Kernighan claims the coining for himself. In 1972, Unix was rewritten in the C programming language. Bell Labs produced several versions of Unix that are collectively referred to as Research Unix. In 1975, the first source license for UNIX was sold to faculty at the University of Illinois Department of Computer Science; UIUC graduate student Greg Chesson was instrumental in negotiating the terms of this license. During the late 1970s and early 1980s, the influence of Unix in academic circles led to adoption of Unix by commercial startups and vendors, with systems including Sequent, HP-UX, Solaris and AIX. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4, and in the 1990s, Unix-like systems grew in popularity as Linux and BSD distributions were developed through collaboration by a worldwide network of programmers.

18.
Computer program
–
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function, and typically executes a program's instructions in a central processing unit. A computer program is written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler can derive machine code, a form consisting of instructions that the computer can directly execute. Alternatively, a computer program may be executed with the aid of an interpreter. A part of a computer program that performs a well-defined task is known as an algorithm. A collection of computer programs, libraries and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software or system software. The earliest programmable machines preceded the invention of the digital computer. In 1801, Joseph-Marie Jacquard devised a loom that would weave a pattern by following a series of perforated cards; patterns could be woven and repeated by arranging the cards. In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. The names of the components of the calculating device were borrowed from the textile industry, where yarn was brought from the store to be milled. The device would have had a store (memory) to hold 1,000 numbers of 40 decimal digits each, and numbers from the store would then have been transferred to the mill for processing. It was programmed using two sets of perforated cards, one to direct the operation and the other for the input variables. However, after more than 17,000 pounds of the British government's money, the thousands of cogged wheels and gears never fully worked together. During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea, which covered the Analytical Engine. The translation contained Note G, which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine; this note is recognized by some historians as the world's first written computer program. In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation that can be performed on a Turing-complete computing machine. It is a finite-state machine that has an infinitely long read/write tape; the machine can move the tape back and forth, changing its contents as it performs an algorithm.

19.
Zone bit recording
–
In computer storage, zone bit recording (ZBR) is a method used by disk drives to store more sectors per track on outer tracks than on inner tracks. It is different from plain constant angular velocity (CAV) recording: with ZBR the inner tracks are packed as densely as the particular drive's technology allows, whereas with a CAV drive the data on the outer tracks are less densely packed. This permits a ZBR drive to have more bits stored in each track outside of the innermost zone than drives not using this technique, and storing more bits per track equates to achieving a higher total data capacity on the same disk area. On a hard disk using ZBR, the data on the tracks in the outermost zone have the highest data transfer rate; since drives fill from the outer tracks first, testing disk drives when they are new or empty, or after defragmenting them, will often show their highest performance in benchmarking applications. Some other ZBR drives, such as the 800-kilobyte 3.5-inch floppy drive of the Apple Macintosh, instead keep the data rate constant by varying the rotational speed. Related examples and techniques include the Commodore 1541 floppy disk, the Apple Macintosh 400K/800K floppy disks, DVD-RAM, HD DVD-RW, most hard drives since the 1990s, zoned constant linear velocity, and constant linear velocity.

20.
Cylinder-head-sector
–
Cylinder-head-sector, also known as CHS, is an early method for giving addresses to each physical block of data on a hard disk drive. In the case of floppy drives, for which the same exact diskette medium can be truly low-level formatted to different capacities, this is still true. CHS addressing is the process of identifying individual sectors on a disk by their position in a track, where the track is determined by the head and cylinder numbers. The terms are explained bottom up; for disk addressing, the sector is the smallest unit. Floppy disks and controllers use physical sector sizes of 128, 256, 512 and 1024 bytes. The most common physical sector size for hard disks today is 512 bytes, but there have been hard disks with 520 bytes per sector as well, for non-IBM-compatible machines. In 2005 some Seagate custom hard disks used sector sizes of 1024 bytes per sector, and Advanced Format hard disks use 4096 bytes per physical sector since 2010, though they are also able to emulate 512-byte sectors for a transitional period. Magneto-optical drives use sector sizes of 512 and 1024 bytes on 5.25-inch drives and 512 and 2048 bytes on 3.5-inch drives. In CHS addressing the sector numbers always start at 1; there is no sector 0. For physical disk geometries the maximal sector number is determined by the low-level format of the disk; however, for access through the BIOS of IBM-PC compatible machines, the sector number was encoded in six bits, resulting in a maximum of 63 sectors per track. This maximum is still in use for virtual CHS geometries. The tracks are the thin concentric circular strips of sectors, and at least one head is required to read a single track. With respect to disk geometries the terms track and cylinder are closely related: for a single- or double-sided floppy disk, track is the common term; strictly speaking, a track is a given CH combination consisting of SPT sectors, while a cylinder consists of SPT×H sectors. A cylinder is a division of data in a disk drive; the concept is of concentric, hollow, cylindrical slices through the physical disks, collecting the respective circular tracks aligned through the stack of platters. The number of cylinders of a disk drive exactly equals the number of tracks on a surface in the drive; it comprises the same track number on each platter, spanning all such tracks across each platter surface that is able to store data. Cylinders are formed vertically by tracks; in other words, track 12 on platter 0 plus track 12 on platter 1, and so on, make up cylinder 12. A device called a head reads and writes data in a drive by manipulating the magnetic medium that composes the surface of an associated disk platter. Naturally, a platter has two sides and thus two surfaces on which data can be manipulated, and usually there are two heads per platter, one per side. The CHS addressing supported in IBM-PC compatible BIOS code used eight bits for the head number, theoretically allowing up to 256 heads, counted as head 0 up to head 255.
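The standard mapping from a CHS triple to a linear block address (LBA) follows directly from this geometry. A small sketch using the classic conversion formula, with the common BIOS-era geometry limits as illustrative defaults:

```python
def chs_to_lba(c: int, h: int, s: int,
               heads_per_cylinder: int = 255,
               sectors_per_track: int = 63) -> int:
    """Classic CHS -> LBA conversion; note sectors count from 1, not 0."""
    return (c * heads_per_cylinder + h) * sectors_per_track + (s - 1)

print(chs_to_lba(0, 0, 1))  # 0 -- the very first sector on the disk
print(chs_to_lba(1, 0, 1))  # 16065 -- one full cylinder with this geometry
```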

21.
Disk formatting
–
Disk formatting is the process of preparing a data storage device such as a hard disk drive, solid-state drive, floppy disk or USB flash drive for initial use. In some cases, the formatting operation may also create one or more new file systems. The first part of the process, which performs basic medium preparation, is often referred to as low-level formatting; partitioning is the common term for the second part of the process. The third part of the process, usually termed high-level formatting, most often refers to the process of generating a new file system. As a general rule, formatting a disk leaves most if not all existing data on the disk medium; special tools can remove user data by a single overwrite of all files and free space. A block, a contiguous number of bytes, is the minimum unit of storage that is read from or written to a disk. The earliest disk drives had fixed block sizes, but starting with the 1301, IBM marketed subsystems that featured variable block sizes; optical discs generally only use fixed block sizes. Formatting a disk for use by an operating system and its applications typically involves three different processes. Low-level formatting is intended to be the permanent foundation of the disk. Partitioning divides a disk into one or more regions, writing data structures to the disk to indicate the beginning and end of the regions; this level of formatting often includes checking for defective tracks or defective sectors. High-level formatting creates the file system format within a disk partition or a logical volume; this formatting includes the data structures used by the OS to identify the logical drive or partition's contents. It may occur during operating system installation, or when adding a new disk. Disk and distributed file systems may specify an optional boot block, and/or various volume and directory information for the operating system. The low-level format of floppy disks is performed by the drive's controller. Consider a standard 1.44 MB floppy disk: low-level formatting normally writes 18 sectors of 512 bytes to each of 160 tracks of the floppy disk, providing 1,474,560 bytes of storage on the disk. Physical sectors are actually larger than 512 bytes: in addition to the 512-byte data field they include a sector identifier field and CRC bytes, and these additional bytes are not normally included in the quoted figure for overall storage capacity of the disk. Different low-level formats can be used on the same media. Several freeware, shareware and free software programs allowed considerably more control over formatting, allowing the formatting of high-density 3.5-inch disks with a capacity up to 2 MB.
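The quoted floppy capacity is straightforward to verify from the stated geometry:

```python
# Standard 1.44 MB floppy geometry from the text: 160 tracks,
# 18 sectors per track, 512 bytes per sector.
tracks, sectors_per_track, sector_size = 160, 18, 512
capacity = tracks * sectors_per_track * sector_size
print(capacity)                  # 1474560 bytes
print(capacity / (1024 * 1000))  # 1.44 "MB" -- the marketing unit mixes
                                 # a binary KiB (1024) with a decimal k (1000)
```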

22.
Disk storage
–
Disk storage is a general category of storage mechanisms where data are recorded by various electronic, magnetic, optical, or mechanical changes to a surface layer of one or more rotating disks. A disk drive is a device implementing such a storage mechanism; notable types are the hard disk drive containing a non-removable disk, the floppy disk drive and its removable floppy disk, and various optical disc drives and associated optical disc media. On early disks, audio information was recorded by analog methods; similarly, the first video disc used an analog recording method. In the music industry, analog recording has mostly been replaced by digital optical technology, where the data are recorded in a digital format as optical information. The first commercial digital disk storage device was the IBM 350, which shipped in 1956 as a part of the IBM 305 RAMAC computing system. The random-access, low-density storage of disks was developed to complement the already used sequential-access, high-density storage provided by tape drives using magnetic tape. Disk storage is now used in both computer storage and consumer electronic storage, e.g. audio CDs and video discs. Digital disk drives are block storage devices: each disk is divided into logical blocks, which are addressed using their logical block addresses, and reading from or writing to disk happens at the granularity of blocks. Originally disk capacity was quite low, and it has been improved in one of several ways: improvements in mechanical design and manufacture allowed smaller and more precise heads, and advancements in data compression methods permitted more information to be stored in each of the individual sectors. The drive stores data onto cylinders, heads, and sectors; the sector is the smallest unit of data stored in a hard disk drive, and each file will have many sectors assigned to it. The smallest entity in a CD is called a frame, which consists of 33 bytes: 24 bytes of data, eight CIRC error-correction bytes, and one subcode byte used for control and display. The information is sent from the processor to the BIOS, then into a chip controlling the data transfer, and then out to the drive via a multi-wire connector. Once the data are received on the board of the drive, they are passed to a chip on the circuit board that controls the access to the drive. The drive is divided into sectors of data stored onto one of the sides of one of the internal disks; an HDD with two disks internally will typically store data on all four surfaces.

23.
Disk partitioning
–
Disk partitioning or disk slicing is the creation of one or more regions on a hard disk or other secondary storage, so that an operating system can manage information in each region separately. Partitioning is typically the first step of preparing a newly manufactured disk; the disk stores the information about the partitions' locations and sizes in an area known as the partition table, which the operating system reads before any other part of the disk (a minimal parsing sketch follows this entry). Each partition then appears in the system as a distinct logical disk that uses part of the actual disk. Partitioning a drive divides its total storage into different pieces; once a partition is created, it can be formatted so that it can be used by a computer.

Creating more than one partition has several advantages: separating the operating system and program files from user files, which allows image backups to be made of only the operating system; providing a separate area for operating system virtual memory swapping/paging; keeping frequently used programs and data near each other; and keeping cache and log files, which can change size dynamically and rapidly and can potentially make a file system full, separate from other files. Multiple partitions also permit multi-boot setups, which allow users to have more than one operating system on a single computer, and help protect or isolate files, making it easier to recover a corrupted file system or operating system installation: if one partition is corrupted, other file systems may not be affected.

Partitioning can also raise overall computer performance on systems where smaller file systems are more efficient. One example is short stroking, which aims to minimize performance-eating head-repositioning delays by reducing the number of tracks used per HDD. The basic idea is to make one partition approximately 20–25% of the total size of the drive, a partition that is expected to occupy the outer tracks of the HDD; if capacity is limited with short stroking, the minimum throughput stays much closer to the maximum. This technique, however, is not inherently tied to creating multiple partitions. For example, a 1 TB disk may have an access time of 12 ms at 200 IOPS with an average throughput of 100 MB/s; when it is partitioned down to 100 GB, access time may decrease to 6 ms at 300 IOPS with a throughput of 200 MB/s.

Partitioning for significantly less than the available size, when that disk space is not needed, can also reduce the time for diagnostic tools such as checkdisk or full image backups to run. On the other hand, partitioning prevents disk optimizers from moving all frequently accessed files closer to each other on the disk; files can only be moved closer to each other within each partition. (This issue does not apply to solid-state drives, as access times on those are neither affected by nor dependent upon relative sector positions.) Partitioning may also prevent use of the whole disk capacity, because it may break free capacity apart across partitions.
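As a concrete illustration of the partition table mentioned above, here is a minimal Python sketch that parses the four primary entries of a classic MBR-style table; it assumes an MBR layout (not GPT) with 512-byte sectors and is an added example, not material from the article:

    import struct

    def read_mbr_partitions(path):
        with open(path, "rb") as f:
            mbr = f.read(512)              # the partition table lives in the first sector
        if mbr[510:512] != b"\x55\xaa":    # standard MBR boot signature
            raise ValueError("missing MBR boot signature")
        parts = []
        for i in range(4):                 # four 16-byte entries start at offset 446
            entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
            ptype = entry[4]               # partition type byte; 0 marks an unused slot
            start_lba, num_sectors = struct.unpack_from("<II", entry, 8)
            if ptype != 0:
                parts.append({"index": i + 1, "type": hex(ptype),
                              "start_lba": start_lba, "sectors": num_sectors})
        return parts

    # Usage (requires read access to the device, or point it at a raw disk image):
    #   print(read_mbr_partitions("/dev/sda"))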

24.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country.

The ISBN scheme was devised in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966: it was introduced in the United Kingdom by David Whitaker and in 1968 in the United States by Emery Koltay. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108; the United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines.

An SBN may be converted to an ISBN by prefixing the digit 0. For example, the second edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340013818, with 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated, since the leading zero contributes nothing to the weighted sum (the arithmetic is sketched at the end of this entry). Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with "Bookland" European Article Number EAN-13s.

A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is likewise done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits.

ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for a given country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations, such as bibliographic data providers, that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture, whereas in the United Kingdom, the United States and some other countries the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
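To make the check-digit remarks concrete, the following sketch (an added illustration) implements the standard ISBN-10 and ISBN-13 check-digit rules and applies them to the example from the text:

    def isbn10_check_digit(first9):
        # Digits are weighted 10 down to 2; the check digit brings the
        # total to a multiple of 11, with a value of 10 written as 'X'.
        total = sum(w * int(d) for w, d in zip(range(10, 1, -1), first9))
        check = (11 - total % 11) % 11
        return "X" if check == 10 else str(check)

    def isbn13_check_digit(first12):
        # EAN-13 rule: alternating weights of 1 and 3, modulo 10.
        total = sum((3 if i % 2 else 1) * int(d) for i, d in enumerate(first12))
        return str((10 - total % 10) % 10)

    # Prefixing SBN 340013818 with 0 gives the ISBN-10 body 034001381;
    # the leading zero adds nothing to the weighted sum, so the check
    # digit remains 8.
    print(isbn10_check_digit("034001381"))     # '8' -> ISBN 0-340-01381-8
    print(isbn13_check_digit("978034001381"))  # '6' -> the corresponding EAN-13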

HDD with disks and motor hub removed, exposing the copper-colored stator coils surrounding a bearing in the center of the spindle motor. The orange stripe along the side of the arm is a thin printed-circuit cable; the spindle bearing is in the center and the actuator is in the upper left.

Head stack with an actuator coil on the left and read/write heads on the right

Original IBM Personal Computer motherboard, IBM 5150. It has five expansion slots (an interface later called "PC/XT-bus" by IBM and "8-bit ISA" by other manufacturers of compatible computers), and two DIN connectors for keyboard and cassette interface.

Parallel ATA (PATA), originally AT Attachment, is an interface standard for the connection of storage devices such as …

Example of a 1992 80386 PC motherboard with nothing built in other than memory, keyboard, processor, cache, real-time clock, and slots. Such basic motherboards could have been outfitted with either the ST-506 or the ATA interface, but usually not both. A single two-drive ATA interface and a floppy interface were added to this system via the 16-bit ISA card.

An Oak Technology Mozart 16 16-bit ISA sound card, from when the CD-ROM drive interface had not yet been standardized. This card offers four separate interface connectors for IDE, Panasonic, Mitsumi, and Sony CD-ROM drives, but only one connector could be used, since they all shared the same interface wiring.

A SoundBlaster 32 16-bit ISA sound card, from after connector standardization had occurred, with an IDE interface for the CD-ROM drive.