Upgrading & Repairing PCs Eighth Edition -- Ch 14 -- Hard Disk Drives




- 14 -

Hard Disk Drives


To most users, the hard disk drive is the most important, yet most mysterious, part of a computer system. A hard disk drive is a sealed unit that holds the data in a system. When the hard disk fails, the consequences usually are very serious. To maintain, service, and expand a PC system properly, you must fully understand the hard disk unit.

Most computer users want to know how hard disk drives work and what to do when a problem occurs. Few books about hard disks, however, cover the detail necessary for the PC technician or sophisticated user. This chapter corrects that situation.

This chapter thoroughly describes the hard disk drive from a physical, mechanical, and electrical point of view. In particular, this chapter examines the construction and operation of a hard disk drive in a practical sense.

Definition of a Hard Disk

A hard disk drive contains rigid, disk-shaped platters usually constructed of aluminum or glass. Unlike floppy disks, the platters cannot bend or flex--hence the term hard disk. In most hard disk drives, the platters cannot be removed; for that reason, IBM calls them fixed disk drives. One removable hard disk drive has been very popular of late: the Jaz drive by Iomega. Unlike its smaller sibling, the Zip drive, the Jaz drive uses removable media made up of the same hard platters found in any fixed disk drive.

Hard disk drives used to be called Winchester drives. This term dates back to the early 1970s, when IBM developed a high-speed hard disk drive that had 30M of fixed-platter storage and 30M of removable-platter storage. The drive had platters that spun at high speeds and heads that floated over the platters while they spun in a sealed environment. That drive, the 30-30 drive, soon received the nickname Winchester after the famous Winchester 30-30 rifle. After that time, drives that used a high-speed spinning platter with a floating head also became known as Winchester drives. The term has no technical or scientific meaning; it is a slang term, and is considered synonymous with hard disk.

Capacity Measurements

To eliminate confusion in capacity measurements, I will be using the abbreviation M in this section. The true industry standard abbreviations for these figures are shown in Table 14.1.

Table 14.1  Standard Abbreviations and Meanings

Abbreviation Description Decimal Meaning Binary Meaning
Kbit Kilobit 1,000 1,024
K Kilobyte 1,000 1,024
Mbit Megabit 1,000,000 1,048,576
M Megabyte 1,000,000 1,048,576
Gbit Gigabit 1,000,000,000 1,073,741,824
G Gigabyte 1,000,000,000 1,073,741,824
Tbit Terabit 1,000,000,000,000 1,099,511,627,776
T Terabyte 1,000,000,000,000 1,099,511,627,776

Unfortunately, there are no differences in the abbreviations used to indicate metric versus binary values. In other words, M can be used to indicate both "millions of bytes" and megabytes. In general, memory values are always computed using the binary-derived meanings, while disk capacity goes either way. This often leads to confusion in reporting disk capacities.
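To see how much the two meanings of M can differ in practice, here is a short sketch (in Python, which is my addition, not part of this book; the byte count is simply an example figure I chose to illustrate the point):

```python
# Decimal vs. binary megabytes, per Table 14.1.
DECIMAL_M = 1_000_000        # "millions of bytes"
BINARY_M = 1_048_576         # 2**20 bytes

def capacity_in_m(total_bytes):
    """Report the same drive capacity both ways."""
    return total_bytes / DECIMAL_M, total_bytes / BINARY_M

# A hypothetical drive holding 2,111,864,832 bytes:
dec_m, bin_m = capacity_in_m(2_111_864_832)
print(f"{dec_m:.0f}M decimal = {bin_m:.0f}M binary")
```

The same drive reports roughly 2,112M by the decimal meaning but only about 2,014M by the binary meaning--nearly a 100M discrepancy, which is exactly the confusion described above.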

Hard Drive Advancements

In the 15 or more years that hard disks have commonly been used in PC systems, they have undergone tremendous changes. The sections that follow describe some of the most profound of those changes in PC hard disk storage.

Areal Density

Areal density has been used as a primary technology-growth-rate indicator for the hard disk drive industry. Areal density is defined as the product of the linear bits per inch (BPI), measured along the length of the tracks around the disk, multiplied by the number of tracks per inch (TPI) measured radially on the disk. The results are expressed in units of Mbit per square inch (Mbit/sq-inch) and are used as a measure of efficiency in drive recording technology. Current high-end 2.5-inch drives record at areal densities of about 1.5Gbit per square inch (Gbit/sq-inch). Prototype drives with densities as high as 10Gbit/sq-inch have been constructed, allowing for capacities of more than 20G on a single 2 1/2-inch platter for notebook drives.
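As a quick check of this definition, you can multiply the linear and track densities of the Seagate Barracuda 2 drive (whose specifications appear in Table 14.2 later in this chapter). The following sketch, in Python (my addition for illustration), does exactly that:

```python
# Areal density = linear density (BPI) x track density (TPI),
# expressed in bits per square inch.
bpi = 52_187   # bits per inch along the track (Barracuda 2, Table 14.2)
tpi = 3_047    # tracks per inch measured radially (Barracuda 2, Table 14.2)

areal_bits = bpi * tpi
print(f"Areal density: {areal_bits / 1e6:.0f} Mbit/sq-inch")
```

The result, roughly 159 Mbit/sq-inch, is well below the 1.5Gbit/sq-inch of the high-end drives mentioned above, which shows how quickly areal density has been climbing.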

Areal density (and, therefore, drive capacity) has been doubling approximately every two to three years, and production disk drives are likely to reach areal densities of 10+Gbit/sq-inch before the year 2000. A drive built with this technology would be capable of storing more than 10G of data on a single 2 1/2-inch platter, allowing 20 or 30G drives to be constructed that fit in the palm of your hand. New media and head technologies, such as ceramic or glass platters, MR (Magneto-Resistive) heads, pseudo-contact recording, and PRML (Partial Response Maximum Likelihood) electronics, are being developed to support these higher areal densities. The primary challenge in achieving higher densities is manufacturing drive heads and disks to operate at closer tolerances.

It seems almost incredible that computer technology improves by doubling performance or capacity every two to three years--if only other industries could match that growth and improvement rate!

Hard Disk Drive Operation

The basic physical operation of a hard disk drive is similar to that of a floppy disk drive: A hard drive uses spinning disks with heads that move over the disks and store data in tracks and sectors. A track is a concentric ring of information, which is divided into individual sectors that normally store 512 bytes each. In many other ways, however, hard disk drives are different from floppy disk drives.

Hard disks usually have multiple platters, each with two sides on which data can be stored. Most drives have at least two or three platters, resulting in four or six sides, and some drives have up to 11 or more platters. The identically positioned tracks on each side of every platter together make up a cylinder. A hard disk drive normally has one head per platter side, and all the heads are mounted on a common carrier device, or rack. The heads move in and out across the disk in unison; they cannot move independently because they are mounted on the same rack.

Hard disks operate much faster than floppy drives. Most hard disks originally spun at 3,600 RPM--approximately 10 times faster than a floppy drive. Until recently, 3,600 RPM was pretty much a constant among hard drives. Now, however, quite a few hard drives spin even faster. The Toshiba 3.3G drive in my notebook computer spins at 4,852 RPM; other drives spin as fast as 5,400, 5,600, 6,400, 7,200 and even 10,000 RPM. High rotational speed combined with a fast head-positioning mechanism and more sectors per track make one hard disk faster than another, and all these features combine to make hard drives much faster than floppy drives in storing and retrieving data.
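One direct consequence of rotational speed is rotational latency: on average, the sector you want is half a revolution away when the head arrives at the right track. The following sketch (Python, my addition for illustration) computes this average wait for the spindle speeds mentioned above:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency in milliseconds.

    One revolution takes 60,000/rpm ms; on average the desired
    sector is half a revolution away.
    """
    return (60_000 / rpm) / 2

for rpm in (3600, 5400, 7200, 10000):
    print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
```

A 3,600 RPM drive averages about 8.33ms of rotational latency, while a 7,200 RPM drive cuts that to about 4.17ms--one reason faster-spinning drives feel so much more responsive.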

The heads in most hard disks do not (and should not!) touch the platters during normal operation. When the heads are powered off, however, they land on the platters as they stop spinning. While the drive is on, a very thin cushion of air keeps each head suspended a short distance above or below the platter. If the air cushion is disturbed by a particle of dust or a shock, the head may come into contact with the platter spinning at full speed. When contact with the spinning platters is forceful enough to do damage, the event is called a head crash. The result of a head crash may be anything from a few lost bytes of data to a totally trashed drive. Most drives have special lubricants on the platters and hardened surfaces that can withstand the daily "takeoffs and landings" as well as more severe abuse.

Because the platter assemblies are sealed and non-removable, track densities can be very high. Many drives have track densities of 3,000 TPI or more. Head Disk Assemblies (HDAs), which contain the platters, are assembled and sealed in clean rooms under absolutely sanitary conditions. Because few companies repair HDAs, repair or replacement of items inside a sealed HDA can be expensive. Every hard disk ever made will eventually fail. The only questions are when the hard disk will fail and whether your data is backed up.

Many PC users think that hard disks are fragile, and generally, they are one of the most fragile components in your PC. In my weekly PC Hardware and Troubleshooting or Data Recovery seminars, however, I have run various hard disks for days with the lids off, and have even removed and installed the covers while the drives were operating. Those drives continue to store data perfectly to this day with the lids either on or off. Of course, I do not recommend that you try this test with your own drives; neither would I use this test on my larger, more expensive drives.

The Ultimate Hard Disk Drive Analogy

I'm sure that you have heard the traditional analogy that compares the head-and-media interaction in a typical hard disk, in scale, to a 747 flying a few feet off the ground at cruising speed (500+ mph). I have heard this analogy used over and over again for years, and I've even used it in my seminars many times without checking to see whether the analogy is technically accurate with respect to modern hard drives.

One highly inaccurate aspect of the 747 analogy has always bothered me--the use of an airplane of any type to describe the head-and-platter interaction. This analogy implies that the heads fly very low over the surface of the disk--but technically, this is not true. The heads do not fly at all, in the traditional aerodynamic sense; instead, they float on a cushion of air that's dragged around by the platters.

A much better analogy would use a hovercraft instead of an airplane; the action of a hovercraft much more closely emulates the action of the heads in a hard disk drive. Like a hovercraft, the drive heads rely somewhat on the shape of the bottom of the head to capture and control the cushion of air that keeps them floating over the disk. By nature, the cushion of air on which the heads float forms only in very close proximity to the platter and is often called an air bearing by the disk drive industry.

I thought it was time to come up with a new analogy that more correctly describes the dimensions and speeds at which a hard disk operates today. I looked up the specifications on a specific hard disk drive, and then equally magnified and rescaled all the dimensions involved to make the head floating height equal to 1 inch. For my example, I used a Seagate model ST-12550N Barracuda 2 drive, which is a 2G (formatted capacity), 3 1/2-inch SCSI-2 drive. In fact, I originally intended to install this drive in the portable system on which I am writing this book, but the technology took another leap and I ended up installing an ST-15230N Hawk 4 drive (4G) instead! Table 14.2 shows the specifications of the Barracuda drive, as listed in the technical documentation.

Table 14.2  Seagate ST-12550N Barracuda 2, 3 1/2-inch, SCSI-2 Drive
Specifications

Specification Value Unit of Measure
Linear density 52,187 Bits Per Inch (BPI)
Bit spacing 19.16 Micro-inches (u-in)
Track density 3,047 Tracks Per Inch (TPI)
Track spacing 328.19 Micro-inches (u-in)
Total tracks 2,707 Tracks
Rotational speed 7,200 Revolutions per minute(RPM)
Average head linear speed 53.55 Miles per hour (MPH)
Head slider length 0.08 Inches
Head slider height 0.02 Inches
Head floating height 5 Micro-inches (u-in)
Average seek time 8 Milliseconds (ms)

By interpreting these specifications, you can see that in this drive, the head sliders are about 0.08-inch long and 0.02-inch high. The heads float on a cushion of air about 5 u-in (millionths of an inch) from the surface of the disk while traveling at an average speed of 53.55 MPH (figuring an average track diameter of 2 1/2 inches). These heads read and write individual bits spaced only 19.16 u-in apart on tracks separated by only 328.19 u-in. The heads can move from one track to any other in only 8ms during an average seek operation.

To create my analogy, I simply magnified the scale to make the floating height equal to 1 inch. Because 1 inch is 200,000 times greater than 5 u-in, I scaled up everything else by the same amount.

The heads of this "typical" hard disk, magnified to such a scale, would be more than 1,300 feet long and 300 feet high (about the size of the Sears Tower, lying sideways!), traveling at a speed of more than 10.7 million MPH (2,975 miles per second!) only 1 inch above the ground, reading data bits spaced a mere 3.83 inches apart on tracks separated by only 5.47 feet.

Additionally, because the average seek of 8ms (.008 seconds) is defined as the time it takes to move the heads over one-third of the total tracks (about 902, in this case), each skyscraper-size head could move sideways to any track within a distance of 0.93 miles (902 tracks × 5.47 feet), which results in an average sideways velocity of more than 420,000 MPH (116 miles per second)!

The forward speed of this imaginary head is difficult to comprehend, so I'll elaborate. The diameter of the Earth at the equator is 7,926 miles, which means a circumference of about 24,900 miles. At 2,975 miles per second, this imaginary head would circle the Earth about once every 8 seconds!
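If you want to verify the analogy's arithmetic for yourself, the whole rescaling reduces to multiplying every figure in Table 14.2 by the same factor. Here is the calculation as a Python sketch (my addition; all input values come straight from Table 14.2):

```python
# Rescale the Barracuda 2 specs so the 5 u-in floating height equals 1 inch.
floating_height_in = 5e-6                  # 5 micro-inches, in inches
scale = 1 / floating_height_in             # magnification factor: 200,000x

slider_len_ft = 0.08 * scale / 12          # head slider length, in feet
slider_ht_ft = 0.02 * scale / 12           # head slider height, in feet
speed_mph = 53.55 * scale                  # average linear head speed
bit_spacing_in = 19.16e-6 * scale          # spacing between bits, in inches
track_sp_ft = 328.19e-6 * scale / 12       # spacing between tracks, in feet

seek_mi = 902 * track_sp_ft / 5280         # 1/3 of 2,707 tracks, in miles
seek_mph = seek_mi / 0.008 * 3600          # covered in an 8ms average seek

print(f"{slider_len_ft:.0f} ft long, {slider_ht_ft:.0f} ft high")
print(f"{speed_mph:,.0f} MPH forward, {seek_mph:,.0f} MPH sideways")
```

Running the numbers reproduces the figures in the text: a head more than 1,300 feet long and 300 feet high, moving forward at about 10.7 million MPH and sideways at more than 420,000 MPH.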

This analogy should give you a new appreciation of the technological marvel that the modern hard disk drive actually represents. It makes the 747 analogy look rather pathetic (not to mention totally inaccurate), doesn't it?

Magnetic Data Storage

Learning how magnetic data storage works will help you develop a feel for the way that your disk drives operate and can improve the way that you work with disk drives and disks.

Nearly all disk drives in personal computer systems operate on magnetic principles. Purely optical disk drives often are used as a secondary form of storage, but the computer to which they are connected is likely to use a magnetic storage medium for primary disk storage. Due to the high performance and density capabilities of magnetic storage, optical disk drives and media probably never will totally replace magnetic storage in PC systems.

Magnetic drives, such as floppy and hard disk drives, operate by using electromagnetism. This basic principle of physics states that as an electric current flows through a conductor, a magnetic field is generated around the conductor. This magnetic field then can influence magnetic material in the field. When the direction of the flow of electric current is reversed, the magnetic field's polarity also is reversed. An electric motor uses electromagnetism to exert pushing and pulling forces on magnets attached to a rotating shaft.

Another effect of electromagnetism is that if a conductor is passed through a changing magnetic field, an electrical current is generated. As the polarity of the magnetic field changes, so does the direction of the electric current flow. For example, a type of electrical generator used in automobiles, called an alternator, operates by rotating electromagnets past coils of wire conductors in which large amounts of electrical current can be induced. The two-way operation of electromagnetism makes it possible to record data on a disk and read that data back later.

The read/write heads in your disk drives (both floppy and hard disks) are U-shaped pieces of conductive material. This U-shaped object is wrapped with coils of wire, through which an electric current can flow. When the disk drive logic passes a current through these coils, it generates a magnetic field in the drive head. When the polarity of the electric current is reversed, the polarity of the field that is generated also changes. In essence, the heads are electromagnets whose voltage can be switched in polarity very quickly.

When a magnetic field is generated in the head, the field jumps the gap at the end of the U-shaped head. Because a magnetic field passes through a conductor much more easily than through the air, the field bends outward through the medium and actually uses the disk media directly below it as the path of least resistance to the other side of the gap. As the field passes through the media directly under the gap, it polarizes the magnetic particles through which it passes so that they are aligned with the field. The field's polarity--and, therefore, the polarity of the magnetic media--is based on the direction of the flow of electric current through the coils.

The disk consists of some form of substrate material (such as Mylar for floppy disks or aluminum or glass for hard disks) on which a layer of magnetizable material has been deposited. This material usually is a form of iron oxide with various other elements added. The polarities of the magnetic fields of the individual magnetic particles on an erased disk normally are in a state of random disarray. Because the fields of the individual particles point in random directions, each tiny magnetic field is canceled by one that points in the opposite direction, for a total effect of no observable or cumulative field polarity.

Particles in the area below the head gap are aligned in the same direction as the field emanating from the gap. When the individual magnetic domains are in alignment, they no longer cancel one another, and an observable magnetic field exists in that region of the disk. This local field is generated by the many magnetic particles that now are operating as a team to produce a detectable cumulative field with a unified direction.

The term flux describes a magnetic field that has a specific direction. As the disk surface rotates below the drive head, the head can lay a magnetic flux over a region of the disk. When the electric current flowing through the coils in the head is reversed, so is the magnetic field's polarity in the head gap. This reversal also causes the polarity of the flux being placed on the disk to reverse.

The flux reversal or flux transition is a change in polarity of the alignment of magnetic particles on the disk surface. A drive head places flux reversals on a disk to record data. For each data bit (or bits) written, a pattern of flux reversals is placed on the disk in specific areas known as bit or transition cells. A bit cell or transition cell is a specific area of the disk controlled by the time and rotational speed in which flux reversals are placed by a drive head. The particular pattern of flux reversals within the transition cells used to store a given data bit or bits is called the encoding method. The drive logic or controller takes the data to be stored and encodes it as a series of flux reversals over a period of time, according to the encoding method used.

Modified Frequency Modulation (MFM) and Run Length Limited (RLL) are popular encoding methods. All floppy disk drives use the MFM scheme. Hard disks use MFM or one of several variations of RLL encoding. These encoding methods are described in more detail in the sections "MFM Encoding" and "RLL Encoding" later in this chapter.

During the write process, voltage is applied to the head, and as the polarity of this voltage changes, the polarity of the magnetic field being recorded also changes. The flux transitions are written precisely at the points where the recording polarity changes. Strange as it may seem, during the read process, a head does not output exactly the same signal that was written; instead, the head generates a voltage pulse or spike only when it crosses a flux transition. When the transition changes from positive to negative, the pulse that the head would detect is negative voltage. When the transition changes from negative to positive, the pulse would be a positive voltage spike.

In essence, while reading the disk the head becomes a flux transition detector, emitting voltage pulses whenever it crosses a transition. Areas of no transition generate no pulse. Figure 14.1 shows the relationship between the read and write waveforms and the flux transitions recorded on a disk.

FIG. 14.1  Magnetic write and read processes.

You can think of the write pattern as being a square waveform that is at a positive or negative voltage level and that continuously polarizes the disk media in one direction or another. Where the waveform transitions go from positive to negative voltage, or vice versa, the magnetic flux on the disk also changes polarity. During a read, the head senses the flux transitions and outputs a pulsed waveform. In other words, the signal is zero volts unless a positive or negative transition is being detected, in which case there is a positive or negative pulse. Pulses appear only when the head is passing over flux transitions on the disk media. By knowing the clock timing used, the drive or controller circuitry can determine whether a pulse (and therefore a flux transition) falls within a given transition cell.
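This read behavior--pulses only at transitions, silence everywhere else--is easy to model. The following sketch (Python, my addition; the waveform values are a made-up illustration) represents the write signal as a list of polarity levels and derives the read pulse train from it:

```python
def read_pulses(write_levels):
    """Model the read head as a flux transition detector.

    write_levels is a sequence of recorded polarities (+1 or -1),
    one per transition cell. The head emits a pulse only where
    the polarity flips between adjacent cells.
    """
    pulses = []
    for prev, cur in zip(write_levels, write_levels[1:]):
        if cur > prev:
            pulses.append(+1)   # negative-to-positive transition
        elif cur < prev:
            pulses.append(-1)   # positive-to-negative transition
        else:
            pulses.append(0)    # no transition, so no pulse
    return pulses

print(read_pulses([+1, +1, -1, -1, +1]))  # [0, -1, 0, 1]
```

Note that the output says nothing about the absolute polarity of each region, only about where the polarity changed--exactly the behavior described for real drive heads above.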

The electrical pulse currents generated in the head while it is passing over a disk in read mode are very weak and can contain significant noise. Sensitive electronics in the drive and controller assembly then can amplify the signal above the noise level and decode the train of weak pulse currents back into data that is (theoretically) identical to the data originally recorded.

So as you now can see, disks are both recorded and read by means of basic electromagnetic principles. Data is recorded on a disk by passing electrical currents through an electromagnet (the drive head) that generates a magnetic field stored on the disk. Data on a disk is read by passing the head back over the surface of the disk; as the head encounters changes in the stored magnetic field, it generates a weak electrical current that indicates the presence or absence of flux transitions in the originally recorded signal.

Data Encoding Schemes

Magnetic media essentially is an analog storage medium. The data that we store on it, however, is digital information--that is, ones and zeros. When digital information is applied to a magnetic recording head, the head creates magnetic domains on the disk media with specific polarities. When a positive current is applied to the write head, the magnetic domains are polarized in one direction; when negative voltage is applied, the magnetic domains are polarized in the opposite direction. When the digital waveform that is recorded switches from a positive to a negative voltage, the polarity of the magnetic domains is reversed.

During a readback, the head actually generates no voltage signal when it encounters a group of magnetic domains with the same polarity, but it generates a voltage pulse every time it detects a switch in polarity. Each flux reversal generates a voltage pulse in the read head; it is these pulses that the drive detects when reading data. A read head does not generate the same waveform that was written; instead, it generates a series of pulses, each pulse appearing where a magnetic flux transition has occurred.

To optimize the placement of pulses during magnetic storage, the raw digital input data is passed through a device called an encoder/decoder (endec), which converts the raw binary information to a waveform that is more concerned with the optimum placement of the flux transitions (pulses). During a read operation, the endec reverses the process and decodes the pulse train back into the original binary data. Over the years, several different schemes for encoding data in this manner have been developed; some are better or more efficient than others.

In any consideration of binary information, the use of timing is important. When interpreting a read or write waveform, the timing of each voltage transition event is critical. If the timing is off, a given voltage transition may be recognized at the wrong time, and bits may be missed, added, or simply misinterpreted. To ensure that the timing is precise, the transmitting and receiving devices must be in sync. This synchronization can be accomplished by adding a separate line for timing, called a clock signal, between the two devices. The clock and data signals also can be combined and then transmitted on a single line. This combination of clock and data is used in most magnetic data encoding schemes.

When the clock information is added in with the data, timing accuracy in interpreting the individual bit cells is ensured between any two devices. Clock timing is used to determine the start and end of each bit cell. Each bit cell is bounded by two clock cells where the clock transitions can be sent. First there is a clock transition cell, and then the data transition cell, and finally the clock transition cell for the data that follows. By sending clock information along with the data, the clocks will remain in sync, even if a long string of 0 bits are transmitted. Unfortunately, all the transition cells that are used solely for clocking take up space on the media that otherwise could be used for data.

Because the number of flux transitions that can be recorded on a particular medium is limited by the disk media and head technology, disk drive engineers have been trying various ways of encoding the data into a minimum number of flux reversals, taking into consideration the fact that some flux reversals, used solely for clocking, are required. This method permits maximum use of a given drive hardware technology.

Although various encoding schemes have been tried, only a few are popular today. Over the years, these three basic types have been the most popular:

FM (Frequency Modulation) encoding

MFM (Modified Frequency Modulation) encoding

RLL (Run Length Limited) encoding

The following sections examine these codes, discussing how they work, where they have been used, and any advantages or disadvantages that apply to them.

FM Encoding

One of the earliest techniques for encoding data for magnetic storage is called Frequency Modulation (FM) encoding. This encoding scheme, sometimes called Single Density encoding, was used in the earliest floppy disk drives installed in PC systems. The original Osborne portable computer, for example, used these Single Density floppy drives, which stored about 80K of data on a single disk. Although it was popular until the late 1970s, FM encoding is no longer used today.
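FM is simple enough to express in a couple of lines: every bit cell begins with a clock transition, and the data cell in the middle holds a transition only for a 1 bit. Here is a sketch in Python (my addition; T marks a transition cell containing a flux transition and N an empty one, matching the notation used in the tables later in this chapter):

```python
def fm_encode(bits):
    """FM (Single Density) encoding.

    Each bit becomes two transition cells: a clock transition (T)
    followed by a data cell that is T for a 1 bit and N for a 0 bit.
    """
    return "".join("TT" if b == "1" else "TN" for b in bits)

print(fm_encode("01011000"))  # the ASCII "X" byte used in Figure 14.2
```

Because every bit costs a guaranteed clock transition plus a possible data transition, half of the flux-transition budget is spent on clocking alone--the inefficiency that MFM was designed to eliminate.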

MFM Encoding

Modified Frequency Modulation (MFM) encoding was devised to reduce the number of flux reversals used in the original FM encoding scheme and, therefore, to pack more data onto the disk. In MFM encoding, the use of the clock transition cells is minimized, leaving more room for the data. Clock transitions are recorded only if a stored 0 bit is preceded by another 0 bit; in all other cases, a clock transition is not required. Because the use of the clock transitions has been minimized, the actual clock frequency can be doubled from FM encoding, resulting in twice as many data bits being stored in the same number of flux transitions as in FM.

Because it is twice as efficient as FM encoding, MFM encoding also has been called Double Density recording. MFM is used in virtually all PC floppy drives today and was used in nearly all PC hard disks for a number of years. Today, most hard disks use RLL (Run Length Limited) encoding, which provides even greater efficiency than MFM.

Because MFM encoding places twice as many data bits in the same number of flux reversals as FM, the clock speed of the data is doubled, so that the drive actually sees the same number of total flux reversals as with FM. This means that data is read and written at twice the speed in MFM encoding, even though the drive sees the flux reversals arriving at the same frequency as in FM. This method allows existing drive technology to store twice the data and deliver it twice as fast.

The only caveat is that MFM encoding requires improved disk controller and drive circuitry, because the timing of the flux reversals must be more precise than in FM. As it turned out, these improvements were not difficult to achieve, and MFM encoding became the most popular encoding scheme for many years.

Table 14.3 shows the data bit to flux reversal translation in MFM encoding.

Table 14.3  MFM Data to Flux Transition Encoding

Data Bit Value Flux Encoding
1 NT
0 preceded by 0 TN
0 preceded by 1 NN
T = Flux transition N = No flux transition
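The three rules in Table 14.3 translate directly into code. The following sketch (Python, my addition) implements them; note that the very first bit of a string has no predecessor, so I assume a preceding 0 bit purely for illustration:

```python
def mfm_encode(bits, prev="0"):
    """MFM encoding per Table 14.3.

    1               -> NT
    0 preceded by 0 -> TN  (clock transition only)
    0 preceded by 1 -> NN  (no transition at all)

    prev is the bit that preceded this string; assuming "0" here
    is an illustration-only choice.
    """
    out = []
    for b in bits:
        if b == "1":
            out.append("NT")
        elif prev == "0":
            out.append("TN")
        else:
            out.append("NN")
        prev = b
    return "".join(out)

print(mfm_encode("01011000"))  # the ASCII "X" byte used in Figure 14.2
```

Encoding the "X" byte this way uses only five flux transitions in 16 cells, versus eleven for FM--and you can verify that no two transitions are ever adjacent, which is what lets the clock rate be doubled safely.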

RLL Encoding

Today's most popular encoding scheme for hard disks, called RLL (Run Length Limited), packs up to 50 percent more information on a given disk than even MFM does and three times as much information as FM. In RLL encoding, groups of bits are taken as a unit and combined to generate specific patterns of flux reversals. By combining the clock and data in these patterns, the clock rate can be further increased while maintaining the same basic distance between the flux transitions on the disk.

IBM invented RLL encoding and first used the method in many of its mainframe disk drives. During the late 1980s, the PC hard disk industry began using RLL encoding schemes to increase the storage capabilities of PC hard disks. Today, virtually every drive on the market uses some form of RLL encoding.

Instead of encoding a single bit, RLL normally encodes a group of data bits at a time. The term Run Length Limited is derived from the two primary specifications of these codes: the minimum number (the run length) and maximum number (the run limit) of transition cells allowed between two actual flux transitions. Several schemes can be achieved by changing these two parameters, but only two have achieved any real popularity: RLL 2,7 and RLL 1,7.

Even FM and MFM encoding can be expressed as a form of RLL. FM can be called RLL 0,1, because there can be as few as zero and as many as one transition cell separating two flux transitions. MFM can be called RLL 1,3, because as few as one and as many as three transition cells can separate two flux transitions. Although these codes can be expressed in RLL form, it is not common to do so.

RLL 2,7 initially was the most popular RLL variation because it offers a high-density ratio with a transition detection window that is the same relative size as that in MFM. This method allows for high storage density with fairly good reliability. In very high-capacity drives, however, RLL 2,7 did not prove to be reliable enough. Most of today's highest-capacity drives use RLL 1,7 encoding, which offers a density ratio 1.27 times that of MFM and a larger transition detection window relative to MFM. Because of the larger relative window size within which a transition can be detected, RLL 1,7 is a more forgiving and more reliable code; and, forgiveness and reliability are required when media and head technology are being pushed to their limits.

Another little-used RLL variation called RLL 3,9--sometimes called ARLL (Advanced RLL)--allowed an even higher density ratio than RLL 2,7. Unfortunately, reliability suffered too greatly under the RLL 3,9 scheme; the method was used by only a few controller companies that have all but disappeared.

It is difficult to understand how RLL codes work without looking at an example. Because RLL 2,7 was the most popular form of RLL encoding used with older controllers, I will use it as an example. Even within a given RLL variation such as RLL 2,7 or 1,7, many different flux transition encoding tables can be constructed to show what groups of bits are encoded as what sets of flux transitions. For RLL 2,7 specifically, thousands of different translation tables could be constructed, but for my examples, I will use the endec table used by IBM because it is the most popular variation used.

According to the IBM conversion tables, specific groups of data bits two, three, and four bits long are translated into strings of flux transitions four, six, and eight transition cells long, respectively. The selected transitions coded for a particular bit sequence are designed to ensure that flux transitions do not occur too close together or too far apart.

It is necessary to limit how close two flux transitions can be because of the basically fixed resolution capabilities of the head and disk media. Limiting how far apart these transitions can be ensures that the clocks in the devices remain in sync.

Table 14.4 shows the IBM-developed encoding scheme for 2,7 RLL.

Table 14.4  RLL 2,7 (IBM Endec) Data to Flux Transition Encoding

Data Bit Values Flux Encoding
10 NTNN
11 TNNN
000 NNNTNN
010 TNNTNN
011 NNTNNN
0010 NNTNNTNN
0011 NNNNTNNN
T = Flux transition N = No flux transition

In studying this table, you may think that encoding a byte such as 00000001b would be impossible because no combinations of data bit groups fit this byte. Encoding this type of byte is not a problem, however, because the controller does not transmit individual bytes; instead, the controller sends whole sectors, making it possible to encode such a byte simply by including some of the bits in the following byte. The only real problem occurs in the last byte of a sector if additional bits are needed to complete the final group sequence. In these cases, the endec in the controller simply adds excess bits to the end of the last byte. These excess bits are truncated during any reads so that the last byte always is decoded correctly.
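To make the group translation concrete, here is a minimal sketch in Python of the IBM 2,7 endec table from Table 14.4. The function name is my own, and real controllers implement this in hardware; this simply shows how a bit stream is carved into groups and translated.

```python
# The IBM RLL 2,7 endec table from Table 14.4.
RLL27 = {
    "10":   "NTNN",
    "11":   "TNNN",
    "000":  "NNNTNN",
    "010":  "TNNTNN",
    "011":  "NNTNNN",
    "0010": "NNTNNTNN",
    "0011": "NNNNTNNN",
}

def rll27_encode(bits):
    """Translate a bit string into flux transition cells.

    The code groups are prefix-free (no group is the start of another),
    so scanning for the one matching group at each position is unambiguous.
    """
    out = []
    i = 0
    while i < len(bits):
        for length in (2, 3, 4):
            group = bits[i:i + length]
            if group in RLL27:
                out.append(RLL27[group])
                i += length
                break
        else:
            # As the text notes, a trailing partial group is completed
            # with bits from the next byte (or pad bits in the last byte).
            raise ValueError("trailing bits need completion from the next byte")
    return "".join(out)

# ASCII "X" = 01011000b, the example used later in the text
print(rll27_encode("01011000"))  # TNNTNNTNNNNNNTNN
```

Note that in the output, every pair of T cells is separated by at least two and at most seven N cells, which is exactly the 2,7 run-length limit the code is named for.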

Encoding Scheme Comparisons

Figure 14.2 shows an example of the waveform written to store an X ASCII character on a hard disk drive under three different encoding schemes.

FIG. 14.2  ASCII character "X" write waveforms using FM, MFM, and RLL 2,7 encoding.

In each of these encoding-scheme examples, the top line shows the individual data bits (01011000b) in their bit cells separated in time by the clock signal, which is shown as a period (.). Below that line is the actual write waveform, showing the positive and negative voltages as well as voltage transitions that result in the recording of flux transitions. The bottom line shows the transition cells, with T representing a transition cell that contains a flux transition and N representing a transition cell that is empty.

The FM encoding example is easy to explain. Each bit cell has two transition cells: one for the clock information and one for the data itself. All the clock transition cells contain flux transitions, and the data transition cells contain a flux transition only if the data is a 1 bit. No transition at all is used to represent a 0 bit. Starting from the left, the first data bit is 0, which is encoded as a flux transition pattern of TN. The next bit is a 1, which is encoded as TT. The next bit is 0, which is encoded as TN, and so on. Using Table 14.2, you easily can trace the FM encoding pattern to the end of the byte.

The MFM encoding scheme also has clock and data transition cells for each data bit to be recorded. As you can see, however, the clock transition cells carry a flux transition only when a 0 bit is stored after another 0 bit. Starting from the left, the first bit is a 0, and the preceding bit is unknown (assume 0), so the flux transition pattern is TN for that bit. The next bit is a 1, which is always encoded as a transition-cell pattern of NT. The next bit is 0, which was preceded by 1, so the pattern stored is NN. Using Table 14.3, you can easily trace the MFM encoding pattern to the end of the byte. You can see that the minimum and maximum numbers of transition cells between any two flux transitions are one and three, respectively; hence, MFM encoding also can be called RLL 1,3.
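The FM and MFM rules just traced can be expressed in a few lines of code. This is an illustrative sketch (function names mine), not any real controller's logic; each function returns the transition-cell string for a bit string, matching the bottom line of Figure 14.2.

```python
def fm_encode(bits):
    # FM: every bit cell gets a clock transition (T) plus a data cell
    # that holds a transition only for a 1 bit.
    return "".join("T" + ("T" if b == "1" else "N") for b in bits)

def mfm_encode(bits, prev="0"):
    # MFM: the data cell carries a transition for a 1 bit; the clock
    # cell carries one only between two successive 0 bits.
    cells = []
    for b in bits:
        clock = "T" if prev == "0" and b == "0" else "N"
        data = "T" if b == "1" else "N"
        cells.append(clock + data)
        prev = b
    return "".join(cells)

# ASCII "X" = 01011000b, assuming a preceding 0 bit as in the text
print(fm_encode("01011000"))   # TNTTTNTTTTTNTNTN
print(mfm_encode("01011000"))  # TNNTNNNTNTNNTNTN
```

Counting the T cells in the two outputs shows directly why MFM can double FM's clock rate: the same byte is recorded with far fewer flux transitions.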

The RLL 2,7 pattern is more difficult to see because it relies on encoding groups of bits rather than encoding each bit individually. Starting from the left, the first group that matches an entry in Table 14.4 is the first three bits, 010. These bits are translated into a flux transition pattern of TNNTNN. The next two bits, 11, are translated as a group to TNNN; and the final group, the 000 bits, is translated to NNNTNN to complete the byte. As you can see in this example, no additional bits were needed to finish the last group.

Notice that the minimum and maximum number of empty transition cells between any two flux transitions in this example are two and six, although a different example could show a maximum of seven empty transition cells. This is where the RLL 2,7 designation comes from. Because even fewer transitions are recorded than in MFM, the clock rate can be further increased to three times that of FM or 1.5 times that of MFM, allowing more data to be stored in the same space on the disk. Notice, however, that the resulting write waveform itself looks exactly like a typical FM or MFM waveform in terms of the number and separation of the flux transitions for a given physical portion of the disk. In other words, the physical minimum and maximum distances between any two flux transitions remain the same in all three of these encoding-scheme examples.

Another new feature in high-end drives involves the disk read circuitry. Read channel circuits using Partial-Response, Maximum-Likelihood (PRML) technology allow disk drive manufacturers to increase the amount of data that can be stored on a disk platter by up to 40 percent. PRML replaces the standard "detect one peak at a time" approach of traditional analog peak-detect read/write channels with digital signal processing. In digital signal processing, noise can be digitally filtered out, allowing flux change pulses to be placed closer together on the platter, achieving greater densities.

I hope that the examinations of these different encoding schemes and how they work have taken some of the mystery out of the way data is recorded on a drive. You can see that although schemes such as MFM and RLL can store more data on a drive, the actual density of the flux transitions remains the same as far as the drive is concerned.

Sectors

A disk track is too large to manage effectively as a single storage unit. Many disk tracks can store 50,000 or more bytes of data, which would be very inefficient for storing small files. For that reason, a disk track is divided into several numbered divisions known as sectors. These sectors represent slices of the track.

Different types of disk drives and disks split tracks into different numbers of sectors, depending on the density of the tracks. For example, floppy disk formats use 8 to 36 sectors per track, whereas hard disks usually store data at a higher density and can use 17 to 100 or more sectors per track. Sectors created by standard formatting procedures on PC systems have a capacity of 512 bytes, but this capacity may change in the future.

Sectors are numbered on a track starting with 1, unlike the heads or cylinders which are numbered starting with 0. For example, a 1.44M floppy disk contains 80 cylinders numbered from 0 to 79 and two heads numbered 0 and 1, and each track on each cylinder has 18 sectors numbered from 1 to 18.
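This numbering convention underlies the standard conversion from a Cylinder/Head/Sector (CHS) address to a flat logical sector number. The helper below is a hypothetical sketch (name and defaults are mine), using the 1.44M floppy geometry just described.

```python
def chs_to_logical(cyl, head, sector, heads=2, spt=18):
    # Cylinders and heads number from 0; sectors number from 1.
    # Defaults describe a 1.44M floppy: 80 cylinders, 2 heads,
    # 18 sectors per track.
    return (cyl * heads + head) * spt + (sector - 1)

print(chs_to_logical(0, 0, 1))    # 0     (the first sector on the disk)
print(chs_to_logical(79, 1, 18))  # 2879  (the last of 80*2*18 = 2880 sectors)
```

Each cylinder step skips past heads * spt sectors, each head step skips one track of spt sectors, and the final term subtracts 1 because sectors count from 1 rather than 0.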

When a disk is formatted, additional ID areas are created on the disk for the disk controller to use for sector numbering and identifying the start and end of each sector. These areas precede and follow each sector's data area, which accounts for the difference between a disk's unformatted and formatted capacities. These sector headers, inter-sector gaps, and so on are independent of the operating system, file system, or files stored on the drive. For example, a 4M floppy disk (3 1/2-inch) has a capacity of 2.88M when it is formatted, a 2M floppy has a formatted capacity of 1.44M, and an older 38M hard disk has a capacity of only 32M when it is formatted. Modern IDE and SCSI hard drives are preformatted, so the manufacturers now only advertise formatted capacity. Even so, nearly all drives use some reserved space for managing the data that can be stored on the drive.

Although I have stated that each disk sector is 512 bytes in size, this statement technically is false. Each sector does allow for the storage of 512 bytes of data, but the data area is only a portion of the sector. Each sector on a disk typically occupies 571 bytes of the disk, of which only 512 bytes are usable for user data. The actual number of bytes required for the sector header and trailer can vary from drive to drive, but this figure is typical. A few modern drives now use an ID-less recording which virtually eliminates the storage overhead of the sector header information. In an ID-less recording, virtually all of the space on the track is occupied by data.

You may find it helpful to think of each sector as being a page in a book. In a book, each page contains text, but the entire page is not filled with text; rather, each page has top, bottom, left, and right margins. Information such as chapter titles (track and cylinder numbers) and page numbers (sector numbers) is placed in the margins. The "margin" areas of a sector are created and written to during the disk-formatting process. Formatting also fills the data area of each sector with dummy values. After the disk is formatted, the data area can be altered by normal writing to the disk. The sector header and trailer information cannot be altered during normal write operations unless you reformat the disk.

Each sector on a disk has a prefix portion, or header, that identifies the start of the sector and a sector number, as well as a suffix portion, or trailer, that contains a checksum (which helps ensure the integrity of the data contents). Each sector also contains 512 bytes of data. The data bytes normally are set to some specific value, such as F6h (hex), when the disk is physically (or low-level) formatted. (The following section explains low-level formatting.)

In many cases, a specific pattern of bytes that are considered to be difficult to write are written so as to flush out any marginal sectors. In addition to the gaps within the sectors, gaps exist between sectors on each track and also between tracks; none of these gaps contain usable data space. The prefix, suffix, and gaps account for the lost space between the unformatted capacity of a disk and the formatted capacity.

Table 14.5 shows the format for each track and sector on a typical hard disk with 17 sectors per track.

Table 14.5  Typical 17-Sector-per-Track Disk Sector Format

Bytes Name Description
16 POST INDEX GAP All 4Eh, at the track beginning after the Index mark.
The following sector data (shown between the lines in this table) is repeated 17 times for an MFM encoded track.
13 ID VFO LOCK All 00h; synchronizes the VFO for the sector ID.
1 SYNC BYTE A1h; notifies the controller that data follows.
1 ADDRESS MARK FEh; defines that ID field data follows.
2 CYLINDER NUMBER A value that defines the actuator position.
1 HEAD NUMBER A value that defines the head selected.
1 SECTOR NUMBER A value that defines the sector.
2 CRC Cyclic Redundancy Check to verify ID data.
3 WRITE TURN-ON GAP 00h written by format to isolate the ID from DATA.
13 DATA SYNC VFO LOCK All 00h; synchronizes the VFO for the DATA.
1 SYNC BYTE A1h; notifies the controller that data follows.
1 ADDRESS MARK F8h; defines that user DATA field follows.
512 DATA The area for user DATA.
2 CRC Cyclic Redundancy Check to verify DATA.
3 WRITE TURN-OFF GAP 00h; written by DATA update to isolate DATA.
15 INTER-RECORD GAP All 00h; a buffer for spindle speed variation.
693 PRE-INDEX GAP All 4Eh, at track end before Index mark.
571 Total bytes per sector
512 Data bytes per sector
10,416 Total bytes per track

8,704 Data bytes per track

This table refers to a hard disk track with 17 sectors. Although this capacity was typical during the mid-1980s, modern hard disks place as many as 150 or more sectors per track, and the specific formats of those sectors may vary slightly from the example.

As you can see, the usable space on each track is about 16 percent less than the unformatted capacity. This example is true for most disks, although some may vary slightly.
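You can check the totals in Table 14.5 with a quick calculation. This simply re-adds the table's per-sector fields (names abbreviated) and the track-level gaps:

```python
# Per-sector fields from Table 14.5, in bytes.
sector_fields = {
    "ID VFO lock": 13, "ID sync byte": 1, "ID address mark": 1,
    "cylinder number": 2, "head number": 1, "sector number": 1,
    "ID CRC": 2, "write turn-on gap": 3,
    "data VFO lock": 13, "data sync byte": 1, "data address mark": 1,
    "data": 512, "data CRC": 2, "write turn-off gap": 3,
    "inter-record gap": 15,
}
bytes_per_sector = sum(sector_fields.values())

# Track-level overhead: post-index gap (16) and pre-index gap (693).
sectors_per_track = 17
bytes_per_track = 16 + sectors_per_track * bytes_per_sector + 693
data_bytes_per_track = sectors_per_track * 512

print(bytes_per_sector)       # 571
print(bytes_per_track)        # 10416
print(data_bytes_per_track)   # 8704
print(round(100 * (1 - data_bytes_per_track / bytes_per_track)))  # 16 (percent overhead)
```

The final figure confirms the roughly 16 percent difference between formatted data capacity and the raw track capacity.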

The Post Index Gap provides a head-switching recovery period so that when switching from one track to another, sequential sectors can be read without waiting for an additional revolution of the disk. In some drives this time is not enough; additional time can be added by skewing the sectors on different tracks so that the arrival of the first sector is delayed.

The Sector ID data consists of the Cylinder, Head, and Sector Number fields, as well as a CRC field to allow for verification of the ID data. Most controllers use bit 7 of the Head Number field to mark the sector as bad during a low-level format or surface analysis. This system is not absolute, however; some controllers use other methods to indicate a marked bad sector. Usually, though, the mark involves one of the ID fields.

The Write Turn-On Gap follows the ID field's CRC bytes and provides a pad to ensure proper recording of the user data area that follows, as well as to allow full recovery of the ID CRC.

The user Data field consists of all 512 bytes of data stored in the sector. This field is followed by a CRC field to verify the data. Although many controllers use two bytes of CRC here, the controller may implement a longer Error Correction Code (ECC) that requires more than two CRC bytes to store. The ECC data stored here provides the possibility of Data-field read correction as well as read error detection. The correction/detection capabilities depend on the ECC code chosen and on the controller implementation. The Write Turn-Off Gap is a pad that allows the ECC (CRC) bytes to be fully recovered.

The Inter-Record Gap provides a means to accommodate variances in drive spindle speeds. A track may have been formatted while the disk was running slower than normal and then write-updated while the disk was running faster than normal. In such cases, this gap prevents accidental overwriting of any information in the next sector. The actual size of this padding varies, depending on the speed of disk rotation when the track was formatted and each time the Data field is updated.

The Pre-Index Gap allows for speed tolerance over the entire track. This gap varies in size, depending on the variances in disk-rotation speed and write-frequency tolerance at the time of formatting.

This sector prefix information is extremely important, because it contains the numbering information that defines the cylinder, head, and sector. All this information except the Data field, Data CRC bytes, and Write Turn-Off Gap is written only during a low-level format. On a typical non-servo-guided (stepper-motor actuator) hard disk on which thermal gradients cause mistracking, the data updates that rewrite the 512-byte Data area and the CRC that follows may not be placed exactly in line with the sector header information. This situation eventually causes read or write failures of the Abort, Retry, Fail, Ignore variety. You can often correct this problem by redoing the Low Level Formatting (LLF) of the disk; this process rewrites the header and data information together at the current track positions. Then, when you restore the data to the disk, the Data areas are written in alignment with the new sector headers.

Disk Formatting

You usually have two types of formats to consider:

Physical, or low-level formatting

Logical, or high-level formatting

When you format a floppy disk, the DOS FORMAT command performs both kinds of formats simultaneously. To format a hard disk, however, you must perform the operations separately. Moreover, a hard disk requires a third step, between the two formats, in which the partitioning information is written to the disk. Partitioning is required because a hard disk is designed to be used with more than one operating system. Keeping the physical format the same regardless of the operating system, and separate from the high-level format (which differs for each operating system), makes it possible to use multiple operating systems on one hard drive. The partitioning step allows more than one type of operating system to use a single hard disk, or a single operating system such as DOS to use the disk as several volumes or logical drives. A volume or logical drive is anything to which DOS assigns a drive letter.

Consequently, formatting a hard disk involves three steps:

1. Low-Level Formatting (LLF)

2. Partitioning

3. High-Level Formatting (HLF)

During a low-level format, the disk's tracks are divided into a specific number of sectors. The sector header and trailer information is recorded, as are intersector and intertrack gaps. Each sector's data area is filled with a dummy byte value or test pattern of values. For floppy disks, the number of sectors recorded on each track depends on the type of disk and drive; for hard disks, the number of sectors per track depends on the drive and controller interface.

The original ST-506/412 MFM controllers always placed 17 sectors per track on a disk. ST-506/412 controllers with RLL encoding increase the number of sectors on a drive to 25 or 26 sectors per track. ESDI drives can have 32 or more sectors per track. IDE drives simply are drives with built-in controllers, and depending on exactly what type of controller design is built in, the number of sectors per track can range from 17 to 100 or more. SCSI drives essentially are the same as IDE drives internally with an added SCSI Bus Adapter circuit, meaning that they also have some type of built-in controller; and like IDE drives, SCSI drives can have practically any number of sectors per track, depending on what controller design was used.

Virtually all IDE and SCSI drives use a technique called Zoned Recording, which writes a variable number of sectors per track. The outermost tracks hold more sectors than the inner tracks do, because they are longer. Because of limitations in the PC BIOS, these drives still have to act as though they have a fixed number of sectors per track. This situation is handled by translation algorithms that are implemented in the controller.

Multiple Zone Recording

One way to increase the capacity of a hard drive is to format more sectors on the outer cylinders than on the inner ones. Because they have a larger circumference, the outer cylinders can hold more data. Drives without Zoned Recording store the same amount of data on every cylinder, even though the outer cylinders may be twice as long as the inner cylinders. The result is wasted storage capacity, because the media on the outer cylinders is capable of storing data reliably at the same density as the inner cylinders yet is recorded well below that density. With older ST-506/412 and ESDI controllers, unfortunately, the number of sectors per track was fixed; drive capacity, therefore, was limited by the density capability of the innermost (shortest) track.

In a Zoned Recording, the cylinders are split into groups called zones, with each successive zone having more and more sectors per track as you move out from the inner radius of the disk. All the cylinders in a particular zone have the same number of sectors per track. The number of zones varies with specific drives, but most drives have 10 or more zones.

Another effect of Zoned Recording is that transfer speeds vary depending on what zone the heads are in. Because there are more sectors in the outer zones, and the rotational speed is always the same, the transfer rate is highest when the heads are in the outer zones.
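A rough sketch of why this is so: the media transfer rate is simply sectors per track times bytes per sector times revolutions per second, so a zone with twice the sectors delivers twice the rate. The rpm and zone figures below are invented for illustration only.

```python
def zone_transfer_rate(sectors_per_track, rpm=7200, bytes_per_sector=512):
    # One full track of user data passes under the head every revolution,
    # so the sustained media rate scales directly with the sector count.
    revs_per_second = rpm / 60
    return sectors_per_track * bytes_per_sector * revs_per_second

# A hypothetical inner zone (60 sectors/track) vs. outer zone (120):
print(zone_transfer_rate(60) / 1e6)   # 3.6864  (MB/s)
print(zone_transfer_rate(120) / 1e6)  # 7.3728  (MB/s) -- twice as fast
```

This is why disk benchmarks on zoned drives report noticeably higher sequential throughput at the start (outer edge) of the disk than at the end.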

Drives with separate controllers could not handle Zoned Recording because there was no standard way to communicate information about the zones from the drive to the controller. With SCSI and IDE disks, it became possible to format individual tracks with different numbers of sectors because these drives have the disk controller built in. The built-in controllers on these drives can be made fully aware of the zoning that is used. They must then also translate the physical Cylinder, Head, and Sector numbers to logical Cylinder, Head, and Sector numbers so that the drive appears to have the same number of sectors on each track. Because the PC BIOS was designed to handle only a single, fixed number of sectors per track throughout the entire drive, zoned drives always must run under a sector translation scheme.

The use of Zoned Recording has allowed drive manufacturers to increase the capacity of their hard drives by between 20 percent and 50 percent compared with a fixed-sector-per-track arrangement. Virtually all IDE and SCSI drives today use Zoned Recording.

Partitioning

Partitioning segments the drive into areas, called partitions, that can hold a particular operating system's file system. Today, PC operating systems use four common file systems:

FAT (File Allocation Table)

FAT32 (32-bit File Allocation Table)

HPFS (High Performance File System)

NTFS (Windows NT File System)

Of these four file systems, the FAT file system still is by far the most popular (and recommended). The main problem with the original 16-bit FAT file system is that disk space is used in groups of sectors called allocation units or clusters. Because the total number of clusters is limited to 65,536 (the most that can be represented with a 16-bit number), larger drives required that the disk be broken into larger clusters. The larger cluster sizes required cause disk space to be used inefficiently. FAT32 solves this problem by allowing the disk to be broken up into over 4 billion clusters, so the cluster sizes can be kept smaller. Most FAT32 and NTFS volumes use 4K clusters.

The term cluster was changed to allocation unit in DOS 4.0 and later versions. The newer term is appropriate because a single cluster is the smallest unit of the disk that DOS can allocate when it writes a file. A cluster is equal to one or more sectors, and although a cluster can be a single sector in some cases (specifically 1.2M and 1.44M floppies), it is usually more than one. Having more than one sector per cluster reduces the size and processing overhead of the FAT and enables DOS to run faster because it has fewer individual units of the disk to manage. The tradeoff is in wasted disk space. Because DOS and Windows can manage space only in full cluster units, every file consumes space on the disk in increments of one cluster.

Smaller clusters generate less slack (space wasted between the actual end of each file and the end of the cluster). With larger clusters, the wasted space grows larger. For hard disks, the cluster size varies with the size of the partition. Table 14.6 shows the default cluster sizes the FORMAT command selects for a particular partition (volume) size.

Table 14.6  Default Cluster Sizes

Hard Disk Partition Size Cluster (Allocation Unit) Size FAT Type
0-15M 8 sectors or 4,096 (4K) bytes 12-bit
16-128M 4 sectors or 2,048 (2K) bytes 16-bit
129-256M 8 sectors or 4,096 (4K) bytes 16-bit
257-512M 16 sectors or 8,192 (8K) bytes 16-bit
513-1,024M 32 sectors or 16,384 (16K) bytes 16-bit
1,025-2,048M 64 sectors or 32,768 (32K) bytes 16-bit
0-260M 1 sector or 512 bytes 32-bit
260M-8G 8 sectors or 4,096 (4K) bytes 32-bit
8-16G 16 sectors or 8,192 (8K) bytes 32-bit
16-32G 32 sectors or 16,384 (16K) bytes 32-bit
32-2,048G 64 sectors or 32,768 (32K) bytes 32-bit

In most cases, these cluster sizes, which are selected by the FORMAT command, are the minimum possible for a given partition size. Therefore, 8K clusters are the smallest possible for a partition size greater than 256M. Note that FDISK creates a FAT using 12-bit numbers if the partition is 16M or less, while all other FATs are created using 16-bit numbers, unless Large Disk Support is specifically enabled in Windows 95 OSR2 or later.
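The FAT16 rows of Table 14.6 amount to a simple threshold lookup on partition size. The sketch below illustrates that rule; it is my own rendering of the table, not FDISK's or FORMAT's actual code.

```python
# FAT16 defaults from Table 14.6 as (upper size limit in MB, cluster
# size in bytes). The first tier (up to 15M) actually uses a 12-bit FAT.
FAT16_CLUSTERS = [(15, 4096), (128, 2048), (256, 4096),
                  (512, 8192), (1024, 16384), (2048, 32768)]

def fat16_cluster_size(partition_mb):
    for limit_mb, cluster_bytes in FAT16_CLUSTERS:
        if partition_mb <= limit_mb:
            return cluster_bytes
    raise ValueError("FAT16 partitions top out at 2G")

print(fat16_cluster_size(500))   # 8192   (8K clusters for a 500M partition)
print(fat16_cluster_size(2000))  # 32768  (32K clusters for a 2G partition)
```

The lookup makes the doubling pattern easy to see: each doubling of the partition size past 128M doubles the cluster size, because the 16-bit FAT cannot address more clusters.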

The effect of the larger cluster sizes on larger disk partitions can be substantial. There can be a significant amount of slack space in the leftover portions of larger clusters. On average, the slack space for a file is one-half of the last cluster the file uses. To estimate the slack space on an entire drive, use the following formula:

slack space = number of files x (cluster size / 2)

A drive partition over 1G and up to 2G using FAT16 (thus 32K clusters) containing 10,000 files wastes about 16K per file, or 160,000K (160M) total [10,000 x 32K / 2]. If you were to repartition the drive into two separate partitions of 1G or less each, the cluster size would be cut in half, as would the total wasted slack space; you would gain 80M of disk space. The tradeoff is that managing multiple partitions is not as convenient as a single large partition. The only way you can control cluster (allocation unit) sizing under FAT16 is by changing the sizes of the partitions.

If you were to reformat the drive using FAT32, then the wasted space would drop to only 2K per file or about 20M total! In other words, by converting to FAT32, you would end up with approximately 140M more disk space free in this example.
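The slack arithmetic from the last two paragraphs can be checked with a short script (the helper name is mine):

```python
def slack_estimate(num_files, cluster_bytes):
    # On average, each file wastes half of its last cluster.
    return num_files * cluster_bytes // 2

files = 10_000
fat16_32k = slack_estimate(files, 32 * 1024)  # 1G-2G FAT16 partition, 32K clusters
fat32_4k = slack_estimate(files, 4 * 1024)    # same drive under FAT32, 4K clusters

print(fat16_32k // 1024)                # 160000 K wasted, i.e. about 160M
print(fat32_4k // 1024)                 # 20000 K wasted, i.e. about 20M
print((fat16_32k - fat32_4k) // 2**20)  # 136 -> roughly 140M reclaimed by FAT32
```

The numbers match the text: the same 10,000 files waste about 160M under 32K FAT16 clusters but only about 20M under 4K FAT32 clusters.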

NTFS, HPFS, and FAT32 all dramatically reduce the slack space but also increase file management overhead because many more allocation units must be managed.

Despite the problem with slack space, the basic FAT file system is still often the most recommended for compatibility reasons. All the operating systems can access FAT volumes, and the file structures and data-recovery procedures are well known. Also note that data recovery can be difficult to impossible under the HPFS and NTFS systems; for those systems, good backups are imperative.

During partitioning, no matter what file system is specified, the partitioning software writes a special boot program and partition table to the first sector, called the Master Boot Sector (MBS). Because the term record sometimes is used to mean sector, this sector can also be called the Master Boot Record (MBR).

High-Level Format

During the high-level format, the operating system (such as DOS, OS/2, or Windows) writes the structures necessary for managing files and data. FAT partitions have a Volume Boot Sector (VBS), a file allocation table (FAT), and a root directory on each formatted logical drive. These data structures (discussed in detail in Chapter 22, "Operating Systems Software and Troubleshooting") enable the operating system to manage the space on the disk, keep track of files, and even manage defective areas so that they do not cause problems.

High-level formatting is not really formatting, but creating a table of contents for the disk. In low-level formatting, which is the real formatting, tracks and sectors are written on the disk. As mentioned, the DOS FORMAT command can perform both low-level and high-level format operations on a floppy disk, but it performs only the high-level format for a hard disk. Hard disk low-level formats require a special utility, usually supplied by the disk-controller manufacturer.

Basic Hard Disk Drive Components

Many types of hard disks are on the market, but nearly all drives share the same basic physical components. Some differences may exist in the implementation of these components (and in the quality of materials used to make them), but the operational characteristics of most drives are similar. Following are the components of a typical hard disk drive (see Figure 14.3):

Disk platters

Read/write heads

Head actuator mechanism

Spindle motor

Logic board

Bezel

Configuration or mounting hardware

FIG. 14.3  Hard disk drive components.

The platters, spindle motor, heads, and head actuator mechanisms usually are contained in a sealed chamber called the Head Disk Assembly (HDA). The HDA usually is treated as a single component; it rarely is opened. Other parts external to the drive's HDA--such as the logic boards, bezel, and other configuration or mounting hardware--can be disassembled from the drive.

Hard Disk Platters (Disks)

A typical hard disk has one or more platters, or disks. Hard disks for PC systems have been available in a number of form factors over the years. Normally, the physical size of a drive is expressed as the size of the platters. Following are the most common platter sizes used in PC hard disks today:

Larger hard drives that have 8-inch, 14-inch, or even larger platters are available, but these drives typically have not been associated with PC systems. Currently, the 3 1/2-inch drives are the most popular for desktop and some portable systems, whereas the 2 1/2-inch and smaller drives are very popular in portable or notebook systems. These little drives are fairly amazing, with current capacities of up to 1G or more, and capacities of 20G are expected by the year 2000. Imagine carrying a notebook computer around with a built-in 20G drive; it will happen sooner than you think! Due to their small size, these drives are extremely rugged; they can withstand rough treatment that would have destroyed most desktop drives a few years ago.

Most hard drives have two or more platters, although some of the smaller drives have only one. The number of platters that a drive can have is limited by the drive's physical size vertically. So far, the maximum number of platters that I have seen in any 3 1/2-inch drive is 11.

Platters traditionally have been made from an aluminum alloy, for strength and light weight. With manufacturers' desire for higher and higher densities and smaller drives, many drives now use platters made of glass (or, more technically, a glass-ceramic composite). One such material is called MemCor, which is produced by the Dow Corning Corporation. MemCor is composed of glass with ceramic implants, which resists cracking better than pure glass.

Glass platters offer greater rigidity and therefore can be machined to one-half the thickness of conventional aluminum disks, or less. Glass platters also are much more thermally stable than aluminum platters, which means that they do not change dimensions (expand or contract) very much with any changes in temperature. Several hard disks made by companies such as Seagate, Toshiba, Areal Technology, Maxtor, and Hewlett-Packard currently use glass or glass-ceramic platters. For most manufacturers, glass disks will replace the standard aluminum substrate over the next few years, especially in high-performance 2 1/2- and 3 1/2-inch drives.

Recording Media

No matter what substrate is used, the platters are covered with a thin layer of a magnetically retentive substance called media in which magnetic information is stored. Two popular types of media are used on hard disk platters:

Oxide media

Thin-film media

Oxide media is made of various compounds, containing iron oxide as the active ingredient. A magnetic layer is created by coating the aluminum platter with a syrup containing iron-oxide particles. This media is spread across the disk by spinning the platters at high speed; centrifugal force causes the material to flow from the center of the platter to the outside, creating an even coating of media material on the platter. The surface then is cured and polished. Finally, a layer of material that protects and lubricates the surface is added and burnished smooth. The oxide media coating normally is about 30 millionths of an inch thick. If you could peer into a drive with oxide-media-coated platters, you would see that the platters are brownish or amber.

As drive density increases, the media needs to be thinner and more perfectly formed. The capabilities of oxide coatings have been exceeded by most higher-capacity drives. Because oxide media is very soft, disks that use this type of media are subject to head-crash damage if the drive is jolted during operation. Most older drives, especially those sold as low-end models, have oxide media on the drive platters. Oxide media, which has been used since 1955, remained popular because of its relatively low cost and ease of application. Today, however, very few drives use oxide media.

Thin-film media is thinner, harder, and more perfectly formed than oxide media. Thin film was developed as a high-performance media that enabled a new generation of drives to have lower head floating heights, which in turn made possible increases in drive density. Originally, thin-film media was used only in higher-capacity or higher-quality drive systems, but today, virtually all drives have thin-film media.

Thin-film media is aptly named. The coating is much thinner than can be achieved by the oxide-coating method. Thin-film media also is known as plated, or sputtered, media because of the various processes used to place the thin film of media on the platters.

Thin-film plated media is manufactured by placing the media material on the disk with an electroplating mechanism, much the way chrome plating is placed on the bumper of a car. The aluminum platter then is immersed in a series of chemical baths that coat the platter with several layers of metallic film. The media layer is a cobalt alloy about 3 u-in thick.

Thin-film sputtered media is created by first coating the aluminum platters with a layer of nickel phosphorus and then applying the cobalt-alloy magnetic material in a continuous vacuum-deposition process called sputtering. During this process, magnetic layers as thin as 1 or 2 u-in are deposited on the disk, in a fashion similar to the way that silicon wafers are coated with metallic films in the semiconductor industry. The sputtering technique then is used again to lay down an extremely hard, 1 u-in protective carbon coating. The need for a near-perfect vacuum makes sputtering the most expensive of the processes described here.

The surface of a sputtered platter contains magnetic layers as thin as 1 u-in. Because this surface also is very smooth, the head can float closer to the disk surface than was possible previously; floating heights as small as 3 u-in above the surface are possible. When the head is closer to the platter, the density of the magnetic flux transitions can be increased to provide greater storage capacity. Additionally, the increased intensity of the magnetic field during a closer-proximity read provides the higher signal amplitudes needed for good signal-to-noise performance.

Both the sputtering and plating processes result in a very thin, very hard film of media on the platters. Because the thin-film media is so hard, it has a better chance of surviving contact with the heads at high speed. In fact, modern thin-film media is virtually un-crashable. Oxide coatings can be scratched or damaged much more easily. If you could open a drive to peek at the platters, you would see that the thin-film media platters look like the silver surfaces of mirrors.

The sputtering process results in the most perfect, thinnest, and hardest disk surface that can be produced commercially. The sputtering process has largely replaced plating as the method of creating thin-film media. Having a thin-film media surface on a drive results in increased storage capacity in a smaller area with fewer head crashes--and in a drive that will provide many years of trouble-free use.

Read/Write Heads

A hard disk drive usually has one read/write head for each platter side, and these heads are connected, or ganged, on a single movement mechanism. The heads, therefore, move across the platters in unison.

Mechanically, read/write heads are simple. Each head is on an actuator arm that is spring-loaded to force the head against its platter. Few people realize that each platter actually is "squeezed" by the heads above and below it. If you could open a drive safely and lift the top head with your finger, the head would snap back against the platter when you released it. If you could pull down on one of the heads below a platter, the spring tension would cause it to snap back up against the platter when you released it.

Figure 14.4 shows a typical hard disk head-actuator assembly from a voice coil drive.

FIG. 14.4  Read/write heads and rotary voice coil actuator assembly.

When the drive is at rest, the heads are forced into direct contact with the platters by spring tension, but when the drive is spinning at full speed, air pressure develops below the heads and lifts them off the surface of the platter. On a drive spinning at full speed, the distance between the heads and the platter can be anywhere from 3 to 20 u-in or more.

In the early 1960s, hard disk drive recording heads operated at floating heights as large as 200 to 300 u-in; today's drive heads are designed to float as low as 3 to 5 u-in above the surface of the disk. To support higher densities in future drives, the physical separation between the head and disk is expected to be as little as 0.5 u-in by the end of the century.
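Floating heights throughout this chapter are quoted in millionths of an inch (u-in). A small conversion helper (an illustrative snippet, not from the text) relates these figures to metric units:

```python
# Convert between millionths of an inch (u-in) and microns.
# 1 inch = 25.4 mm, so 1 u-in = 0.0254 micron.

def uin_to_micron(uin):
    return uin * 0.0254

def micron_to_uin(micron):
    return micron / 0.0254

# A low floating height for drives of this era, 3 u-in:
print(uin_to_micron(3))     # about 0.076 micron
# The 0.5-micron clean-room particle threshold, in u-in:
print(micron_to_uin(0.5))   # about 19.7 u-in
```

The second figure matches the 19.7 u-in quoted in the clean-room discussion that follows.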


CAUTION: The small size of this gap is why the disk drive's HDA should never be opened except in a clean-room environment: Any particle of dust or dirt that gets into this mechanism could cause the heads to read improperly, or possibly even to strike the platters while the drive is running at full speed. The latter event could scratch the platter or the head.

To ensure the cleanliness of the interior of the drive, the HDA is assembled in a class-100 or better clean room. This specification means that a cubic foot of air cannot contain more than 100 particles measuring 0.5 micron (19.7 u-in) or larger. A single person breathing while standing motionless spews out 500 such particles in a single minute! These rooms contain special air-filtration systems that continuously evacuate and refresh the air. A drive's HDA should not be opened unless it is inside such a room.

Although maintaining such an environment may seem to be expensive, many companies manufacture tabletop or bench-size clean rooms that sell for only a few thousand dollars. Some of these devices operate like a glove box; the operator first inserts the drive and any tools required, and then closes the box and turns on the filtration system. Inside the box, a clean-room environment is maintained, and a technician can use the built-in gloves to work on the drive.

In other clean-room variations, the operator stands at a bench where a forced-air curtain is used to maintain a clean environment on the bench top. The technician can walk in and out of the clean-room field simply by walking through the air curtain. This air curtain is much like the curtain of air used in some stores and warehouses to prevent heat from escaping in the winter while leaving a passage wide open.

Because the clean environment is expensive to produce, few companies except those that manufacture the drives are prepared to service hard disk drives.

Read/Write Head Designs

As disk drive technology has evolved, so has the design of the read/write head. The earliest heads were simple iron cores with coil windings (electromagnets). By today's standards, the original head designs were enormous in physical size and operated at very low recording densities. Over the years, head designs have evolved from the first simple ferrite-core designs into the several types and technologies available today. This section discusses the different types of heads found in PC-type hard disk drives, including the applications and relative strengths and weaknesses of each.

Four types of heads have been used in hard disk drives over the years: Ferrite, Metal-In-Gap (MIG), Thin Film (TF), and Magneto-Resistive (MR).

Ferrite

Ferrite heads, the traditional type of magnetic-head design, evolved from the original IBM Winchester drive. These heads have an iron-oxide core wrapped with electromagnetic coils. A magnetic field is produced by energizing the coils; a field also can be induced by passing a magnetic field near the coils. This process gives the heads full read and write capability. Ferrite heads are larger and heavier than thin-film heads and therefore require a larger floating height to prevent contact with the disk.

Many refinements have been made in the original (monolithic) ferrite head design. A type of ferrite head called a composite ferrite head has a smaller ferrite core bonded with glass in a ceramic housing. This design permits a smaller head gap, which allows higher track densities. These heads are less susceptible to stray magnetic fields than are heads in the older monolithic design.

During the 1980s, composite ferrite heads were popular in many low-end drives, such as the popular Seagate ST-225. As density demands grew, the competing MIG and thin-film head designs were used in place of ferrite heads, which are virtually obsolete today. Ferrite heads cannot write to the higher coercivity media needed for high-density designs and have poor frequency response with higher noise levels. The main advantage of ferrite heads is that they are the cheapest type available.

Metal-In-Gap

Metal-In-Gap (MIG) heads basically are a specially enhanced version of the composite ferrite design. In MIG heads, a metal substance is sputtered into the recording gap on the trailing edge of the head. This material offers increased resistance to magnetic saturation, allowing higher-density recording. MIG heads also produce a sharper gradient in the magnetic field for a better-defined magnetic pulse. These heads permit the use of higher-coercivity thin-film disks and can operate at lower floating heights.

Two versions of MIG heads are available: single-sided and double-sided. Single-sided MIG heads are designed with a layer of magnetic alloy placed along the trailing edge of the gap. Double-sided MIG designs apply the layer to both sides of the gap. The metal alloy is applied through a vacuum-deposition process called sputtering. This alloy has twice the magnetization capability of raw ferrite and allows writing to the higher-coercivity thin-film media needed at the higher densities. Double-sided MIG heads offer even higher coercivity capability than the single-sided designs do.

Because of these increases in capabilities through improved designs, MIG heads, for a time, were the most popular head used in all but very high-capacity drives. Due to market pressures that have demanded higher and higher densities, however, MIG heads have been largely displaced in favor of thin-film heads.

Thin Film

Thin-film (TF) heads are produced in much the same manner as a semiconductor chip--that is, through a photolithographic process. In this manner, many thousands of heads can be created on a single circular wafer. This manufacturing process also results in a very small high-quality product.

TF heads offer an extremely narrow and controlled head gap created by sputtering a hard aluminum material. Because this material completely encloses the gap, this area is very well protected, minimizing the chance of damage from contact with the media. The core is a combination of iron and nickel alloy that is two to four times more powerful magnetically than a ferrite head core.

TF heads produce a sharply defined magnetic pulse that allows extremely high densities to be written. Because they do not have a conventional coil, TF heads are more immune to variations in coil impedance. The small, lightweight heads can float at a much lower height than the ferrite and MIG heads; floating height has been reduced to 2 u-in or less in some designs. Because the reduced height enables a much stronger signal to be picked up and transmitted between the head and platters, the signal-to-noise ratio increases, which improves accuracy. At the high track and linear densities of some drives, a standard ferrite head would not be able to pick out the data signal from the background noise. When TF heads are used, their small size enables more platters to be stacked in a drive.

Until the past few years, TF heads were relatively expensive compared with older technologies, such as ferrite and MIG. Better manufacturing techniques and the need for higher densities, however, have driven the market to TF heads. The widespread use of these heads also has made them cost-competitive, if not cheaper, than MIG heads.

TF heads currently are used in most high-capacity drives, especially in the smaller form factors. They have displaced MIG heads as the most popular head design being used in drives today. The industry is working on ways to improve TF head efficiency, so TF heads are likely to remain popular for some time, especially in mainstream drives.

Magneto-Resistive

Magneto-Resistive (MR) heads are the latest technology. Invented and pioneered by IBM, MR heads currently are the superior head design, offering the highest performance available. Most 3 1/2-inch drives with capacities in excess of 1G currently use MR heads. As areal densities continue to increase, the MR head eventually will become the head of choice for nearly all hard drives, displacing the popular MIG and TF head designs.

MR heads rely on the fact that the resistance of a conductor changes slightly when an external magnetic field is present. Rather than put out a voltage by passing through a magnetic-field flux reversal, as a normal head would, the MR head senses the flux reversal and changes resistance. A small current flows through the heads, and the change in resistance is measured by this sense current. This design enables the output to be three or more times more powerful than a TF head during a read. In effect, MR heads are power-read heads, acting more like sensors than generators.
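The read-out principle described above can be sketched as a toy model: a constant sense current flows through the MR element, and the field-dependent resistance shows up as a voltage step at each flux reversal. All component values here are invented for illustration; a real MR preamplifier is an analog circuit and far more subtle.

```python
# Toy model of magneto-resistive read-out: a constant sense current flows
# through the MR element; the external field from the media changes the
# element's resistance, and the resulting voltage step is detected.
# All numbers are invented for illustration.
R0 = 50.0         # nominal element resistance, ohms (assumed)
DR = 1.0          # resistance swing under the media's field (assumed)
I_SENSE = 0.005   # constant sense current, amps (assumed)

fields = [0, 0, 1, 1, 0, 1, 0, 0]   # field polarity under the head

voltages = [I_SENSE * (R0 + DR * f) for f in fields]

# A flux reversal shows up as a step in the sensed voltage:
reversals = [i for i in range(1, len(voltages))
             if abs(voltages[i] - voltages[i - 1]) > I_SENSE * DR / 2]
print(reversals)   # [2, 4, 5, 6] -- positions where the field flipped
```

Note that the head never generates a signal itself; it modulates the externally supplied sense current, which is why the text calls MR heads sensors rather than generators.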

MR heads are more costly and complex to manufacture than other types of heads, because several special features or steps must be added. Among them:

Because the MR principle can only read data and is not used for writing, MR heads really are two heads in one. A standard inductive TF head is used for writing, and an MR head is used for reading. Because two separate heads are built into one assembly, each head can be optimized for its task. Ferrite, MIG, and TF heads are known as single-gap heads because the same gap is used for both reading and writing, whereas the MR head uses a separate gap for each operation.

The problem with single-gap heads is that the gap length always is a compromise between what is best for reading and what is best for writing. The read function needs a thin gap for higher resolution; the write function needs a thicker gap for deeper flux penetration to switch the media. In a dual-gap MR head, the read and write gaps can be optimized for both functions independently. The write (TF) gap writes a wider track than the read (MR) gap does. The read head then is less likely to pick up stray magnetic information from adjacent tracks.

Drives with MR heads require better shielding from stray magnetic fields, which can affect these heads more easily than they do the other head designs. All in all, however, the drawback is minor compared with the advantages that the MR heads offer.

Head Sliders

The term slider is used to describe the body of material that supports the actual drive head itself. The slider is what actually floats or slides over the surface of the disk, carrying the head at the correct distance from the media for reading and writing. Most sliders resemble a catamaran, with two outboard pods that float along the surface of the disk media and a central "rudder" portion that actually carries the head and read/write gap.

The trend toward smaller and smaller form factor drives has forced a requirement for smaller and smaller sliders as well. The typical mini-Winchester slider design is about .160x.126x.034 inch in size. Most head manufacturers now are shifting to 50 percent smaller nanosliders, which have dimensions of about .08x.063x.017 inch. The nanoslider is being used in both high-capacity and small-form-factor drives. Smaller sliders reduce the mass carried at the end of the head actuator arms, allowing for increased acceleration and deceleration, and leading to faster seek times. The smaller sliders also require less area for a landing zone, thus increasing the usable area of the disk platters. Further, the smaller slider contact area reduces the slight wear on the media surface that occurs during normal startup and spindown of the drive platters.
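"50 percent smaller" refers to each linear dimension; because mass scales with volume, the nanoslider actually carries roughly one-eighth the mass. A quick check using the dimensions quoted above (assuming uniform density in both slider types):

```python
# Approximate volume (and hence mass) ratio of a nanoslider versus a
# mini-Winchester slider, using the dimensions quoted in the text (inches).
mini = (0.160, 0.126, 0.034)
nano = (0.080, 0.063, 0.017)

def volume(dims):
    x, y, z = dims
    return x * y * z

ratio = volume(nano) / volume(mini)
print(round(ratio, 3))   # 0.125 -- one-eighth the volume and mass
```

This is why the switch to nanosliders pays off so heavily in actuator acceleration and seek times.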

The newer nanoslider designs also have specially modified surface patterns that are designed to maintain the same floating height above the disk surface, whether the slider is above the inner or outer cylinders. Conventional sliders increase or decrease their floating height considerably, according to the velocity of the disk surface traveling below them. Above the outer cylinders, the velocity and floating height are higher. This arrangement is undesirable in newer drives that use Zoned Recording, in which the same bit density is achieved on all the cylinders. Because the same bit density is maintained throughout the drive, the head floating height should be relatively constant as well for maximum performance. Special textured surface patterns and manufacturing techniques allow the nanosliders to float at a much more consistent height, making them ideal for Zoned Recording drives.

Head Actuator Mechanisms

Possibly more important than the heads themselves is the mechanical system that moves them: the head actuator. This mechanism moves the heads across the disk and positions them accurately above the desired cylinder. Many variations on head actuator mechanisms are in use, but all of them fall into one of two basic types: stepper motor actuators and voice coil actuators.

The use of one or the other type of positioner has profound effects on a drive's performance and reliability. The effect is not limited to speed; it also includes accuracy, sensitivity to temperature, position, vibration, and overall reliability. To put it bluntly, a drive equipped with a stepper motor actuator is much less reliable (by a large factor) than a drive equipped with a voice coil actuator.

The head actuator is the single most important specification in the drive. The type of head actuator mechanism in a drive tells you a great deal about the drive's performance and reliability characteristics. Table 14.7 shows the two types of hard disk drive head actuators and the affected performance parameters.

Table 14.7  Characteristics of Stepper Motor versus Voice Coil Drives

Characteristic            Stepper Motor      Voice Coil
Relative access speed     Slow               Fast
Temperature sensitive     Yes (very)         No
Positionally sensitive    Yes                No
Automatic head parking    Not usually        Yes
Preventive maintenance    Periodic format    None required
Relative reliability      Poor               Excellent

Generally, a stepper motor drive has a slow average access rating, is temperature-sensitive during read and write operations, is sensitive to physical orientation during read and write operations, does not automatically park its heads over a safe zone during power-down, and usually requires annual or biannual reformats to realign the sector data with the sector header information due to mistracking. Overall, stepper motor drives are vastly inferior to drives that use voice coil actuators.

Some stepper motor drives feature automatic head parking at power-down. If you have a newer stepper motor drive, refer to the drive's technical reference manual to determine whether your drive has this feature. Sometimes, you hear a noise after power-down, but that can be deceptive; some drives use a solenoid-activated spindle brake, which makes a noise as the drive is powered off and does not involve head parking.

Floppy disk drives position their heads by using a stepper motor actuator. The accuracy of the stepper mechanism is suited to a floppy drive because the track densities usually are nowhere near those of a hard disk. Many of the less expensive, low-capacity hard disks also used a stepper motor system. Most hard disks with capacities of more than 40M have voice coil actuators, as do all drives I have seen with capacities of more than 100M, which includes all drives being manufactured today.

These cutoffs are not hard rules, but it is safe to say that hard disk drives with less than 80M capacity may have either type of actuator and that virtually all drives with more than 80M capacity have voice coil actuators. The cost difference between voice coil drives and stepper motor drives of equal capacity is marginal today, so there is little reason not to use a voice coil drive. No new stepper motor drives are being manufactured today.

Stepper Motor

A stepper motor is an electrical motor that can "step," or move from position to position, with mechanical detents or click-stop positions. If you were to grip the spindle of one of these motors and spin it by hand, you would hear a clicking or buzzing sound as the motor passed each detent position with a soft click. The sensation is much like that of the detented volume control on some stereo systems, as opposed to a smooth, purely linear control.

Stepper motors cannot position themselves between step positions; they can stop only at the predetermined detent positions. The motors are small (between 1 and 3 inches) and can be square, cylindrical, or flat. Stepper motors are outside the sealed HDA, although the spindle of the motor penetrates the HDA through a sealed hole. The stepper motor is located in one of the corners of the hard disk drive and usually is easy to see.

Mechanical Links

The stepper motor is mechanically linked to the head rack by a split-steel band coiled around the motor spindle or by a rack-and-pinion gear mechanism. As the motor steps, each detent, or click-stop position, represents the movement of one track through the mechanical linkage.

Some systems use several motor steps for each track. In positioning the heads, if the drive is told to move from track 0 to 100, the motor begins the stepping motion, proceeds to the 101st detent position, and stops, leaving the heads above the desired cylinder. The fatal flaw in this type of positioning system is that due to dimensional changes in the platter-to-head relationship over the life of a drive, the heads may not be precisely placed above the cylinder location. This type of positioning system is called a blind system, because the heads have no true way of determining the exact placement of a given cylinder.
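The blind positioning described above can be sketched in a few lines: the controller simply counts detents from wherever it believes the heads are, with no feedback from the platter (a hypothetical simulation, assuming one step per track).

```python
# Minimal model of blind (open-loop) stepper positioning: the controller
# counts detents and assumes each step equals exactly one track. It has
# no way to verify where the track actually is on the platter.
class StepperActuator:
    def __init__(self, steps_per_track=1):
        self.position = 0                   # current detent count
        self.steps_per_track = steps_per_track

    def seek(self, track):
        target = track * self.steps_per_track
        delta = target - self.position
        self.position = target              # step blindly; no feedback
        return delta                        # steps issued (sign = direction)

arm = StepperActuator()
print(arm.seek(100))   # 100 steps out, from track 0 to track 100
print(arm.seek(50))    # -50 steps back; still no idea where the track truly is
```

If the platter has expanded or contracted since the format, the heads end up counted to the right detent but the wrong physical spot, which is exactly the failure mode discussed next.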

Most stepper motor actuator systems use a split-metal-band mechanism to transmit the rotary stepping motion to the in-and-out motion of the head rack. The band is made of special alloys to limit thermal expansion and contraction as well as stretching of the thin band. One end of the band is coiled around the spindle of the stepper motor; the other is connected directly to the head rack. The band is inside the sealed HDA and is not visible from outside the drive.

Some drives use a rack-and-pinion gear mechanism to link the stepper motor to the head rack. This procedure involves a small pinion gear on the spindle of the stepper motor that moves a rack gear in and out. The rack gear is connected to the head rack, causing it to move. The rack-and-pinion mechanism is more durable than the split-metal-band mechanism and provides slightly greater physical and thermal stability. One problem, however, is backlash, or the amount of play in the gears. Backlash increases as the gears wear and eventually renders the mechanism useless.

Temperature Fluctuation Problems

Stepper motor mechanisms are affected by a variety of problems. The greatest problem is temperature. As the drive platters heat and cool, they expand and contract, respectively; the tracks then move in relation to a predetermined track position. The stepper mechanism does not allow the tracks to move in increments of less than a single track to correct for these temperature-induced errors. The drive positions the heads to a particular cylinder according to a predetermined number of steps from the stepper motor, with no room for nuance.

The low-level formatting of the drive places the initial track and sector marks on the platters at the positions where the heads currently are located, as commanded by the stepper motor. If all subsequent reading and writing occurs at the same temperature as the initial format, the heads always record precisely within the track and sector boundaries.

At different temperatures, however, the head position does not match the track position. When the platters are cold, the heads miss the track location because the platters have shrunk and the tracks have moved toward the center of the disk. When the platters are warmer than the formatted temperature, the platters will have grown larger and the track positions are located outward. Gradually, as the drive is used, the data is written inside, on top of, and outside the track and sector marks. Eventually, the drive fails to read one of these locations, and a DOS Abort, Retry, Ignore error message usually appears.
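A back-of-the-envelope calculation shows the scale of the problem. The expansion coefficient and platter geometry below are assumptions for illustration, not figures from the text:

```python
# Estimate how far a track drifts radially as an aluminum platter warms.
# Assumed values: aluminum expands about 23e-6 per deg C, and the track
# sits at a 40 mm radius on a 5.25-inch platter.
ALPHA_AL = 23e-6     # aluminum linear expansion coefficient, 1/deg C (assumed)
radius_mm = 40.0     # assumed track radius
delta_t_c = 20.0     # assumed warm-up, deg C

drift_mm = radius_mm * ALPHA_AL * delta_t_c
drift_uin = drift_mm / 25.4 * 1e6   # convert to millionths of an inch

print(round(drift_mm * 1000, 1))    # about 18.4 microns of radial drift
print(round(drift_uin))             # about 724 u-in
```

At the track pitches typical of stepper motor drives (a few thousand u-in), a drift of several hundred u-in is a meaningful fraction of a track width, which is why the periodic reformats described below were needed.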

The temperature sensitivity of stepper motor drives also may cause the "Monday morning blues." When the system is powered up cold (on Monday, for example), a 1701, 1790, or 10490 Power-On Self Test (POST) error occurs. If you leave the system on for about 15 minutes, the drive can come up to operating temperature, and the system then may boot normally. This problem sometimes occurs in reverse when the drive gets particularly warm, such as when a system is in direct sunlight or during the afternoon, when room temperature is highest. In that case, the symptom is a DOS error message with the familiar Abort, Retry, Ignore prompt.

Temperature-induced mistracking problems can be solved by reformatting the drive and restoring the data. Then the information is placed on the drive at the current head positions for each cylinder. Over time, the mistracking recurs, necessitating another reformat-and-restore operation, which is a form of periodic preventive maintenance for stepper motor drives. An acceptable interval for this maintenance is once a year (or perhaps twice a year, if the drive is extremely temperature-sensitive).

Reformatting a hard drive, because it requires a complete backup-and-restore operation, is inconvenient and time-consuming. To help with these periodic reformats, most low-level format programs offer a special reformat option that copies the data for a specific track to a spare location, reformats the track, and then copies the data back to the original track. When this type of format operation is finished, you don't have to restore your data because the program took care of that chore for you, one track at a time.


CAUTION: Never use a so-called nondestructive format program without first making a complete backup. This type of program actually does wipe out the data as it operates; "destructive-reconstructive" more accurately describes its operation. If a problem occurs with the power, the system, or the program (perhaps a bug that stops the program from finishing), not all the data will be restored properly, and some tracks may be wiped clean. Although such programs save you from having to restore data manually when the format is complete, they do not remove your obligation to perform a backup first.

Voice Coil

A voice coil actuator is found in all higher-quality hard disk drives, including most drives with capacities greater than 40M and virtually all drives with capacities exceeding 80M. Unlike the blind stepper motor positioning system, a voice coil actuator uses a feedback signal from the drive to accurately determine the head positions and to adjust them, if necessary. This system allows for significantly greater performance, accuracy, and reliability than traditional stepper motor actuators offered.

A voice coil actuator works by pure electromagnetic force. The construction of this mechanism is similar to that of a typical audio speaker, from which the term voice coil is derived. An audio speaker uses a stationary magnet surrounded by a voice coil connected to the speaker's paper cone. Energizing the coil causes the coil to move relative to the stationary magnet, which produces sound from the speaker cone. In a typical hard disk voice coil system, the electromagnetic coil is attached to the end of the head rack and placed near a stationary magnet. No contact is made between the coil and the magnet. As the electromagnetic coils are energized, they attract or repel the stationary magnet and move the head rack. Such systems are extremely quick and efficient, and usually much quieter than systems driven by stepper motors.

Unlike a stepper motor, a voice coil actuator has no click-stops, or detent positions; rather, a special guidance system stops the head rack above a particular cylinder. Because it has no detents, the voice coil actuator can slide the heads in and out smoothly to any position desired, much like the slide of a trombone. Voice coil actuators use a guidance mechanism called a servo to tell the actuator where the heads are in relation to the cylinders and to place the heads accurately at the desired positions. This positioning system often is called a closed loop, servo-controlled mechanism. Closed loop indicates that the index (or servo) signal is fed back to the positioning electronics; this loop sometimes is called a feedback loop because that feedback is used to position the heads accurately. Servo-controlled refers to the index or servo information that is used to dictate or control head-positioning accuracy.

A voice coil actuator with servo control is not affected by temperature changes, as a stepper motor is. When the temperature is cold and the platters have shrunk (or when the temperature is hot and the platters have expanded), the voice coil system compensates because it never positions the heads in predetermined track positions. Rather, the voice coil system searches for the specific track, guided by the prewritten servo information, and can position the head rack precisely above the desired track at that track's current position, regardless of the temperature. Because of the continuous feedback of servo information, the heads adjust to the current position of the track at all times. For example, as a drive warms up and the platters expand, the servo information allows the heads to "follow" the track. As a result, a voice coil actuator often is called a track following system.
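The track-following behavior described above can be sketched as a simple proportional feedback loop: each sample, the servo reads the position error from the prewritten servo information and nudges the actuator toward zero error, so the head tracks the drifting track center. The gain and drift values below are invented for illustration, not drawn from any real drive.

```python
# Toy closed-loop track-following servo. The track center drifts slowly
# (as it would with thermal expansion); the head applies a proportional
# correction to the error read from the servo information each sample.
def follow_track(samples=50, gain=0.5, drift_per_sample=0.02):
    track = 0.0   # true track-center position (arbitrary units)
    head = 0.0    # head position
    for _ in range(samples):
        track += drift_per_sample    # platter expands; track center moves
        error = track - head         # read from the prewritten servo code
        head += gain * error         # closed-loop correction
    return abs(track - head)         # residual tracking error

print(follow_track())   # small residual error: the head "follows" the track
```

Contrast this with the blind stepper system, where the equivalent loop has no error term at all and the drift accumulates unchecked.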

Two main types of voice-coil positioner mechanisms are available: linear voice coil actuators and rotary voice coil actuators.

The types differ only in the physical arrangement of the magnets and coils.

A linear actuator (see Figure 14.5) moves the heads in and out over the platters in a straight line, much like a tangential-tracking turntable. The coil moves in and out on a track surrounded by the stationary magnets. The primary advantage of the linear design is that it eliminates the head azimuth variations that occur with rotary positioning systems. (Azimuth refers to the angular measurement of the head position relative to the tangent of a given cylinder.) A linear actuator does not rotate the head as it moves from one cylinder to another, thus eliminating this problem.

FIG. 14.5  A linear voice coil actuator.

Although the linear actuator seems to be a good design, it has one fatal flaw: The devices are much too heavy. As drive performance has increased, the desire for lightweight actuator mechanisms has become very important. The lighter the mechanism, the faster it can be accelerated and decelerated from one cylinder to another. Because they are much heavier than rotary actuators, linear actuators were popular only for a short time; they are virtually nonexistent in drives manufactured today.

Rotary actuators (refer to Figure 14.4) also use stationary magnets and a movable coil, but the coil is attached to the end of an actuator arm, much like that of a turntable's tone arm. As the coil is forced to move relative to the stationary magnet, it swings the head arms in and out over the surface of the disk. The primary advantage of this mechanism is light weight, which means that the heads can be accelerated and decelerated very quickly, resulting in very fast average seek times. Because of the lever effect on the head arm, the heads move faster than the actuator, which also helps to improve access times.

The disadvantage with a rotary system is that as the heads move from the outer to inner cylinders, they are rotated slightly with respect to the tangent of the cylinders. This rotation results in an azimuth error and is one reason why the area of the platter in which the cylinders are located is somewhat limited. By limiting the total motion of the actuator, the azimuth error can be contained to within reasonable specifications. Virtually all voice coil drives today use rotary actuator systems.

Servo Mechanisms

Three servo mechanism designs have been used to control voice coil positioners over the years: wedge servo, embedded servo, and dedicated servo.

These designs are slightly different, but they accomplish the same basic task: They enable the head positioner to adjust continuously so that it is precisely placed above a given cylinder in the drive. The main difference among these servo designs is where the gray code information is actually written on the drive.

All servo mechanisms rely on special information that is only written to the disk when the disk is manufactured. This information usually is in the form of a special code called a gray code. A gray code is a special binary notational system in which any two adjacent numbers are represented by a code that differs in only one bit place or column position. This system makes it easy for the head to read the information and quickly determine its precise position. This guidance code can be written only when the drive is manufactured; the code is used over the life of the drive for accurate positional information.
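The defining property of a gray code, that adjacent values differ in exactly one bit, can be illustrated with a short sketch. This shows the generic reflected gray code, not the specific encoding any particular drive uses:

```python
def to_gray(n: int) -> int:
    """Map a binary count to its reflected gray-code equivalent."""
    return n ^ (n >> 1)

def bits_changed(a: int, b: int) -> int:
    """Count the bit positions in which two codes differ."""
    return bin(a ^ b).count("1")

# Any two adjacent codes differ in exactly one bit position, which is
# what lets the head determine its position quickly and unambiguously.
for i in range(8):
    print(f"{i} -> {to_gray(i):03b}")

assert all(bits_changed(to_gray(i), to_gray(i + 1)) == 1 for i in range(7))
```

For example, counting from 3 to 4 in plain binary flips three bits (011 to 100), but the gray codes differ in only one (010 to 110), so a head straddling two tracks can never read a wildly wrong position.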

The servo gray code is written at the time of manufacture by a special machine called a servowriter, which is basically a jig that mechanically moves the heads to a given reference position and then writes the servo information for that position. Many servowriters are themselves guided by a laser-beam reference that determines its own position by measuring distances in wavelengths of light. Because the servowriter must be capable of moving the heads mechanically, this process is done with the lid of the drive off or through special access ports on the HDA. After the servowriting is complete, these ports usually are covered with sealing tape. You often see these tape-covered holes on the HDA, usually accompanied by warnings that you will void the warranty if you remove the tape. Because servowriting exposes the interior of the drive, it must be done in a clean-room environment.

A servowriter is an expensive piece of machinery, costing up to $50,000 or more, and often must be custom-made for a particular make or model of drive. Some drive-repair companies have servowriting capability, which means that they can rewrite the servo information on a drive if it becomes damaged. A company that lacks a servowriter must send a drive with servo-code damage back to the drive manufacturer to have the servo information rewritten.

Fortunately, it is impossible to damage the servo information through any normal reading and writing to a hard disk. Drives are designed so that servo information cannot be overwritten, even during low-level formatting of a drive. One myth that has been circulating (especially with respect to IDE drives) is that you can damage the servo information by improper low-level formatting. This is not true. An improper low-level format may compromise the performance of the drive, but the servo information is totally protected and cannot be overwritten.

The track-following capabilities of a servo-controlled voice coil actuator eliminate the positioning errors that occur over time with stepper motor drives. Voice coil drives simply are not affected by conditions such as thermal expansion and contraction of the platters. In fact, many voice coil drives today perform a special thermal-recalibration procedure at predetermined intervals while they run. This procedure usually involves seeking the heads from cylinder 0 to some other cylinder one time for every head on the drive. As this sequence occurs, the control circuitry in the drive monitors how much the track positions have moved since the last time the sequence was performed, and a thermal calibration adjustment is calculated and stored in the drive's memory. This information then is used on every positioning operation to ensure the most accurate positioning possible.

Most drives perform the thermal-recalibration sequence every five minutes for the first half-hour that the drive is powered on and then once every 25 minutes after that. With some drives (Quantum drives, for example), this thermal-calibration sequence is very noticeable; the drive essentially stops what it is doing, and you hear rapid ticking for a second or so. When this happens, some people think that their drive is having a problem reading something and perhaps is conducting a read retry, but this is not true. Most of the newer intelligent drives (IDE and SCSI) employ this thermal-recalibration procedure for ultimate positioning accuracy.

As multimedia applications grew, thermal recalibration became a problem with some manufacturers' drives. The thermal recalibration sequence could interrupt a data transfer, which would make audio or video playback jitter. These companies released special A/V (Audio Visual) drives that hide the thermal recalibration sequences and never let them interrupt a transfer. Most of the newer IDE and SCSI drives are A/V capable, which means the thermal recalibration sequences will not interrupt a transfer such as a video playback.

While we are on the subject of automatic drive functions, most of the drives that perform thermal-recalibration sequences also automatically perform a function called a disk sweep. This procedure is an automatic head seek that occurs after the drive has been idle for a period of time (for example, nine minutes). The disk-sweep function moves the heads to a random cylinder in the outer portion of the platters, which is considered to be the high float-height area because the head-to-platter velocity is highest. Then, if the drive continues to remain idle for another period, the heads move to another cylinder in this area, and the process continues indefinitely as long as the drive is powered on.

The disk-sweep function is designed to prevent the head from remaining stationary above one cylinder in the drive, where friction between the head and platter eventually would dig a trench in the media. Although the heads are not in direct contact with the media, they are so close that the constant air pressure from the head floating above a single cylinder causes friction and excessive wear.

Wedge Servo

Some early servo-controlled drives used a technique called a wedge servo. In these drives, the gray-code guidance information is contained in a "wedge" slice of the drive in each cylinder immediately preceding the index mark. The index mark indicates the beginning of each track, so the wedge-servo information was written in the Pre-Index Gap, which is at the end of each track. This area is provided for speed tolerance and normally is not used by the controller. Figure 14.6 shows the servo-wedge information on a drive.

Some controllers, such as the Xebec 1210 that IBM used in the XT, had to be notified that the drive was using a wedge servo so that they could shorten the sector timing to allow for the wedge-servo area. If they were not correctly configured, these controllers would not work properly with such drives. Many people believed--erroneously--that the wedge-servo information could be overwritten in such cases by an improper low-level format. This is not the case, however; all drives using a wedge servo disable any write commands and take control of the head select lines whenever the heads are above the wedge area. This procedure protects the servo from any possibility of being overwritten, no matter how hard you try. If the controller tried to write over this area, the drive would prevent the write, and the controller would be unable to complete the format. Most controllers simply do not write to the Pre-Index Gap area and do not need to be configured specially for wedge-servo drives.

The only way that the servo information normally could be damaged is by a powerful external magnetic field (or perhaps by a head crash or some other catastrophe). In such a case, the drive would have to be sent in for repair and re-servoing.

One problem is that the servo information appears only one time every revolution, which means that the drive often needs several revolutions before it can accurately determine and adjust the head position. Because of this problem, the wedge servo never was a popular design; it no longer is used in drives.

FIG. 14.6  A wedge servo.

Embedded Servo

An embedded servo (see Figure 14.7) is an enhancement of the wedge servo. Instead of placing the servo code before the beginning of each cylinder, an embedded servo design writes the servo information before the start of each sector. This arrangement enables the positioner circuits to receive feedback many times in a single revolution, making the head positioning much faster and more precise. Another advantage is that every track on the drive has this positioning information, so each head can quickly and efficiently adjust position to compensate for any changes in the platter or head dimensions, especially for changes due to thermal expansion or physical stress.

Most drives today use an embedded servo to control the positioning system. As in the wedge servo design, the embedded-servo information is protected by the drive circuits, and any write operations are blocked whenever the heads are above the servo information. Thus, it is impossible to overwrite the servo information with a low-level format, as many people incorrectly believed.

Although the embedded servo works much better than the wedge servo because the servo feedback is available several times in a single disk revolution, a system that offered continuous servo feedback would be better still.

FIG. 14.7  An embedded servo.

Dedicated Servo

A dedicated servo is a design in which the servo information is written continuously throughout the entire track, rather than just one time per track or at the beginning of each sector. Unfortunately, if this procedure were performed on the entire drive, no room would be left for data. For this reason, a dedicated servo uses one side of one of the platters exclusively for the servo-positioning information. The term dedicated comes from the fact that this platter side is completely dedicated to the servo information and cannot contain any data. Although the dedicated-servo design may seem to be wasteful, none of the other platter sides carries any servo information, and you end up losing about the same amount of total disk real estate as with the embedded servo.

When a dedicated-servo drive is manufactured, one side of one platter is deducted from normal read/write usage; on this platter is recorded a special set of gray-code data that indicates proper track positions. Because the head that rests above this surface cannot be used for normal reading and writing, these marks can never be erased, and the servo information is protected, as in the other servo designs. No low-level format or other procedure can possibly overwrite the servo information.

When the drive is commanded to move the heads to a specific cylinder, the internal drive electronics use the signals received by the servo head to determine the position of the heads. As the heads are moved, the track counters are read from the dedicated servo surface. When the requested track is detected below the servo head, the actuator is stopped. The servo electronics then fine-tune the position so that before writing is allowed, the heads are positioned precisely above the desired cylinder. Although only one head is used for servo tracking, the other heads are attached to the same rigid rack so that if one head is above the desired cylinder, all the others will be as well.

One noticeable trait of dedicated servo drives is that they usually have an odd number of heads. For example, the Toshiba MK-538FB 1.2G drive on which I am saving this chapter has eight platters but only 15 read/write heads; the drive uses a dedicated-servo positioning system, and the 16th head is the servo head. You will find that virtually all high-end drives use a dedicated servo because such a design offers servo information continuously, no matter where the heads are located. This system offers the greatest possible positioning accuracy. Some drives even combine a dedicated servo with an embedded servo, but this type of hybrid design is rare.

Automatic Head Parking

When a hard disk drive is powered off, the spring tension in each head arm pulls the heads into contact with the platters. The drive is designed to sustain thousands of takeoffs and landings, but it is wise to ensure that the landing occurs at a spot on the platter that contains no data. Some amount of abrasion occurs during the landing and takeoff process, removing just a "micro puff" from the media; but if the drive is jarred during the landing or takeoff process, real damage can occur.

One benefit of using a voice coil actuator is automatic head parking. In a drive that has a voice coil actuator, the heads are positioned and held by magnetic force. When power is removed from the drive, the magnetic field that holds the heads stationary over a particular cylinder dissipates, enabling the head rack to skitter across the drive surface and potentially cause damage. In the voice coil design, therefore, the head rack is attached to a weak spring at one end and a head stop at the other end. When the system is powered on, the spring normally is overcome by the magnetic force of the positioner. When the drive is powered off, however, the spring gently drags the head rack to a park-and-lock position before the drive slows down and the heads land. On many drives, you can actually hear the "ting...ting...ting...ting" sound as the heads literally bounce-park themselves, driven by this spring.

On a drive with a voice coil actuator, you can activate the parking mechanism simply by turning off the system; you do not need to run a program to park or retract the heads. In the event of a power outage, the heads park themselves automatically. (The drives un-park automatically when the system is powered on.)

Some stepper motor drives (such as the Seagate ST-251 series drives) park their heads, but this function is rare among stepper motor drives. The stepper motor drives that do park their heads usually use an ingenious system whereby the spindle motor actually is used as a generator after the power to the drive is turned off. The back EMF (Electro Motive Force), as it is called, is used to drive the stepper motor to park the heads.

Air Filters

Nearly all hard disk drives have two air filters. One filter is called the recirculating filter, and the other is called either a barometric or breather filter. These filters are permanently sealed inside the drive and are designed never to be changed for the life of the drive, unlike the filters in many older mainframe hard disks. Many mainframe drives circulate air from outside the drive through a filter that must be changed periodically.

A hard disk on a PC system does not circulate air from inside to outside the HDA, or vice versa. The recirculating filter that is permanently installed inside the HDA is designed to filter only the small particles of media scraped off the platters during head takeoffs and landings (and possibly any other small particles dislodged inside the drive). Because PC hard disk drives are permanently sealed and do not circulate outside air, they can run in extremely dirty environments (see Figure 14.8).

FIG. 14.8  Air circulation in a hard disk.

The HDA in a hard disk is sealed but not airtight. The HDA is vented through a barometric or breather filter element that allows for pressure equalization (breathing) between the inside and outside of the drive. For this reason, most hard drives are rated by the drive's manufacturer to run in a specific range of altitudes, usually from -1,000 to +10,000 feet above sea level. In fact, some hard drives are not rated to exceed 7,000 feet while operating, because the air pressure would be too low inside the drive to float the heads properly. As the environmental air pressure changes, air bleeds into or out of the drive so that internal and external pressures are identical. Although air does bleed through a vent, contamination usually is not a concern, because the barometric filter on this vent is designed to filter out all particles larger than 0.3 micron (about 12 µ-in) to meet the specifications for cleanliness inside the drive. You can see the vent holes on most drives, which are covered internally by this breather filter. Some drives use even finer-grade filter elements to keep out even smaller particles.

I conducted a seminar in Hawaii several years ago, and several of the students were from the Mauna Kea astronomical observatory. They indicated that virtually all hard disks they had tried to use at the observatory site had failed very quickly, if they worked at all. This was no surprise, because the observatory is at the 13,800-foot peak of the mountain, and at that altitude, even people don't function very well! At the time, it was suggested that the students investigate solid-state (RAM) disks, tape drives, or even floppy drives as their primary storage medium. Since that time, IBM's Adstar division (which produces all IBM hard drives) introduced a line of rugged 3 1/2-inch drives that are in fact hermetically sealed (airtight), although they do have air inside the HDA. Because they carry their own internal air under pressure, these drives can operate at any altitude, and also can withstand extremes of shock and temperature. The drives are designed for military and industrial applications, such as aboard aircraft and in extremely harsh environments.

Hard Disk Temperature Acclimation

To allow for pressure equalization, hard drives have a filtered port to bleed air into or out of the HDA as necessary.

This breathing also enables moisture to enter the drive, and after some period of time, it must be assumed that the humidity inside any hard disk is similar to that outside the drive. Humidity can become a serious problem if it is allowed to condense--and especially if the drive is powered up while this condensation is present. Most hard disk manufacturers have specified procedures for acclimating a hard drive to a new environment with different temperature and humidity ranges, especially for bringing a drive into a warmer environment in which condensation can form. This situation should be of special concern to users of laptop or portable systems with hard disks. If you leave a portable system in an automobile trunk during the winter, for example, it could be catastrophic to bring the machine inside and power it up without allowing it to acclimate to the temperature indoors.

The following text and Table 14.8 are taken from the factory packaging that Control Data Corporation (later Imprimis and eventually Seagate) used to ship its hard drives: "If you have just received or removed this unit from a climate with temperatures at or below 50°F (10°C), do not open this container until the following conditions are met; otherwise, condensation could occur and damage to the device and media may result. Place this package in the operating environment for the time duration according to the temperature chart."

Table 14.8  Hard Disk Drive Environmental Acclimation Table

Previous Climate Temp.     Acclimation Time
+40°F (+4°C)               13 hours
+30°F (-1°C)               15 hours
+20°F (-7°C)               16 hours
+10°F (-12°C)              17 hours
  0°F (-18°C)              18 hours
-10°F (-23°C)              20 hours
-20°F (-29°C)              22 hours
-30°F (-34°C) or less      27 hours

As you can see from this table, a hard disk that has been stored in a colder-than-normal environment must be placed in the normal operating environment for a specified amount of time to allow for acclimation before it is powered on.
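The table's lookup logic can be sketched in a few lines. The function name and the choice to round in-between temperatures down to the next colder row (the conservative, longer wait) are illustrative assumptions, not part of the original packaging instructions:

```python
# Acclimation times from Table 14.8: (previous climate temp in °F, hours).
ACCLIMATION_TABLE = [
    (40, 13), (30, 15), (20, 16), (10, 17),
    (0, 18), (-10, 20), (-20, 22), (-30, 27),
]

def acclimation_hours(prev_temp_f: float) -> int:
    """Return hours to wait before power-on; colder climates wait longer.

    Temperatures between table rows fall through to the next colder row
    (the longer, safer wait); anything below -30°F uses the 27-hour maximum.
    """
    for temp, hours in ACCLIMATION_TABLE:
        if prev_temp_f >= temp:
            return hours
    return 27
```

For example, a drive brought in from a 35°F garage would be treated as the +30°F row and left to acclimate for 15 hours before power-on.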

Spindle Motors

The motor that spins the platters is called the spindle motor because it is connected to the spindle around which the platters revolve. Spindle motors in hard disks always are connected directly; no belts or gears are used. The motors must be free of noise and vibration; otherwise, they transmit to the platters a rumble that could disrupt reading and writing operations.

The motors also must be precisely controlled for speed. The platters on hard disks revolve at speeds ranging from 3,600 to 7,200 RPM or more, and the motor has a control circuit with a feedback loop to monitor and control this speed precisely. Because this speed control must be automatic, hard drives do not have a motor-speed adjustment. Some diagnostics programs claim to measure hard drive rotation speed, but all that these programs do is estimate the rotational speed by the timing at which sectors arrive.

There actually is no way for a program to measure hard disk rotational speed; this measurement can be made only with sophisticated testing equipment. Don't be alarmed if some diagnostic program tells you that your drive is spinning at an incorrect speed; most likely the program is wrong, not the drive. Platter rotation and timing information is simply not provided through the hard disk controller interface. In the past, software could give approximate rotational speed estimates by performing multiple sector read requests and timing them, but this was valid only when all drives had the same number of sectors per track (17) and they all spun at 3,600 RPM. Zoned Recording--combined with a variety of different rotational speeds found in modern drives, not to mention built-in buffers and caches--means that these calculation estimates cannot be performed accurately.
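The arithmetic behind those old software estimates is simple, and a small sketch shows why it only worked under the historical assumptions (a fixed 17 sectors per track, no cache, and a 3,600 RPM spindle) quoted in the text:

```python
SECTORS_PER_TRACK = 17       # historical fixed geometry, pre-Zoned Recording
SECONDS_PER_MINUTE = 60.0

def rpm_from_revolution_time(seconds_per_rev: float) -> float:
    """Convert one measured platter-revolution time into RPM."""
    return SECONDS_PER_MINUTE / seconds_per_rev

# With no cache, reading all 17 sectors of one track took exactly one
# revolution: a 3,600 RPM drive turns once every 60 / 3,600 s (~16.7 ms).
one_rev = SECONDS_PER_MINUTE / 3600.0
print(rpm_from_revolution_time(one_rev))   # 3600.0
```

On a modern drive, a cache hit can return the sectors in far less than one revolution and zoned tracks hold varying sector counts, so the measured time no longer corresponds to a revolution and the estimate breaks down.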

On most drives, the spindle motor is on the bottom of the drive, just below the sealed HDA. Many drives today, however, have the spindle motor built directly into the platter hub inside the HDA. By using an internal hub spindle motor, the manufacturer can stack more platters in the drive, because the spindle motor takes up no vertical space. This method allows for more platters than would be possible if the motor were outside the HDA.


NOTE: Spindle motors, particularly on the larger form-factor drives, can consume a great deal of 12-volt power. Most drives require two to three times the normal operating power when the motor first spins the platters. This heavy draw lasts only a few seconds, or until the drive platters reach operating speed. If you have more than one drive, you should try to sequence the start of the spindle motors so that the power supply does not receive such a large load from all the drives at the same time. Most SCSI and IDE drives have a delayed spindle-motor start feature.
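The benefit of sequencing spindle starts can be shown with a small worst-case calculation. The 4-amp spin-up and 2.5-amp running figures are hypothetical, in line with the full-height drive currents mentioned later in this chapter:

```python
# Hypothetical 12 V loads for three full-height drives.
SPINUP_AMPS = 4.0    # worst-case draw while the platters accelerate
RUNNING_AMPS = 2.5   # draw after the platters reach operating speed
DRIVES = 3

# All motors started at once: every drive draws spin-up current together.
simultaneous_peak = SPINUP_AMPS * DRIVES                     # 12.0 A

# Staggered start: each drive spins up while the earlier ones are
# already at speed, so the peak is one spin-up plus the running loads.
staggered_peak = SPINUP_AMPS + RUNNING_AMPS * (DRIVES - 1)   # 9.0 A

print(simultaneous_peak, staggered_peak)
```

Under these assumed figures, staggering trims the peak 12-volt demand by a quarter, which is why SCSI and IDE delayed-start features matter in multi-drive systems.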

Spindle Ground Strap

Some drives have a special grounding strap attached to a ground on the drive and resting on the center spindle of the platter spindle motor. This device is the single most likely cause of excessive drive noise.

The grounding strap usually is made of copper and often has a carbon or graphite button that contacts the motor or platter spindle. The grounding strap dissipates static generated by the platters as they spin through the air inside the HDA. If the platters generate static due to friction with the air, and if no place exists for this electrical potential to bleed off, static may discharge through the heads or the internal bearings in the motor. When static discharges through the motor bearings, it can burn the lubricants inside the sealed bearings. If the static charge discharges through the read/write heads, the heads can be damaged or data can be corrupted. The grounding strap bleeds off this static buildup to prevent these problems.

Because the motor spindle spins at full speed against the carbon contact button at the end of the ground strap, the button often wears, creating a flat spot. The flat spot causes the strap to vibrate and produce a high-pitched squeal or whine. The noise may come and go, depending on temperature and humidity. Sometimes, banging on the side of the machine can jar the strap so that the noise changes or goes away, but this is not the way to fix the problem. I am not suggesting that you bang on your system! (Most people mistake this noise for something much more serious, such as a total drive-motor failure or bearing failure, which rarely occur.)

If the spindle grounding strap vibrates and causes noise, you can remedy the situation in several ways:

- Dampen the strap's vibration by attaching rubber, foam, or a dab of silicone RTV to the back of the strap
- Lubricate the contact point with a conductive lubricant
- Remove the strap entirely (not recommended)

On some drives, the spindle motor strap is easily accessible. On other drives, you have to partially disassemble the drive by removing the logic board or other external items to get to the strap.

Of these suggested solutions, the first one is the best. The best way to correct this problem is to glue (or otherwise affix) some rubber or foam to the strap. This procedure changes the harmonics of the strap and usually dampens vibrations. Most manufacturers now use this technique on newly manufactured drives. An easy way to do this is to place some foam tape on the back side of the ground strap.

You also can use a dab of silicone RTV (room-temperature vulcanizing) rubber or caulk on the back of the strap. If you try this method, be sure to use low-volatile (noncorrosive) silicone RTV sealer, which commonly is sold at auto-parts stores. The noncorrosive silicone will be listed on the label as being safe for automotive oxygen sensors. This low-volatile silicone also is free from corrosive acids that can damage the copper strap and is described as low-odor because it does not have the vinegar odor usually associated with silicone RTV. Dab a small amount on the back side of the copper strap (do not interfere with the contact location), and the problem should be solved permanently.

Lubricating the strap is an acceptable, but often temporary, solution. You will want to use some sort of conducting lube, such as a graphite-based compound (the kind used on frozen car locks). Any conductive lubricant (such as moly or lithium) will work as long as it is conductive, but do not use standard oil or grease. Simply dab a small amount of lubricant onto the end of a toothpick, and place a small drop directly on the point of contact.

The last solution is not acceptable. Tearing off the strap eliminates the noise, but it has several possible ramifications. Although the drive will work (silently) without the strap, an engineer placed it there for a reason. Imagine those ungrounded static charges leaving the platters through the heads, perhaps in the form of a spark--possibly even damaging the TF heads. You should choose one of the other solutions.

I mention this last solution only because several people have told me that members of the tech-support staff of some of the hard drive vendors, and even of some manufacturers, told them to remove the strap, which--of course--I do not recommend.

Logic Boards

A disk drive, including a hard disk drive, has one or more logic boards mounted on it. The logic boards contain the electronics that control the drive's spindle and head actuator systems and that present data to the controller in some agreed-upon form. In some drives, the controller is located on the drive, which can save on a system's total chip count.

Many disk drive failures occur in the logic board, not in the mechanical assembly. (This statement does not seem logical, but it is true.) Therefore, you can repair many failed drives by replacing the logic board, not the entire drive. Replacing the logic board, moreover, enables you to regain access to the data on the failed drive--something that replacing the entire drive precludes.

Logic boards can be removed or replaced because they simply plug into the drive. These boards usually are mounted with standard screw hardware. If a drive is failing and you have a spare, you may be able to verify a logic-board failure by taking the board off the known good drive and mounting it on the bad one. If your suspicions are confirmed, you can order a new logic board from the drive manufacturer, but unless you have data you need to recover, it makes more sense to just buy a new drive, considering today's cost.

To reduce costs further, many third-party vendors also can supply replacement logic-board assemblies. These companies often charge much less than the drive manufacturers for the same components. (See the vendor list in Appendix B for vendors of drive components, including logic boards.)

Cables and Connectors

Most hard disk drives have several connectors for interfacing to the system, receiving power, and sometimes grounding to the system chassis. Most drives have at least these three types of connectors:

- Interface connector(s)
- Power connector
- Optional ground connector (tab)

Of these, the interface connectors are the most important, because they carry the data and command signals from the system to and from the drive. In many drive interfaces, the drive interface cables can be connected in a daisy chain or bus-type configuration. Most interfaces support at least two drives, and SCSI (Small Computer System Interface) supports up to seven in the chain. Some interfaces, such as ST-506/412 or ESDI (Enhanced Small Device Interface), use a separate cable for data and control signals. These drives have two cables from the controller interface to the drive. SCSI and IDE (Integrated Drive Electronics) drives usually have a single data and control connector. With these interfaces, the disk controller is built into the drive (see Figure 14.9).

FIG. 14.9  Typical hard disk connections (ST-506/412 or ESDI shown).

The different interfaces and cable specifications are covered in the sections on drive interfaces later in this chapter. You also will find connector pinout specifications for virtually all drive interfaces and cable connections in Chapter 15, "Hard Disk Interfaces."

The power connector usually is the same type that is used in floppy drives, and the same power-supply connector plugs into it. Most hard disk drives use both 5v and 12v power, although some of the smaller drives designed for portable applications use only 5v power. In most cases, the 12v power runs the spindle motor and head actuator, and the 5v power runs the circuitry. Make sure that your power supply can provide adequate power for the hard disk drives installed in your system.

The 12v-power consumption of a drive usually varies with the physical size of the unit. The larger the drive is and the more platters there are to spin, the more power is required. Also, the faster the drive spins, the more power required. For example, most of the 3 1/2-inch drives on the market today use roughly one-half to one-fourth the power (in watts) of the full-size 5 1/4-inch drives. Some of the very small (2 1/2- or 1.8-inch) hard disks barely sip electrical power and actually use 1 watt or less!

Ensuring an adequate power supply is particularly important with some systems, such as the original IBM AT. These systems have a power supply with three disk drive power connectors, labeled P10, P11, and P12. The three power connectors may seem to be equal, but the technical-reference manual for these systems indicates that 2.8 amps of 12v current is available on P10 and P11, and that only 1 amp of 12v current is available on P12. Because most full-height hard drives draw much more power than 1 amp, especially at startup, the P12 connector can be used only by floppy drives or half-height hard drives. Some 5 1/4-inch drives draw as much as 4 amps of current during the first few seconds of startup. These drives also can draw as much as 2.5 amps during normal operation. Most PC-compatible systems have a power supply with four or more disk drive power connectors that provide equal power.
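A quick check of a drive's startup draw against those AT connector ratings can be expressed as a simple lookup. The connector names and amperages come from the text above; the function itself is only an illustration:

```python
# 12 V current available per drive power connector on the original IBM AT.
AT_12V_LIMITS = {"P10": 2.8, "P11": 2.8, "P12": 1.0}

def fits_connector(connector: str, startup_amps: float) -> bool:
    """True if a drive's 12 V startup draw is within the connector rating."""
    return startup_amps <= AT_12V_LIMITS[connector]

# A half-height drive drawing 0.9 A fits P12; a drive drawing 2.5 A at
# startup needs P10 or P11.
print(fits_connector("P12", 0.9), fits_connector("P12", 2.5))
```

By this check, the full-height drives described above (up to 4 amps at spin-up) momentarily exceed even the P10/P11 rating, which is exactly why staggered spindle start and generous supply headroom matter.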

A grounding tab provides a positive ground connection between the drive and the system's chassis. In most systems, the hard disk drive is mounted directly to the chassis using screws so the ground wire is unnecessary. On some systems, the drives are installed on plastic or fiberglass rails, which do not provide proper grounding. These systems must provide a grounding wire plugged into the drive at this grounding tab. Failure to ground the drive may result in improper operation, intermittent failure, or general read and write errors.

Configuration Items

To configure a hard disk drive for installation in a system, several jumpers (and, possibly, terminating resistors) usually must be set or configured properly. These items will vary from interface to interface and often from drive to drive as well. A complete discussion of the configuration settings for each interface appears in the section "Hard Disk Installation Procedures" later in this chapter.

The Faceplate or Bezel

Many hard disk drives offer a front faceplate, or bezel (see Figure 14.10) as an option. A bezel usually is supplied as an option for the drive rather than as a standard item. In most cases today, the bezel is a part of the case and not the drive itself.

FIG. 14.10  A hard drive faceplate (bezel).

Older systems had the drive installed so that it was visible outside the system case; an optional bezel or faceplate covered the drive opening. Bezels often come in several sizes and colors to match various PC systems. Many faceplate configurations for 3 1/2-inch drives are available, including bezels that fit 3 1/2-inch drive bays as well as 5 1/4-inch drive bays. You even have a choice of colors (usually, black, cream, or white).

Some bezels feature a light-emitting diode (LED) that flickers when your hard disk is in use. The LED is mounted in the bezel; the wire hanging off the back of the LED plugs into the drive or perhaps the controller. In some drives, the LED is permanently mounted on the drive, and the bezel has a clear or colored window so that you can see the LED flicker while the drive is accessed.

One type of LED problem occurs with some older hard disk installations: If the drive has an LED, the LED may remain on continuously, as though it were a "power-on" light rather than an access light. This problem happens because the controllers in those systems have a direct connection for the LED, thus altering the drive LED function. Some controllers have a jumper that enables the controller to run the drive in what is called latched or unlatched mode. Latched mode means that the drive is selected continuously and that the drive LED remains lighted; in unlatched mode (to which we are more accustomed), the LED lights only when the drive is accessed. Check to see whether your controller has a jumper for changing this function; if so, you may be able to control the way the LED operates.

In systems in which the hard disk is hidden by the unit's cover, a bezel is not needed. In fact, using a bezel may prevent the cover from resting on the chassis properly, in which case the bezel will have to be removed. If you are installing a drive that does not have a proper bezel, frame, or rails to attach to the system, check Appendix B of this book; several listed vendors offer these accessories for a variety of drives.

Hard Disk Features

To make the best decision in purchasing a hard disk for your system, or to understand what differentiates one brand of hard disk from another, you must consider many features. This section examines the issues that you should consider when you evaluate drives: reliability, performance, shock mounting, cost, and capacity.

Reliability

When you shop for a drive, you may notice a feature called the Mean Time Between Failures (MTBF) described in the brochures. MTBF figures usually range from 20,000 hours to 500,000 hours or more. I usually ignore these figures, because they are theoretical rather than actual statistical values. Most drives that boast these figures have not even been manufactured for that length of time. One year of five-day work weeks with eight-hour days equals 2,080 hours of operation. If you never turn off your system for 365 days and run the full 24 hours per day, you operate your system 8,760 hours each year; a drive with a 500,000-hour MTBF rating is supposed to last (on average) 57 years before failing! Obviously, that figure cannot be derived from actual statistics because the particular drive probably has been on the market for less than a year.
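To put these MTBF claims in perspective, here is a small sketch of the duty-cycle arithmetic used above (the helper name is mine, not an industry term):

```python
HOURS_PER_YEAR = 365 * 24          # continuous, 24-hours-a-day operation

def mtbf_years(mtbf_hours, hours_per_year=HOURS_PER_YEAR):
    """Convert a rated MTBF in hours to average years before failure."""
    return mtbf_hours / hours_per_year

work_year = 52 * 5 * 8             # five 8-hour days a week
assert work_year == 2080           # the 2,080-hour office-use figure
assert HOURS_PER_YEAR == 8760      # the around-the-clock figure
assert round(mtbf_years(500_000)) == 57   # 57 years, as stated above
```

Even under the most pessimistic (continuous) duty cycle, a 500,000-hour rating implies a service life no one could have tested before the drive shipped.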

Statistically, for the MTBF figures to have real weight, you must take a sample of drives, run them for at least twice the rated figure, and measure how many drives fail in that time. To be really accurate, you would have to wait until all the drives fail and record the operating hours at each failure. Then you would average the running time for all the test samples to arrive at the average time before a drive failure. For a reported MTBF of 500,000 hours (common today), the test sample should be run for at least 1 million hours (114 years) to be truly accurate, yet the drive carries this specification on the day that it is introduced.

The bottom line is that I do not really place much emphasis on MTBF figures. Some of the worst drives that I have used boasted high MTBF figures, and some of the best drives have lower ones. These figures do not necessarily translate to reliability in the field, and that is why I generally place no importance on them.

Performance

When you select a hard disk, an important feature to consider is the performance (speed) of the drive. Hard disks come in a wide range of performance capabilities. As is true of many things, one of the best indicators of a drive's relative performance is its price. An old saying from the automobile-racing industry is appropriate here: "Speed costs money. How fast do you want to go?"

You can measure the speed of a disk drive in two ways:

Average seek time, normally measured in milliseconds (ms), is the average amount of time it takes to move the heads from one cylinder to another cylinder a random distance away. One way to measure this specification is to run many random track-seek operations and then divide the timed results by the number of seeks performed. This method provides an average time for a single seek.

The standard method that many drive manufacturers use to measure average seek time involves timing how long the heads take to move across one-third of the total cylinders. Average seek time depends only on the drive itself; the type of interface or controller has little effect on this specification. The rating is a gauge of the capabilities of the head actuator.


TIP: Be wary of benchmarks that claim to measure drive seek performance. Most IDE and SCSI drives use a scheme called sector translation, so any commands to the drive to move the heads to a specific cylinder do not actually cause the intended physical movement. This situation renders some benchmarks meaningless for those types of drives. SCSI drives also require an additional command, because the commands first must be sent to the drive over the SCSI bus. Even though these drives can have the fastest access times, because the command overhead is not factored in by most benchmarks, the benchmark programs produce poor performance figures for these drives.

A slightly different measurement, called average access time, involves another element, called latency. Latency is the average time (in milliseconds) that it takes for a sector to be available after the heads have reached a track. On average, this figure is half the time that it takes for the disk to rotate one time, which is 8.33 ms at 3,600 RPM. A drive that spins twice as fast would have half the latency. A measurement of average access time is the sum of the average seek time and latency. This number provides the average amount of time required before a randomly requested sector can be accessed.

Latency is a factor in disk read and write performance. Decreasing the latency increases the speed of access to data or files, accomplished only by spinning the drive platters faster. I have a drive that spins at 4,318 RPM, for a latency of 6.95 ms. Some drives spin at 7,200 RPM or faster, resulting in an even shorter latency time of only 4.17 ms. In addition to increasing performance where real-world access to data is concerned, spinning the platters faster also increases the data-transfer rate after the heads arrive at the desired sectors.
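Because latency is simply half of one rotation, it follows directly from the spindle speed. A minimal sketch of that arithmetic, using the spindle speeds quoted above (the function name is mine):

```python
def latency_ms(rpm):
    """Average rotational latency: half of one revolution, in milliseconds."""
    ms_per_revolution = 60_000 / rpm   # one minute = 60,000 ms
    return ms_per_revolution / 2

assert round(latency_ms(3600), 2) == 8.33   # standard 3,600 RPM drive
assert round(latency_ms(4318), 2) == 6.95   # the 4,318 RPM drive mentioned
assert round(latency_ms(7200), 2) == 4.17   # a 7,200 RPM drive
```

Doubling the spindle speed halves the latency, which is why faster-spinning drives improve real-world access times even when seek times are equal.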

The transfer rate probably is more important to overall system performance than any other specification. Transfer rate is the rate at which the drive and controller can send data to the system. The transfer rate depends primarily on the drive's HDA and secondarily on the controller. Transfer rate used to be more bound to the limits of the controller, meaning that drives that were connected to newer controllers often outperformed those connected to older controllers. This situation is where the concept of interleaving sectors came from. Interleaving refers to the ordering of the sectors so that they are not sequential, enabling a slow controller to keep up without missing the next sector.

Modern drives with integrated controllers are fully capable of keeping up with the raw drive transfer rate. In other words, they no longer have to interleave the sectors to slow the data for the controller.

Another performance issue is the raw interface performance, which, in IDE or SCSI drives, usually is far higher than any of the drives themselves are able to sustain. Be wary of quoted transfer specifications for the interface, because the specifications may have little effect on what the drive can actually put out. The drive interface simply limits the maximum theoretical transfer rate; the actual drive and controller place the real limits on performance.

In older ST-506/412 interface drives, you could sometimes double or triple the transfer rate by changing the controller, because many of the older controllers could not support a 1:1 interleave. When you change the controller to one that does support this interleave, the transfer rate will be equal to the drive's true capability.

To calculate the true transfer rate of a drive, you need to know several important specifications. The two most important specifications are the true rotational speed of the drive (in RPM) and the average number of physical sectors on each track. I say "average" because most drives today use a Zoned Recording technique that places different numbers of sectors on the inner and outer cylinders. The transfer rate on Zoned Recording drives always is fastest in the outermost zone, where the sectors-per-track count is highest. Also be aware that many drives (especially Zoned Recording drives) are configured with sector translation, so the number of sectors per track reported by the BIOS has little to do with physical reality. You need to know the true physical parameters, rather than what the BIOS thinks.

When you know these figures, you can use the following formula to determine the maximum transfer rate in millions of bytes per second (M/sec):

Maximum Data Transfer Rate (M/sec) = SPT x 512 bytes x RPM / 60 seconds / 1,000,000 bytes

For example, the ST-12551N 2G 3 1/2-inch drive spins at 7,200 RPM and has an average of 81 sectors per track. The maximum transfer rate for this drive is figured as follows:

81 x 512 x 7,200 / 60 / 1,000,000 = 4.98M/sec

Using this formula, you can calculate the true maximum sustained transfer rate of any drive.
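The same calculation is easy to express in a few lines of code. A sketch (the function name is mine; note that because sectors hold 512 bytes, the result comes out in millions of bytes per second):

```python
def max_transfer_rate(spt, rpm, bytes_per_sector=512):
    """Maximum sustained transfer rate in millions of bytes per second."""
    return spt * bytes_per_sector * rpm / 60 / 1_000_000

# ST-12551N: 7,200 RPM, average of 81 physical sectors per track
assert round(max_transfer_rate(81, 7200), 2) == 4.98
```

Plug in the true physical sectors per track for each zone of a Zoned Recording drive and you can see why the outermost zone transfers fastest.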

Cache Programs and Caching Controllers

At the software level, disk cache programs such as SMARTDRV (DOS) or VCACHE (Windows 95) can have a major effect on disk drive performance. These cache programs hook into the BIOS hard drive interrupt and then intercept the read and write calls to the disk BIOS from application programs and the device drivers of DOS.

When an application program wants to read data from a hard drive, the cache program intercepts the read request, passes the read request to the hard drive controller in the usual way, saves the data that was read in its cache buffer, and then passes the data back to the application program. Depending on the size of the cache buffer, numerous sectors are read into and saved in the buffer.

When the application wants to read more data, the cache program again intercepts the request and examines its buffers to see whether the data is still in the cache. If so, the data is passed back to the application immediately, without another hard drive operation. As you can imagine, this method speeds access tremendously and can greatly affect disk drive performance measurements.

Most controllers now have some form of built-in hardware buffer or cache that doesn't intercept or use any BIOS interrupts. Instead, the caching is performed at the hardware level and is invisible to normal performance-measurement software. Track read-ahead buffers originally were included in controllers to allow for 1:1 interleave performance. Some controllers have simply increased the sizes of these read-ahead buffers; others have added intelligence by making them a cache instead of a simple buffer.

Many IDE and SCSI drives have cache memory built directly into the drive. For example, the Seagate Hawk 4G drive on which I am saving this chapter has 512K of built-in cache memory. Other drives have even more built-in caches, such as the Seagate Barracuda 4G with 1M of integral cache memory. I remember when 640K was a lot of memory; now, tiny 3 1/2-inch hard disk drives have more than that built right in! These integral caches are part of the reason why most IDE and SCSI drives perform so well.

Although software and hardware caches can make a drive faster for routine transfer operations, a cache will not affect the true maximum transfer rate that the drive can sustain.

Interleave Selection

In a discussion of disk performance, the issue of interleave always comes up. Although traditionally this was more a controller performance issue than a drive issue, most modern hard disks now have built-in controllers (IDE and SCSI) that are fully capable of taking the drive data as fast as the drive can send it. In other words, virtually all modern IDE and SCSI drives are formatted with no interleave (sometimes expressed as a 1:1 interleave ratio).

Head and Cylinder Skewing

Most controllers today are capable of transferring data at a 1:1 sector interleave. This is especially true of controllers that are built in to IDE and SCSI drives. With a 1:1 interleave controller, the maximum data transfer rate can be maintained when reading and writing sectors to the disk. Although it would seem that there is no other way to further improve efficiency and the transfer rate, many people overlook two important factors that are similar to interleave: head and cylinder skewing.

When a drive is reading (or writing) data sequentially, first all of the sectors on a given track are read; then the drive must electronically switch to the next head in the cylinder to continue the operation. If the sectors are not skewed from head to head within the cylinder, no delay occurs after the last sector on one track and before the arrival of the first sector on the next track. Because all drives require some time (although a small amount) to switch from one head to another, and because the controller also adds some overhead to the operation, it is likely that by the time the drive is ready to read the sectors on the newly selected track, the first sector will already have passed by. By skewing the sectors from one head to another--that is, rotating their arrangement on the track so that the arrival of the first sector is delayed relative to the preceding track--you can ensure that no extra disk revolutions will be required when switching heads. This method provides the highest possible transfer rate when head switching is involved.

In a similar fashion, it takes considerable time for the heads to move from one cylinder to another. If the sectors on one cylinder were not skewed from those on the preceding adjacent cylinder, it is likely that by the time the heads arrive, the first sector will already have passed below them, requiring an additional revolution of the disk before reading of the new cylinder can begin. By skewing the sectors from one cylinder to the next, you can account for the cylinder-to-cylinder head-movement time and prevent any additional revolutions of the drive.

Head Skew

Head skew is the offset in logical sector numbering between the same physical sectors on two tracks below adjacent heads of the same cylinder. Skewing the sectors when switching from head to head within a single cylinder compensates for the head-switch and controller overhead time. Think of it as the surface of each platter being rotated as you traverse from head to head. This method permits continuous read or write operation across head boundaries without missing any disk revolutions, thus maximizing system performance.

To understand head skew, you first need to know the order in which tracks and sectors are read from a disk. If you imagine a single-platter (two-head) drive with 10 cylinders and 17 sectors per track, the first sector that will be read on the entire drive is Cylinder 0, Head 0, Sector 1. Following that, all the remaining sectors on that first track (Cylinder 0, Head 0) will be read until Sector 17 is reached. After that, the drive could either switch heads or move the heads to another cylinder.

Because head movement takes much longer than electronically selecting another head, all disk drives will select the subsequent heads on a cylinder before physically moving the heads to the next cylinder. Thus, the next sector to be read would be Cylinder 0, Head 1, Sector 1. Next, all the remaining sectors on that track are read (2 through 17); then, in our single-platter example, it is time to move the heads to the next cylinder. This sequence continues until the last sector on the last track is read--in this example, Cylinder 9, Head 1, Sector 17.

If you could take the tracks off a cylinder in this example and lay them on top of one another, the tracks might look like this:

Cylinder 0, Head 0: 1- 2- 3- 4- 5- 6- 7- 8- 9-10-11-12-13-14-15-16-17

Cylinder 0, Head 1: 1- 2- 3- 4- 5- 6- 7- 8- 9-10-11-12-13-14-15-16-17

After reading all the sectors on head 0, the controller switches heads to head 1 and continues the read (looping around to the beginning of the track). In this example, the sectors were not skewed at all between the heads, which means that the sectors are directly above and below one another in a given cylinder.

Now the platters in this example are spinning at 3,600 RPM, so one sector is passing below a head once every 980 millionths of a second! This obviously is a very small timing window. It takes some time for the head switch to occur (usually, 15 millionths of a second), plus some overhead time for the controller to pass the head-switch command. By the time the head switch is complete and you are ready to read the new track, sector 1 has already gone by! This problem is similar to interleaving when the interleave is too low. The drive is forced to wait while the platter spins around another revolution so that it can begin to pick up the track, starting with Sector 1.

This problem is easy to solve: Simply offset the sector numbering on subsequent tracks from those that precede them sufficiently to account for the head-switching and controller overhead time. That way, when Head 0, Sector 17 finishes and the head switches, Head 1, Sector 1 arrives right on time. The result looks something like this:

Cylinder 0, Head 0: 1- 2- 3- 4- 5- 6- 7- 8- 9-10-11-12-13-14-15-16-17

Cylinder 0, Head 1: 16-17- 1- 2- 3- 4- 5- 6- 7- 8- 9-10-11-12-13-14-15

Shifting the second track by two sectors provides time to allow for the head-switching overhead and is equivalent to a head-skew factor of 2. In normal use, a drive switches heads much more often than it switches physical cylinders, which makes head skew more important than cylinder skew. Throughput can rise dramatically when a proper head skew is in place. Different head skews can account for different transfer rates among drives that have the same number of sectors per track and the same interleave.

A nonskewed MFM drive, for example, may have a transfer rate of 380K/sec, whereas the transfer rate of a drive with a head skew of 2 could rise to 425K/sec. Notice that different controllers and drives have different amounts of overhead, so real-world results will be different in each case. In most cases, the head-switch time is very small compared with the controller overhead. As with interleaving, it is better to be on the conservative side to avoid additional disk revolutions.

Cylinder Skew

Cylinder skew is the offset in logical sector numbering between the same physical sectors on two adjacent tracks on two adjacent cylinders.

Skewing the sectors when switching tracks from one cylinder to the next compensates for the track-to-track seek time. In essence, all of the sectors on adjacent tracks are rotated with respect to each other. This method permits continuous read or write operations across cylinder boundaries without missing any disk revolutions, thus maximizing system performance.

Cylinder skew is a larger numerical factor than head skew because more overhead exists. It takes much longer to move the heads from one cylinder to another than simply to switch heads. Also, the controller overhead in changing cylinders is higher as well.

Following is a depiction of our example drive with a head-skew factor of 2 but no cylinder skew.

Cylinder 0, Head 0: 1- 2- 3- 4- 5- 6- 7- 8- 9-10-11-12-13-14-15-16-17

Cylinder 0, Head 1: 16-17- 1- 2- 3- 4- 5- 6- 7- 8- 9-10-11-12-13-14-15

Cylinder 1, Head 0: 8- 9-10-11-12-13-14-15-16-17- 1- 2- 3- 4- 5- 6- 7

In this example, the cylinder-skew factor is 8. Shifting the sectors on the subsequent cylinder by eight sectors gives the drive and controller time to be ready for sector 1 on the next cylinder and eliminates an extra revolution of the disk.
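The sector orderings shown above are easy to generate: each track is simply the sector list rotated right by the cumulative skew, where every head switch adds the head-skew factor and every cylinder change adds the cylinder-skew factor. A sketch of that bookkeeping (all names are mine) for the example drive:

```python
def skewed_track(spt, offset):
    """Sector numbering for one track, rotated right by `offset` sectors."""
    o = offset % spt
    base = list(range(1, spt + 1))
    return base[-o:] + base[:-o] if o else base

SPT, HEADS, HEAD_SKEW, CYL_SKEW = 17, 2, 2, 8

def track(cyl, head):
    """Cumulative skew: count head switches and seeks made to reach this track."""
    head_switches = cyl * (HEADS - 1) + head
    return skewed_track(SPT, head_switches * HEAD_SKEW + cyl * CYL_SKEW)

assert track(0, 0)[:3] == [1, 2, 3]     # Cylinder 0, Head 0
assert track(0, 1)[:3] == [16, 17, 1]   # Cylinder 0, Head 1 (head skew 2)
assert track(1, 0)[:3] == [8, 9, 10]    # Cylinder 1, Head 0 (cylinder skew 8)
```

The three assertions reproduce the three track layouts printed above, confirming that Cylinder 1, Head 0 begins with Sector 8.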

Calculating Skew Factors

You can derive the correct head-skew factor from the following information and formula:

Head skew = (head-switch time/rotational period) x SPT + 2

In other words, the head-switching time of a drive is divided by the time required for a single rotation. The result is multiplied by the number of sectors per track, and 2 is added for controller overhead. The result should then be rounded to the nearest whole number (for example, 2.3 = 2, 2.5 = 3).

You can derive the correct cylinder-skew factor from the following information and formula:

Cylinder skew = (track-to-track seek time/rotational period) x SPT + 4

In other words, the track-to-track seek time of a drive is divided by the time required for a single rotation. The result is multiplied by the number of sectors per track, and 4 is added for controller overhead. Round the result to the nearest whole number (for example, 2.3 = 2, 2.5 = 3).

The following example uses typical figures for an ESDI drive and controller. If the head-switching time is 15 us (microseconds), the track-to-track seek time is 3 ms, the rotational period is 16.67 ms (3,600 RPM), and the drive has 53 physical sectors per track:

Head skew = (0.015/16.67) x 53 + 2 = 2 (rounded)

Cylinder skew = (3/16.67) x 53 + 4 = 14 (rounded)

If you do not have the necessary information to make the calculations, contact the drive manufacturer for recommendations. Otherwise, you can make the calculations by using conservative figures for the head-switch and track-to-track seek times. If you are unsure, just as with interleaving, it is better to be on the conservative side, which minimizes the possibility of additional rotations when reading sequential information on the drive. In most cases, a default head skew of 2 and a cylinder skew of 16 work well.
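Both formulas have the same shape, so a single helper covers them. A sketch using round-to-nearest, which matches the worked figures above (the function name is mine):

```python
def skew(overhead_ms, rotation_ms, spt, controller_overhead):
    """Skew factor: overhead expressed as a fraction of a revolution,
    converted to sectors, plus a fixed controller-overhead allowance."""
    return round(overhead_ms / rotation_ms * spt + controller_overhead)

ROTATION_MS = 16.67   # one revolution at 3,600 RPM

# ESDI example: 15 us head switch, 3 ms track-to-track seek, 53 SPT
assert skew(0.015, ROTATION_MS, 53, 2) == 2    # head skew
assert skew(3.0,   ROTATION_MS, 53, 4) == 14   # cylinder skew
```

Substituting a slower head-switch or seek time shows why erring on the conservative (larger) side is safer than undershooting.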

Because factors such as controller overhead can vary from model to model, sometimes the only way to figure out the best value is to experiment. You can try different skew values and then run data-transfer rate tests to see which value results in the highest performance. Be careful with these tests, however; many disk benchmark programs will only read or write data from one track or one cylinder during testing, which totally eliminates the effect of skewing on the results. The best type of benchmark to use for this testing is one that reads and writes large files on the disk.

Most real (controller register level) low-level format programs are capable of setting skew factors. Those programs that are supplied by a particular controller or drive manufacturer usually are already optimized for their particular drives and controllers, and may not allow you to change the skew. One of the best general-purpose register-level formatters on the market that gives you this flexibility is the Disk Manager program by Ontrack. I highly recommend this program, which you will find listed in Appendix B.

I normally do not recommend programs such as Norton Calibrate and Gibson Spinrite for re-interleaving drives, because these programs work only through the BIOS INT 13h functions rather than directly with the disk controller hardware. Thus, these programs cannot set skew factors properly, and using them actually may slow a drive that already has optimum interleave and skew factors.

Notice that most IDE and SCSI drives have their interleave and skew factors set to their optimum values by the manufacturer. In most cases, you cannot even change these values; in the cases in which you can, the most likely result is a slower drive. For this reason, most IDE drive manufacturers recommend against low-level formatting their drives. With some IDE drives, unless you use the right software, you might alter the optimum skew settings and slow the drive. IDE drives that use Zoned Recording cannot ever have the interleave or skew factors changed, and as such, they are fully protected. No matter how you try to format these drives, the interleave and skew factors cannot be altered. The same can be said for SCSI drives.

Shock Mounting

Most hard disks manufactured today have a shock-mounted HDA, which means that a rubber cushion is placed between the disk drive body and the mounting chassis. Some drives use more rubber than others, but for the most part, a shock mount is a shock mount. Some drives do not have a shock-isolated HDA due to physical or cost constraints. Be sure that the drive you are using has adequate shock-isolation mounts for the HDA, especially if you are using the drive in a portable PC system or in a system in which environmental conditions are less favorable than in a normal office. I generally do not recommend a drive that lacks at least some form of shock mounting.

Cost

The cost of hard disk storage recently has fallen to 10 cents per megabyte or less. You can purchase 4G drives for under $400. That places the value of the 10M drive that I bought in 1983 at about $1. (Too bad--I paid $1,800 for it at the time!)

Of course, the cost of drives continues to fall, and eventually, even 10 cents per megabyte will seem expensive. Because of the low costs of disk storage today, not many drives that are less than 1G are even being manufactured.

Capacity

Four figures commonly are used in advertising drive capacity:

Unformatted capacity, in millions of bytes

Formatted capacity, in millions of bytes

Unformatted capacity, in megabytes

Formatted capacity, in megabytes

Most manufacturers of IDE and SCSI drives now report only the formatted capacities, because these drives are delivered preformatted. Most of the time, advertisements refer to the unformatted or formatted capacity in millions of bytes, because these figures are larger than the same capacity expressed in megabytes. This situation generates a great deal of confusion when the user runs FDISK (which reports total drive capacity in megabytes) and wonders where the missing space is. This question ranks as one of the most common questions that I hear during my seminars. Fortunately, the answer is easy; it only involves a little math to figure it out.

Perhaps the most common questions I get are concerning "missing" drive capacity. Consider the following example: "I just installed a new Western Digital AC2200 drive, billed as 212M. When I entered the drive parameters (989 cylinders, 12 heads, 35 sectors per track), both the BIOS Setup routine and FDISK report the drive as only 203M! What happened to the other 9M?"

The answer is only a few calculations away. By multiplying the drive specification parameters, you get this result:

Cylinders: 989
Heads: 12
Sectors per track: 35
Bytes per sector: 512
Total bytes: 212,674,560
Millions of bytes (M): 212.67
Megabytes (Meg): 202.82

The result figures to a capacity of 212.67M, or 202.82Meg. Drive manufacturers usually report drive capacity in millions of bytes, whereas your BIOS and FDISK usually report the capacity in megabytes. One megabyte equals 1,048,576 bytes (1,024K, where each kilobyte is 1,024 bytes). So the bottom line is that this 212.67M drive also is a 202.82Meg drive! What is really confusing is that there is no industry-wide accepted way of differentiating binary megabytes from decimal ones. Officially, both are abbreviated as M, so it is often hard to tell which one is being reported. Drive manufacturers usually report decimal megabytes, because they result in larger, more impressive-sounding numbers! One additional item to note about this particular drive is that it is a Zoned Recording drive, so the actual physical parameters are different. Physically, this drive has 1,971 cylinders and four heads; however, the total number of sectors on the drive (and, therefore, the capacity) is the same no matter how you translate the parameters.
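The little bit of math involved is shown below, using the logical parameters of the AC2200 example (variable names are mine):

```python
CYLINDERS, HEADS, SPT, BYTES_PER_SECTOR = 989, 12, 35, 512

total_bytes = CYLINDERS * HEADS * SPT * BYTES_PER_SECTOR
assert total_bytes == 212_674_560

# The manufacturer's figure: decimal millions of bytes (M)
assert round(total_bytes / 1_000_000, 2) == 212.67

# What the BIOS and FDISK report: binary megabytes (Meg)
assert round(total_bytes / 1_048_576, 2) == 202.82
```

The "missing" 9M is nothing but the difference between dividing by 1,000,000 and dividing by 1,048,576.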

Although Western Digital does not report the unformatted capacity of this particular drive, unformatted capacity usually works out to be about 19 percent larger than a drive's formatted capacity. The Seagate ST-12550N Barracuda 2G drive, for example, is advertised as having the following capacities:

Unformatted capacity: 2,572.00M
Unformatted capacity: 2,452.85Meg
Formatted capacity: 2,139.00M
Formatted capacity: 2,039.91Meg

Each of these four figures is a correct answer to the question "What is the storage capacity of the drive?" As you can see, however, the numbers are very different. In fact, yet another number could be used: divide the 2,039.91Meg by 1,024, and the drive's capacity is 1.99G! So when you are comparing or discussing drive capacities, make sure that you are working with a consistent unit of measure, or your comparisons will be meaningless.
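Starting from the formatted capacity in bytes, all three of the formatted figures fall out of simple unit conversions. A sketch (constants are mine, derived from the advertised 2,139.00M figure):

```python
formatted_bytes = 2_139_000_000      # ST-12550N formatted capacity, in bytes

# Decimal millions of bytes (M), binary megabytes (Meg), binary gigabytes (G)
assert round(formatted_bytes / 1_000_000, 2) == 2139.0
assert round(formatted_bytes / 1_048_576, 2) == 2039.91
assert round(formatted_bytes / 1_073_741_824, 2) == 1.99
```

One drive, three defensible numbers: 2,139M, 2,039.91Meg, and 1.99G, depending solely on the divisor chosen.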

Specific Recommendations

If you are going to add a hard disk to a system today, I can give you a few recommendations. For the drive interface, there really are only two types to consider: IDE and SCSI.

SCSI offers great expandability, cross-platform compatibility, high capacity, performance, and flexibility. IDE is less expensive than SCSI and also offers a very high-performance solution, but expansion, compatibility, capacity, and flexibility are more limited compared with SCSI. Even so, I usually recommend IDE for most people, because they will not be running more than two hard drives and may not need SCSI for other devices. SCSI offers some additional performance potential with a multithreaded operating system such as Windows NT or OS/2, but IDE offsets this with a lower-overhead, direct system-bus attachment.


NOTE: The current IDE standard is ATA-2 (AT Attachment), otherwise called Fast ATA-2 or Enhanced IDE. SCSI-2 is the current SCSI standard, with SCSI-3 still on the drawing board.




Macmillan Computer Publishing USA

© Copyright, Macmillan Computer Publishing. All rights reserved.