Disk geometries

Hi,

I'm wondering what factors drive disk geometries (and, thus, capacities). I.e., what makes certain sizes common and others less common (e.g., you rarely, if ever, saw 7GB disks).

Of course, the magnetics determine the size of the magnetic domains that can be resolved, etc. But I don't see anything else in the design of a disk system that forces capacities to the values that are commonplace.

E.g., semiconductor memory has reasons for wanting to be sized in powers of two -- there is no manufacturing advantage to making a 5KB device (e.g.). If a foundry can improve its process, it can make a *smaller* 4KB device and (hopefully) improve market share, profit margin, etc. that way. Ultimately, make an 8KB device that's the size of the old 4KB device, etc.

But, disk platters are fixed sizes (?). There are no economies (?) to be gained by shrinking platter sizes as your magnetics improve. There are no "standards" (e.g., like with removable media) that force the magnetic domains to be of a particular size.

(you can pursue this reasoning to considerable depth)

So, why don't we see disks with 7% more capacity as magnetics shrink by 7% (e.g.)? Or, is it just not economical to retool for anything less than a 2X capacity increase?

--don

Reply to
D Yuniskis

Good questions. I can say that at one point drives did increase in smaller percentages. My earliest hard drive that I purchased myself was 10M (I'd used smaller drives before.) The next step up, a year later, was 20M (and 30M, optionally.) Then 40M. Then 45M. Then 60M. Then 80M. Then 100M and 120M and 150M and 180M. Roughly about that time, the market quickly switched from very expensive MFM (well, very expensive for the larger sizes, anyway) to the cheaper, less repairable, but faster IDE drives, and the capacities rose comparatively quickly. But I definitely remember even then 1.2G, 1.6G, 1.7G and 1.76G, 2G and 2.1G, and so on. So your suggestion about incremental increases being a reasonable expectation is consistent with _some_ of the history I recall.

Jon

Reply to
Jon Kirwan

If you look at the actual amount of usable space on a disk, it is quite different from model to model, and even from version to version, although the "rated" capacity is the same.

VLV

Reply to
Vladimir Vassilevsky

But those changes seem to be "in the noise". E.g., as if an extra cylinder or two were added.

In the PC market, you start to see 50% increases being common (1TB, 1.5TB, 2TB, etc.) instead of 2X. But, nothing finer grained than this (and, marketing to The Unwashed Masses, you would think there would be a push to pitch a 1.1TB disk over a competitor's 1.0TB drive -- perhaps this is what drives the "50%" number?)

I.e., do *all* the disk fabs use the same magnetics, coatings, etc.? One would think there would be more "natural variation" in product offerings (?)

Reply to
D Yuniskis

Number of surfaces. Three platters gives 1 through 6 possible surfaces. Delete iron or heads as required by the marketeering department.

There is. Small disks are more stable than large ones. That's why 3.5" disks overtook 5.25" disks.

When 2X is next month, why bother. ;-)

Reply to
krw

When I was working with PCs, I found that we could optimize some disks beyond the rated capacity just by fiddling with the prime factors of the number of actual addressable sectors on the drive.

Reply to
Richard Henry

This seems a contradiction in terms. If you are factoring the total sector count and coming up with a different "physical" geometry, the capacity remains the same (since the product of all of the factors is a constant).

With modern drives (last 10+ years?), the "published" geometry bears little resemblance to the actual physical geometry due to things like zoned bit recording (ZBR).

Most drives now use LBA -- either explicitly or implicitly (the days of a drive *requiring* an INITIALIZE command to tell *it* what its physical geometry is are long gone :> )
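(If it helps anyone: the classic LBA <-> CHS mapping is just mixed-radix arithmetic. A minimal sketch in Python -- the geometry values here are illustrative, not any particular drive's:

    # Classic LBA <-> CHS mapping for a *logical* geometry.
    # HEADS and SECTORS below are illustrative values only.
    HEADS = 16      # heads (surfaces) per cylinder
    SECTORS = 63    # sectors per track (CHS sector numbers are 1-based)

    def chs_to_lba(c, h, s):
        # (cylinder, head, sector) -> logical block address
        return (c * HEADS + h) * SECTORS + (s - 1)

    def lba_to_chs(lba):
        # invert the mapping
        c, rem = divmod(lba, HEADS * SECTORS)
        h, s = divmod(rem, SECTORS)
        return (c, h, s + 1)

    assert lba_to_chs(chs_to_lba(100, 5, 42)) == (100, 5, 42)

The point being: the "geometry" is purely a naming convention for block numbers; with ZBR the drive's real track layout is something else entirely.)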

Reply to
D Yuniskis

Ah, the fond memories of manually entering the known-bad sectors list hand-written on the top of the drive into the formatting utility so as to let the OS avoid storing any files there... :-)

Of course, that was in the days of, e.g., 20MB drives. I bet there are already thousands of bad sectors in a 1TB drive the day it leaves the factory!

Reply to
Joel Koltner

In the old days, when drives were limited to 8 GB by the BIOS disk-description parameters, it was sometimes possible to squeeze a little more into them by careful selection of the parameters. The manufacturer's default or recommended settings did not always yield the largest recording volume.
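As a toy illustration (assuming the usual ceilings of 1024 cylinders, 255 heads, 63 sectors/track and 512-byte sectors -- the drive size below is just the old ATA CHS ceiling, 16383x16x63), you can brute-force the (H, S) pair that wastes the least space:

    # Toy search for the CHS parameters that waste the least capacity.
    # TOTAL_SECTORS is an illustrative figure, not a real model's.
    SECTOR_BYTES = 512
    TOTAL_SECTORS = 16_514_064   # 16383 x 16 x 63, ~8.4 GB

    def usable(h, s, max_cyl=1024):
        # sectors addressable with h heads, s sectors/track
        c = min(max_cyl, TOTAL_SECTORS // (h * s))
        return c * h * s

    best, h, s = max((usable(h, s), h, s)
                     for h in range(1, 256) for s in range(1, 64))
    waste_kb = (TOTAL_SECTORS - best) * SECTOR_BYTES // 1024
    print(f"best: H={h} S={s}, wasting {waste_kb} KB")

The default parameters a BIOS (or the drive label) suggested were often not this optimum -- hence the squeezing.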

Reply to
Richard Henry

Grab a brand new Seagate drive, hook it up to a Linux box, run smartctl -a, walk away for a couple hours, then check the numbers again -- all by itself the drive busily does its runtime calibration and stuff; lots of soft errors in the first few hours of power-on time.

So these days I leave the drive to itself for a few hours before installing the OS; it seems much more reliable after that. The old argument used to be to let the drive warm up properly before formatting.

I also do a surface write of zeroes prior to formatting -- superstition, perhaps? (dd if=/dev/zero of=/dev/sd$new_drive bs=1M). It gives the controller a chance to remap iffy sectors before they've got my data on them.

Grant.

--
http://bugs.id.au/
Reply to
Grant

When the last mechanical memory drive has ceased production, the true future will have arrived.

--
Dirk

http://www.transcendence.me.uk/ - Transcendence UK
http://www.blogtalkradio.com/onetribe - Occult Talk Show
Reply to
Dirk Bruere at NeoPax

It is gonna be a while yet.

Reply to
Archimedes' Lever

I periodically check the G-lists on my SCSI drives in an attempt to give me a heads-up re: potential failures. Much the same way that SMART tries to work on IDE drives.

Of course, SMART hasn't proven to be very *smart* so I suspect my efforts are probably just "self-reassuring" :-/ (though, I think, G-list additions *are* statistically significant as predictors)
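(For the curious, a quick way to poll that count under Linux, assuming smartmontools is installed. The "grown defect list" line matched here is what my smartctl prints for SCSI drives -- treat the exact wording as an assumption and adjust the pattern for your version:

    # Poll the grown-defect-list count smartctl reports for a SCSI drive.
    # The output line matched below may vary across smartctl versions.
    import re
    import subprocess

    def glist_count(device="/dev/sda"):
        out = subprocess.run(["smartctl", "-a", device],
                             capture_output=True, text=True).stdout
        m = re.search(r"grown defect list:\s*(\d+)", out)
        return int(m.group(1)) if m else None

    print(glist_count())

A count that creeps upward between checks is exactly the heads-up I'm after.)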

Could be. OTOH, materials have improved, platters are smaller.

Reply to
D Yuniskis

Most drives do this. A/V drives skip it (or dramatically scale it back) as it impacts throughput when you are operating the drive in a near-continuous fashion.

Reply to
D Yuniskis

Is core still being made?

--
Dirk

http://www.transcendence.me.uk/ - Transcendence UK
http://www.blogtalkradio.com/onetribe - Occult Talk Show
Reply to
Dirk Bruere at NeoPax

IIRC, early shuttles used it. The military may still use it (hardened).

Reply to
D Yuniskis

Quite likely, but given the pickiness of some here, I thought I'd stick to recounting my own experience :)

Maybe the A/V drives have been 'run in' at the factory? My impression is that the drives are not finely calibrated before shipment; they self-calibrate in use. Probably because the things need to compensate for temperature variations and mechanical wear in normal use anyway?

Grant.

--
http://bugs.id.au/
Reply to
Grant

Because, like cannon, for decades only one or two companies drove new HD design and development. That was IBM. When it was huge, multi-disc 5.25" form factor stuff, platter capacity drove it. Now that areal density is so high, reliability, thermal, and power issues drive it into smaller form factors.

They were always breaking new records in areal density, and that allowed for the reduced platter diameters which raised MTBF and lowered heat, so everyone was 'buying' IBM's designs for MR recording technology, and now on into the perpendicular-orientation heads. IBM has ended their foray and sold that division to Hitachi. Now they and Seagate have the best drives. The rest are mass-volume OEM player types (WD).

So mainly, smaller is better because of heat and power concerns. The 15k RPM drives use something like 1.2" and 1.5" 'platters'.

SAS, or Serial Attached SCSI, is the future wave happening now.

We will be carrying drives around soon enough. Stop buying memory sticks and keep the mini hard drive alive!

Reply to
Archimedes' Lever

Sorry, I wasn't as complete in my explanation as I should have been.

Reply to
D Yuniskis

Our ability to record across more and more of that area, and to pack bits tighter and tighter across the surface, determines the actual bit count -- not merely the area itself.

Heads have gotten better, electronics has gotten better at modulating them, motor controllers have gotten better at moving them more precisely, and spindle and head arm bearings have gotten better at moving without 'bumps' in their motion.

THEN they went and flipped the recording axis 90 degrees and quadrupled what they were already doing (perpendicular recording).
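Back-of-the-envelope, capacity per surface is just areal density times the recordable annulus. Every number below is made up for illustration:

    # Per-surface capacity ~ areal density x recordable band area.
    # All figures are illustrative assumptions, not a real product's.
    import math

    areal_density = 250            # Gbit per square inch (hypothetical)
    outer_r = 3.5 / 2 * 0.95       # usable outer radius on a 3.5" platter
    inner_r = 0.75                 # clamp/landing zone excluded

    band = math.pi * (outer_r**2 - inner_r**2)   # square inches
    gbits = areal_density * band
    print(f"~{gbits / 8:.0f} GB per surface")    # x2 surfaces per platter

Double the linear density both along *and* across the tracks and you quadruple that figure -- which is the kind of jump perpendicular recording bought.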

Reply to
Archimedes' Lever
