Multithreaded disk access

As a *rough* figure, what would you expect the bandwidth of a disk drive (spinning rust) to do as a function of the number of discrete files being accessed concurrently?

E.g., if you can monitor the rough throughput of each stream and sum them, will they sum to 100% of the drive's bandwidth? 90%? 110%? Etc.

[Note that drives have read-ahead and write caches so the speed of the media might not bleed through to the application layer. And, filesystem code also throws a wrench in the works. Assume caching in the system is disabled/ineffective.]

Said another way, what's a reasonably reliable way of determining when you are I/O bound by the hardware and when more threads won't result in more performance?

Reply to
Don Y

You know that you can't actually get data off the media faster than the fundamental data rate of the media.

As you mention, cache can give an apparent rate faster than the media, but you seem to be willing to assume that caching doesn't affect your rate, and each chunk will only be returned once.

Pathological access patterns can reduce this rate dramatically; the worst case can result in rates of only a few percent of it, if you force significant seeks between each sector read (and overload the buffering so it can't hold larger reads for a given stream).

Non-pathological access can often achieve near 100% of the media rate.
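To make the contrast concrete, a minimal sketch, assuming Linux/POSIX: it times the same number of block reads issued sequentially and at random offsets over one file. O_DIRECT (Linux-specific) bypasses the page cache, matching the "caching disabled" premise. The file name is made up, the file is assumed to be at least NREAD*BLK bytes, and error handling is trimmed.

/* timed_pass(): NREAD aligned block reads, sequential or random. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLK   4096   /* one block per read; O_DIRECT needs alignment */
#define NREAD 4096   /* reads per pass */

static double timed_pass(int fd, off_t span, int randomize)
{
    void *buf;
    struct timespec t0, t1;

    if (posix_memalign(&buf, BLK, BLK))
        return -1.0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < NREAD; i++) {
        off_t off = randomize ? (rand() % (span / BLK)) * (off_t)BLK
                              : (off_t)i * BLK;
        pread(fd, buf, BLK, off);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(buf);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    int fd = open("testfile.bin", O_RDONLY | O_DIRECT);  /* name is made up */
    if (fd < 0) { perror("open"); return 1; }
    off_t span = lseek(fd, 0, SEEK_END);
    printf("sequential: %.2f s\n", timed_pass(fd, span, 0));
    printf("random:     %.2f s\n", timed_pass(fd, span, 1));
    close(fd);
    return 0;
}

On spinning media the random pass typically comes in at a small fraction of the sequential rate, which is the pathological case described above.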

The best test of whether you are I/O bound: if the I/O system is constantly in use, and every I/O request has another pending when it finishes, then you are totally I/O bound.
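On Linux, one way to apply that test is to watch the "time spent doing I/O" counter the kernel keeps per device in /proc/diskstats: if it advances at nearly wall-clock rate over your sampling window, the device never went idle. A sketch (the device name "sda" and the 5-second window are assumptions):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Return the ms-doing-I/O counter for one device, or -1 on failure. */
static long io_ms(const char *dev)
{
    FILE *f = fopen("/proc/diskstats", "r");
    char line[512], name[32];
    long rd, rdm, rds, rdt, wr, wrm, wrs, wrt, inflight, iot;
    long result = -1;

    if (!f) return -1;
    while (fgets(line, sizeof line, f)) {
        /* skip major/minor, read device name plus the ten counters */
        int n = sscanf(line,
                       " %*d %*d %31s %ld %ld %ld %ld %ld %ld %ld %ld %ld %ld",
                       name, &rd, &rdm, &rds, &rdt,
                       &wr, &wrm, &wrs, &wrt, &inflight, &iot);
        if (n == 11 && strcmp(name, dev) == 0) { result = iot; break; }
    }
    fclose(f);
    return result;
}

int main(void)
{
    long a = io_ms("sda");
    sleep(5);                      /* sample over a 5 s window          */
    long b = io_ms("sda");
    /* (b - a) ms of busy time over 5000 ms of wall clock => percent    */
    printf("disk busy %.0f%% of the interval\n", (b - a) / 50.0);
    return 0;
}

A reading pinned near 100% across windows, together with a non-empty request queue, is the saturation condition described above.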

Reply to
Richard Damon

If caching is disabled things can get really bad quite quickly: think of updating directory entries to reflect modification/access dates, file sizes, scattering, etc.; think also of allocation table accesses. E.g., in dps on a larger disk partition (say >100 gigabytes), the first CAT (cluster allocation table) access after boot takes some noticeable time, a second maybe; then it stops being noticeable at all, as the CAT is updated rarely and on a modified-area basis only (this on a processor capable of 20 Mbytes/second; dps needs the entire CAT to allocate new space in order to do its (enhanced) worst-fit scheme). IOW, if you torture the disk with constant seeks and scattered accesses you can slow it down anywhere from somewhat to a lot; it depends on way too many factors to be worth wondering about.

Just try it out for some time and make your pick. Recently I did that dfs (distributed file system, over TCP) for dps and had to watch much of this going on; at some point you reach something between 50 and 100% of the hardware limit, depending on the file sizes you copy and whatever other overhead you can think of.

Dimiter


Reply to
Dimiter_Popoff

Yes, but you don't know that rate *and* that rate varies based on "where" your accesses land on the physical medium (e.g., ZDR, shingled drives, etc.)

Cache in the filesystem code will be counterproductive. Cache in the drive may be a win for some accesses and a loss for others (e.g., if the drive read ahead thinking the next read was going to be sequential with the last -- and that proves to be wrong -- the drive may have missed an opportunity to respond more quickly to the ACTUAL access that follows).

[I'm avoiding talking about reads AND writes just to keep the discussion complexity manageable -- to avoid having to introduce caveats with every statement]

Exactly. But, you don't necessarily know where your next access will take you. This variation in throughput is what makes defining "I/O bound" tricky: if the access patterns at some instant (the period over which you base your decision) make the drive look slow, you would opt NOT to spawn a new thread, since there appears to be no excess throughput to exploit. Similarly, if the drive "looks" serendipitously fast, you may spawn another thread, and its accesses will eventually conflict with those of the first thread to lower overall throughput.

But, if you make that assessment when the access pattern is "unfortunate", you erroneously conclude the disk is at its capacity. And, vice versa.

Without control over the access patterns, it seems like there is no reliable strategy for determining when another thread can be advantageous (?)
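For what it's worth, here is a sketch of the naive feedback loop in question, just to make the pitfall concrete: smooth the measured aggregate throughput with an exponential moving average and spawn only on a sustained rise. As argued above, the smoothing window merely trades one misreading for the other; it does not remove the problem. ALPHA and HEADROOM are arbitrary tuning knobs, not from any real system, and the caller is assumed to supply the bytes moved per interval.

#include <stdbool.h>

#define ALPHA    0.2    /* EMA weight: smaller = longer memory */
#define HEADROOM 1.10   /* spawn only on a 10% sustained rise  */

static double ema;      /* smoothed aggregate bytes/s */

bool should_spawn(double bytes_this_interval, double interval_s)
{
    double rate = bytes_this_interval / interval_s;
    double prev = ema;

    ema = (ema == 0.0) ? rate : ALPHA * rate + (1.0 - ALPHA) * ema;
    /* "Fast-looking" disk => assume spare bandwidth.  This is exactly
       the inference questioned above: the rise may only be a lucky
       stretch of near-sequential accesses.                           */
    return prev > 0.0 && ema > prev * HEADROOM;
}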

Reply to
Don Y

But all of these still have a 'maximum' rate, so you can still define a ceiling. It does mean that the 'expected' rate you can get is more variable.

Yes, the drive might try to read ahead and hurt itself, or it might not. That is mostly out of your control.

Yes, adding more threads might change the access pattern. It will TEND to make the pattern less sequential, and thus push it toward that pathological case (so more threads actually decrease the rate at which you can do I/O, and thus lower your I/O-bound rate). It is possible that it just happens to be fortunate and makes things more sequential: if the system can see that one thread wants sector N and another wants sector N+1, something can schedule the reads together and drop a seek.

Predicting that sort of behavior can't be done 'in the abstract'. You need to think about the details of the system.

As a general principle, if the I/O system is saturated, the job is I/O bound. Adding more threads will only help if you have the resources to queue up more requests and can optimize the order of servicing them to be more efficient with I/O. Predicting that means you need to know and have some control over the access pattern.
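A sketch of the kind of reordering meant here: sort the pending backlog by offset and service it in one ascending "elevator" sweep, so adjacent requests from different threads coalesce into near-sequential disk motion. The request struct is illustrative, not from any real API.

#include <stdlib.h>
#include <sys/types.h>

struct req { off_t offset; size_t len; /* ...caller context... */ };

/* three-way comparison by starting offset */
static int by_offset(const void *a, const void *b)
{
    const struct req *ra = a, *rb = b;
    return (ra->offset > rb->offset) - (ra->offset < rb->offset);
}

void service_backlog(struct req *q, size_t n)
{
    qsort(q, n, sizeof *q, by_offset);  /* one ascending sweep */
    for (size_t i = 0; i < n; i++) {
        /* issue q[i] here, e.g. pread() on the owning thread's fd;
           adjacent offsets from different threads now run back to
           back instead of forcing a seek between them             */
    }
}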

Note, part of this is being able to trade memory to improve I/O speed. If you know that EVENTUALLY you will want the next sector after the one you are reading, reading it now and caching it will be a win, but only if you will be able to use that data before you need to claim that memory for other uses. This sort of improvement really does require knowing the details you were hoping to assume away, so you are limiting your ability to make accurate decisions.
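Where the OS is POSIX, posix_fadvise() is one real mechanism for this trade: hint that the next block will be wanted while the current one is still being read. Whether it wins depends on having the spare memory just mentioned. A sketch; the block size and the assumption of sequential future use are illustrative:

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

#define BLK 65536

void read_with_lookahead(int fd, off_t off, void *buf)
{
    /* ask the kernel for the *next* block before blocking on this one */
    posix_fadvise(fd, off + BLK, BLK, POSIX_FADV_WILLNEED);
    pread(fd, buf, BLK, off);
}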

Reply to
Richard Damon

Roughly speaking: a drive spinning at 7500 rpm, divided by 60, is 125 revolutions a second. A seek takes about half a revolution, and the next file is another half a revolution away on average, so each small file costs roughly one full revolution. That gets you about 125 files a second, depending on the performance of the drive, if my numbers are not too far off.
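Spelled out as arithmetic (the 7500 rpm and half-revolution figures are rough assumptions; real drives differ):

#include <stdio.h>

int main(void)
{
    double rev_per_s  = 7500.0 / 60.0;  /* 125 revolutions/second   */
    double seek_rev   = 0.5;            /* seek ~ half a revolution */
    double rotate_rev = 0.5;            /* average rotational wait  */

    /* one full revolution per small file => ~125 files/second */
    printf("~%.0f files/s\n", rev_per_s / (seek_rev + rotate_rev));
    return 0;
}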

This is plenty to support a dozen Windows VMs on average, if it were not for Windows updates that saturate the disks with hundreds of little file updates at once, causing Microsoft SQL timeouts for the VMs.

Reply to
Brett
