There are two different things: the data structures which describe how the disk space is used and the algorithms which operate with those structures. The smart and the dumb algorithms are equally applicable to FAT as well as any other structure.
Not too big of a deal. We developed a POSIX-compatible FAT filesystem to be used with our OS or as a standalone. The allocation algorithm is more or less smart; there is a severe speed penalty for fragmentation with flash cards!
I plonked him a while ago. His posts are mostly nonsense, not worth reading.
What is it, if this is not a secret? RF design?
I know a guy who does the Spice level simulation of the microprocessors; he is complaining about that sort of problems :)
Vladimir Vassilevsky DSP and Mixed Signal Design Consultant
Since Windows is closed source, how are you going to prove your claims?
One could analyze the binary data but this still requires knowledge of the file system layout and such.
Also I wouldn't be too surprised if Windows XP x64 does a little bit of defragging in the background?
Finally, I have a very large hard disk and try to delete as few things as possible to prevent any fragmentation from occurring.
So far it seems to be working quite nicely... though I must say the system bootup time has increased a little bit... whatever causes it is hard to tell... many applications are installed by now... and TortoiseSVN is installed too... that could be doing things as well.
I do believe that that 'magic number' is a 0 (Zero).
FAT32 is supposed to keep a pointer to the lowest available cluster to avoid searching the whole FAT, but who knows if it works?
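FAT32 does keep such a hint (the "next free cluster" field in its FSInfo sector). As an illustrative sketch only, not the actual Windows code, here is how an in-memory FAT search might use that hint to avoid rescanning from the start every time; the function name and array representation are my own invention:

```c
#include <assert.h>
#include <stdint.h>

#define FAT32_FREE 0x00000000u

/* Find the next free cluster in an in-memory FAT32 table, starting
 * the scan at `hint` (like the FSInfo "next free" field) and wrapping
 * around.  Clusters 0 and 1 are reserved, so data clusters start at 2.
 * Returns 0 (an invalid cluster number) if nothing is free. */
uint32_t next_free_cluster(const uint32_t *fat, uint32_t n_clusters,
                           uint32_t hint)
{
    if (hint < 2 || hint >= n_clusters)
        hint = 2;                         /* fall back to a full scan */
    for (uint32_t i = 0; i < n_clusters - 2; i++) {
        uint32_t c = 2 + (hint - 2 + i) % (n_clusters - 2);
        if ((fat[c] & 0x0FFFFFFF) == FAT32_FREE)  /* top 4 bits reserved */
            return c;
    }
    return 0;
}
```

With a good hint the common case is a one-entry check instead of a walk over the whole FAT; whether Windows keeps the FSInfo field accurate is exactly the "who knows if it works?" question.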
Some filesystems also use a bitmap to track free and allocated sectors.
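For comparison, a free-space bitmap lookup can be sketched like this (a generic illustration, not any particular filesystem's code): one bit per allocation unit, 1 = allocated, 0 = free, with fully allocated bytes skipped quickly.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Return the index of the first free allocation unit, or -1 if the
 * volume is full.  Bytes that are 0xFF (all eight units allocated)
 * are skipped without testing each bit. */
long find_free_unit(const uint8_t *bitmap, size_t n_units)
{
    for (size_t i = 0; i < n_units; i++) {
        if (bitmap[i / 8] == 0xFF) {   /* whole byte allocated */
            i |= 7;                    /* jump to last unit of this byte */
            continue;
        }
        if (!(bitmap[i / 8] & (1u << (i % 8))))
            return (long)i;
    }
    return -1;
}
```

The bitmap answers "is this unit free?" in O(1) and packs eight units per byte, which is why several non-FAT filesystems prefer it over chasing FAT chain entries.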
--
ArarghMail802 at [drop the 'http://www.' from ->] http://www.arargh.com
BCET Basic Compiler Page: http://www.arargh.com/basic/index.html
To reply by email, remove the extra stuff from the reply address.
It is "cluster", and no, generally not. It writes to the next "available" space; newly "released" areas that were just previously written to get put at the end of the list of available space as they get marked "free". So the first cluster to get written is the first one that was originally available in the "newly formatted" original list. This ensures that all areas on a drive's platter get "used", instead of keeping all your drive activity confined to one area of the drive, inviting an early failure mode.
Horseshit. The only way a file gets fragmented is if it gets edited after its original write, once other files have been written in the meantime. NO OTHER WAY.
That said, only dopes that want to endlessly thrash their drive constantly defrag.
I defrag less than once a year. I have several partitions.
System screams right along.
Over-maintenance is idiocy.
FAT and FAT32 FS are accessed the same way they were in the DOS days. NTFS is a bit more OS managed, but not much.
The highest frequency is about 500 Hz, but yes, I guess you could call it RF if you wanted to. I was reprocessing some old data that showed the natural variation in the Earth's magnetic field. It was a little over a week's worth of data.
It is simple with some tricky software that tracks the disk operations. It is a little harder, but still quite easy, if you just have software such as a defragging program that shows you where the used sectors are. You run a program that writes a file under a good debugger, with the defragging program open at the same time. You add something to the file and then look to see where the sector goes.
You can also disassemble the part of Windows that deals with hard disk operation and look to see what it does. Because it is needed for swapping, the disk driver is always in RAM.
The layout is documented for FAT so that is no problem.
A well-written disk system doesn't need defragging.
If boot time is increasing, something is wrong. The part of the disk that is needed for booting should be unchanging.
FATs don't store sector numbers, they store Allocation Unit Numbers, which start at 2.
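Because cluster numbering starts at 2, translating a cluster number to a sector requires subtracting that offset. A minimal sketch (parameter names are mine, mirroring but not copied from the FAT BPB fields):

```c
#include <assert.h>
#include <stdint.h>

/* First sector of a data cluster.  Clusters 0 and 1 are reserved in
 * the FAT, so cluster 2 is the very first cluster of the data area. */
uint32_t cluster_to_sector(uint32_t cluster,
                           uint32_t first_data_sector,
                           uint32_t sectors_per_cluster)
{
    return first_data_sector + (cluster - 2) * sectors_per_cluster;
}
```

Forgetting the `- 2` is a classic bug when writing FAT code from scratch: every file ends up read two clusters too far into the disk.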
Since you again snipped the context, I will answer with "efficiently". Much earlier I gave a very quick explanation of the problem with your suggested method. Go back and look for it. I am way too lazy to do it for you.
No, the code I posted was LOG2(). It is a rather poor way to do the function, but it is much simpler than the real code.
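The poster's actual LOG2() code isn't quoted here, but the "poor but simple" approach being alluded to is presumably something like an integer log2 by repeated shifting; this sketch is my reconstruction, not their code:

```c
#include <assert.h>
#include <stdint.h>

/* Integer log base 2 for x >= 1: count how many times x can be
 * right-shifted before it reaches 1.  Simple but slow (O(bits))
 * compared to bit-scan instructions or table lookups. */
unsigned ilog2(uint32_t x)
{
    unsigned r = 0;
    while (x >>= 1)
        r++;
    return r;
}
```

Production code would typically use a compiler builtin such as `__builtin_clz` instead; the loop version is just the easiest to read.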
On a nearly full, already heavily fragmented drive, yes.
On any drive with huge amounts of space left on it, particularly one that has never yet been written to, the file write will ALWAYS be contiguous and monolithic in nature. Also, constantly "defragged" drives will nearly always have their free space "defragged" as well, leaving room for several GB of new file writes, all of which will be contiguous.
Very large files will not obey the rule as strictly, but a mostly contiguous huge file split into a few pieces will not take much of a hit for it.
A database file is the most likely candidate for fragmentation as any changes made to the database results in a file fragment which will not be contiguous with the rest of the file.
Another is if one is downloading two files at once to the same partition.
So, it doesn't happen "all the time". It happens when your volume is nearly full already, and already has highly fragmented files on it.
If one's drive has never been filled since its original format, a new file write will always fill a new, contiguous space on the volume.
The download example you gave is an interesting example.
It might be prevented by allocating the complete file size for each download up front, if, of course, the file size is known up front.
This would prevent the download from being fragmented.
That could help prevent fragmentation in the future, after one of the downloads is deleted and replaced by other files.
Though this would require one of the downloads to remain and the other to be deleted for fragmentation to occur... but still a pretty realistic possibility.
Doing so is possible by seeking to the end of the file minus 1 and performing a one-byte zero write there, or so.
This would immediately allocate the file, unless maybe the file system is working in sparse mode... but that is probably not the case (?)
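The seek-and-write trick described above can be sketched in POSIX C like this (the function name is mine; note the caveat in the comment about sparse files, which is exactly the "sparse mode" concern raised):

```c
#include <fcntl.h>
#include <unistd.h>

/* Pre-allocate `size` bytes for `path` by seeking to size-1 and
 * writing a single zero byte.  On filesystems that support sparse
 * files this only extends the logical size without reserving blocks;
 * posix_fallocate() is the stricter way to actually reserve space.
 * Returns 0 on success, -1 on error. */
int preallocate(const char *path, off_t size)
{
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    if (lseek(fd, size - 1, SEEK_SET) < 0 || write(fd, "", 1) != 1) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```

A downloader would call this once the Content-Length is known, then write received chunks into the already-sized file with `pwrite()` at their proper offsets.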
Finally, it has advantages and disadvantages:
Advantage:
The user will immediately know whether there is enough space for the downloads. This could prevent wasting bandwidth on download failures.
Disadvantage:
The user no longer has time to free up space in case he suspects the download might not fit.
Advantage:
The user is guaranteed the download will succeed if he leaves, because space has been pre-allocated. This could again prevent bandwidth being wasted on unnecessary try-again packets.
Downloads could fail if space is not pre-allocated, since other processes might be filling the hard disks.
In your scenario it's entirely possible that both downloads would fail.
Advantage:
The file system remains unfragmented.
Disadvantage:
Without pre-allocation, if a download fails and a partial file remains, the user can simply continue from the last byte, provided all previous bytes are guaranteed to have arrived; partial files also make it easy to check whether everything arrived, simply by comparing file sizes. Pre-allocation loses this.
For example, suppose the OS crashed during a download... pre-allocated files could remain, creating the illusion that they were downloaded fully.
It's a difficult decision to make.
Robustness vs Assurance vs Bandwidth Wasted vs Speed.
ElectronDepot website is not affiliated with any of the manufacturers or service providers discussed here.
All logos and trade names are the property of their respective owners.