Are SSDs always rubbish under winXP?

Probably difficult and expensive.

Now that is some useful thinking.

Also remember that MSwin continually writes to the registry, which is mirrored on disk.

?-)

Reply to
josephkk

One piece of the issue is not so much swap usage itself (Windows always uses some) but being able to turn it off when you want. That can even be done dynamically in unix/linux, but can barely be done at all in MSwin; it takes fairly advanced direct registry edits, and a reboot.
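
For reference, a rough Python sketch (my own illustration, assuming the standard Linux procfs paths) of checking what swap is currently active before running "swapoff -a" as root:

import sys

def swap_devices():
    # Parse /proc/swaps: "Filename Type Size Used Priority", sizes in kB.
    devices = []
    with open("/proc/swaps") as f:
        next(f)                                   # skip the header line
        for line in f:
            name, _type, size_kb, used_kb, _prio = line.split()
            devices.append((name, int(size_kb), int(used_kb)))
    return devices

if __name__ == "__main__":
    devs = swap_devices()
    if not devs:
        print("No active swap areas - swap is effectively off.")
        sys.exit(0)
    for name, size_kb, used_kb in devs:
        print("%s: %d kB used of %d kB" % (name, used_kb, size_kb))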

?-)

Reply to
josephkk

I did indeed mean difficult; deleting the swap file in MSwin often requires a hand edit to the registry.

That by itself will NOT stop swapping nor get rid of the swap file.

?-)

Reply to
josephkk

George Neuner wrote

That's interesting and it may prolong the life of an SSD, but I don't think there is any way to disable swapping in winXP onwards. IIRC, a lot of apps stop running if you do that.

With the old win3.1 you could just do that, and all disk activity stopped totally. I ran a multizone heating controller on such a system for about 10 years. When NT came along, that was no longer possible (on the retail version).

Reply to
Peter

Hi Joseph,

*easier* (technically) to do than that which follows (which is a superset of this). From a *Marketing* perspective, however, it may be a problem as it would effectively render a drive "useless" (?) if deployed for a filesystem other than intended. (Also a problem if different filesystems are employed on the same medium concurrently)

This is why knowing the intended deployment can come in handy as the drive could "notice" that behavior and opt to divert it to a RAM-resident portion.

Clearly something has to be done as reliability will only get worse as geometries shrink. "The *good* news: capacities are going up and costs are going DOWN! The *bad* news? So is durability!"

ISTM that SSDs really only make sense as read-only devices. Put the OS and applications on it and let everything else reside on writable media...

Reply to
Don Y

Ironically, Linux will occasionally use swap space (if enabled) even when RAM is not full, to make the system run faster. If Linux decides that your system will be faster by pushing some old pages into swap to free up more space for file caches, then it will do so. Of course, being Linux, it is all controllable - you can enable or disable swap dynamically as you want, and can use the "swappiness" control to change the balance between freeing space for file cache and minimising swap usage.
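
For what it's worth, a minimal Python sketch (my illustration, assuming the usual /proc/sys/vm/swappiness path; writing it needs root) of poking that control:

SWAPPINESS = "/proc/sys/vm/swappiness"

def get_swappiness():
    with open(SWAPPINESS) as f:
        return int(f.read())

def set_swappiness(value):
    # Same effect as "sysctl vm.swappiness=<value>"; needs root and does not
    # persist across reboots (use /etc/sysctl.conf for that).
    with open(SWAPPINESS, "w") as f:
        f.write(str(value))

if __name__ == "__main__":
    print("current vm.swappiness:", get_swappiness())
    # set_swappiness(10)  # uncomment (as root) to bias strongly against swapping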

I certainly have no qualms about putting Linux swap files on SSDs.

Reply to
David Brown

That's a claim I've heard before; I'm not convinced either way.

To change the subject yet again: the MS-DOS implementation of FAT would not reallocate recently freed blocks located before the last block allocated until the end of the disk was reached. If one never rebooted (or removed the disk) this would give a primitive sort of wear leveling.
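
A toy model of that allocation policy, in Python (my own sketch of the behaviour described, not actual MS-DOS code):

class ToyFat:
    def __init__(self, n_clusters):
        self.free = [True] * n_clusters
        self.cursor = 0                     # one past the last cluster handed out

    def allocate(self):
        # Next-fit: search from the cursor, wrapping at the end of the disk.
        n = len(self.free)
        for i in range(n):
            c = (self.cursor + i) % n
            if self.free[c]:
                self.free[c] = False
                self.cursor = (c + 1) % n
                return c
        raise RuntimeError("disk full")

    def release(self, c):
        # Freed clusters are skipped until the search wraps around to them again.
        self.free[c] = True

if __name__ == "__main__":
    fat = ToyFat(8)
    a = fat.allocate()                      # cluster 0
    b = fat.allocate()                      # cluster 1
    fat.release(a)                          # free the early cluster again...
    print(fat.allocate())                   # ...but the next allocation is cluster 2, not 0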

--
?? 100% natural
Reply to
Jasen Betts

The problem is that the disk needs to understand the filesystem(s)' format in order for it to "snoop" the allocation table and deduce, from that, which "physical blocks" (blech... I'm playing fast and loose with terminology, here) *in* the FLASH are STILL IN USE (i.e., are "referenced") vs. MARKED AS FREE.

As far as the SSD is concerned, *all* of the blocks are "in use" as soon as each one has been "touched".

Remember, the SSD implements a similar but *independent* "block tracking scheme" *under* the filesystem's "block tracking scheme" (block != block). So, getting them to be consistent with each other is the trick.

Some OS's include support for the ATA "TRIM" command which allows the OS to explicitly tell the drive which blocks are actually "free" (i.e., the OS can interpret the allocation table(s) on behalf of the SSD and tell the SSD which blocks are suitable for "reuse"). Some SSD manufacturers provide utilities (running *under* the OS as applications) to interrogate the allocation table and convey this information to the SSD as a proxy for the (deficient) OS.

In either case, the SSD needs to support this capability. And, it doesn't get around the eventual wear-out issue.

[Or, as I mentioned elsewhere, let the drive snoop the partition table and "understand" the filesystem(s) present on its media]
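
To make the block != block point concrete, here is a toy flash-translation-layer sketch in Python (mine, not any vendor's firmware) showing what TRIM buys the drive:

class ToyFtl:
    def __init__(self, n_physical):
        self.mapping = {}                       # logical block -> physical page
        self.free_pages = set(range(n_physical))

    def write(self, lba, data):
        if not self.free_pages:
            raise RuntimeError("no free pages: need garbage collection")
        page = self.free_pages.pop()
        old = self.mapping.get(lba)
        self.mapping[lba] = page
        if old is not None:
            self.free_pages.add(old)            # old copy can be erased and reused
        # (real firmware would program `data` into `page` here)

    def trim(self, lba):
        # Host tells us this logical block is no longer referenced by the filesystem.
        page = self.mapping.pop(lba, None)
        if page is not None:
            self.free_pages.add(page)

if __name__ == "__main__":
    ftl = ToyFtl(n_physical=4)
    for lba in range(4):
        ftl.write(lba, b"x")
    # The filesystem deletes the file occupying LBA 1, but without TRIM the
    # drive still thinks all four pages are live:
    print("free before TRIM:", len(ftl.free_pages))   # 0
    ftl.trim(1)
    print("free after TRIM: ", len(ftl.free_pages))   # 1
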
Reply to
Don Y

These things should have full RAM buffers with write-back at power-down; I think a double-layer cap could supply the power for that, especially if the block erases are done while power is up. Then you only have to erase each block once.

Except that DRAM is more expensive than flash, which is itself an odd development. When and why did that reversal occur?

--

Reply in group, but if emailing add one more
zero, and remove the last word.
Reply to
Tom Del Rosso

The paging mechanism itself is not at fault, but other things Microsoft got wrong are working against it.

As someone else said, the paging statistics often are misinterpreted. If you look with, e.g., Process Explorer, quite often you'll find that there is very little in the pagefile and, at the same time, loads of unused RAM ... and yet the disk is churning. At least on NT/2K/XP ... Windows 7, 2K3, and later server editions have self-tuning management and do a much better job (though they all still have the performance counter registry access issues).

The first issue is that Windows uses relocatable libraries as opposed to position-independent libraries. Because dlls are not position independent, when multiple instances are mapped at different addresses, there must be multiple copies of the code in memory (one for each base address). The most commonly used OS dlls have unique base addresses so the odds of multiple mapping are very low (though not zero), but language runtime and user-written dlls all have the same default base addresses unless the developer deliberately rebases them. Non-OS shared dlls therefore often put unnecessary memory pressure on Windows. Code is paged directly from executables, so the pagefile is backing only instance data, but having to page in code for different instances increases disk accesses.
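
A back-of-the-envelope Python sketch (my own illustration; the dll names are made up, and 0x10000000 is just the common linker default base for non-OS dlls) of how same-default-base dlls end up relocated:

DEFAULT_BASE = 0x10000000          # typical default base for non-OS dlls

def load_order(dlls):
    """dlls: list of (name, preferred_base, size). Returns {name: actual_base}."""
    occupied = []                  # list of (base, size) already mapped
    placement = {}
    for name, base, size in dlls:
        # If the preferred range collides with something already mapped,
        # the loader has to rebase (crudely modelled as 16 MB hops).
        while any(base < b + s and b < base + size for b, s in occupied):
            base += 0x01000000
        occupied.append((base, size))
        placement[name] = base
    return placement

if __name__ == "__main__":
    dlls = [("runtimeA.dll", DEFAULT_BASE, 0x00200000),
            ("vendorB.dll",  DEFAULT_BASE, 0x00100000),
            ("vendorC.dll",  DEFAULT_BASE, 0x00100000)]
    for name, base in load_order(dlls).items():
        note = "" if base == DEFAULT_BASE else "  <- relocated; needs its own patched code copy"
        print("%s: 0x%08X%s" % (name, base, note))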

The second issue, which interacts with the first, is that Windows does not have a unified file system cache, but rather it tries to be "fair" by reserving cache address space for each running process. By default, Windows will take up to 80% of RAM for file caching, so if you have the normal situation where a lot of processes aren't using their allotted space, a lot of your RAM may be going unused.

There is a free tool called "cacheset" which will change the per-process file cache limits. Unfortunately cacheset does not change the default settings in the registry, so you have to run it each time you log in, but the tool can be command line driven so you can place it in a startup batch file.

Cacheset, Process Explorer, and a bunch of other useful stuff are all available at

formatting link

There are a number of registry tweaks available for adjusting process/system RAM distribution and default file caching parameters. You can find these with the search engine of your choice.

George

Reply to
George Neuner

AFAIK, you can run any version of Windows without a pagefile - given sufficient RAM. I haven't tried it with Win7 (or 8) yet, but I know from personal experience that it works in all the previous versions (including server editions).

I can only speculate as to why you couldn't make it work. Windows doesn't handle over-allocation of address space in the same way Unix and Linux do. Unix and Linux don't commit pages until you touch them, so you can do idiotic things like malloc 1.5GB in a system with 256MB of total VMM space. As long as you never touch the extra, you'll never have a problem.

But unless an application is deliberately written using VirtualAlloc() et al., Windows commits *all* pages of an allocation immediately. If there is no pagefile, the total of all the committed space has to fit into RAM, so if programs are grabbing more memory than they intend to use, you can easily have a problem.
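
A quick demo of that difference, as a Python sketch (mine; it assumes a Linux box with default overcommit settings - with strict overcommit, little RAM+swap, or a 32-bit interpreter the big reservation itself can fail):

import mmap

GIB = 1 << 30

if __name__ == "__main__":
    # Reserve 1.5 GiB of anonymous memory without touching it. On Linux this
    # normally succeeds even when it is far more than the free RAM, because no
    # pages are committed yet (exact behaviour depends on vm.overcommit_memory).
    size = GIB + GIB // 2
    region = mmap.mmap(-1, size)
    print("reserved %d MiB without committing anything" % (size >> 20))

    # Touching pages is what actually consumes memory; touch only the first 16 MiB.
    for offset in range(0, 16 * (1 << 20), mmap.PAGESIZE):
        region[offset] = 1
    print("committed only the pages that were actually written")
    region.close()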

George

Reply to
George Neuner

Think about that for a moment. *ASSUMING*, by "full RAM buffers" you mean "a bit of RAM for each bit of FLASH(ROM)", you're talking about ~1TB of RAM in addition to the ~1TB of FLASH!

Flash was a new technology (WAROM :> ). Every new technology rides the manufacturing efficiency curve downward.

Of course, FLASH has many limitations that (any form of) RAM doesn't. OTOH, it is a very *tiny* geometry! You need less "stuff" to store the data.

SRAM requires several (6?) transistors to ACTIVELY store (latch) the datum. Transistors take up space. SRAM is (universally?) highly addressable ("byte at a time", so to speak). So, the decoding logic also takes up space.

DRAM requires just *one* transistor -- and a *capacitor* (that "holds charge" to remember the datum!). Capacitors are big.

FLASH requires just the *transistor*. And, can be stacked (3D) to cram more than one "bit" in a "cell".

NAND flash sacrifices flexibility in addressability for even smaller effective cell sizes.

In addition to size (which translates most directly into manufacturing costs), FLASH uses less power to retain/retrieve/update the data it contains (i.e., it will retain the data in the absence of power!). SRAM uses *gobs* of power to hold the "latches" in their particular states. DRAM uses less to keep those capacitors "topped off" with the right amount of charge (this is what "refresh" is all about).

Power == heat. Imagine putting 60 16GB DIMMs in a case the size of a 3.5" disk drive (1TB -- neglecting any ECC). How do you even package something like that with any hope of getting all the heat *out* of the case?

DRAM (or any "faster-than-FLASH" RAM) is overkill for this type of application. It offers more bandwidth than is needed. You end up paying for that -- with extra product cost, power requirements, size, weight, complexity, etc.

FLASH tries to hit a sweet spot in that application domain. It gives you higher bandwidth (pseudo-random access) than rotating media without being EXORBITANTLY so (like DRAM). It avoids the mechanical consequences of rotating media (*drop* your disk-based product on a construction site and see how well it fares :> ).

Of course, the nature of Engineering is such that there are no free lunches. So, you pay for these features with liabilities!

Reply to
Don Y

I get free lunches from the plates left after meetings all the time.

There are always about 15 or so that never get taken, and when the email hits that the meeting is over and there are left overs, you'd better get there fast! Always good to be friends with the exec secs.

Mmmmmmm... Turkey and Bacon Club Wraps... Or On Sourdough... Mmmmmmm.

Reply to
TheQuickBrownFox

Yes and no. I was looking at determining the filesystem from usage patterns rather than implementing different wear leveling algorithms for different filesystems. Different code per file system is something that both variant approaches have in common.

This would be vastly cheaper than trying to infer it from usage patterns, let alone the engineering needed to figure out how to do that.

I must agree in part. SSD is fine for a write-infrequently, read-lots kind of use; the OS and applications are a good cut. But it is not a write-once, read-forever (DVD-R) situation. Not so good for log files and such (write lots, read infrequently). Its major attraction is average access time, which is >100 times faster than rotating disk. And it is cheaper than RAM (which is another 100 times faster). Getting the most out of a particular machine requires balancing these systems; Amdahl's law is helpful here.
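
As a worked example of the Amdahl's law point (the 30% and 100x numbers here are made up for illustration):

def amdahl_speedup(fraction_accelerated, factor):
    """Overall speedup when `fraction_accelerated` of the runtime gets `factor`x faster."""
    return 1.0 / ((1.0 - fraction_accelerated) + fraction_accelerated / factor)

if __name__ == "__main__":
    # Say 30% of the wall-clock time is waiting on the disk and the SSD makes
    # that part ~100x faster: the whole job only gets about 1.42x faster.
    print(round(amdahl_speedup(0.30, 100.0), 2))   # 1.42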

?-)

Reply to
josephkk

Actually the easy and better approach may be to watch for frequently rewritten blocks and make sure to keep them moving around. In better engineered systems, use that remapping to move them preferentially through relatively static areas of storage. Simpler implementation, and it works for all OSs and filesystems (except swap).
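
Roughly what that looks like, as a Python sketch (my own toy, not a real controller's algorithm): track per-page erase counts and steer each rewrite onto the least-worn free page.

import heapq

class WearLeveler:
    def __init__(self, n_pages):
        self.erase_count = [0] * n_pages
        # Free pool ordered by wear: (erase count, page number).
        self.free = [(0, p) for p in range(n_pages)]
        heapq.heapify(self.free)
        self.mapping = {}                          # logical block -> physical page

    def rewrite(self, lba):
        # New data for this block goes to the least-worn free page.
        _, page = heapq.heappop(self.free)
        old = self.mapping.get(lba)
        self.mapping[lba] = page
        if old is not None:
            self.erase_count[old] += 1             # old copy is erased and returned to the pool
            heapq.heappush(self.free, (self.erase_count[old], old))
        return page

if __name__ == "__main__":
    wl = WearLeveler(n_pages=4)
    for _ in range(10):
        wl.rewrite(0)                              # one "hot" logical block, rewritten over and over
    print(wl.erase_count)                          # wear spread across all pages: [3, 2, 2, 2]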

?-)

Reply to
josephkk

Ok, I see your point. But deleting the file technically is different from telling Windows not to use it. After turning off paging and rebooting, the file - even if there - won't be used. You can confirm this by monitoring.

2K and above do make it hard to remove the file permanently. Once paging is disabled, the file isn't in use and you can delete it with no problem, but without the registry edit you refer to the system will recreate the missing file on every reboot.

The simplest thing for most people to do is to reduce the file to minimum size (2MB). After reboot, the file will be truncated and won't ever grow. Then just forget about it.

George

Reply to
George Neuner

Wear leveling algorithms effectively do that. The erase count for each flash "page" is tracked. If a page is rewritten, then a count, *somewhere*, is incremented.

I.e., frequently written (filesystem-)blocks will get moved as a consequence of this. The problem is identifying those filesystem blocks that are "no longer being used" (and, from that, the associated flash pages) and are therefore good candidates for reuse. The drive needs to know *how* the medium is being used (i.e., the structure and contents of the filesystem) in order to infer this on its own (else, the OS needs to explicitly TELL it the information that it needs).

Imagine I give you each of my telephone messages (those little pink slips of paper) and ask you to hold onto them for me. Months later, your pockets are bulging with all these slips of paper. Soon, you'll have no place to store them! How do you decide which slips you can discard? You have no knowledge of which are still *pertinent* to me!

[This is a bad analogy but it illustrates how the information that *you* have needs to know the information *I* have in order to best make use of the space you have available for holding those little slips of paper! Imagine if you could snoop on all my phone calls and other contacts and DECIDE FOR YOURSELF if I have "returned" a call for which you have been holding a "slip". You could then better manage the slips as you would know which ones you could discard. Failing this, you would have to rely on ME -- playing the role of OS in this analogy -- to tell you which to discard!]
Reply to
Don Y

Hi Joseph,

"Past performance is not a predictor of future performance" :>

It is really hard to look at the SSD's upward facing interface and decide on an EFFECTIVE strategy for managing the medium within.

If something is hammering away at 2 particular blocks on the disk, will that behavior continue? Or, will the NEXT two blocks get hammered on just as soon as you (the SSD) have decided that this is a behavior pattern that you can exploit?

With a desktop environment, you have no predictive power as to what the user is likely to WANT to do next.

This then becomes a marketing problem. Now you have a device that fits *some* markets but not others. OK, you can deal with the 800 pound gorillas (MS & Apple). But, what about other deployments? What about *new* filesystems that don't exist at the time you release the product? ("Sorry, this disk can only be used on machines running DOS version X or earlier")

Imagine if semiconductor devices were constrained to ONLY operate in certain applications ("Sorry, this resistor can only be used in personal media players")

And, of course, snooping the filesystem is more complicated (in addition to all the other functions that the SSD must *still* perform) than just acting like a block storage device! (bugs)

No. The DVD-RW analogy is more appropriate.

But don't forget the mechanical aspects of its appeal: you can drop the thing WHILE OPERATING and not feel your sphincter clench in the process! :>

(There are many other nice features -- like the fact that it spins up and down *really* fast!)

Reply to
Don Y

Perhaps not entirely; there is the registry issue in MS OSs. It is memory mapped and only on the order of 1/2 MiB to 1 MiB (call it 1000 to 2000 blocks), and I will bet that the frequent (every-second) writes cover only a few blocks.

Now that is a little past over the top in straining the analogy.

Partially. The write and erase endurance of DVD-RW is not appropriate; it's more like DVD-RAM write endurance (two orders of magnitude better, minimum).

And it even seeks really fast.

Reply to
josephkk

Without full models of every file system there is no hopeful way a storage device can guess "no longer used". Not used before, not written much, and not written recently can all be tracked independent of the OS and file system. These are useful for remapping. Of course the space for keeping this data is in addition to the space for the user-available storage. This is why flash in non-power-of-two sizes is reasonable.

For general data storage devices this is not the issue. More advanced interfaces that let the OS tell the flash that data is no longer needed are an answer. Other than that, the techniques already discussed will have to suffice, in spite of more failures than predicted.

Reply to
josephkk
