Are SSDs always rubbish under winXP?

A decent SSD should have wear leveling, but if you don't disable the virtual memory, the SSD will wear out quickly.

Unless you disable the virtual memory, XP swaps everything it can to the hard disk so as to have as much unused memory as possible. It's a real nuisance. Just install 2GB of memory and disable the swap file to get maximum performance. The performance gain is huge.
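
(For anyone who needs the mechanics: on XP that means My Computer -> Properties -> Advanced -> Performance Settings -> Advanced -> Virtual memory -> "No paging file", or, equivalently, clearing the PagingFiles value under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management. Both paths are quoted from memory, so double-check them on your own box.)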

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
Reply to
Nico Coesel

16G, if you intend to compile FPGAs.
--

John Larkin, President       Highland Technology Inc
www.highlandtechnology.com   jlarkin at highlandtechnology dot com   

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME  analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
Reply to
John Larkin

You're a dork.

Reply to
WoolyBully

And you're AlwaysWrong.

--

John Larkin, President       Highland Technology Inc
www.highlandtechnology.com   jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME  analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators

Reply to
John Larkin

Only a 12-year-old mind needs to call names.

My description of you is not a name, it is a behavioral observation. That is beside the fact that you do not know the first thing about how they are making memory arrays these days, much less any reliability figures, you lying POS.

Reply to
WoolyBully

Wrong again.

--

John Larkin, President       Highland Technology Inc
www.highlandtechnology.com   jlarkin at highlandtechnology dot com   

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME  analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
Reply to
John Larkin

It's not quite that bad - 30-60TB (obviously dependent on the drive sizes and other parameters) is pretty much the worst case (IOW, 30-60TB of random 4KB writes), not the best case.

Eh. I didn't use to worry about firmware updates on my phone, walkman, DVD player or car either. I do now. So long as it's fairly automatic and reliable (which is *not* sufficiently true now of many of these things, including hard-drive/SSD firmware). Most end users allow MS to install patches pretty much at will on Windows, and it rarely leads to disaster (at least not nearly as frequently as *not* installing the patches does!). At that level of reliability it becomes a non-issue for most people.

Reply to
Robert Wessel

One thing that I'd like to know is, how does it store the allocation table? That must be in flash too, or maybe EEPROM where cells are not paged, but they still have a write-cycle lifetime.

--

Reply in group, but if emailing add one more
zero, and remove the last word.
Reply to
Tom Del Rosso

They cheat! :> What you think of as the "FAT" is actually stored in RAM (!) It is recreated at power-up by the controller (or "processor" if you are using bare flash). This involves *scanning* the contents of the FLASH device and piecing together *distributed* bits of information that, in concert, let an *algorithm* determine what is where (and which "copy" is most current, etc.)

When you think about storage devices with non-uniform access characteristics (in this case, much longer write times than read times, durability, etc.), you can't approach the problem with conventional thinking (well, you *could* but you'd end up with really lousy implementations! :> ).

Think of how you would implement a *counter* in an EPROM, bipolar ROM, etc.

For simplicity, assume an erased device starts out as "0". So, a byte starts as 0000 0000b. You increment it to 0000 0001b. Now, you go to increment it a *second* time (to 0000 0010b) only to discover that you can't "write a 0" -- that would require ERASING the LSb.

OTOH, if you count differently:

     0   0000 0000
     1   0000 0001
     2   0000 0011
     3   0000 0111
     4   0000 1111
     5   0001 1111
     6   0011 1111
     7   0111 1111
     8   1111 1111

You don't have to "erase" as often as you would otherwise:

     0   0000 0000
     1   0000 0001
     2   0000 0010   ERASE
     3   0000 0011
     4   0000 0100   ERASE
     5   0000 0101
     6   0000 0110   ERASE
     7   0000 0111
     8   0000 1000   ERASE

The fact that the ERASED flash cells (assuming 0's, here) can be written with 1's -- and, that 1's can *also* be written with 1's (!) means you can skip the erase if you can exploit that aspect.

The goal is to minimize the number of erases.
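
A minimal sketch of that trick in C, purely for illustration (the byte-at-a-time framing and all of the names are made up; real devices work on much bigger cells/pages):

#include <stdint.h>
#include <stdio.h>

/* Convention from above: an erased byte reads 0x00 and programming can
 * only turn 0 bits into 1 bits.  The count is stored "thermometer"
 * style, so each increment just sets one more bit and no erase is
 * needed until the byte is full. */

static unsigned counter_read(uint8_t cell)      /* count = number of 1 bits */
{
    unsigned n = 0;
    while (cell & 1u) {                         /* bits fill in from the LSB */
        n++;
        cell >>= 1;
    }
    return n;
}

static int counter_bump(uint8_t cell)           /* value to program next,    */
{                                               /* or -1 when an erase is due */
    if (cell == 0xFF)
        return -1;
    return (int)((cell << 1) | 1u);             /* 0x00 -> 0x01 -> 0x03 ...  */
}

int main(void)
{
    uint8_t cell = 0x00;                        /* freshly erased */
    for (int i = 0; i < 8; i++) {
        cell = (uint8_t)counter_bump(cell);
        printf("count %u stored as 0x%02X\n", counter_read(cell), cell);
    }
    return 0;                                   /* 8 counts per byte, 1 erase */
}

Eight counts per byte before an erase instead of one isn't spectacular, but spread the same idea over a whole page and the erase count drops dramatically.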

The same idea is leveraged when storing the mapping structures inside the FLASH. E.g., instead of OVERWRITING an existing entry in the FAT with a "new" value, just *clobber* it! (using the convention above, that would be like writing all 1's over an existing datum). The software then steps *over* that entry when scanning the structure.

Eventually, when all of the entries in a "page" (I'm playing fast and loose with terminology, here) are "clobbered", the page can be ERASED and treated as a *clean* page to be reused by "whatever".
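
A toy sketch of that clobber-and-append bookkeeping in C (all of the names, sizes and conventions below are invented for illustration; a real FTL is far hairier):

#include <stdint.h>
#include <string.h>

#define ENTRIES    64
#define FREE_WORD  0x0000        /* erased flash reads as all 0s here       */
#define DEAD_WORD  0xFFFF        /* all 1s can be written over *anything*,  */
                                 /* so this is the "clobbered" marker       */

struct map_entry {               /* logical block numbers start at 1 in     */
    uint16_t logical;            /* this toy, so 0 can mean "free slot"     */
    uint16_t physical;
};

static struct map_entry page[ENTRIES];    /* stands in for one flash page */

/* Remap a logical block: clobber the stale entry (only sets bits, so no
 * erase) and append a fresh entry in the next free slot. */
static int remap(uint16_t logical, uint16_t new_physical)
{
    int free_slot = -1;

    for (int i = 0; i < ENTRIES; i++) {
        if (page[i].logical == logical && page[i].physical != DEAD_WORD)
            page[i].physical = DEAD_WORD;
        if (free_slot < 0 && page[i].logical == FREE_WORD
                          && page[i].physical == FREE_WORD)
            free_slot = i;
    }
    if (free_slot < 0)
        return -1;               /* page full of live + dead entries:       */
                                 /* copy the live ones out and ERASE it     */
    page[free_slot].logical  = logical;
    page[free_slot].physical = new_physical;
    return 0;
}

/* Power-up: rebuild the RAM copy of the map by scanning the page and
 * keeping only the entries that were never clobbered. */
static void rebuild(uint16_t ram_map[], size_t n)
{
    memset(ram_map, 0, n * sizeof ram_map[0]);
    for (int i = 0; i < ENTRIES; i++)
        if (page[i].logical != FREE_WORD && page[i].physical != DEAD_WORD
            && page[i].logical < n)
            ram_map[page[i].logical] = page[i].physical;
}

The point being: updates only ever set bits or append, and the authoritative structure lives in RAM, recreated from that debris at power-up.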

[This is a real back-of-the-napkin explanation. There is a lot more BFM involved. But, it's all painfully obvious -- once you've *seen* it! :> Bugs creep in because you are juggling several criteria at the same time -- trying to relocate data to other pages, tracking which blocks are in which pages, keeping track of how many erase cycles a page has experienced, harvesting pages ready for erasure, etc. And, at the same time, you are trying to make educated guesses at how best to provide "performance" ("Do I erase this page *now* and cause the current operation to wait? Or, do I defer that to some later time in the hope that I can sneak it in 'for free'?", etc.).]
Reply to
Don Y

Hector is angry, Hector is angry. Get a life Hector. Mikek

Reply to
amdx

Hector is AlwaysWrong.

Reply to
krw

All blocks are subject to wear leveling; that includes the FAT (if you use a filesystem that works that way).

The wear-leveling is hidden from the operating system.

--
100% natural

--- Posted via news://freenews.netfront.net/ - Complaints to news@netfront.net
Reply to
Jasen Betts
[snip]

Audible noise, power consumption, shock resistance - at least those were the criteria that drove our decision to use them. This is for an embedded system though, not a desktop PC.

Have a look at Microsoft's Enhanced Write Filter. As far as I know it's only available as a component for the embedded versions of Windows - we have used it for XP and 7 so far. Our usage model is to partition the drive into two, then write-protect the C: drive using EWF and write all the application data to D:. All writes to C: are cached in RAM and get lost on power-off. This works fine for our usage pattern; the machine is not networked and the end user is not expected to, or allowed to, make any changes to the OS.
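
(For reference, the runtime side of that is driven with the ewfmgr utility that ships with the embedded SKUs; roughly along these lines -- the exact switches are quoted from memory, so verify against your image:

    ewfmgr c:              report the current overlay state
    ewfmgr c: -enable      start protecting C: (takes effect on reboot)
    ewfmgr c: -commit      flush the RAM overlay to disk when you *do*
                           want a change, e.g. a patch, to stick

D: is simply left out of the protected-volume list so it stays writable.)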

Based on our experiences with SSDs I'd never use one for the primary storage for a desktop PC - and this was with industrial-grade single-level cell devices, not cheap commodity MLC stuff.

Reply to
<news

Of course I realize it's hidden from the OS. I said allocation table as in generic index, not FAT as in file system. But Don explained that it doesn't use a table at all.

--

Reply in group, but if emailing add one more
zero, and remove the last word.
Reply to
Tom Del Rosso

Thanks.

I remember a car odometer in the '80s that used PROM. I assumed it used a similar method.

--

Reply in group, but if emailing add one more
zero, and remove the last word.
Reply to
Tom Del Rosso

Probably.

Ages ago (dealing with very *slow* EPROMs), I used to put different options (that I wanted to test/evaluate) in the binary image using a form similar to:

        LD      ,Value6
        LD      ,Value5
        LD      ,Value4
        LD      ,Value3
        LD      ,Value2
        LD      ,Value1
DoSomething

Then, after evaluating how the code runs with "Value1", pull the EPROM and overwrite "LD ,Value1" with (effectively) "NoOps" to see how the code runs with "Value2". Lather, Rinse, Repeat.

(this assumes the opcode for that effective NoOp is more dominant than the "LD", etc. Obviously, there are other ways of getting similar results depending on what your actual opcodes are).

Reply to
Don Y

The part of the flash which can be used as empty space is very small, if not zero. Vendors want to sell big SSDs and put as little flash in there as they can get away with. However, for the wear leveling to actually work, the SSD needs to know which space is really free. This is why SSDs have extra commands which allow the OS to tell the SSD which sectors are in use and which are not.

Windows XP does not have this feature, so what you'll see is that the SSD will get slower over time and 'die' prematurely.
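
Those "extra commands" are what's now known as TRIM (ATA) / UNMAP (SCSI). Purely as a point of reference -- nothing XP-specific here -- a minimal Linux sketch of the filesystem-level version, via the FITRIM ioctl (the mount point is made up; run it against wherever the SSD actually lives):

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>                       /* FITRIM, struct fstrim_range */

int main(void)
{
    struct fstrim_range r;
    int fd = open("/mnt/ssd", O_RDONLY);    /* any directory on the fs */

    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(&r, 0, sizeof r);
    r.len = ~0ULL;                          /* "the whole filesystem"  */
    if (ioctl(fd, FITRIM, &r) < 0)          /* needs root on most setups */
        perror("FITRIM");
    else
        printf("discarded %llu bytes\n", (unsigned long long)r.len);
    close(fd);
    return 0;
}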

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
Reply to
Nico Coesel

That's not true. (Well, perhaps for low-end "consumer" kit it might be.) Manufacturers can (and do) set aside extra flash capacity that is not "user accessible/visible". This is called "overprovisioning". It is a common way to increase performance (not just durability) of enterprise-class devices.

Some devices may have as much as 30% or 40% *extra* (theoretical) "storage capacity" within the drive. Some SSD manufacturers have mechanisms (I want to say "provisions" but that would be a bad choice of words :> ) by which the user can customize the extent of the "overprovisioning". *Roughly* speaking, this is equivalent to low-level formatting the drive for a capacity less than its actual capacity. The excess capacity allows the controller within the drive more flexibility in replacing/remapping "blocks" to enhance durability, etc.
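
To put rough, purely illustrative numbers on it: a drive built with 256 GiB of raw NAND (about 275 GB) but formatted to present 200 GB to the host is carrying (275 - 200) / 200, i.e. roughly 37%, overprovisioning for the controller to shuffle blocks through.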

Expect future devices to go to even greater lengths trying to "move stuff around". E.g., the controller can, theoretically, take *any* "block" and move it to anywhere else as it sees fit as long as it keeps track of what it has done (and, does so in a manner that doesn't allow the block to "disappear" in the process -- imagine if it marked the original copy of the data as "deleted" and, for some misfortune, could NOT later store the moved copy in its intended NEW location)

They have two pressures working on them: one is to drive price per GB down -- so use exactly as much flash as you claim to have in the device; the other is to get reliability/durability/performance *up* -- so, put EXTRA flash in the device and don't tell anyone it's there (or, let *them* make that decision).

There's nothing new about this sort of practice. How many "industrial" temp range devices actually have different processes and geometries than their "commercial" counterparts? How many "high reliability" devices have just undergone a more thorough shake'n'bake before sale? etc.

Reply to
Don Y

I get very different results myself. If the writes are a reasonably low proportion, there is no speed loss. Bulk file copy to SSD is very fast, much better than rotating media.

If you can afford enough SSD, I think they would make really great backup media. Hmmm, maybe they already do. Tapes are expensive, and duplicate HDs are fragile.

?-)

Reply to
josephkk

The nature of the transaction has a LOT to do with performance. Thus, how well a particular SSD implementation may perform in a particular application domain!

E.g., streaming video to/from an SSD tends to involve lots of sequential accesses. An SSD can interleave the actual flash devices *inside* so that two, three, ten, etc. writes can be active at a given time! (not counting the RAM cache that is usually present).

OTOH, writing a "disk block" and then trying to write it again "immediately" could give you atrocious results (perhaps not in the trivial case but you should be able to see where this line of reasoning is headed)

I think SSDs are still too small for useful backup media (given that economy goes with scale... when you can put 4GB on a ~$0.00 DVD it's awfully hard to compete with pricey SSDs anywhere south of a TB).

I still prefer tape -- though I use large disks, optical media *and* tape for different types of backups (I keep three copies of everything of importance -- though often have only *one* copy of something that is "current" :< )

In addition to "fragile" (some tapes are actually *more* fragile!), I dislike disks because it is too easy for a single failure to wipe out the entire volume! You can't just pull the platters out and stuff them into another "transport" like you can with other removable media (tape, optical, etc.).

Reply to
Don Y
