USB memory sticks for root file system - experiences

I am not sure whether folks in this group would count a device such as the pogoplug as an embedded device, but it applies also to many development boards.

I have been experimenting for some time with two Pogoplugs running Linux, booting Arch Linux from a USB memory stick. It seems that in particular the cheap supermarket memory sticks do not last long. In one case, the USB memory stick became unusable within a week; it cannot even be formatted anymore. The last thing I did to it - after it showed problems with ext2 - was to format it back to VFAT and run a capacity checker under Windows.

Another stick frequently showed filesystem corruption. Now a Linux box thinks the filesystem is clean, but U-Boot does not recognize the partition table.

The two failing USB sticks were Emtec and disk2go.

A colleague recommended Verbatim sticks, I tried one on a development board and that worked so far. I will test my pogoplug with a new Verbatim stick.

Any other recommendations?

Andreas

Reply to
acd

Memory devices (USB sticks, SD cards, and other types) have limitations on the number of times blocks can be erased. The problem is that the minimum erase size can be pretty large, so that making a small modification to a file, or to an inode, can result in a large block of flash that needs to be erased.

These problems get worse if you modify structures that sit on the boundary of two erase blocks, because it requires erasing both of them.

To make these devices last longer, manufacturers typically have a number of blocks at the beginning of the device that have been optimized for frequent access, so they are well suited to FAT tables, and such.

The position of those special blocks is not consistent among vendors, but since these devices come pre-formatted, it is best to leave that formatting alone, assuming that the manufacturer already matched the filesystem to the card as best as possible. Also, the FAT filesystem itself is a good match for the card, since the FAT updates are concentrated in local tables, and not spread out all over the disk like ext2 inodes. So, it's best if you write your application so it can use a FAT filesystem. If you must reformat the card with ext2, at least use the original partition table (even if it wastes 4 MB at the beginning like some cards do).

Also, to reduce writes, make sure you set the 'noatime' option in your mount command, otherwise the inode is rewritten even if you just read a file.
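For instance, a minimal sketch of the mount options, assuming the root filesystem lives on /dev/sda1 (a hypothetical device name):

```shell
# Hypothetical /etc/fstab entry for an ext2 root on a USB stick;
# 'noatime' stops the read-triggered inode rewrites described above:
#
#   /dev/sda1  /  ext2  defaults,noatime  0  1
#
# The same option can be applied to a running system without a reboot
# (needs root):
#
#   mount -o remount,noatime /
```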

Reply to
Arlet Ottens

I would expect the device's built-in wear leveling algorithm to actually spread write operations all over the "disk", even if the same logical blocks are rewritten.

For the same reason, relocate /tmp and /var/log to a RAM disk.
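A sketch of that relocation as fstab entries, with purely illustrative size limits:

```shell
# Hypothetical /etc/fstab entries keeping the frequently written
# directories in RAM instead of on the flash device:
#
#   tmpfs  /tmp      tmpfs  defaults,noatime,size=32m  0  0
#   tmpfs  /var/log  tmpfs  defaults,noatime,size=16m  0  0
#
# Anything in these directories is lost at power-off, which is exactly
# the point for scratch files and (most) logs.
```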

-- Roberto Waltman

[ Please reply to the group, return address is invalid ]
Reply to
Roberto Waltman

Make sure you don't have accidental power outages, especially when writing to the device. That could result in all kinds of problems, from filesystem corruption to flash hardware failures.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

Yes, and those algorithms are beyond your control. If something breaks there, the flash device is dead and there is no way to recover.

VLV

Reply to
Vladimir Vassilevsky

Yes, it will do that. But if you have a small inode in the middle of a 4 MB erase block, and you update that inode (containing file size, allocation blocks, and access/update times), it will still have to erase an entire 4MB block. Even with wear levelling, it can add up quickly, especially because the inodes are spread over the disk, instead of combined in the same erase block.

The advantage of the FAT filesystem is that all FAT entries are grouped together, and that the device firmware 'knows' where the FAT is, and it has been optimized to handle frequent changes in that particular area.

Reply to
Arlet Ottens

The problem is that I sometimes could not reach the device (in the case of the Pogoplug) over the network anymore, so I had no choice but to pull the power plug. I will check all the other tips (I used the Linux installation described on

formatting link
but I still think that ext2/ext3 is better than VFAT, considering the problems I see. Not to mention that U-Boot does not seem to support VFAT.

Andreas

Reply to
acd

You may be able to use other options, such as using two partitions. One VFAT, in the same place as the old FAT partition, with the same properties, and another one ext2/ext3. If you use the ext2 mostly for read-only stuff, it won't wear the flash.
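That two-partition layout can be sketched as an sfdisk(8) input script; all offsets and sizes here are illustrative, and ideally the first partition keeps the vendor's original start offset:

```shell
# Hypothetical sfdisk script: apply with  sfdisk /dev/sdX < layout.txt
# (WARNING: this destroys the existing partition table on /dev/sdX)
#
#   label: dos
#   start=8192,    size=1048576, type=c    # VFAT, where the vendor put it
#   start=1056768,               type=83   # ext2, used mostly read-only
```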

Reply to
Arlet Ottens

Does your system write to any part of the root filesystem (e.g., /var/log et al.)? Short test: mount root as RO and see *what* is writing to that filesystem (and where!), and either stop the writes (or reduce their frequency by orders of magnitude... e.g., if you are logging at .info level you're just being wasteful) *or* redirect them to some other "volatile" (e.g., memfs) filesystem that can tolerate the abuse.

(This will probably also give you some small performance gains)

(Hint: make the root file system only large enough to cover the "static" (non-changing) portion)
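One way to run that short test, sketched in shell. The remount needs root and a local console, so only the harmless output filter is live code here; the rest is shown commented out:

```shell
# Remount root read-only so stray writes fail loudly (root only):
#
#   mount -o remount,ro /
#
# Then list processes that still hold files open for writing on /.
# In lsof's default output the FD column looks like '3w' (write) or
# '4u' (read/write); this filter keeps only those lines:
lsof_writers() { awk '$4 ~ /[0-9]+[wu]$/'; }
#
#   lsof / | lsof_writers
```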

Reply to
D Yuniskis

As it is possible (likely?) that the memory failure may be causing the "inaccessibility" problem, consider instrumenting your write(2) so that it does a write-verify instead. Then, be sure all of your write(2) invocations check the result code (and do something appropriate in the case of the *failed* write!).

This might let you discover the problem sooner (while some functionality still remains)
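The write-verify idea can be sketched in shell as well as in C: write, flush, read back, compare. The function name and paths are illustrative; note that a thorough version would also drop the page cache before comparing, so the read-back really comes from the medium:

```shell
# Hypothetical write-then-verify helper: copies src to dst, forces the
# data out, reads it back and compares, failing loudly on mismatch.
verified_write() {
    src=$1 dst=$2
    cp "$src" "$dst" || return 1  # the write itself, result code checked
    sync                          # push the data out of the page cache
    cmp -s "$src" "$dst" || {     # read back and compare
        echo "write-verify FAILED for $dst" >&2
        return 1
    }
}
```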

Reply to
D Yuniskis

A root filesystem mounted read-only is a reasonable thing to demand and quite straightforward to achieve with judicious use of memory filesystems and/or NFS mounts. If there's data that absolutely _must_ be kept locally across reboots then I'd be thinking in terms of cpio'ing it into and out of a disk partition in an init script - it is likely to prove much less disruptive than putting an actual writable filesystem on the device.
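The cpio round-trip can be sketched as a pair of init-script helpers; the directory and archive paths are passed in, and everything here is illustrative:

```shell
# Hypothetical save/restore helpers for the persistent-data partition.
# save_state runs from the shutdown path; restore_state runs at boot,
# after the memory filesystem has been mounted.

save_state() {     # save_state <dir> <archive>
    ( cd "$1" && find . -type f | cpio -o 2>/dev/null ) > "$2"
}

restore_state() {  # restore_state <dir> <archive>
    [ -f "$2" ] && ( cd "$1" && cpio -id 2>/dev/null ) < "$2"
}
```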

I have something similar sitting on my desk in front of me - my computer terminal is a Neoware CA21 thin client. I found the stock "firmware" a little limiting so I threw a NetBSD installation on it. The integrated 256MB disk-on-module was a little limiting so I used a 1GB USB drive instead as a temporary measure until I got around to replacing it. Perhaps 18 months later I still haven't quite got around to it and it's still working fine. Again, that's a read-only filesystem in normal operation (i.e. you're not tweaking the configuration) which also has the advantage you can simply turn it off when you're done since all the apps are essentially stateless anyway.

I did take a couple of months to refine that installation to read-only operation, during which it had no real issues though, which seems better longevity than you have been experiencing. The USB drive is actually a promotional freebie from the local university so I can't cite a particular make and model, and dmesg isn't particularly revealing either since it too does not cite a manufacturer:

sd0 at scsibus0 target 0 lun 0: disk fixed
sd0: 956 MB, 1968 cyl, 16 head, 63 sec, 512 bytes/sect x 1957888 sectors

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

Interesting. I use CA5's for X terminals and have been wanting to repurpose some CA10's (1GHz/1GB) for other bits of fabric (firewall, name service, etc.) but have yet to make time to design a CF adapter that will fit in the thing (connector points the wrong way). So, throwing in laptop disk drives does the job without much effort.

How did you decide what parts of the system needed to be tweaked to get it to run R/O? E.g., mount an MFS for /var (and probably cut down on logging and/or set newsyslog.conf to compress and discard logs, *often*).

Do you run many *real* apps on the boxes? Or, just use it to host an Xserver?

(e.g., I figure I could afford to create a small writable portion on the flash device and then intentionally limit the applications that would want to do those writes. That way, I could put my zone files there to startup the name service while letting other "more dynamic" file system uses take place on an MFS mounted elsewhere.)

Ouch! Hence my reluctance to tackle this (until I've decided that the CA10's are, indeed, the right platform on which to invest that time)

So, root is *mounted* R/O? As such, anything that you may have "overlooked" will eventually cause a panic?

Cool!

Reply to
Don Y

I wanted to avoid a disk at all costs. One of the main motivations for using a thin client in the first place was for a completely silent machine so it removes one source of potential distraction (or possibly, it removes one potential excuse). In any case there's no room for an HDD in the CA21 case.

The "right" sort of CF adapters are available but you need a lot of searching and sorting out to isolate the correct ones - it doesn't help that a lot of sites don't seem to fully appreciate the differences between form factors, so they don't provide the necessary details unless you're willing to scrutinise images for details a couple of pixels high.

Just /var and /tmp on MFS mounts, the former of which is populated from a tar file at boot time. I haven't seen the need to trim logging: since these are used as terminals they get powered off and reset at the end of the day anyway.
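The boot-time population step described above can be sketched like this; the tar image path and mountpoint are illustrative:

```shell
# Hypothetical rc-script fragment: /var lives on a memory filesystem
# and is seeded from a tar image kept on the read-only root.

seed_var() {   # seed_var <mountpoint> <tar-image>
    tar -xpf "$2" -C "$1"
}

#   mount -t tmpfs tmpfs /var        # done by the real rc script (root)
#   seed_var /var /etc/var.skel.tar  # hypothetical image path
```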

Not really. Mostly it's X querying the relevant machine with XDMCP, or running rdesktop for when I need a Windows machine. SSH, telnet and Minicom (to a serial console server) for command line stuff. I have experimented with local apps and most things I use are fine. OpenOffice takes a while to load (perhaps 15 seconds) but it's fine after that. The only problematic app is Firefox - it's slower than a dead slug. That's a pretty big limitation for me and I suspect a lot of users - I need a web browser even for a couple of in-house databases.

I'll re-phrase that. A couple of months to _get_around_ to doing the job properly. Perhaps half an hour to sort out when I finally did. There are plenty of examples around online if you look in the right places: it is essentially similar to the way live CDs or even installation media work. There's also a guide on this very scenario at

formatting link

Yes, and I wouldn't expect panics either. Userland issues, sure, although the only problem I recall is with the SSH client and its authorised keys file - the home directories for some users are read-only too.

I think I'd better explain that, since there are two classes of login. The first is conventional logins whose home directories are NFS mounted, so there's no issue with those. However, I also have some pseudo-users with home directories in /usr (root filesystem). They have the login names of various systems, no password, and when logged in their .profiles fire up X and connect to the relevant system. Crude, but a hell of a lot more straightforward than some graphical front end. However, when SSHing directly out of the machine it tends to be as myself, so I have a writeable home directory anyway.

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

Agreed -- for an X terminal. It's delightful being able to hide the noisy machines (real servers) elsewhere and concentrate on the work at hand... *soft* music in the background instead of being forced to wear headphones, etc.

But, I've had small machines providing "key services" tucked under a dresser in the bedroom for more than a decade. SPARCstation LX was my favorite in this role (*reasonably* quiet -- if you used a new-ish disk -- and low power... not like the majority of PC offerings). Given its unfortunate location (think: sleeping), it is probably an even *better* candidate for "silent operation".

[unfortunately, the LX just didn't have the horsepower to keep up with my routing needs as more nodes and networks came online :< ]

It looks like the CA21 is about the size of the CA5. The CA10 is probably twice that volume. E.g., it supports a single PCI slot, has provisions for a PCMCIA option on the PCB (i.e., lots of real estate), etc. A laptop drive fits easily.

The PCI slot and dual DVI+VGA connectors (it will run dual headed) are the real draw, for me (plus the fact that they were freebies). E.g., I stuffed a 4-NIC PCI card in the firewall box so it can straddle the routing between *all* the networks here (WAN on one NIC, wireless AP on another, "traditional" computing on yet another, and automation and multimedia on the last two).

I'd like to deploy some of the others for various "dedicated" roles on those other networks (e.g., media services, etc.) but those apps tend to be more (persistent) stateful...

Exactly. Where is pin 1? Which direction will the card extend into the case (the IDE44 connector is located on an edge of the board so if the adapter mounts "the wrong way", you can't plug the adapter into the connector due to mechanical interference with the case)? Will the CF card sit *on* the adapter or hang below it (interference problems, etc.)?

Unfortunately, I only have one or two of the original "modules" (which, of course, are designed to fit very nicely into the space provided). They are a bit smallish (i.e., imagine fitting a "flush" set of choices for xfs on that card!) and have XPe on them currently.

Understood. In my case (firewall, DHCP, DNS, NTP, xfs, etc.) the whole point was to leave them running 24/7/365 so that any *other* machines could avail themselves of those services as/when needed. All of them tend to *want* to scribble notes someplace -- or, be easily reconfigurable as needs change.

But you remotely mount a $HOME, etc. (?) Something I can't do as *this* is supposed to be the system that others depend upon (i.e., the bottom-most turtle).

Ah. The only time I tend to use a browser on an X terminal is for Solaris/Jaluna help/man pages. I should try it, though. With 1GHz and 1GB and Gb fabric (though I think the CA10 only runs at 100M??) it should be a client-side limitation. (i.e., not *running* the browser on the X terminal iron but just using it for display services)

I see the bigger problem being the effort required to determine what the applications (that will be hosted on that diskless iron) expect in terms of writeable file systems. E.g., long running services *will* tend to create bigger log files, you'll *want* those logs (since the box is providing key services), apps may need to update persistent configuration data, etc.

I was chagrined to discover that PostgreSQL won't support a R/O database -- even if you never MODIFY its contents! E.g., I had planned on keeping the catalog of music selections in a R/O database for the multimedia server (which I wanted to host using another of these silent, fan-less boxes) so that it (and the music itself -- though requiring external media to store due to size) was accessible whenever a user wanted it (i.e., 24/7/365). The fact that it apparently must reside on R/W media (even though it is never deliberately modified) made that a considerably harder challenge.

[desktop applications seem to have a cavalier attitude towards resources: they expect them to be limitless and, at the very least, have NO IDEA what their actual requirements might be!]

Unlike a *real* embedded system, there don't seem to be any details available that tell you just how much memory the kernel will call upon in whatever situations it is likely to encounter with a given set of apps running on it. Since it makes no sense to have swap on a device like this (mount swap on an MFS? why not just use the underlying memory behind the MFS for physical memory??![1]), any time any set of kernel+apps exceeds the total physical memory available...

[1] Actually, wrapping memory in an MFS with a co-resident swap (like Solaris' /tmp) can make certain configurations of apps more "runnable" (without altering sources).

I would *prefer* the panic (at least while shaking out the various bugs in the configuration) as that draws immediate attention to each (new?) problem. Easier to notice than having to parse *.error syslog entries. :-(

Understood. I manage my ssh, telnet, etc. connections similarly (and make a point to change $PS1 to `hostname` everywhere to remind me of who I'm talking to!)

Is *your* $HOME NFS mounted? Or, "volatile" in an MFS?

I.e., can you *do* anything with *just* this machine up and running (and no others)? E.g., I can (currently, owing to the presence of the laptop drive) fire up a CA5 (as an X terminal) and "work" (run apps) on the CA10 (I often use this to *write* code that I don't yet need to compile... when I don't want to deal with starting any bigger iron).

Reply to
Don Y

It sounds like you and I have similar home set ups, not just here but in a few other things you say later on. I have a similar machine here for file & print, Postgres, Apache and a few odds and ends (DHCP, DNS etc) - that's a 600MHz VIA EPIA board I've been using a number of years.

I'm thinking of replacing that more for the networking than CPU limits: I'm beginning to think I really need to upgrade it to gigabit ethernet and ideally dual NICs (or at least a NIC that supports VLANs): the sole PCI slot is already occupied by a SATA disk controller.

The disk is one of the old 5400 RPM Hitachi CinemaStars which I think I mentioned in a previous thread of yours a year or so ago. They're nice and quiet - 24 dB even when active. Like you I came to the conclusion that sometimes there's no substitute for a disk.

Yes - from that EPIA based server. I did have some ideas of using them locally more than I actually do but the real motivation is chiefly so you can plug in a USB drive and access it from your seat. Having your usual home directory available makes that a lot more convenient but of course you could work around it if it wasn't.

If it's just being used as a terminal performance is fine and generally indistinguishable from a desktop, even on 100Mbit. I run at 1400x1050x24 and even full screen DVD playback is generally acceptable if it is another machine doing the actual decoding. Slow panning shots are slightly cinefilm-ish: you can see the frames but not enough to spoil what you are watching. Audio is fed to the speakers via analog connection to my usual physical "full size" machine on unused pairs of the network lead - I haven't tried network audio.

I haven't played with the Unichrome's MPEG decoder on the Neowares yet, but when that server still had a head (and with a slightly less capable Unichrome chip) it worked well. OTOH I've no idea if that hardware acceleration is network transparent. I suspect it may not be.

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

I wanted to get the "core services" that I use off of bigger machines and *into* the fabric, so to speak. It was annoying to have to fire up a UN*X box just to get name services running so a Windows box could access a network printer, or, the font server running just so I could use a particular font in a display, etc.

For the automation and multimedia applications, this is even more true (I definitely don't want to have to keep a "real machine" running just to listen to music or control the furnace!)

Do you really *need* the speed? I have all of my Gb hosts on a single 8 port switch (actually, I think the other switches are also Gb though the hosts that they serve often are not). My thinking when I was assigning switches was that printers and X terminals really don't *need* that sort of bandwidth. Nor does the automation stuff (though I suspect the multimedia *will*). Most of my files are served from bigger/faster boxen which already have fat pipes...

[I've recently relearned the lesson that I keep having to learn each time I upgrade fabric: "No matter how fast the fabric gets, transferring *archives* will always take a LOT longer than you think -- because the archives get bigger coincident with the fabric getting faster! E.g., a few TB "over the wire" takes forever -- even at Gb speeds! :< I.e., it seems like SneakerNet (though with huge media) will always have a role for truly high bandwidth transfers ;-) ]

So, it seems more effective to keep the "muscle" connected with wide pipes and not worry about the display/print/etc services

It's often an expedient. E.g., it will take me a LONG time to figure out how to support R/O media under PostgreSQL (a key requirement for some of the product development work I am doing). But, silly to prevent myself from moving forward populating those databases until then! And, even sillier to host those DBs on a big, noisy beast. (the bigger iron tends to see more use in the winter months when the excess BTUs are more welcome in the office -- definitely NOT in the summer months!)

If you are running NBSD on the CA21, it should be relatively easy (?) to mount sd0 on $HOME -- assuming you aren't beating on that directory heavily?

I never considered the idea of "carrying" $HOME in my pocket and simply plugging it "wherever" it might be needed. E.g., that would even work on a Windows machine! I'll have to think about this... it sounds like it could be a really good idea! Though keeping the discipline to always use that device for *all* the work on the stuff contained thereon might be hard -- I'd almost have to force myself NOT to back it up onto any other server (lest I be tempted to modify the backed up version at some point and quickly get the two versions out of sync)

[I tend to think of files as having real, physical locations -- that don't MOVE! But, to allow *me* to move and still access them. :> ]

Wow, I had never considered watching full motion video on an X terminal. Most of my "computer use" has fairly static displays so X has been a real win for me. I started using NCD 19r's many years ago. Then, 19c's, HMXpro, etc. Each time, getting more features/performance and smaller footprints (e.g., the HMX "pizza boxes" served 75 pound 21"/25" *CRTs*, while the CA5's support similarly sized LCD monitors in 1/10th the volume/mass!)

I will have to try that just to see what the experience is like. I had assumed bandwidth requirements would be too high. (I've been looking hard at suitable CODECs for the video clients in my multimedia solution for similar reason)

NASd packages seem to be broken pretty often (not sure how much of this is the package maintainer's fault). I used it on the NCD machines but no longer bother with it. If I need audio it tends to be on the multimedia workstation (I use a HiFi or PMP for my background music)

I think you would have to be able to export (import) a virtual frame buffer. Doubtful that they bothered with that sort of support. I'd be curious to try something like that with the Sun Ray architecture!

Reply to
Don Y
