64Gbyte flash memory - hot stuff, unfortunately

I've found git to be a very useful tool for managing configuration files. I create a git repository in / on all my machines and add every configuration file I touch, which gives me a handy fallback for the inevitable mistakes, but of course leaves me vulnerable to losing the hard disc - so I clone the repositories to a directory on my ZFS-based NAS and have cron run an hourly git pull on each of them.
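A minimal sketch of that setup (the hostname, paths and cron schedule here are examples only):

    # On each machine: a repository rooted at /
    cd / && git init
    git add /etc/fstab                    # add each config file as you touch it
    git commit -m 'baseline fstab'

    # On the NAS: clone each machine's repo once ...
    git clone machine1:/ /tank/config-backups/machine1
    # ... then have cron pull hourly:
    # 0 * * * *  cd /tank/config-backups/machine1 && git pull -q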

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

I use the 169 versions, and prefer them because they have their own fan and therefore do NOT rely on case airflow (which is mostly good, in my cases, but least good around the drive bays).

Yes, if a fan fails it fails, but I've had no problems with the supplied Icy-Box fans in several years of use.

Each to his own ...

--
Cheers, 
 Daniel.
Reply to
Daniel James

And flash RAM, no?

--
James Harris
Reply to
James Harris

Not so much IIRC.

--
Outside of a dog, a book is a man's best friend. Inside of a dog it's  
too dark to read. 

Groucho Marx
Reply to
The Natural Philosopher

There is no convection in tight confined spaces. And of course it's not conduction or convection in the air alone but the two heat transfers, from the inside air to the case and from the case back to the outside air, that are strongly limiting. Unless it makes good conductive contact with the chips, a case is always a bad thing.

--
/ \  Mail | -- No unannounced, large, binary attachments, please! --
Reply to
Axel Berger

The tiny space inside a flash drive does not provide for convection to any appreciable degree. Double-pane windows work by creating a small space between two pieces of glass. The space inside the flash drive is similar, giving nearly maximum insulation. Someone recently did some tests with a small cover over an OCXO and found that air insulated better than styrofoam.

Heat sinks work when air is *blown* over them. Even without a fan they have lots of room around them for the air to move in, but they are not nearly as effective as with a fan. That's why they come with fans for any real source of heat.

Coming from the uninformed, that doesn't mean much.

Yes, it does, *eventually*... unless it is conducted through the connector, which is not insulated by the air.

There is *no* evidence that a metal case not connected to the chips is appreciably better than a thin plastic one. It's like series resistors. The chip package is 1 kohm, the metal case is 10 ohms and the air that connects them is 1 megohm. The total is 1,001,010 ohms, and it matters little if you swap the 10 ohm metal for 1 kohm plastic. Yes, metal is 100x better than plastic, but it doesn't do a damn thing about the 1000x worse air layer.
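To put numbers on the analogy, using the same illustrative figures:

    $ echo $((1000 + 10 + 1000000))     # chip + metal case + air
    1001010
    $ echo $((1000 + 1000 + 1000000))   # chip + plastic case + air
    1002000

Swapping metal for plastic changes the total by less than 0.1%; the air layer dominates either way.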

--

Rick C
Reply to
rickman

The failure mode of flash is such that it becomes unreliable before failing completely. If you aren't lucky you'll end up with three degrading sticks, all containing slightly different contents, and be unsure which data is good and which isn't.

"A Man with One Watch Knows What Time It Is; a Man with Two Watches Is Never Quite Sure."

---druck

Reply to
druck

Today I revisited it. It looks to be acceptable after pausing BOINC; otherwise the audio trails the video by several seconds. On exit, if the GUI doesn't return, a Ctrl-Alt-F1 will put you into the full-screen terminal where the boot sequence is partially shown, and a Ctrl-Alt-F7 will put you back into the GUI world. As soon as I get a chance I'll throw a USB hard drive, as opposed to a thumb drive, onto it and see if I can get rid of the slight jitter that's there (playing an AVI file).

Reply to
Sidney_Kotic

That's the first I have heard of flash failure modes. From a quick read-up it seems that writes start to fail. Presumably they are only a problem if they go undetected. So don't flash drivers read back what they have written?

--
James Harris
Reply to
James Harris

I would have thought BOINC was supposed to run in the machine's 'spare time' so should not impact other tasks. On the Pi there won't, I presume, be any swapping issues so that leaves CPU and network. Assuming it is configurable, is your BOINC setup set up correctly?

--
James Harris
Reply to
James Harris

I am not certain of the following, but I think the answer is: not with pen drives and sticks, but possibly with SSDs, which do some form of error checking.

With an SSD, what you specify in terms of tracks and sectors bears no relation to where the data is actually stored. There's a CPU sorting stuff out and a table of logical-to-physical mappings that is changed to do the wear levelling, with several different algorithms for that in play too.

--
Religion is regarded by the common people as true, by the wise as  
foolish, and by the rulers as useful. 

(Seneca the Younger, 65 AD)
Reply to
The Natural Philosopher

It's supposed to, but probably not for the Pis the way I have them set up. I'm used to a PC where I can tell it to run full speed and give me the spare CPU time. The computer in the other room reports not quite 16,000 BogoMIPS versus almost 154 for the Pi. The CPU(s), as I expect is the case for most folks, spend a relative eternity twiddling their thumbs waiting for us to do something exciting (like press a key). It's not that big of a deal to pause processing for the small amount of time I spend watching videos, so I will probably write a simple script to suspend the BOINC processes, run Kodi and then restart BOINC, because I have CRS.
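A minimal sketch of such a script, assuming the stock boinccmd control tool talking to the local client with its defaults:

    #!/bin/sh
    # Suspend BOINC crunching, run Kodi, resume when it exits.
    boinccmd --set_run_mode never    # stop all BOINC processing
    kodi                             # blocks until Kodi exits
    boinccmd --set_run_mode auto     # resume per the client's preferences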

Reply to
Sidney_Kotic

There's not a lot of easy-to-find stuff about flash failure modes, but there was a good thread about it in this newsgroup about three weeks ago. Here are the links and references I kept from it:

1) There are three kinds of devices:

- those which do no wear levelling at all. A given logical disk block always maps to the same physical block, i.e. the same transistors. The blocks holding frequently re-written data wear out quickly.

- those which do dynamic wear levelling, so each time a given logical disk block is written, the hardware chooses, *from those currently not in use*, a different physical block to map it to. This helps a lot, but only if the device has a good number of unused blocks to choose from. Don't run this kind of device close to full.

- those which do static wear levelling. Blocks which hold in-use, but infrequently modified, data are occasionally rotated into more heavily used cells, so that the whole device, free and in-use blocks alike, wears out at the same rate.

- That's a summary. More here:

formatting link

- the summary is more or less verbatim from John Aldridge

2) "Flash memory card design" covers the relationship between pages, erase blocks and allocation groups and describes their impact on device life and throughput:

formatting link


3) Andrew Gabriel posted a good piece about Enterprise vs Consumer flash media.

All were in a thread in this newsgroup called "High traffic in MySQL can corrupt SD?". The first post was on the 1st of March, so your newsreader may still have a copy. Failing that, there's always Google Groups, which should have the whole thread.

======

Take a look at this:

formatting link

which gives an excellent explanation of why you should be careful where you buy SD cards and who you buy them from. Well worth the read.

Lastly, here are my thoughts on how best to avoid damaging and/or corrupting SD cards. This mainly discusses their use in PNAs as navigation aids, but it is also directly applicable to using them in RPis, though the cards used to hold these programs and the associated maps, log files etc. can be quite a lot smaller than anything you'd ever plug into an RPi:

formatting link

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

What exactly is a PNA? Your web page uses the term, but doesn't explain it. At first I thought it would be some sort of satnav device, but then you refer to "PNAs that are being used as navigation aids". If only some of these devices are used for navigation, then I would think we aren't talking about navigation devices...?

formatting link

formatting link

So what is a PNA? Is this something like an iPad?

--

Rick C
Reply to
rickman

Sorry: I thought it was a more generally known term. It stands for Personal Navigation Assistant, i.e. a PDA (Personal Digital Assistant) with a built-in GPS.

Probably the best-known PDAs were the Compaq iPAQs, though HP sold them too.

PNAs are hand-held satnav units. The best-known over here were made by Garmin, Binatone, Medion and Dell (the Streak). They generally have walking and/or automotive navigation software in firmware, many of them ran under WinCE or Windows Mobile, and these would generally run other programs from SD cards.

Both PDAs and PNAs have largely disappeared with the death of WinCE/Mobile and the arrival of cheap smartphones, which is a pity because the best PNAs had transflective screens that are much more readable in direct sunlight than any phone I've seen. The best current replacements, if a phone can't hack it, are e-Ink eReaders, e.g. the smaller Kobos.

Size can be important, because PNAs often need to be pocketable or are installed on a flexmount in front of an instrument panel and so must be small enough to avoid hiding instruments mounted in the panel. I use units with a 3.5" screen for that reason, but almost nobody I know uses anything with more than a 5" screen.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

Ok, so it is a PDA. So what is different about using a PDA as a satnav from using it in any other mode from the perspective of the SD card?

--

Rick C
Reply to
rickman

Just /home and /local here. I trust I can get a working Linux or FreeBSD up from a standard, downloadable distro and sync it to the repos. I take backups of the list of installed packages only, including my own, which I use the package system for too, from local media.

The Linux md mirroring system can actually run with 4 or more drives in a raid1 system, copying the data to all of them.
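A minimal sketch of creating such an array (device names are examples; members listed after --write-mostly are kept out of normal read balancing):

    # Four-way RAID1: every member carries a full copy of the data.
    mdadm --create /dev/md0 --level=1 --raid-devices=4 \
          /dev/sda1 --write-mostly /dev/sdb1 /dev/sdc1 /dev/sdd1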

I run a triplex for the partition where I keep all my current stuff, add a file via losetup and NFS, sync to it overnight, and then do a clean break from the mirror.
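A sketch of that add/sync/break cycle, with a hypothetical image file on an NFS mount standing in as the extra member:

    # Attach the NFS-hosted file as a loop device and add it to the mirror
    losetup /dev/loop2 /mnt/nas/backup.img
    mdadm /dev/md0 --add /dev/loop2
    # ... wait for the resync to complete (watch /proc/mdstat) ...
    # then break it cleanly out of the mirror and detach it
    mdadm /dev/md0 --fail /dev/loop2
    mdadm /dev/md0 --remove /dev/loop2
    losetup -d /dev/loop2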

FreeBSD has similar options.

Amen.

The / can be run as almost read-only media once you move /var, /home and /local elsewhere.
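A sketch of what that looks like in /etc/fstab (device names are examples only):

    # Root mounted read-only; the writable trees live elsewhere
    /dev/mmcblk0p2  /      ext4  ro,noatime        0  1
    /dev/sda1       /home  ext4  defaults,noatime  0  2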

-- mrr

Reply to
Morten Reistad

I cheated a little:

- the in-house Apache server is configured to put root in /home and the various page groups in other /home users

- my Postgres database is in a /home user

- anything I've changed in /etc, /var and /root has copies maintained in my main user in /home

- everything that normal people keep in /usr/local is in /home/local and /usr/local is a symlink pointing to it.

This is how I get away with only needing to restore /home. Needless to say, I have one or two shell scripts that put the symlink back, add the various users back into /etc/passwd etc.
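A minimal sketch of such a restore helper (the saved passwd fragment is a hypothetical file):

    #!/bin/sh
    # Re-point /usr/local at the copy kept under /home
    ln -sfn /home/local /usr/local
    # Re-add any users missing from the freshly installed /etc/passwd
    while IFS= read -r entry; do
        grep -q "^${entry%%:*}:" /etc/passwd || echo "$entry" >> /etc/passwd
    done < /home/admin/passwd.extra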

Nice. At one stage I was doing a lot of work with IBM's AS/400 (now iSeries) midrange systems. They use RAID 5 on sets of five disks. I was impressed with their reliability (and read performance!) so using something similar @home is on my to-do list.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

At a PPOE, with the Tandem Guardian, we used this three-way mirror to take backups. We included the spare in the raid and synced it; then we took another drive out of the raid, removed it from the cabinet and sent it to backup storage. This way we circulated around 20 drives for each of the three raids/mirrors on the machine. The backup storage was at the disaster recovery site, so we could be online within around 10 minutes by booting from the backups, and be synced into a mirrored config within around 90 minutes. Drives were a lot smaller then.

Last year I copied this setup for my home office.

In my setup I have this (and now we get back on topic on rpi) :

I have a pi clone with two 1GHz arm7 processors, 1 SATA port, 1G ethernet and pretty fast RAM, plus a USB3 hub, as the store for the personal stuff I depend on. It has power both from the on-board port and from the USB hub, from two different sources. I have one 256G SATA SSD as the primary in the RAID; all the others are "write-mostly". Two more ~300G USB3 disks are members, plus a slot for one on a loopback nfs-mounted disk.

This goes on a fiber out to a similar server in my garage, 30 meters from the house, where I have two mirrored plain 2T drives on which I rotate a number of files to be the last synced backup of the main raid. A full resync takes about 10 hours; I do this once a week as routine, and I keep the last 4. I also have a script that takes over the virtual IP address the original 3+1 raid is exported on, and exports the raid, so even on a total failure of the primary I can still run all the systems with only an NFS remount of the partition. (I have extensive server parks with x86, arm6 and arm7 clients to test stuff on. I just simulated a whole cluster of 8 servers for a client there: asterisk/mysql/kamailio, and ran extensive tests before deployment. All of them ran chrooted into directories on this raid.)
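A sketch of that takeover script, with a hypothetical virtual IP, interface and export list:

    #!/bin/sh
    # Claim the service address the clients mount from, then export the raid
    ip addr add 192.168.1.50/24 dev eth0    # example virtual IP
    exportfs -o rw,sync '*:/raid0'          # make the raid available over NFS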

I have a small bootable partition of ~7G at the front of all the drives, and retain 247G for the raid. This is plenty for the important stuff (no movies, some photos, scans of documents, an svn server for my software and configs, etc).

A few times a year I take a snapshot to an external drive and put it in a friend's safe a few kilometers away. I also do a manual sync every time I upload something significant. I do the same for my friends' backups.

This pi clone does nothing but act as an NFS server for my valuable stuff. That, and run icinga and monit clients. Ditto for the one in my garage. Each has at least one 100Ah 12V battery as backup power, which holds for at least 2 days of operation.

While I do the resync to the garage the write speeds suffer; I am doing this right now and have write speeds of ~4 MB/second. Read speeds are around 25 MB/second throughout on machines with 1G interfaces, around 9 on the ones without. Write performance is around 8-10 MB/sec when not doing syncs. The sync is limited by the write speeds of the USB drives in the garage. I see that the SATA SSD drive is doing almost all the work; the others are just doing writes.

The CPU usage on the raid box is rarely above 90% (of 200) even when transferring ~250 mbit/second: 8 nfsd's at 7-12% CPU each, 25% interrupt load, zero user CPU :-/

Here are /proc/mdstat, losetup and df from while the copy is being done:

[root@raid mrr]# more /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md0 : active raid1 loop2[4] sda1[2] sdd1[0] sdc1[1]
      244066432 blocks super 1.2 [4/3] [UUU_]
      [>>>>>>>>>>>..........]  recovery = 49.0% (121083072/244066432) finish=238.3min speed=7552K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>
[root@raid mrr]# losetup -a
/dev/loop2: [0034]:51904518 (/2local/parts/sdc1)
[root@raid mrr]# df
Filesystem       1K-blocks       Used  Available Use% Mounted on
/dev/root          7638904    5969444    1281416  83% /
devtmpfs            302996          0     302996   0% /dev
tmpfs               303168          0     303168   0% /dev/shm
tmpfs               303168        800     302368   1% /run
tmpfs               303168          0     303168   0% /sys/fs/cgroup
tmpfs               303168         24     303144   1% /tmp
tmpfs                60636          0      60636   0% /run/user/1003
/dev/md0         240235144  172049108   55982716  76% /raid0
odroid:/1local/ 1922726720 1443215520  381835648  80% /1local
odroid2:/local/ 1922728960 1042282112  782771200  58% /2local
[root@raid mrr]#

And it is all in standard raspbian. Just some packages from the repos, just some extra hardware and configuration.

-- mrr .. who has uid 1003 on all the servers, kept throughout 4 employers since 1987.
Reply to
Morten Reistad

Looks good. I was never sysadmin for Guardian so never got involved with its disk management. I take it you didn't have a connection between the sites, or not a lot of bandwidth? Otherwise, IIRC, you could have simply declared that pairs of disks on the two machines mirrored each other.

At one time I was sysadmin for an IBM S/88 (a badge-engineered Stratus) which used RAID1 mirroring, with backups done as you describe. The one I looked after was just a development system with a single mirrored pair, so the backup disk of the day was just rotated onto a shelf.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie
