64Gbyte flash memory - hot stuff, unfortunately

I am looking for some 64Gbyte USB flash drives - preferably in slim  
casing. This one looks ideal as a form factor:

https://www.mymemory.co.uk/USB-Flash-Drives/Kingston/Kingston-64GB-DataTraveler-SE9-USB-Flash-Drive

But the Amazon reviews for that and similar products report that they -  
or, at least, some of them - get very hot in use. Naturally, the heat  
comes from consuming a lot of power, and that would not be good in an RPi  
or anywhere else. (One use I have in mind is to put three of these in a  
USB hub where the ports are close together and, of course, Pi USB ports  
are _very_ close.) Apart from putting a load on a small PSU, running hot  
could affect the working life of flash memory. Some Amazon reviews  
report that hot USB flash drives fail after a few months. Bad news when  
precious content is stored on them!

I see in a previous thread that Amazon reviews can be for related  
products rather than the specific one a customer buys. So I wondered if  
you guys had come across any particular flash drives that were slim  
enough to fit in adjacent Pi ports but did not get uncomfortably hot  
while in use.

Any make/model suggestions?


--  
James Harris


Re: 64Gbyte flash memory - hot stuff, unfortunately
On Tue, 21 Mar 2017 11:55:55 +0000

Quoted text here. Click to load it

Use a powered USB hub.

Quoted text here. Click to load it

Remove the case (unless it acts as a heatsink, as the metal ones may)
and attach a heatsink or place a small fan nearby, or stick a Peltier
device on it.

Quoted text here. Click to load it

Just insert a new stick and restore from backup - you do have backup,
don't you?




Re: 64Gbyte flash memory - hot stuff, unfortunately
On 21/03/2017 16:37, Rob Morley wrote:
Quoted text here. Click to load it

Backups of system drives are a pain. They are always missing the latest  
info. You don't find out whether they are good or bad until you need  
them. And they don't always work properly when taken on a running system,  
really needing a shutdown first.


--  
James Harris


Re: 64Gbyte flash memory - hot stuff, unfortunately
On Tue, 21 Mar 2017 22:32:25 +0000, James Harris wrote:

Quoted text here. Click to load it
Quoted text here. Click to load it

Use rsync as your backup tool. I've been using it for years and have had  
no problems with backing up system disks on a running system.

Rsync is faster than tar, once it's done the first backup, because it does  
the minimum of work needed to make the backup identical with the disk it's  
backing up. Use a cycle of at least two backup disks/sticks/SD cards  
and store them offline so a mains spike can't hurt them, preferably in  
a firesafe or a different building.
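A minimal sketch of one pass of that rotation (the paths here are
throwaway examples; in real use the source would be something like /home/
and the destination a mount point on whichever backup disk is plugged in
this week):

```shell
# Sketch of an rsync mirror cycle with stand-in directories.
SRC=$(mktemp -d)    # stand-in for the live data
DEST=$(mktemp -d)   # stand-in for this week's backup disk
echo "important stuff" > "$SRC/notes.txt"

# -a preserves permissions, ownership and timestamps; --delete keeps the
# copy identical to the source by removing files deleted since last time.
rsync -a --delete "$SRC"/ "$DEST"/

cat "$DEST/notes.txt"    # -> important stuff
```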


--  
martin@   | Martin Gregorie
gregorie. | Essex, UK
Re: 64Gbyte flash memory - hot stuff, unfortunately
On 21/03/2017 23:06, Martin Gregorie wrote:
Quoted text here. Click to load it

Ah, but how many times have you restored a system disk? AIUI because the  
system is running when the backup is taken - even with the brilliant  
rsync - not all data will have reached the disk. It therefore won't all  
get backed up.

I use rsync mirroring for system backup and I think I had to do a  
restore once. It was much better than nothing but still left the system  
needing a bit of fettling to get it working properly. Things like mysql  
database tables were, presumably, cached in memory - leading to the  
rsync'd database tables being broken and, even when fixed, inconsistent.

I do database backups into sequential files and they get rsynced but all  
such backups are, of necessity, out of date. And unless the database is  
stopped between such backups the tables could still be inconsistent with  
each other.

Quoted text here. Click to load it

AISI there are two types of backup:

1. those which let us go back to an earlier state (which you are talking  
about)
2. those which are used to prevent or allow recovery from disaster

Both are useful but unless an OS has effective inbuilt system-drive  
backup I can't see how any file or drive backup tool can perform a real  
snapshot. There could always be some data which is only half written to  
disk at the moment those blocks are the ones being copied into the backup.

A true system backup would really need a system shutdown first.

As far as I am aware there are no true system backup options in a  
running Raspbian or any other Linux OS.

As an aside, one of the few ways a true system backup can be performed  
is in a VM. A host system can allow an entire guest OS image to be  
snapshotted, and then later restored to that snapshot on demand. The  
snapshot contains files and memory contents and running programs. The  
restored snapshot therefore continues where it left off - even though it  
may get a bit confused by a sudden change in the time of day.

Speaking of offsite backups, I have considered using rsync to back up  
data disks across the internet. That could be a good way to go for  
anyone who has, say, some cloud storage or, perhaps, comes to a  
reciprocal agreement with a friend.

But to guard against hardware failure of system drives raid-1 seems  
best. That would probably be one of the first things I would do with USB  
flash.


--  
James Harris


Re: 64Gbyte flash memory - hot stuff, unfortunately
On Tue, 21 Mar 2017 23:39:52 +0000, James Harris wrote:

Quoted text here. Click to load it
Quite a lot, actually. I went through a phase when my system disks were  
failing about every two years. Never found any data missing after any of  
those recoveries: SOP was to do a clean Linux install on the new disk and  
then overwrite the /home partition from the backup.
    
Quoted text here. Click to load it
Databases are different. I do two backups there:

- one is an incremental backup written as part of the system
  maintaining the DB.  

- the other is a whole DB dump using a standard utility which dumps
  tables as CSV files interspersed with the SQL statements needed to
  recreate and reload the tables.
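For PostgreSQL (which, as mentioned below, is what is in use here) the
standard whole-DB dump utility is pg_dump. A minimal sketch of that dump
and restore - the database and file names are placeholders:

```shell
# Dump the whole database as CREATE statements with the table data
# inlined; 'mydb' and the dump file name are hypothetical.
DUMP=mydb-$(date +%F).sql
pg_dump mydb > "$DUMP"

# Restore by replaying the dump into a freshly created, empty database:
createdb mydb_restored
psql -d mydb_restored -f "$DUMP"
```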

My preference is to restore the database by using my incremental backup  
files, which are designed for reasonably easy & fast replay through my  
own batch loader. BTW, this is PostgreSQL, not MySQL, and I've had  
problems with the Postgres backup in the past due to a broken CSV  
implementation which couldn't handle fields containing the character  
used as quote marks.  
  
Quoted text here. Click to load it
If you're sensible you shut down anything that's likely to make changes  
in response to external events while backing up. I stop mail clients and  
the web browser (mainly because I want to clear out history before taking  
the backup) and I know that the DB won't be updated (no clients running),  
but this is the regular weekly backup that gets stored offline. rsync will  
tell you about any files that change while you're backing up, but I see  
that about once every six months - and it's almost always a system log.

I also have an automatic overnight backup to an always-online disk, so  
it's really only good for fat-finger protection. That uses compressed tar  
backups of the /home and /var partitions and also reports it when a file  
changes during a backup. This is also a rare event, happening no more  
than 2-3 times a year, and again it's almost always a system log that  
gets hit.

Quoted text here. Click to load it
I think it's good enough - see above - but you can always get a clean  
backup from RAID 1: offline one of the mirrors, back it up and then put  
it back online afterwards. Back in the days of 14" removable disks, when  
Stratus fault-tolerant systems ran RAID 1 using paired drives, it was even  
simpler - you knocked one of the plexes offline, swapped its disk  
pack for the one in the firesafe and fired the drive back up. The same  
trick should work equally well with 2.5" or 3.5" disks in hot-swap disk  
caddies.
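With Linux md RAID 1, that offline-one-mirror trick might look like the
following sketch. The array and partition names are hypothetical, and the
commands need root:

```shell
# Mark one half of the mirror failed and detach it (md1 and sdb1 are
# placeholders for the array and its member partition).
mdadm /dev/md1 --fail /dev/sdb1
mdadm /dev/md1 --remove /dev/sdb1

# ... back up /dev/sdb1 at leisure, e.g. by mounting it read-only,
# while the array keeps running degraded ...

# Re-add it; md resyncs it against the surviving mirror.
mdadm /dev/md1 --add /dev/sdb1
```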
  
Quoted text here. Click to load it
Yes, over a VPN to a friend's system sounds like a good plot. Dunno about  
your cloud suggestion - some of them have been losing a bit of data  
recently.  

Quoted text here. Click to load it
...with some caveats:  
    (1) you still need offline, and preferably offsite, backups
    (2) make sure the mirrored drives are from different batches or
        different manufacturers
    (3) use hot-swap caddies or external USB or SATA drives.


--  
martin@   | Martin Gregorie
gregorie. | Essex, UK
Re: 64Gbyte flash memory - hot stuff, unfortunately
On 22/03/2017 02:13, Martin Gregorie wrote:
Quoted text here. Click to load it

Did your system backups not include settings in /etc, installed apps,  
email repositories, logs, etc?

Quoted text here. Click to load it

You could run

   rsync -a SRC/ DEST/
   sync
   rsync -a SRC/ DEST/

The second mirroring rsync should run quickly and capture changes since  
the first. Still no guarantee over logs, though.

Quoted text here. Click to load it

That's a good idea. It still won't necessarily capture changes of open  
files being written. But it sounds like it may be the best option  
without OS support or a VM.

Quoted text here. Click to load it

Have they? Very naughty!

Quoted text here. Click to load it

Good suggestions.


I use a brilliant set of caddies for 5.25-inch bays - the Icybox  
IB-168SK-B. They are good because drives can simply be swapped without  
needing a screwdriver to attach a tray or runners. (The enclosed '169'  
versions with the 30mm fan are poor due to fan failure, but the 168s are  
open and use case airflow.)

The 168s are not perfect, though. Their LEDs require connection to a  
drive-activity header rather than snooping the SATA interface, and  
motherboards usually have only one such header no matter how many disks  
are fitted. And the 168s don't let you see which disk is in which bay; I  
use post-it notes which protrude past the drive door.

--  
James Harris


Re: 64Gbyte flash memory - hot stuff, unfortunately
On Wed, 22 Mar 2017 09:50:06 +0000, James Harris wrote:

Quoted text here. Click to load it
Depends. The weekly backup does, but the overnight run doesn't directly.

I have hacks:  

1) /home contains a local and a java directory that are symlinked as /usr/
local and /usr/java, so backing up /home captures them too, and all I need  
to do after a clean install is to relink them. /usr/java doesn't exist in  
a clean install and /usr/local does, but its directory structure is empty.

2) any changes I make explicitly to files in /etc are copied to a  
directory in my main login directory, so after a clean install it's  
simple to copy them back to /etc and run systemctl to enable and start  
the services they affect.

3) I also have a set of scripts to set up users and groups for my various  
logins, relink /usr/local, and reinstall packages that aren't part of the  
standard iso used for clean installs.

4) I put /home in a separate partition so that everything under /home  
will be unaffected by a clean install when all the other partitions are  
reformatted.
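Hack 2) might look like this in practice - the directory under $HOME and
the service name are hypothetical, and the copy-back needs root:

```shell
# Copy the saved hand-edited files back over a freshly installed /etc
# (~/etc-copies is a hypothetical directory mirroring /etc's layout) ...
cp -a ~/etc-copies/. /etc/

# ... then enable and start each service those files configure, e.g.:
systemctl enable --now httpd
```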

Quoted text here. Click to load it
I don't care about the log files in those circumstances.  

Long ago I configured Apache and Postgres so their data is part of /home.  
This means that, with my recovery strategy after a disk failure, the only  
real reason for backing up /var is to keep the mail spools.

There's nothing in /boot, /root and the rest of the root-level directory  
structures apart from /home and /var that I care about and that isn't  
automatically rebuilt by a clean Linux reinstall.

Quoted text here. Click to load it
The problem is that people *assume* that the regional part of a cloud  
they're using is automagically backed up and replicated to another cloud  
region AND that if their region fails they'll be automatically linked to  
the replicated copy. Then they find out the hard way that one or more of  
those assumptions were false and realise they don't have local backups of  
anything.

If you read The Register you'll see that cloud failures and resulting  
lost data are no less frequent than similar webhost and mailhost failures.
The Amazon and Azure clouds seem to be better than the rest in this area,  
but even they have regional failures from time to time.


--  
martin@   | Martin Gregorie
gregorie. | Essex, UK
Re: 64Gbyte flash memory - hot stuff, unfortunately
On Wed, 22 Mar 2017 09:50:06 +0000

Quoted text here. Click to load it

    I've found git to be a very useful tool for managing configuration
files. I create a git repository in / on all my machines and add every
configuration file I touch, which gives me a handy fallback for the
inevitable mistakes but of course leaves me vulnerable to losing the hard
disc - so I clone the repositories to a directory on my ZFS based
NAS and have cron run an hourly git pull on each of them.
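A minimal sketch of that scheme - the host name, file list and cron user
are all hypothetical, and the repo in / needs root to create:

```shell
# On each machine: a repo rooted at /, tracking only the config files
# explicitly added to it (the file names below are just examples).
cd /
git init
git add etc/fstab etc/ssh/sshd_config
git commit -m "baseline configs"

# On the NAS: clone each machine's root repo once over ssh ...
git clone machine1:/ machine1-configs

# ... then pull hourly from cron (an /etc/cron.d style entry):
# 0 * * * *  backup  cd /srv/configs/machine1-configs && git pull -q
```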

--  
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
OT: SATA enclosures (Was: 64Gbyte flash memory - hot stuff, unfortunately)
Quoted text here. Click to load it

I use the 169 versions, and prefer them because they have their own fan  
and therefore do NOT rely on case airflow (which is mostly good, in my  
cases, but least good around the drive bays).

Yes, if a fan fails it fails, but I've had no problems with the supplied  
Icy-Box fans in several years of use.

Each to his own ...

--  
Cheers,
 Daniel.
  


Re: 64Gbyte flash memory - hot stuff, unfortunately
Quoted text here. Click to load it

/home and /local here. I trust I can get a working Linux or FreeBSD  
up from a standard, downloadable distro, and sync it to the repos.  
I take backups of the list of installed packages only - including  
my own, which I use the package system for too, from local media.
    
Quoted text here. Click to load it

The Linux md mirroring system can actually run with 4 or more  
drives in a raid1 set, copying the data to all of them.

I run a triplex for the partition where I keep all my current  
stuff, add a file-backed member via losetup and nfs, sync to it  
overnight, and then do a clean break with the mirror.
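Assembling such a multi-way raid1 with a file-backed member might look
like this sketch - all device and file names are hypothetical, and it
needs root:

```shell
# Attach a file (e.g. on an NFS mount) as a loop device to act as the
# fourth mirror; mark the slow members write-mostly so normal reads
# come from the fast local disk. Every name here is a placeholder.
losetup /dev/loop2 /mnt/nfs/mirror.img
mdadm --create /dev/md0 --level=1 --raid-devices=4 \
      /dev/sda1 \
      --write-mostly /dev/sdc1 /dev/sdd1 /dev/loop2
```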

FreeBSD has similar options.
  
Quoted text here. Click to load it

Amen.

The / can be run as almost read-only media once you move
away /var, /home and /local.

-- mrr

Re: 64Gbyte flash memory - hot stuff, unfortunately
On Wed, 29 Mar 2017 22:36:06 +0200, Morten Reistad wrote:

Quoted text here. Click to load it
I cheated a little:

- the in-house Apache server is configured to put root in /home and
  the various page groups in other /home users

- my Postgres database is in a /home user

- anything I've changed in /etc, /var and /root has copies maintained
  in my main user in /home

- everything that normal people keep in /usr/local is in /home/local and
  /usr/local is a symlink pointing to it.    

This is how I get away with only needing to restore /home. Needless to  
say, I have one or two shell scripts that put the symlink back, add the  
various users back into /etc/passwd etc.
  
Quoted text here. Click to load it
Nice. At one stage I was doing a lot of work with IBM's AS/400 (now  
iSeries) midrange systems. They use RAID 5 on sets of five disks. I was  
impressed with their reliability (and read performance!) so using  
something similar @home is on my to-do list.  
  

--  
martin@   | Martin Gregorie
gregorie. | Essex, UK
Re: 64Gbyte flash memory - hot stuff, unfortunately
Quoted text here. Click to load it

At a PPOE, with Tandem Guardian, we used this three-way mirror to take  
backups. We included the spare in the raid, synced it, then took another  
drive out of the raid, removed it from the cabinet and sent it for backup  
storage. This way we circulated around 20 drives for each of the  
three raids/mirrors on the machine. The backup storage was in the  
disaster recovery site, so we could be online within around 10 minutes  
by booting from the backups, and be synced into a mirrored config  
within around 90 minutes. Drives were a lot smaller then.

Last year I copied this setup for my home office.

In my setup I have this (and now we get back on topic on rpi) :

I have a pi clone with two 1GHz arm7 processors, 1 SATA port, 1G  
ethernet and pretty fast RAM, plus a USB3 hub, as my store for the  
personal stuff I depend on. It has power both from the on-board port  
and from the USB hub, from two different sources. I have one 256G SATA  
SSD as the primary in the RAID; all the others are "write-mostly":  
two more ~300G USB3 disks as members, and a slot for one on a loopback  
nfs-mounted disk.

This goes over a fiber to a similar server in my garage, 30 meters from  
the house, where I have two mirrored plain 2T drives on which I rotate a  
number of files to be the last synced backup of the main raid. A full  
resync takes about 10 hours; I do this once a week as routine, and  
I keep the last 4. I also have a script that takes over the virtual  
IP address the original 3+1 raid is exported on, and exports the raid,  
so even on a total failure of the primary I can still run all the systems  
with only an NFS remount of the partition. (I have extensive server  
parks with x86, arm6 and arm7 clients to test stuff on. I just simulated  
a whole cluster of 8 servers for a client there - asterisk/mysql/kamailio -  
and ran extensive tests before deployment. All of them ran chrooted  
into directories on this raid.)

I have a small bootable partition of ~7G at the front of all the drives,  
and retain 247G for the raid. This is plenty for the important stuff  
(no movies, some photos, scans of documents, an svn server for my software  
and configs, etc).

A few times a year I take a snapshot to an external drive and put it  
in a friend's safe a few kilometers away. I also do a manual sync every  
time I upload something significant. I do the same for my friend's backups.

This pi clone does nothing but act as an NFS server for my valuable  
stuff - that, and run icinga and monit clients. Ditto for the one in my  
garage. Each has at least one 100Ah 12V battery as backup power,  
which holds for at least 2 days of operation.

While I do the resync to the garage the write speeds suffer; I am  
doing this right now and have write speeds of ~4 MB/second. Read  
speeds are around 25MB/second throughout on machines with 1G interfaces,  
around 9 on the ones without. Write performance is around 8-10 MB/sec  
when not doing syncs. The sync is limited by the write speeds to the  
USB drives in the garage. I see that the SATA SSD drive is doing  
almost all the work; the others are just doing writes.  

The cpu usage on the raid box is rarely above 90% (of 200) even when  
transferring ~250 mbit/second: 8 nfsd's at 7-12% cpu each, 25% interrupt  
load, zero user cpu :-/

Here is /proc/mdstat, losetup and df from when the copy is done:

[root@raid mrr]# more /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md0 : active raid1 loop2[4] sda1[2] sdd1[0] sdc1[1]
      244066432 blocks super 1.2 [4/3] [UUU_]
            [==========>..........]  recovery = 49.0% (121083072/244066432) finish=238.3min speed=7552K/sec
              bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>
[root@raid mrr]# losetup -a
/dev/loop2: [0034]:51904518 (/2local/parts/sdc1)
[root@raid mrr]# df
Filesystem       1K-blocks       Used Available Use% Mounted on
/dev/root          7638904    5969444   1281416  83% /
devtmpfs            302996          0    302996   0% /dev
tmpfs               303168          0    303168   0% /dev/shm
tmpfs               303168        800    302368   1% /run
tmpfs               303168          0    303168   0% /sys/fs/cgroup
tmpfs               303168         24    303144   1% /tmp
tmpfs                60636          0     60636   0% /run/user/1003
/dev/md0         240235144  172049108  55982716  76% /raid0
odroid:/1local/ 1922726720 1443215520 381835648  80% /1local
odroid2:/local/ 1922728960 1042282112 782771200  58% /2local
[root@raid mrr]#  


And it is all in standard raspbian. Just some packages from the
repos, just some extra hardware and configuration.

-- mrr .. who has uid 1003 on all the servers, kept throughout
4 employers since 1987.




Re: 64Gbyte flash memory - hot stuff, unfortunately
On Fri, 31 Mar 2017 16:11:09 +0200, Morten Reistad wrote:

Quoted text here. Click to load it
Looks good. I was never sysadmin for Guardian so never got involved with  
its disk management. I take it you didn't have a connection between the  
sites, or not a lot of bandwidth? Otherwise, IIRC, you could have simply  
declared that pairs of disks on the two machines mirrored each other.

At one time I was sysadmin for an IBM S/88 (a badge-engineered Stratus)  
which used RAID1 mirroring, with backups done as you describe. The one I  
looked after was just a development system with a single mirrored pair,  
so the backup disk of the day was just rotated onto a shelf.
    


--  
martin@   | Martin Gregorie
gregorie. | Essex, UK
Re: 64Gbyte flash memory - hot stuff, unfortunately
On 2017-03-31, Martin Gregorie wrote:
Quoted text here. Click to load it

This isn't as easy as it might at first appear.

With a decent software RAID controller, it's not too hard to persuade  
it that a networked remote device is just another disc onto which it  
should be mirroring writes.

However, the remote disc will inevitably take longer to complete those
writes than the local one. So your local system becomes constrained by
the speed of the link. You can queue writes locally and send them over
the link as bandwidth allows, but then you don't have a current mirror
any more.

These days I'd be inclined to use ZFS snapshots every (arbitrary
interval), with incremental send.
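That snapshot-plus-incremental-send cycle might be sketched as follows;
the pool, dataset, snapshot and host names are all hypothetical:

```shell
# Take periodic snapshots of a dataset ...
zfs snapshot tank/home@monday
# ... a day later ...
zfs snapshot tank/home@tuesday

# ... and ship only the delta since the previous snapshot to a
# remote pool over ssh (names are placeholders).
zfs send -i tank/home@monday tank/home@tuesday | \
    ssh backuphost zfs receive backup/home
```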

Re: 64Gbyte flash memory - hot stuff, unfortunately
On Fri, 31 Mar 2017 16:19:23 +0000, Roger Bell_West wrote:

Quoted text here. Click to load it
That was a rather off-topic comment for this NG: apologies. It was meant  
to be a specifically Tandem NonStop point. These machines were  
effectively a network in a box from when they first appeared in the early  
'70s. There were up to 16 independent processors in a cabinet, each  
connected to dual disk and comms controllers with mirrored disk pairs  
split across controllers. Individual processors weren't fault tolerant,  
but processes running on them were because they had a continually updated  
backup image on another processor and all processors continually  
monitored the health of each other. On top of that, up to 16 cabinets  
could be linked in a single network which acted as a continuation of the  
inter-processor comms bus in each cabinet. This meant that any processor  
could directly access any disk attached to any cabinet in the network.

That's why I said what I did and enquired about network capacity: because  
with the Guardian OS running on Tandem hardware there was no need to  
periodically sync data to the disks in the disaster recovery site, or even  
to use the AS/400 trick of sending a copy of the transaction logs to the  
backup site to keep the backup disks updated. The only restriction was  
that the linking network needed sufficient capacity to handle the  
mirroring of writes to remote disks, and that capacity only depends  
on the data volume and update frequency.  


--  
martin@   | Martin Gregorie
gregorie. | Essex, UK
Re: 64Gbyte flash memory - hot stuff, unfortunately
On 2017-03-31, Martin Gregorie wrote:
Quoted text here. Click to load it

All right then: has anyone got zfsroot working on a Raspberry Pi? I  
think it's unlikely and probably not very useful, but it would be  
interesting, and it would certainly ease the backup problem that seems  
to be a regular topic of discussion here.


Re: 64Gbyte flash memory - hot stuff, unfortunately
On Fri, 31 Mar 2017 20:21:50 -0000 (UTC)

Quoted text here. Click to load it

It might have inspired someone to go develop a fault-tolerant cluster
using Pi compute modules ...
:-)


Re: 64Gbyte flash memory - hot stuff, unfortunately

Quoted text here. Click to load it
Off topic? There's a thread on the merits (or not) of metric units and  
electric sockets. It's easy to miss that the header is "ARMv8.1?"

I'm thinking of getting it vaguely back on topic-ish as Yet Another vi v  
emacs debate!

--  
Bah, and indeed, Humbug

Re: 64Gbyte flash memory - hot stuff, unfortunately
On Tue, 4 Apr 2017 11:45:25 +0100

Quoted text here. Click to load it

    AFAIK there were no patents on Unix. There were, however, copyright
restrictions on the AT&T source code, which Linux avoided by the simple (in
principle) expedient of being a clean rewrite, while BSD got embroiled in a
legal argument over which bits it could distribute and which bits it
could not.

--  
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
