Time to Upgrade ?:-}

On Tue, 04 Aug 2015 09:28:56 -0400, Joe Gwinn Gave us:

But in the form of a stored file, it CAN carry the malware within the file itself and still be passed on.

Reply to
DecadentLinuxUserNumeroUno

On Tue, 04 Aug 2015 09:28:56 -0400, Joe Gwinn Gave us:

No shit. It/they are file system independent.

So tape gets you nothing but dog-slow, linear, WORM-type access.

Reply to
DecadentLinuxUserNumeroUno

True. One backup is in a NAS and the other on an external USB hard drive which is only mounted during a (nightly) backup so I can unplug it at will.

Reply to
N. Coesel

My daily backup routine checks the files against their 'originals' so verification goes automatically in my setup.
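Something along these lines, as a minimal sketch (the directory names are made up for illustration, not my actual layout):

  import hashlib, os

  def md5sum(path, blocksize=1 << 20):
      h = hashlib.md5()
      with open(path, 'rb') as f:
          for chunk in iter(lambda: f.read(blocksize), b''):
              h.update(chunk)
      return h.hexdigest()

  ORIGINALS = '/home/nico/data'      # hypothetical source tree
  BACKUP    = '/mnt/usbdisk/data'    # hypothetical nightly backup copy

  # Walk the backup tree and compare every file against its original.
  for root, dirs, files in os.walk(BACKUP):
      for name in files:
          copy = os.path.join(root, name)
          orig = os.path.join(ORIGINALS, os.path.relpath(copy, BACKUP))
          if not os.path.exists(orig):
              print('original is gone:', orig)
          elif md5sum(copy) != md5sum(orig):
              print('MISMATCH:', copy)

Anything that fails the compare gets flagged the same night, while the original is still around to restore from.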

Now that is a perfect definition of cumbersome.

You still have the risk of a drive not spinning up after you have not needed it for several years. That is exactly the problem I faced. Sometimes I need a file after a decade or more. If it is on some hard drive on a shelf, I have no idea about the condition of the drive, let alone whether the file still exists. All this assuming I can still plug the hard drive into my system... I don't think my new PC even has PATA connectors, for example.

Reply to
N. Coesel

Then you're probably talking about an order of magnitude (or THREE!) less data! I'm talking about archives/repositories -- EVERYTHING you've ever wanted to preserve!

(How many terabytes do you "check against their originals" in each of your daily backup routines? If a file was deleted, today -- but backed up YESTERDAY -- what do you check its backup against?)

Automagic. Essentially, doing the same sort of thing that locate.updatedb(8) does -- that locate(1) eventually uses. Except, not constrained by requiring everything in the locate database to be mounted when locate.updatedb(8) runs!

A drive never sits unused for several years. That's the point! Even if I don't happen to need to access ANY of the files on a particular volume, the code that walks through the archive(s) knows that it hasn't *examined* those files in N days and requests the drive be mounted.

So, not only is the *media* accessed but the actual magnetic domains comprising each and every file contained on that medium are routinely "examined" and verified -- against the MD5 hash/checksum that was stored in the database when the file was added to the archive.

In the event that bit rot, surface wear, failed reallocation, etc. causes *that* instance of *that* file to degrade (checksum no longer verifies) or become "unavailable" (read errors, seek errors, etc.), then I get notified that some set of files on some particular medium are now "lost"; their *backups* must be recovered and used to recreate the "mirror copy" (so I, once again, have two copies of each file).
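Mechanically, that periodic pass amounts to something like this (a minimal sketch only; the catalog table, the paths, and the 30-day interval are invented for illustration, not my actual schema):

  import hashlib, sqlite3, time

  STALE_DAYS = 30                       # the "N days" between examinations
  db = sqlite3.connect('archive.db')    # hypothetical catalog with one table:
  #   files(path TEXT, volume TEXT, md5 TEXT, last_checked REAL)

  def md5sum(path, blocksize=1 << 20):
      h = hashlib.md5()
      with open(path, 'rb') as f:
          for chunk in iter(lambda: f.read(blocksize), b''):
              h.update(chunk)
      return h.hexdigest()

  cutoff = time.time() - STALE_DAYS * 86400
  stale = db.execute('SELECT DISTINCT volume FROM files WHERE last_checked < ?',
                     (cutoff,)).fetchall()
  for (volume,) in stale:
      print('Please mount volume', volume)        # operator plugs in the USB drive
      input('press Enter when mounted...')
      for path, md5 in db.execute(
              'SELECT path, md5 FROM files WHERE volume = ?', (volume,)).fetchall():
          try:
              ok = (md5sum(path) == md5)
          except OSError:                          # read errors, seek errors, missing file
              ok = False
          if ok:
              db.execute('UPDATE files SET last_checked = ? WHERE path = ?',
                         (time.time(), path))
          else:
              print('LOST:', path, '-- recover it from its mirror copy')
      db.commit()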

Having the files on media that is spinning 24/7/365 doesn't guarantee you that they are intact or accessible -- unless you check each and every file "regularly" (for some definition of "regularly").

I sidestep the PATA/SATA/SCA/SCSI/FW/etc. issue by adopting USB drives. This allows me to connect the drive directly to *any* machine -- instead of tethering it to *one* specific machine (which could crash).

Because the medium is used as "Just a Bunch of Files" (JBOF?? :> ) and doesn't embed any particular RAID structures, there's no need for me to even use drives of identical sizes, or store the mirror copy of drive1's files ENTIRELY on drive2! Some could be on drive2 while others are on drive8.

[Consider what happens to a RAID array with a failed/failing drive; do you have a hot/cold spare of the appropriate size ON HAND? Are you ready to rebuild the array as soon as *it* tells you that you have a failed drive? Does it tell you when ANY portion of the drive's contents are damaged -- even if you haven't explicitly gone looking at those particular files??]

I.e., all of the "mechanism" that would be present in a RAID configuration is maintained in the database and the scripts that walk the filesystem(s) to update/maintain/verify its contents!
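For flavor, placing a mirror copy only needs "any drive with enough room", not a dedicated partner drive; the catalog remembers where each copy landed (again, a rough sketch with made-up names):

  import os, shutil, sqlite3

  db = sqlite3.connect('archive.db')
  #   copies(md5 TEXT, drive TEXT, path TEXT) -- two rows per file (hypothetical)

  DRIVES = ['/mnt/drive2', '/mnt/drive5', '/mnt/drive8']   # whatever happens to be mounted

  def place_mirror(src, md5):
      """Copy src onto any drive with enough free space and record where it went."""
      need = os.path.getsize(src)
      for drive in DRIVES:
          if shutil.disk_usage(drive).free > need:
              dst = os.path.join(drive, 'mirror', os.path.basename(src))
              os.makedirs(os.path.dirname(dst), exist_ok=True)
              shutil.copy2(src, dst)
              db.execute('INSERT INTO copies VALUES (?, ?, ?)', (md5, drive, dst))
              db.commit()
              return dst
      raise RuntimeError('no mounted drive has room for a mirror of ' + src)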

Reply to
Don Y

Currently about 600GB but I expect that to increase steadily.

Reply to
N. Coesel

I've got 1TB spinning on each of my 8 workstations. Granted, some (small amount!) of that is operating system. But, the bulk is specific to the activities performed *on* that particular workstation. E.g., CAD-related files on the CAD workstation (but not on the Multimedia Authoring workstation, etc.)

My archive is *many* TB spanning more than 30 years. As such, it's common for files not to be "viewed/accessed" for long periods of time. I'd not want to keep all of that "spinning, on-line" *just* so I could *hope* it was "intact".

A friend had what he thought was the "clever" idea of keeping all his ROM images on his PC (early 80's) thinking that they'd be backed up each time his PC was backed up. Never occurred to him that this didn't guarantee the files were intact! Or, even *present* on the machine when the next backup came along ("Gee, where did those files go? There's no sign of them in yesterday's backup... or the day before... or the WEEK before... or...")

Ask yourself how you'll know when you've INTENTIONALLY discarded a file vs. unintentionally having accomplished the same feat.
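One way to make that distinction explicit is to diff today's manifest against yesterday's and against a list of deliberate deletions, so a disappearance has to be acknowledged before it can be forgotten (a rough sketch; the file names are hypothetical):

  import os

  def manifest(top):
      """Return the set of every file path under top."""
      return {os.path.join(r, f) for r, _, fs in os.walk(top) for f in fs}

  today     = manifest('/home/me/data')                               # hypothetical tree
  yesterday = set(open('yesterday.manifest').read().splitlines())     # saved by yesterday's run
  intended  = set(open('deleted-on-purpose.list').read().splitlines())

  for path in sorted(yesterday - today - intended):
      print('UNEXPECTEDLY MISSING:', path)

  # roll the manifest forward for tomorrow's comparison
  with open('yesterday.manifest', 'w') as f:
      f.write('\n'.join(sorted(today)))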

Reply to
Don Y

My new Dell PC runs Spice about 5x faster than my old HP.

HP: Dual core 1.8 GHz Xeon, 2G ram, Win XP, 2 threads in LT Spice

Dell: Quadcore 2.8GHz Xeon, 8G ram, 64-bit Win7, 4 threads

The hard drives are faster, which may help in Spice too, making huge .RAW files.

Reply to
John Larkin

You mean that a.b.s.e. is an insufficient resource for schematics, project pictures, etc?

Reply to
Robert Baer

What would DecadentLoser know ?>:-} ...Jim Thompson

--
| James E.Thompson                                 |    mens     | 
| Analog Innovations                               |     et      | 
| Analog/Mixed-Signal ASIC's and Discrete Systems  |    manus    | 
| San Tan Valley, AZ 85142     Skype: skypeanalog  |             | 
| Voice:(480)460-2350  Fax: Available upon request |  Brass Rat  | 
| E-mail Icon at http://www.analog-innovations.com |    1962     | 
              
I love to cook with wine.     Sometimes I even put it in the food.
Reply to
Jim Thompson

In theory, I should agree with you. However, I once had a very different experience. I was building RAID arrays for customers out of identical drives. Most common was RAID 0+1, which consisted of 5 drives. After about a year of faultless operation, one drive in one array started to show signs of failure. I replaced the drive with a brand new "shelved" drive, re-mirrored, and continued business as usual. A few days later, another drive started complaining, so I replaced it. That made me worry, so I started monitoring the DPT controller statistics, only to find that the first drive that I had previously replaced was beginning to fail. I replaced it with yet another new "shelved" drive. During re-mirroring, another of the original drives began to fail. I ended up copying everything to a big single drive (from another manufacturer) and shut down the array.

Initially, I thought that there was some kind of power supply issue that was killing the drives. I had a spare overpriced power supply which I swapped in place of the original, but that wasn't the problem. I couldn't check any more new "shelved" drives because I only had three spares. I won't go into the eventual solution, as it was a bit strange and complexicated.

The bottom line is that it appears that these drives aged at the same rate whether spinning or sitting powered off on the shelf. My guess(tm) is that there was some kind of IC package leakage, chemical attack, plating deterioration, or manufacturing defect that was causing the failures.

This is the reason that I detest RAID arrays built from identical drives, because they all tend to fail at the same time.

Gesundheit.

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Santa Cruz CA 95060 http://802.11junk.com 
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

On Tue, 04 Aug 2015 15:16:52 -0700, Jeff Liebermann Gave us:

Yeah. The brand of the drive.

I use exclusively Seagate drives. I used a few IBM drives when perpendicular recording first came out because IBM was the leader in MR head tech, and all the others licensed their IP or actual hardware to make their drives.

You will find that much of the commercial comm industry uses Toshiba SAS drives currently. 2.5 inch laptop form factor but double the slim height. I do not know what HP puts in their blades in the SAS hot swap slots. Probably Seagate or Toshiba.

WD sucks. Consumer level crap.

Then again, an SSD (either mSATA or M.2 on PCIe) RAID array is also becoming popular, and they have a regular change-out schedule 'cause the per-GB price is cheap by comparison to the days when the HDs were the most expensive elements in the system. They got racks full of them now. Sort of like the DAT tape days, when they were rotated daily and weekly, etc.

I suspect the future will be a rack full of Samsung M.2 sticks on multiple redundant RAID arrays. Dem suckers are fast.

Reply to
DecadentLinuxUserNumeroUno

I wouldn't be so sure of that.

formatting link

Additionally, the retention time is affected by how much data has been written to an SSD. Intel's retention specification for their SSDs when the drive is near its rated endurance is 90 days.

Reply to
JW

On Wed, 05 Aug 2015 09:22:08 -0400, JW Gave us:

You fail to realize that a RAID array on a shelf can lose data and still have it be fully recoverable; up to two entire volumes can be lost and still be fully recovered from the remaining array elements.

Ooops, you lose.

Also, you must not have noticed that the article was skewed and was actually a plus for solid-state storage technology, as those very same environmental conditions will most certainly, and in every case, cause a data loss on simple optical storage media, regardless of what brand you dopes think is so reliable.

Reply to
DecadentLinuxUserNumeroUno

Who's talking about RAID? My statement was about SSDs and the (your) idiocy of using them as archival storage.

Only in your fevered, walnut-sized "mind", AW.

Not talking about optical media either, AW. Do try to keep up, m'kay?

Reply to
JW

On Thu, 06 Aug 2015 09:01:13 -0400, JW Gave us:

There is no idiocy, dingledorf.

I have drives which have sat dormant for two years and still fire up fine and contain all their data. I have several Linux distros installed across several of them, but typically only use one Linux variant, so those sit dormant until I set up a new family of distros on them to check out the next thing in Linux.

Your idiocy abounds.

And the musician who wrote that article isn't far behind you.

Reply to
DecadentLinuxUserNumeroUno

On Thu, 06 Aug 2015 09:10:55 -0400, DecadentLinuxUserNumeroUno Gave us:

Oh, and the RAID reference was because if your "archival storage" is in the form of a RAID array, the likelihood that you'll experience ANY data loss is so close to nil that shot noise has a better chance of providing an errant bit.

So you lose on all fronts, JW.

Reply to
DecadentLinuxUserNumeroUno

I'm getting the general impression that I should avoid 64-bit to make sure that my legacy programs will still work. Is that correct? ...Jim Thompson

--
| James E.Thompson                                 |    mens     | 
| Analog Innovations                               |     et      | 
| Analog/Mixed-Signal ASIC's and Discrete Systems  |    manus    | 
| San Tan Valley, AZ 85142     Skype: skypeanalog  |             | 
| Voice:(480)460-2350  Fax: Available upon request |  Brass Rat  | 
| E-mail Icon at http://www.analog-innovations.com |    1962     | 
              
Hillary has the charisma of a steamy warm turd. 

Jeb Bush has the charisma of a fresh cow-patty. 

A political contest made to stink >:-}
Reply to
Jim Thompson

I hope not! There's very little (and mostly low-end) computer hardware that isn't 64-bit nowadays, and software support for 32-bit is uncertain. Any software speedups will be implemented and tested on 64-bit hardware (with lots of RAM: 32-bit tops out at 2 to 4 Gbytes).

Reply to
whit3rd

No, you should make the jump and adapt (with VMware or some other method) or dump the really old 16-bit programs. Most 32-bit stuff will still run. It's time, and it will be the last major change for a very long time.

--sp

--
Best regards,  
Spehro Pefhany 
Amazon link for AoE 3rd Edition:            http://tinyurl.com/ntrpwu8 
Microchip link for 2015 Masters in Phoenix: http://tinyurl.com/l7g2k48
Reply to
Spehro Pefhany
