How long will Raspbian 7 be supported - same as Debian Wheezy?

These filers have a snapshot feature, i.e. they take snapshots at specified moments in time so you can go back in the past, see a file as it was back then, and copy it back to the current state. At some point in time, old snapshots are deleted. There appears to be some mechanism whereby the hourly snapshots created today are deleted in a few days, leaving a single snapshot for that day; after some time those are deleted too, leaving a single one for the week, etc.

Correct me if I am wrong, but my guess is that these snapshots live in the "available space" on the volume. So, the more available space you have and the less activity on the filesystem (file modifications), the longer the history of snapshots that you can refer back to.

My fear is that ransomware will appear that not only encrypts your files (with an unencrypted copy still existing in the snapshots), but also fills the filesystem with large files of random data, using up the free space and thus deleting the snapshots.

I don't know how these filers are configured, and whether there may be some mechanism that prevents the trojan from using up all available free space and thus deleting the last valid snapshot, but I am not so sure that this would save us. After all, the trojan may choose to become active on Friday evening when everyone has gone home, use the entire weekend to do its work, and even ensure that any further snapshots are taken from the encrypted data.

This whole garbage would then also be replicated to the backup filer, triggering the same mechanism there. Unless the space on the backup filer is much more than on the main filer, it may effectively overwrite all backup snapshots as well.

So it might be safe with very clever configuration, but unfortunately my experience with outsourced services is that the operators optimize for profit rather than for optimal technical configuration. When discussing nifty features you invariably see them think "that will cost more man-hours, we are not going to do that unless it is paid by the hour". So let's assume for now that it has not been cleverly configured and we can only be saved by the manufacturer, who might have put a clever default there.

Notice that those filers do not even trigger an alarm when a job from a single machine reads and rewrites tens of thousands of files at a high rate. That would certainly be something to consider, in my world.

Reply to
Rob

I have a system like this; I wrote it myself using rsync's "hard link" ability. I actually run two 'levels': one is an incremental backup to a second hard disk on my desktop machine, the second is a less frequent (daily and up) backup off site.

The off-site backup makes a snapshot of my desktop machine every day but uses rsync's hard link ability so that unchanged files are just (hard) links to the previous day's snapshot. Thus the space occupied is only the total of changed files.

I then run a script which 'weeds out' older backups. I keep daily increments for 31 days, then one per month, then one per year. So if I look at my backup system I see backups for 2014/01/01, 2015/01/01 and the first of each of the last 12 months and for the last 31 days.

Each backup is an image of the system on that day, no special restore mechanism is needed, the files are exactly as they were on that day.

--
Chris Green
Reply to
Chris Green

The storage used for snapshots is allocated and will not be reused unless the snapshot is deleted (which is an explicit operation); at least this is true for FFS, ZFS, Hammer and OneFS, which are the only snapshot-capable filesystems I have used. Hammer (from DragonFlyBSD) is a little different in that it keeps a fine-grained history which is pruned into coarse-grained named snapshots to recover space, but the effect is the same, with the added benefit that there is history available in the period after the last snapshot.

There may be systems where this can happen but I doubt it, and I certainly would not consider using one. That being said many filesystems are known to react badly to being filled completely and require root privileges to fill past a safe threshold.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

I used tapes initially, Amanda 'just worked' once I'd set it up, but eventually I ran out of tape capacity at a price I was willing to pay for drive and media.

Now I just rsync to two generations of 2.5" USB hard drives - both kept offline in a fire safe. These hold weekly snapshots, which are made immediately before a weekly software upgrade.

For finger trouble protection and DR I have a third USB drive that's permanently connected to the house server so a compressed backup can be automatically made at 03:00. This currently has room for 13 daily backups.

With the low cost and ready availability of 2.5" USB enclosures and suitable 2.5" drives, it's really difficult to think of a reason why anybody wouldn't be using a similar offline backup strategy, particularly as I can back up all three systems (2 x Fedora, 1 x Raspbian) to the same disk.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

I think most of us are using rsync, which by default just maintains a single, identical snapshot. I, and I guess others too, get multiple snapshots by using a set of at least two backup disks and rotating them.

There's also a related program, rsnapshot, that keeps multiple backup versions. It seems to start out like rsync, by copying everything, but after that each backup run only makes copies of new and changed files, with unchanged files being represented by links to the last copy made.
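For reference, that behaviour is driven by rsnapshot's tab-separated configuration file; a minimal sketch (paths and retention counts here are illustrative, not a recommendation):

```
config_version	1.2
snapshot_root	/backup/snapshots/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/
```

You then run e.g. `rsnapshot daily` from cron; unchanged files become hard links between snapshots, much as with the hand-rolled rsync schemes discussed elsewhere in this thread.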

I've thought about trying rsnapshot but never got around to it. Have any of you used it and, if so, is it a better approach to maintaining several backup generations than rsync?

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

If you use the link above, it will show you the internet-facing IP address; then bring up a terminal on the Pi, type nmap followed by that address, and it will show you what the world can see, just as grc does.

I'm on PlusNet, and the first time you do it, it shows a lot of open ports; the second time it shows me what I expect to be open. I suspect the first time I'm seeing a different server, and then it redirects.

---druc

Reply to
druck

It's very easy: if, say, a database file is kept open, the malware encrypts a copy of it and then deletes the original. It's Linux, so an open file is removed from the filing system but still accessible to the application that has it open. The encrypted file will then be backed up by rsync. It's not until either the application closes and re-opens the file, or the system is rebooted, that you will notice anything has happened.
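The unlink behaviour described above is easy to demonstrate in plain shell (file names are illustrative): the name disappears from the filesystem, but a process holding the file open keeps reading the old data until it closes its descriptor.

```shell
# Demonstrates Linux/Unix unlink semantics: deleting an open file
# removes its name, but the data stays reachable via the open fd.
set -e
echo "original data" > db.file
exec 3< db.file              # keep the file open on descriptor 3
rm db.file                   # unlink it: the name is gone...
[ ! -e db.file ] && echo "name removed"
cat <&3                      # ...but fd 3 still reads "original data"
exec 3<&-                    # close; only now is the inode freed
```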

---druck

Reply to
druck

I am so amazed at the total lack of understanding displayed here that I can't think of a suitable response.

All I can say is try what you propose and see what actually happens.

--
Future generations will wonder in bemused amazement that the early  
twenty-first century's developed world went into hysterical panic over a 
globally average temperature increase of a few tenths of a degree, and,  
on the basis of gross exaggerations of highly uncertain computer  
projections combined into implausible chains of inference, proceeded to  
contemplate a rollback of the industrial age. 

Richard Lindzen
Reply to
The Natural Philosopher

I used it for a while and then wrote my own equivalent.

--
Chris Green
Reply to
Chris Green

Note that our system is not at all like that. Snapshots are an integral part of the filesystem. They do not happen by rsync or with only a small number of levels. Snapshots in the filesystem are merely reference points that allow you access to files as they were at that point in time. Each updated file block has some reference to know if it was from before or after that time, and updates to files are not written to the original block, so that remains available.

In Windows you can go to the "previous versions" tab and see how your file was in the past.

This kind of functionality is also present in some Linux filesystems, e.g. ZFS.

Reply to
Rob

I checked on a NetApp website, and indeed the snapshots live in the free space, but there is some minimal reserved space for them that probably will not be overwritten when the disk fills up. However, if I understand the site correctly, some snapshots may have to be removed when all space on the filesystem is used.

Reply to
Rob

Not used rsnapshot, but there is a way to make rsync create many (many!) incremental backups which only add the space of the changes for each backup. (I suspect this may be what rsnapshot is doing under the bonnet)

In brief: you do a "level 0" rsync as normal (say into a directory called 0), then for the next one:

mv 0 1
cp -al 1 0

then do the next rsync into 0. For the one after that you do:

mv 1 2
mv 0 1
cp -al 1 0
rsync ...

and so on. Basically, you create a snapshot of links to the previous one, and the rsync breaks the links and creates real files for files that have changed.

So you can keep days, weeks, months, etc. worth of backups for minimal disk space addition on each backup, and you can restore any of the "snapshots" at any time with a regular copy command.

What you can then do is pull any one of these and copy it to a remote server (or removable media, etc.) to maintain full "level 0" archives - similar to pulling a tape on a regular basis. So e.g. if you were on a 30-day cycle, you copy '30' to another remote server (or removable media), then delete it before the mv 29 30 ; mv 28 29 ... sequence.

It takes a bit of setting up and managing, but once there I think it's no worse than the days when I was using tapes and Amanda. The worst thing that can happen on the original data source is renaming a high-level directory, as that then causes a lot of duplicated data to be stored in the backup, but that's fine - it just uses more disk space on the backup server.

Another scenario I've employed is to create servers with double the anticipated disk space and use one half (in a mostly read-only partition) to rsync the main data to overnight, then that partition is remotely rsynced to the backup store. That's not really a backup, but a cheap "get out of jail free" card to help with accidental deletion. (I was doing that in the days of tapes too - remount that partition r/w, do the rsync, remount r/o, then to tape - much easier to restore a few accidentally deleted files from the local disk than get the tapes out)

It's not perfect (what is), but it's a fair solution for the cost.

There are still utilities like "tripwire" which have their place for issues like viral infection + encrypting files - as long as you check the output of their logs and test your backups!

Gordon

Reply to
Gordon Henderson

Nope, because you're approaching that IP address from inside your LAN not via the external interface. For a perfect example I have port 25 open on my external IP address for incoming email (no it does not relay, feel free to try, you wouldn't be the first), but from inside my LAN connections to port 25 on my external address will fail viz:

steve@steve ~/tmp $ telnet sohara.org 25
Trying 89.127.62.20...
telnet: connect to address 89.127.62.20: Connection refused
telnet: Unable to connect to remote host

Whereas from outside:

Starting Nmap 6.00 at 2016-07-28 15:13 EEST
Initiating Ping Scan at 15:13
Scanning 89.127.62.20 [4 ports]
Completed Ping Scan at 15:13, 0.07s elapsed (1 total hosts)
Initiating SYN Stealth Scan at 15:13
Scanning 89.127.62.20 [100 ports]
Discovered open port 25/tcp on 89.127.62.20
Discovered open port 8080/tcp on 89.127.62.20
Discovered open port 80/tcp on 89.127.62.20
Discovered open port 443/tcp on 89.127.62.20
Completed SYN Stealth Scan at 15:13, 1.34s elapsed (100 total ports)

Routing matters.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

I don't know NetApp well, but I would be appalled if the snapshot removal was anything other than explicit, perhaps driven by a policy control (e.g. keep hourly for three days, daily for three weeks ...) rather than "delete oldest snapshot when short of space".

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

I have worked with Novell Netware and VAX/VMS in the past and in both systems it worked that way. Apparently sort of accepted.

Reply to
Rob

He is right! It works that way in Linux. That is what makes Linux different from Windows. (e.g. you can update a library in the running system without having to create some pre-boot list of libraries to replace on the next system boot and then reboot, as is done in Windows)

Reply to
Rob

Actually, you can rename a DLL in use in Windows, then replace it with whatever you want. Running processes using it will NOT notice.

This is how we deploy new versions for our software, on running sites.

rename old dll, move the new dll to site, restart to take effect.

On unix (AIX and Linux), you can write over the exe file and its libraries without problems (except AIX 5.3, I think, which failed at the link phase - but it deleted the executable, so another link and it was OK).
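The rename-and-replace pattern described above can be demonstrated with plain shell (file names are illustrative): a descriptor already open on the old file keeps seeing the old contents, while any fresh open sees the replacement.

```shell
# Demonstrates replace-by-rename: the "running process" (fd 3) keeps
# its old version while new opens get the new one.
set -e
echo "v1" > lib.bin
exec 3< lib.bin            # a running process holding the old file open
echo "v2" > lib.bin.new
mv lib.bin.new lib.bin     # atomic replace of the directory entry
head -n 1 lib.bin          # a fresh open sees the new version: v2
cat <&3                    # the already-open descriptor still sees: v1
exec 3<&-
```

This is why the rename-then-replace deployment style works on both platforms: the old inode survives as long as something holds it open.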

Reply to
Björn Lundin

Perhaps not too surprising then (given disc sizes) but still YEUCH!

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

We are not talking about libraries, dear.

We are talking about databases or other user data files.

Even a modified text file will cause most text processors to say 'copy has changed on disk' if you pull the rug from under them.

A database file is not 'kept open'. Neither are many, many other files in constant use. They are opened and closed dynamically.

This is less of a pain if they happen to be cached but that's how it all works.

Even if a file is kept permanently open, all of it won't be cached in the application. And if it's opened on a read-write basis, the application is likely to write it back to disk, undoing any corruption.

--
Microsoft : the best reason to go to Linux that ever existed.
Reply to
The Natural Philosopher

Make that unix rather than Linux - it's pretty much as old as unix (i.e. nearly as old as Linus Torvalds).

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot
