How long will Raspbian 7 be supported - same as Debian Wheezy?

That is right, but here most people probably never used Unix, only Linux.

Reply to
Rob

Probably. I've used both since... well, a long time.

Of course it only works that way with files that are held open, and even then not always: they also have to be totally cached in *application* memory.
--
"When a true genius appears in the world, you may know him by this sign,  
that the dunces are all in confederacy against him." 

Jonathan Swift.
Reply to
The Natural Philosopher

But this property is not in the handling of libraries (shared libraries did not even exist when this mechanism of handling files was implemented), it is a general property of a file in Unix/Linux. In Unix/Linux, when you remove a file that is open by some programs, and then create a file with the same name, the programs that had the file open will continue to see the old file, they can even update it and see the updates, while new programs that open the file will see another file and can update that independently. When the last program that had the old file open closes it, the allocated space will be released and all updates done to that version are lost.
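
You can see this behaviour with a few lines of Python (a minimal sketch on Linux; the file name is made up for illustration):

import os

# Create a file and keep a handle to it.
with open("demo.txt", "w") as f:
    f.write("original contents\n")
old = open("demo.txt", "r+")

# Remove it and create a new file with the same name.
os.remove("demo.txt")
with open("demo.txt", "w") as f:
    f.write("replacement contents\n")

print(old.read())                  # original contents - the old file is still there
old.write("and still writable\n")  # updates go to the old file, invisible to new opens
print(open("demo.txt").read())     # replacement contents - new opens see the new file

old.close()   # last reference gone: the old file's space is released and its updates are lost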

Only modern editors (text processors, in your words) notice that, and it is because they check the file before writing back. In Linux it is not common to keep a file open (and possibly locked) during an edit. This, however, is common in Windows.

That is an implementation-dependent thing. I would not want to make that statement without checking. A database service will probably keep its database container open at least as long as there is a client that has that database in use.

Generally, yes.

Crypto trojans on Windows usually read the file and write a new, encrypted copy, then delete the old file and leave only the new copy (which has a different extension tacked on so the decrypt tool can recognize it). So the original file never gets written to; it only gets removed. See above for how that would affect a Linux program that has it open.

Reply to
Rob

It doesn't matter whether it's a database, a text file, an executable or any other kind of file. A file is a file; that is a concept introduced in Multics and kept in Unix.

However, this will not happen if you remove the directory entry for the file while it is still open, because you have not changed the file at all; you have simply removed *one* of the links to the inode.

That depends on the application; most database servers keep their files open all the time.

Sometimes yes, sometimes no; it depends on the application.

However, you are correct that if the file is not open at the time it is replaced, then the process will get the replacement on opening. OTOH, if the file is open at the time it is replaced, the process that has it open will not notice any changes to the file, because there have been none.

It has nothing to do with cache, and everything to do with how unix handles files and directory entries.

True, although it might not write the whole file. However, if you have replaced the directory entry used to open it, then that directory entry may well point to a different file, and the original may have no directory entries at all.

This is how it really works in unix on an inode based filesystem (pretty much all unix filesystems).

Process opens file "/a/b/c" - filesystem looks up the inode pointed to by this directory entry, increments the reference count on the inode and hands a handle for the inode to the process. In unix the inode is the file, not the directory entry; that is just a name for the file, and there can be many.

Blackhat deletes the entry c from directory /a/b/ - filesystem decrements the reference count on the inode that was pointed to by /a/b/c - process does not notice and continues to use the handle and inode it has been given which still has at least one reference and so is not freed.

Blackhat creates a new file as /a/b/c - filesystem allocates a new inode for the purpose. Blackhat writes garbage to it. The file opened by Process is not affected.

Process closes the file - filesystem decrements reference count on the inode referenced by the file handle, if it reaches zero then the inode and data blocks are freed by the filesystem.

The next process to open /a/b/c gets the garbage written by Blackhat.
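
You can watch the reference counting from a program too; here is a rough Python sketch of the same sequence (a throwaway file name stands in for /a/b/c):

import os

# Process opens the file and now holds a reference to its inode.
with open("c", "w") as f:
    f.write("good data\n")
fd = os.open("c", os.O_RDONLY)
old_inode = os.fstat(fd).st_ino

# Blackhat deletes the directory entry and creates a new file with the same name.
os.unlink("c")
with open("c", "w") as f:
    f.write("garbage\n")

print(os.fstat(fd).st_nlink)              # 0 - no directory entry left, but the inode still exists
print(os.stat("c").st_ino == old_inode)   # False - the name now points at a different inode
print(os.read(fd, 100))                   # b'good data\n' - Process is unaffected

os.close(fd)   # reference count reaches zero, inode and data blocks are freed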

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

True.

False.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/

Reply to
Ahem A Rivet's Shot

You have not used Unix for long enough (or studied it well enough) to know that this is not true. The behavior I described has nothing to do with the cache; it was already there in Unix on the PDP-11, where there was not even the memory to implement a cache.

It is built into the filesystem and the way files are referenced in directories: as pointers to inodes that can exist independently of a directory entry. When a file is "replaced", a new inode is created and the name is linked to that inode, while existing processes that have the file open keep referring to the old inode. An inode holds all the information about a file except its name.

Reply to
Rob

One thing that surprised me: at least on FreeBSD, the file replacement semantics work the same even on a FAT32 filesystem, which lacks inodes. I suspect some layer is synthesising inodes for FAT32.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

My data all lives on ZFS (including my Windows data). It gets snapshotted regularly, and I keep the daily snapshots forever. I can go back and look at what was in my files on any day in the last 9 years. Mostly you only want to go back a day or two, but if you find some program broke in the past and you only just noticed, you can go back, find when it broke, and home in very quickly on what broke it.

One of the cloud providers who similarly uses ZFS was talking about how quickly they got several of their Windows customers back up and running after ransomware attacks (without paying the ransom); I think it was just a couple of hours to get hundreds of Windows VMs booted back into their last pre-attack snapshots.

That sort of thing is tunable. One of my colleagues who had a couple of kids at home doing homework at the time had his home ZFS server taking snapshots every 2 mins, because he was tired of hearing "Daddy, I just lost all my homework" when they did something wrong in Word, Powerpoint, etc.

Snapshots don't live in free space. It might be that when free space runs out, some filers will automatically delete old snapshots to make more free space, but I certainly don't do that with ZFS.

--
Andrew Gabriel 
[email address is not usable -- followup in the newsgroup]
Reply to
Andrew Gabriel

They must live *somewhere*. Does that mean you have to allocate space statically when you make the filesystem, and how do you calculate how much space you are going to need to keep 9 years of history, not knowing what can happen any day?

When a trojan decides to fill your entire volume with noise and keeps rewriting it day after day, how is that not going to eat up all snapshot space? And what will happen then: will it stop the snapshots, will it delete other snapshots, or will it panic the system?

Reply to
Rob

It is possibly doable on FAT32, as you could keep the old directory entry in memory after deleting it, and keep the FAT chain on disk as a record of the allocated space. Then you can write a new directory entry (with the same name), pointing to another FAT chain. The in-memory directory entry, used by the running programs, keeps track of the filesize and modification time of the old file.

However, in Windows you cannot delete a file that is open, not even when it is open only for read. Someone said they could rename .dll files that are in use, but I seriously doubt that. Not even Windows Update is doing that.

Reply to
Rob

I expect that is pretty much how FreeBSD is doing it.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

Oh sure it's all out of the same pool, it's just allocated space not free space.

No you don't have to allocate space for it.

Once the free space is gone you can't write any more data, snapshot or otherwise until you delete something. Nothing else *should* happen, although it's likely many programs will start crashing when they can't write to the filesystem.

ZFS filesystems are usually large (my 4TB RAIDZ2 setup is very small by ZFS standards), so filling one can be expected to take quite some time, which should provide an opportunity to find, kill and remove whatever is chewing up the space.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

They live in diskspace just like the main filesystem.

No. When I run out of disk space, I replace the disks with bigger ones. This also means I don't keep the same disks running for 9 years, which would be a bad thing!

With ZFS, a snapshot initially uses no space, because it's no different from the filesystem and the snapshot points to exactly the same disk blocks as the current filesystem. If I then edit a file containing, say, 10 blocks, and I change two of those blocks, the snapshot will look like it contains 2 blocks, because there are now two blocks dedicated to the snapshot which are no longer shared by the current filesystem. Actually, it will be more than 2 blocks changed, because the inode will be changed and the pointer to it will also be different, but you get the idea that the snapshot only occupies blocks which differ between it and the filesystem (or between it and the next snapshot when you take another snapshot). The daily changes in my data tend to be small, although over 9 years it obviously adds up.
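
A toy way to picture that accounting (just an illustrative Python model, not how ZFS is actually implemented):

# Toy model: the snapshot and the live filesystem initially share every block.
live     = {0: "A", 1: "B", 2: "C", 3: "D", 4: "E",
            5: "F", 6: "G", 7: "H", 8: "I", 9: "J"}   # a 10-block file
snapshot = dict(live)                                  # snapshot taken: no extra space used yet

# Edit the file: two blocks get new contents (ZFS writes them to fresh blocks).
live[1] = "B2"
live[2] = "C2"

# Space charged to the snapshot = blocks it references that the live filesystem no longer shares.
unique = [blk for blk in snapshot if snapshot[blk] != live[blk]]
print(len(unique))   # 2 - the snapshot now "uses" two blocks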

The snapshot space is not free to be eaten up - it's all marked as allocated, just like the main filesystem. A trojan could use up all the free space which would prevent me writing anything, but it can't do anything to the snapshots.

None of those - it would just stop me writing to the filesystem, but I'll notice the full disk rather quickly. (It would probably also stop new snapshots.)

Here are a few properties of my home directory (I stripped most of them out, as they're not interesting for this discussion)...

$ zfs get all export/home
NAME         PROPERTY         VALUE                  SOURCE
export/home  type             filesystem             -
export/home  creation         Fri Jul 1 15:49 2016   -
export/home  used             306G                   -
export/home  available        2.65T                  -
export/home  referenced       47.2G                  -
export/home  compressratio    1.68x                  -
export/home  compression      lzjb                   inherited from export
export/home  usedbysnapshots  258G                   -
export/home  usedbydataset    47.2G                  -
$

The create date is recent because I just upgraded the disks, the previous disks having got pretty full and been spinning for 5 years, but all the snapshots have been moved across with the filesystem.

So the filesystem is using 47.2G, and the 3087 snapshots are using 258G, an average of 83MB per snapshot, although they vary wildly depending on what I've been doing - here are the last 10 days...

$ zfs list -t all -r export/home
NAME                   USED  AVAIL  REFER  MOUNTPOINT
export/home            306G  2.65T  47.2G  /export/home
.
.  [snip thousands of snapshots]
.
export/home@20160718   168K      -  46.7G  -
export/home@20160719   144K      -  46.7G  -
export/home@20160720   432K      -  46.7G  -
export/home@20160721   196K      -  46.7G  -
export/home@20160722    96K      -  46.7G  -
export/home@20160723   208K      -  46.7G  -
export/home@20160724  57.9M      -  46.7G  -
export/home@20160725  53.1M      -  46.7G  -
export/home@20160726  58.0M      -  46.7G  -
export/home@20160727  87.1M      -  46.7G  -
$

It is not normal to keep snapshots forever, and I don't do this with other filesystems. I still take them daily, but I periodically clear out snapshots which are older than some months. e.g. for a couple of filesystems which are full of photos, I very rarely delete any pictures, so the space dedicated to the snapshots is almost nothing, even though they also have daily snapshots - the snapshots are there in case anyone has finger trouble and accidentally deletes photos. However, for my home directory which contains software I develop myself, it has been useful to be able to go back much further.

--
Andrew Gabriel 
[email address is not usable -- followup in the newsgroup]
Reply to
Andrew Gabriel

Unix (including Linux and FreeBSD) keeps a usage count for each inode. The in-memory usage count is the on-disk link count plus the open filehandle count.

When you delete a file, the entry is removed from the parent directory and the on-disk link count is decremented (and the in-memory usage count decremented to match). Then, when the in-memory usage count drops to zero (meaning it isn't open anymore and the on-disk link count is zero), the inode is freed and the file blocks are freed.

If something has the file open when it's deleted, the in-memory usage count will not drop to zero, so the file continues to exist on-disk exactly as it always has, except it might no longer have any directories which have a reference to it (if link count zero), which would make opening it again difficult (although not impossible).

--
Andrew Gabriel 
[email address is not usable -- followup in the newsgroup]
Reply to
Andrew Gabriel

OK, the confusion results from the fact that I referred to it as "free space", as that is how it worked in older systems and I expected the snapshots to be cleaned away only when required. Of course the space is allocated to the snapshots, so it is not free, but apparently it is also not available to the filesystem. In the NetApp, as far as I understand the document I read, there is a fixed amount of space reserved for the snapshots, but they can overrun that space. This implies that they can be trimmed back to the reserved space when required, but I have not yet read whether that is an automatic or a manual operation.

Compare snapshots in Linux LVM: they need extra space OUTSIDE the area they snapshot, and when that area overflows, the snapshot is deleted.

Ok, that means the snapshots require maintenance, either manual or by some daemon that deletes them according to some schedule.

I do the latter using a versioning system, but I do see the value of such a filesystem. It would be interesting to set up at home next time I re-install the OS and/or replace disks.

Reply to
Rob

Got you.

Is that the only way you can do something like keeping the most recent 'n' snapshots, where 'n' would be tuned to match the number of snapshots to the average snapshot size increment and the disk size?

In effect it's not a lot different from my overnight backups. These are compressed tarballs containing everything except system files. My backup script starts out by getting the size of the last tarball and deletes enough old tarballs to free up 110% of that size in disk space. Then it makes and stores today's tarball, whose name includes the day's date.

IOW the script has a built-in assumption that no tarball will be more than 10% bigger than the last one; this guesstimate works well in practice. The script fills the disk up before starting to delete tarballs to make space for the next one. Once the disk is full, it deletes one before adding one, except very occasionally it will delete two before adding tonight's backup.

Its only disadvantage is that it is quite slow, generally taking 95 minutes to make a backup, but who cares - the machine isn't doing much else for an hour or two after 03:30.
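
For what it's worth, a minimal sketch of that kind of rotation looks something like this (Python; the backup directory, the tarball naming and the tar invocation are illustrative assumptions, not the actual script):

import datetime, glob, os, shutil, subprocess

BACKUP_DIR = "/backups"    # assumed location of the tarballs
tarballs = sorted(glob.glob(os.path.join(BACKUP_DIR, "backup-*.tar.gz")))

# Assume tonight's tarball will be at most 10% bigger than the last one.
needed = int(os.path.getsize(tarballs[-1]) * 1.1) if tarballs else 0

# Delete the oldest tarballs until there is enough free space for it.
while tarballs and shutil.disk_usage(BACKUP_DIR).free < needed:
    os.remove(tarballs.pop(0))

# Make and store today's tarball, named with the day's date.
today = datetime.date.today().isoformat()
subprocess.run(["tar", "czf", os.path.join(BACKUP_DIR, f"backup-{today}.tar.gz"),
                "--exclude=/proc", "--exclude=/sys", "--exclude=" + BACKUP_DIR,
                "/etc", "/home", "/var"],
               check=False)   # tar may return non-zero on warnings; don't abort on that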

I dimly remember using that.

These days I use rkhunter, which is really an enhanced tripwire: as well as knowing which system programs should be checked, it also has a set of rules for recognising rootkits and trojans. It seems to record some sort of checksum as well as the inode number for the executables it checks: it notices if they have been moved.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

Sorry, you are right; I should have tried before posting. I normally run nmap from outside my network and only see the non-standard ssh port. From inside, even using the external address, you get the other services it offers to the local network.

---druck

Reply to
druck

Yes, yes - if you look upthread you will see me explaining this in detail; what we're talking about here is how it's done on a FAT32 filesystem that doesn't use inodes - DKUATB.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

Then give it a try rather than doubt it. Has worked for ages.

To confirm, I just renamed libpq.dll with pgAdminIII running. No problems there, and the application can still query the database.

But I am not allowed to delete it.

Works as I expected.

Reply to
Björn Lundin

In Linux you would even be allowed to delete it!

However, I know that Microsoft updates always put a file in a temporary place and construct a list of files to be renamed at the next reboot; then, when you reboot, it first renames all those files before it boots the system (the "updates phase x of 3"). There must be a reason for that. Having to reboot the system is often pinpointed as a disadvantage of Windows, and if they had been able to solve that easily, they probably would have done so.

(I am reluctant, because Microsoft normally only makes changes when it results in visible monetary gain to them, and that might not be clear in this case.)

Reply to
Rob
