Small fanless SBC with 2 or more SATA I/Fs

"horribly broken" ... ! Can you elaborate please ?

The NFS services have run like a charm for years on a dedicated GNU/Linux server; believe it or not, that NFS server is running at this very moment on a large business network.

Hab.

PS: Synology's NFS implementation is not proof of general NFS reliability.

Reply to
Habib Bouaziz-Viallet

This article "NFS considered harmful" sums it up:

Basically, NFS semantics don't match the usual Unix filesystem semantics, and there is also a lack of security. Whether this matters may depend on how it's being used.

Regards, Allan

Reply to
Allan Herriman

Linux's loosey-goosey file system semantics cause unpredictable delays in file deletion on NFS, or they did some years back when I was doing cluster work. You could delete a temporary file, open another one of the same name, write to it, and then it would disappear some random time later, causing all sorts of breakage. It could be 20 seconds later, or maybe more. Never happened on Samba/CIFS.
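A minimal sketch of that delete-and-recreate access pattern (not Phil's actual cluster script; the file name and plain POSIX calls here are just illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (int run = 0; run < 3; run++) {
        unlink("result.tmp");                 /* delete the previous run's temp file */
        int fd = open("result.tmp",           /* recreate one with the same name     */
                      O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        char line[64];
        int n = snprintf(line, sizeof line, "run %d results\n", run);
        if (write(fd, line, (size_t)n) != n)  /* per the report above, over NFS the  */
            perror("write");                  /* new file could vanish much later    */
        close(fd);
    }
    return 0;
}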

I don't use it anyway, so I have no idea of how good or bad Synology's NFS is, but it exists.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

What he said.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

The devil is in the details: "on how it is being used". The problem is folks use network/distributed file systems without understanding how they are implemented or how they are *intended* to be used.

Note the OP uses NFS to mount his $HOME -- a perfectly acceptable application. Usually $HOME is "private", so that automatically eliminates many of the "problems" that can manifest in a PROPERLY CONFIGURED client+server. (OTOH, mixing and matching NFS implementations without understanding the assumptions made by each client/server is a recipe for disaster; especially as EVERYONE tries to tweak their client/server products to goose NFS performance -- often without realizing the consequences "on the other end of the wire".)

For example, a proper server/client should never "lose" a file that is being written (or read). Locks should prevent another process from unceremoniously unlinking a file that is in use, remotely. A server crash should result in clients being notified of its coming back on-line so they can reclaim locks in the grace period (and thus avoid some OTHER client taking the lock that *they* previously held). Clients should retain written data until confirmation of a commit from the server, etc.

In *practice*, implementations often overlook niggling little details. E.g., clients often cache directory contents to avoid unnecessary trips to the server (for lookups). But they often forget to *verify* their cached content is "current" before RELYING ON IT!

So, if the original contents of the directory have been changed (e.g., a file deleted), the client should be aware of this (even in the absence of locks) because the modification time of the DIRECTORY no longer agrees with the cached modification time. I.e., the cache needs to be purged (perhaps even the directory itself is gone!) and refreshed -- the file handle is now *stale*! OTOH, if the client naively *relies* on its cache of the directory...
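A sketch of the sort of staleness check a careful client could make, assuming POSIX stat(2) (the function name and cached value are illustrative only):

#include <sys/stat.h>
#include <time.h>

/* Return nonzero only if the cached listing for dirpath is still trustworthy. */
int cache_is_current(const char *dirpath, time_t cached_mtime)
{
    struct stat st;
    if (stat(dirpath, &st) != 0)
        return 0;                       /* directory itself is gone: certainly stale */
    return st.st_mtime == cached_mtime; /* mtime changed => purge and refresh cache  */
}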

[If you thought there was a possibility of another process unlinking or altering the file WHILE you were using it, why didn't you take a lock on it while you were using it??]
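A minimal sketch of taking such a lock, assuming POSIX fcntl(2) advisory locks (the mechanism the NFS lock daemon serves for cooperating clients):

#include <fcntl.h>
#include <unistd.h>

/* Take an exclusive advisory lock on the whole file, blocking until granted. */
int lock_whole_file(int fd)
{
    struct flock fl = { 0 };
    fl.l_type   = F_WRLCK;             /* exclusive (write) lock              */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 0;                   /* length 0 means "to end of file"     */
    return fcntl(fd, F_SETLKW, &fl);   /* F_SETLKW blocks; F_SETLK would not  */
}

Cooperating processes that take the same lock before touching the file won't unlink or rewrite it out from under each other; processes that ignore the (advisory) lock, of course, still can.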

Seemingly trivial permission screwups (esp on mountpoints) can lead to all sorts of wacky behaviors -- that you would never encounter with local filesystems! (E.g., # mkdir ~allan ). Failing to run the lock daemon; Coexistence with ad hoc PC/NFSD implementations and concurrent SMB mounts; Faults in automounter maps; Not-Ready-For-Primetime client/server implementations/configurations. etc.

"The devil is in the details".

A good/knowledgeable IT guy with "reliable" clients/servers should have no problem getting "acceptable" performance out of NFS. And, would know when *not* to use it (e.g., NFS mounting a database store seems silly, at best!)

*But*, everyone thinks they're qualified to be a sysadm -- SOLELY because they know root's password! (*and*, the distro was FREE!) :>
Reply to
Don Y

To be fair, nor is it proof that Synology's implementation is UNreliable.

I can take two MS boxes and transfer LOTS of files to/from an SMB share and watch performance (throughput) drop like a rock. And MS wrote the code on both ends of the wire (the only "foreign" element is the wire connecting them).

[Of course, there are *some* workarounds requiring registry hacks, etc., with specifics for each Windows client/server version. I've taken to using an FTP client when I want to move large numbers of files under Windows.]

The point being that naive reliance on an implementation is probably a Bad Idea unless you know its performance (from inspection or authority) or *measure*/characterize it yourself.

Reply to
Don Y

That's not the case on local filesystems, so why should NFS forbid it?

--
umop apisdn
Reply to
Jasen Betts

It's the semantics of what it _does_ that are broken. In an unbroken FS the file doesn't disappear until the last hard link goes away.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

The file's *content* still persists beyond that event (see below).

The problem isn't in NFS but, rather, the *expectation* that it will behave like a local filesystem.

I can open a file for write and then unlink it -- KNOWING that I am the sole creator of the file and this *is* the last link to the file -- *yet*, still write to that file (rewind, read, etc.). EXPECTING this to work "because it does on a local filesystem" is the flaw.
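A minimal sketch of that idiom, assuming a POSIX system (the file name is made up); on a local filesystem the unlinked-but-open file behaves exactly as expected, while an NFS client typically has to fake it with a "silly rename" (.nfsXXXX files):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("scratch.dat", O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }
    unlink("scratch.dat");          /* last link gone; file is now anonymous   */

    write(fd, "still here\n", 11);  /* ... yet still writable ...              */
    lseek(fd, 0, SEEK_SET);

    char buf[32] = { 0 };
    read(fd, buf, sizeof buf - 1);  /* ... and readable                        */
    fputs(buf, stdout);
    close(fd);                      /* storage actually reclaimed here         */
    return 0;
}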

I can create a new file and open it for writing. Write "Hello". Close it. Reopen for reading and *expect* to see "Hello", there. (assuming no other users can access it or unlink it in the meantime). Or, open it for writing under one file handle and reading under another. Then, write something and read/verify "right behind it".

OTOH, if that file happens to be, e.g., the name of a pipe... or, an I/O device (/dev/cua0, /dev/pty, etc.) then the code is broken. It wasn't *intended* to operate under the conditions that have been applied (I'm willing to *bet* you've never made such a note in your code regarding such a fundamental assumption: "file must be persistent and read what is written")

It's not uncommon to use a shared memory object to communicate between a producer and a consumer. One always writing to it while the other is always reading (neglect the synchronization mechanism as you have lots of choices, there including "none required").
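A minimal sketch of such a channel, assuming POSIX shm_open(3)/mmap(2); the object name "/demo_shm" and the payload struct are illustrative, a consumer would shm_open the same name and map it read-only (link with -lrt on older glibc):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct sample { volatile int seq; volatile double value; };

int main(void)                      /* producer side: only ever writes         */
{
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(struct sample)) != 0) { perror("ftruncate"); return 1; }

    struct sample *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
    if (s == MAP_FAILED) { perror("mmap"); return 1; }

    for (int i = 1; i <= 5; i++) {  /* synchronization deliberately omitted,   */
        s->value = i * 0.5;         /* as in the description above             */
        s->seq   = i;
        sleep(1);
    }
    munmap(s, sizeof *s);
    close(fd);
    shm_unlink("/demo_shm");
    return 0;
}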

*I* use distributed shared memory in my RTOS. So, the above idea should apply "equally", right? Producer only writes to it, consumer only reads from it. So, the cost of a single write/update can be "unexpected" (gee, "why didn't that take 2 clock cycles at 1GHz?")

Ah, but what if "it" disappears between one "access" and the next?! Where do I add the "test" to see if "a = 7;" succeeded or not?

Similarly, I allow any function interface to be transparently exported across the network and serviced by another (remote) node. Expecting printf("...", ...) to happen at "local bus speeds" would result in faulty code -- it could take any amount of time to complete (including *never*! What's the return code for printf(3c) to indicate "failed to complete"?)

None of this should be surprising -- anyone who's written code to run on a single-tasking OS has "learned" similar sorts of things when moving to an MTOS. Anyone who's worked on a VM system has learned that the "next opcode" could take a very long time to execute (if the OS has to fault in a new page -- possibly flushing some existing page out to disk in the process).

Reply to
Don Y

That doesn't cover the breakage scenarios. As I said, I had a cluster script controlling an optimizing simulator, where I opened a file whose name was generated automatically, ran a simulation, dorked the simulation parameters appropriately, ran it again, and so on until timeout or convergence.

At the end of each simulation, I deleted the file and created another one with the same name for the next run. The result was that my file kept getting deleted incorrectly as late as *minutes* into the next run. It was my file, I created it, I deleted it (or apparently so), and then NFS in its infinite wisdom deleted the new one while I had write handles open to it.

So your sunny apology is entirely unpersuasive. NFS is horribly broken, and there are lots of modern alternatives, so I dropped it and never looked back.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

Once you close a file, writing to it should cause a fatal error. I make this mistake from time to time.

You cannot have two handles point to the same open file. That is simply idiotic. Your system should have prevented creating the second handle. The file is already open.

There is only one read/write head. The channel cannot write and read at the same time. Whoever gets there first owns the channel. Everyone else is locked out until the first user is finished and closes the channel.

You need to include a little bit of the actual physics of these things in understanding how a filesystem should work. Because it will break if you don't.

Reply to
Tom Swift

Not so. Multiple read-only handles are fine. Linux file system semantics are loosey-goosey, as I've often said--you can open the same file for writing as many times as you like, as long as it's in the same process. That means you really can't use file handles for locking purposes, at least not unaided.

Nobody's talking about simultaneous access to different places on the actual rust (or metal film). There are buffers upon buffers involved, for one thing.

File systems and discs work quite differently. Otherwise, why do you need a file system?

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

The topic is reading and writing simultaneously.

You cannot open a file for writing and then open the same file for reading. The channel cannot do both at the same time.

That is the topic. He stated:

That is impossible. The file system must prohibit it.

To tell the disk whether to read or write, among other things.

Reply to
Tom Swift

I didn't say "open a file for write AND THEN CLOSE IT". Rather, I said "UNLINK IT" (i.e., sever the binding between the name and the contents -- make it "nonexistent" in the eyes of other subsequent users of the filesystem). The file is STILL OPEN FOR WRITING (by the process that opened it and then unlinked it).

And what planet are YOU from??

Here, on Earth, we have these wonderful devices called KUM-PEW-TIRZ that you might be interested in! Not only can they "WALK", but they can, *also*, CHEW GUM at the same time!!

You probably need to include some first-hand experience in YOUR understanding! :>

This is semi-pseudo-code but should be close enough to compile that you should be able to EASILY get this running in order to convince yourself of my assertion (error checking elided)!

#include <stdio.h>

FILE *f1, *f2; int n; char buffer[50]; // sloppy but we're just making a point!

void foo1()
{
    f1 = fopen("myfile", "w+");    // one file accessed via one fd (read+write)
    n = 0;
    do {
        fprintf(f1, "N is %d\n", n++);
    } while (n < 100);
    fflush(f1);
    rewind(f1);                    // Or, we could close f1 and reopen it for reading, here
    while (1) {
        if (NULL == fgets(&buffer[0], sizeof(buffer), f1))
            break;
        fprintf(stdout, buffer);   // let's live dangerously! :>
    };
    fclose(f1);
}

void foo2()
{
    f1 = fopen("myfile", "w");                 // file written via FIRST fd
    f2 = fopen("myfile", "r");                 // same file read via SECOND fd
    n = 0;
    do {
        fprintf(f1, "N is %d\n", n++);         // write at file pointer in fd1
        fflush(f1);                            // deal with buffering
        fgets(&buffer[0], sizeof(buffer), f2); // read-behind via fd2
        fprintf(stdout, buffer);               // let's live dangerously! :>
    } while (n < 100);
    fclose(f2);
    fclose(f1);
}

int main(int argc, char **argv)
{
    printf("Try foo1\n");
    foo1();
    printf("Try foo2\n");
    foo2();
    return( 0 );
}

Note that foo2 has two fd's pointing into the *same* file AT THE SAME TIME! (something you claim is problematic) And, they are usually NOT referencing the same offset in that file!

Note, also, that the flush is only required to make the code simple to write. You could likewise prepend setvbuf(3c) to take the buffering out of the equation more heavy-handedly.
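A minimal sketch of that heavier-handed alternative, assuming standard C setvbuf(3) (file name as in the example above):

#include <stdio.h>

int main(void)
{
    FILE *out = fopen("myfile", "w+");
    if (out == NULL) return 1;
    setvbuf(out, NULL, _IONBF, 0);  /* unbuffered: must be called before any   */
    fprintf(out, "N is %d\n", 0);   /* other operation on the stream           */
    fclose(out);
    return 0;
}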

Reply to
Don Y


You can open a file as many times as you have file descriptors! (usually a limitation that the OS imposes on your process) And, each file descriptor can point to a different place in the same "file" and be performing different operations.
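A minimal sketch of two descriptors open on the same file at the same time, each with its own offset, assuming POSIX open(2)/lseek(2) (file name as in the example upthread):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int wr = open("myfile", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    int rd = open("myfile", O_RDONLY);  /* second descriptor, same file        */
    if (wr < 0 || rd < 0) { perror("open"); return 1; }

    write(wr, "hello\n", 6);            /* wr's offset is now 6                */

    char buf[8] = { 0 };
    read(rd, buf, 6);                   /* rd's offset was still 0             */
    fputs(buf, stdout);                 /* prints "hello"                      */

    printf("wr offset %ld, rd offset %ld\n",
           (long)lseek(wr, 0, SEEK_CUR), (long)lseek(rd, 0, SEEK_CUR));
    close(rd);
    close(wr);
    return 0;
}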

What "channel" are you talking about? We're way *above* the hardware layer (note that some media actually *do* read and write concurrently)

And "he" stands by it!

(sigh) Rather than spouting ignorance, why don't you run the sample code I posted (upthread) and see -- *poof*! -- for yourself. "Impossible"??

Reply to
Don Y

I just read it, and I admit the first two criticisms in that "paper" made me laugh: time synchronisation and the locking mechanism...! I gave up reading further. NFS was never designed to replace OS services; it relies on them. I suspect some people (not you, Phil) are using NFS as a hammer for doing embroidery. Long ago, in 2000, the /home accounts of nearly all students and permanent staff here at INRIA were exported via NFS, across heterogeneous machines (old Suns, SGI workstations (mine was an Octane), GNU/Linux, ... even MS-Win), and guess what...? It still runs even now.

Reply to
Habib Bouaziz-Viallet


Well, if you aren't interested, that's okay with me. If that's your usual style--to laugh and not listen to arguments you don't like--I'm sure glad you aren't running my IT, though.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

AFAIK a file descriptor is created in a process context, and e.g. flock() is what is used and intended to implement locking on it.
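A minimal sketch of flock(2)-style locking on a descriptor, assuming Linux/BSD flock() (the file name is illustrative; over NFS it is fcntl(2) byte-range locks that the lock protocol was designed around, so behaviour can differ):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd = open("shared.dat", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (flock(fd, LOCK_EX) != 0) { perror("flock"); return 1; }
    /* ... exclusive access to the file while the lock is held ... */
    flock(fd, LOCK_UN);

    close(fd);
    return 0;
}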

I did not really understand what you said; I re-read it and it seems "incomprehensible" to me. Sorry.

This thread is boring after all.

Hab.

Reply to
Habib Bouaziz-Viallet

It was not my goal to displease you. Your experience with NFS seems strange to me. Far be it from me to try to change your mind about NFS. Please keep going with Samba, since you seem to be confident in it.

Hab.

Reply to
Habib Bouaziz-Viallet

NFS is a stateless protocol - that makes it simple (or simpler) and efficient, but makes some features difficult to implement. SMB is stateful, and so a Samba server has a different feature set from an NFS server.

Both file sharing protocols have their place - many things can be done with either, but some things are best done with one of the two.

Reply to
David Brown
