Yes. But that doesn't mean it has to work 100% of the time regardless of other environmental factors -- that makes the system too expensive (dedicated) or too brittle (where you, as a consumer, have to buy more capability than you really need).
Think of it as a parallel to an SRT (soft real-time) solution. Your approach is more in line with an HRT (hard real-time) solution.
No, I am willing to let performance degrade IF NECESSARY, as dictated by the conditions in which the system is being operated. That doesn't mean I want to set out with "less performance" as a *goal*.
You don't need TCP to get "error free" delivery. Rather, you need mechanisms that allow you to detect and recover from errors.
TCP is a bad choice as it would require a "connection" from the server to each client. By contrast, with a UDP-based protocol, you can leverage multicasting to reduce the total traffic on the network (clients seeing the same "program" simply participate in the same multicast group). Since packets are essentially numbered, a client can determine if it has missed a packet and request its retransmission explicitly (assuming it *needs* that packet!). If missed packets are The Exception instead of The Rule, then the retransmission requests will be infrequent (they also let the server get a feel for the integrity of the fabric *during* operation).
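To make that concrete, here is a rough sketch of the receiver side (the header layout, the NAK message, and all the names are made up for illustration; this isn't any particular protocol):

    /*
     * Sketch: client detects a gap in the sequence numbers arriving on
     * the multicast socket and (unicast) asks the server to resend the
     * missing packet(s).  Everything here is hypothetical.
     */
    #include <stdint.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    struct pkt_hdr {                /* application header in each UDP datagram */
        uint32_t seq;               /* monotonically increasing sequence number */
        uint32_t len;               /* payload length, in bytes */
    };

    struct nak {                    /* "please resend packet N" request */
        uint32_t missing_seq;
    };

    /* Called for each datagram received from the multicast group. */
    void handle_packet(int nak_sock, const struct sockaddr_in *server,
                       const struct pkt_hdr *hdr, uint32_t *expected_seq)
    {
        uint32_t seq = ntohl(hdr->seq);

        if (seq != *expected_seq) {
            /* One or more packets were missed; request only the ones we
             * actually still *need* (here, naively, all of them). */
            for (uint32_t s = *expected_seq; s != seq; s++) {
                struct nak n = { .missing_seq = htonl(s) };
                sendto(nak_sock, &n, sizeof n, 0,
                       (const struct sockaddr *)server, sizeof *server);
            }
        }
        *expected_seq = seq + 1;

        /* ...hand the payload to the audio pipeline... */
    }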
See above.
Note, also, that you need not implement an entire stack with the "usual" complement of utilities, etc. Many aspects of a traditional stack are useless or can be short-circuited since the clients are designed to talk to *a* server.
E.g., there is no need for a resolver, ARP cache, etc. A UDP-based protocol can omit checksums if the higher-level protocol already has error detection mechanisms. Etc.
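E.g., a (hypothetical) packet layout where the integrity check lives in the application header, so a custom stack could legitimately transmit the UDP checksum as zero (permitted over IPv4 by RFC 768):

    #include <stdint.h>

    /* Hypothetical application-level packet: loss detection (seq) and
     * integrity (crc32) are handled here, so the UDP checksum beneath
     * it is redundant and can be sent as zero over IPv4. */
    struct audio_pkt {
        uint32_t seq;        /* sequence number, also used for loss detection */
        uint32_t stream_id;  /* which "program" this packet belongs to */
        uint32_t crc32;      /* CRC over the payload, verified by the client */
        uint8_t  payload[];  /* audio samples follow */
    };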
Of course! My point is just that there are "events" that you simply can't predict and (economically) guard against. So, you either end up with a brittle system "breaking" when things aren't running EXACTLY as you had hoped (at *design* time) -- or, you make the design resilient to the sorts of things that are likely to occur.
Yes.
Yes. Hence the appeal of token-ring (token-passing) networks in applications where you needed predictable performance.
It's not quite as simple as that. You need to know how big (deep) the buffer is, whether it is *shared* across all ports in the switch or a "buffer per port", etc. On top of that, you need to know if the switch is blocking or non-blocking. And, how the switch forwards packets (cut-through, etc.) and how it handles "bad" packets. All of these things affect how long a packet might "stall" inside the switch.
More importantly, it affects the *range* of times that this sort of parking might occur.
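As a back-of-the-envelope illustration (the numbers are made up, not measurements of any particular switch):

    /* Rough arithmetic: a store-and-forward switch has to buffer the
     * whole frame before it can start sending it out, and anything
     * already queued on the egress port stalls our frame further.
     * Illustrative figures only. */
    #include <stdio.h>

    int main(void)
    {
        double link_bps    = 100e6;   /* 100 Mb/s port */
        double frame_bytes = 1500.0;  /* full-size Ethernet payload */

        double serialize_us = frame_bytes * 8.0 / link_bps * 1e6;  /* ~120 us */

        /* The *range* of stall times grows with the number of frames
         * that happen to be queued ahead of ours. */
        for (int queued = 0; queued <= 8; queued++)
            printf("%d frame(s) queued ahead -> ~%.0f us before ours exits\n",
                   queued, (queued + 1) * serialize_us);
        return 0;
    }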
No, it's just one of several types of applications that it can address. It could (theoretically) be used in a drive-in theater, music/audio distribution within a hotel, audio distribution at a conference, "intercom" at a school, etc.
Why design for *an* application if you can, instead, address a *range* of applications?
Nothing that complicated! E.g., for the "sound stage" application, generate an audio "reference signal" at the "source" (e.g., at the proscenium). Feed that same signal through the "system" to your first "tower/speaker-repeater". Adjust the absolute delay of that second signal until the first and second signals are in phase.
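If you wanted to automate that "adjust until in phase" step, a simple cross-correlation search does it (a sketch, assuming both signals are sampled at the same rate; the names are hypothetical):

    #include <stddef.h>

    /* Returns the lag (in samples) at which 'delayed' best lines up
     * with 'ref' -- i.e., the lag with the largest cross-correlation. */
    size_t estimate_delay(const float *ref, const float *delayed,
                          size_t n, size_t max_lag)
    {
        size_t best_lag = 0;
        double best_corr = -1e300;

        for (size_t lag = 0; lag < max_lag; lag++) {
            double corr = 0.0;
            for (size_t i = 0; i + lag < n; i++)
                corr += (double)ref[i] * (double)delayed[i + lag];
            if (corr > best_corr) {
                best_corr = corr;
                best_lag  = lag;
            }
        }
        return best_lag;   /* divide by the sample rate to get seconds */
    }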
No, the variation in network protocol stacks is too unpredictable. In "real time" stacks, you can control this to a large degree (though often the NIC is poorly documented in terms of *when* data actually gets onto the wire). But, even then, the delays encountered in the switches themselves are too variable (unless you want to hope there is enough entropy in the system as a whole to "guarantee" that a typical observation remains constant forever). E.g., a blocking, store-and-forward switch with a deep *shared* buffer will exhibit a wider (theoretical) range of switch delays than a non-blocking, cut-through switch.
And, all that can vary over time, traffic, etc.
With NTP, you *might* get synchronization on the order of 1 ms (or worse). Plan on 10 ms to be safe.
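For what it's worth, on Linux/glibc you can at least query the kernel's own estimate of the NTP-disciplined clock's error (a sketch; treat the numbers it prints with appropriate skepticism on an uncharacterized LAN):

    /* Sketch (Linux/glibc): query the NTP-disciplined clock's estimated
     * and maximum error, both reported in microseconds. */
    #include <stdio.h>
    #include <sys/timex.h>

    int main(void)
    {
        struct ntptimeval ntv;

        if (ntp_gettime(&ntv) == TIME_ERROR) {
            printf("clock is not synchronized\n");
            return 1;
        }
        printf("estimated error: %ld us, maximum error: %ld us\n",
               ntv.esterror, ntv.maxerror);
        return 0;
    }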