This is the first of a series of related posts; I figured it best to make sure the groundwork is in place first...
On any *wired* network (save the wireless complications for later) using a star *physical* topology beyond 10BaseT, there is a potential for multicast packets to NOT arrive at all nodes simultaneously (i.e., even ignoring "speed of light" propagation down the wire).
Presumably, switches enqueue incoming multicast packets on *all* outbound ports. Since there is no way of knowing what's already queued on a particular port, the residence time in the switch can vary from port to port.
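For concreteness, here's a toy Python model of what I *imagine* is going on -- a plain FIFO per output port in a store-and-forward switch (my assumption, not any vendor's documented algorithm). The same multicast frame lands behind a different backlog on each port, so its residence time differs per port:

import random

# Toy model: store-and-forward switch, one FIFO per output port, a
# multicast frame copied onto *every* queue.  (An assumption for
# illustration, not a vendor algorithm.)
LINE_RATE_BPS = 100_000_000        # 100 Mb/s ports
FRAME_BYTES = 1518                 # max-size Ethernet frame
NUM_PORTS = 8

random.seed(1)
# Pretend each output queue already holds 0..10 max-size frames.
backlog = [random.randint(0, 10) * FRAME_BYTES for _ in range(NUM_PORTS)]

for port, queued in enumerate(backlog):
    # Our frame departs after the backlog drains, plus its own
    # serialization time -- a different figure on every port.
    residence_us = (queued + FRAME_BYTES) * 8 / LINE_RATE_BPS * 1e6
    print(f"port {port}: {queued:6d} B ahead -> departs after {residence_us:7.1f} us")

Even in this trivial model, the spread between the emptiest and fullest port is over a millisecond at 100 Mb/s.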
Also, the switch can't know a priori if multicast traffic might *originate* on all ports simultaneously. (Consider this in light of the previous assertion).
So, first question: how do switches handle incoming multicast traffic (in terms of a real algorithm, not just an approximate, hand-waving explanation)?
This then clears the way for the second question: how does one determine the maximum latency a switch can introduce?
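Under the same FIFO assumption as above, a back-of-envelope bound falls straight out: drain a full queue, then serialize one more frame. (The queue depth here is a made-up figure purely for the arithmetic; real switches bury this in the datasheet, if they state it at all.)

LINE_RATE_BPS = 100_000_000    # 100 Mb/s
FRAME_BYTES = 1518
QUEUE_DEPTH = 64               # hypothetical per-port buffer, in frames

worst_case_ms = (QUEUE_DEPTH + 1) * FRAME_BYTES * 8 / LINE_RATE_BPS * 1e3
print(f"worst-case residence: {worst_case_ms:.2f} ms")    # ~7.89 ms here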
And, the third question: how is the above-mentioned pathological case handled (i.e., you can create more traffic than you have bandwidth to process!)?
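The overload arithmetic itself is brutally simple -- N-1 ports can each *source* multicast at line rate toward any given output port, which can only drain at 1x line rate:

NUM_PORTS = 8
offered = NUM_PORTS - 1    # per output port, in multiples of line rate
drained = 1
print(f"offered {offered}x line rate, drained {drained}x -> "
      f"{offered - drained}x must be buffered and, eventually, dropped")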
Or, do switch vendors avoid this with arbitrary restrictions on how their switches are deployed (e.g., akin to segmented hubs in days gone by)?
I could see how a switch could conceivably monitor IGMP traffic (i.e., IGMP snooping) to "eavesdrop" on the virtual connections desired. But IGMP only comes into play when routers are involved (or is that a misunderstanding on my part?). [I should just set up a few multicast hosts and watch where the bytes go... :-/ ]
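For what it's worth, a quick-and-dirty listener is only a few lines of Python. (The group and port below are arbitrary picks from the administratively scoped 239.0.0.0/8 range, nothing magic.)

import socket
import struct

GROUP, PORT = "239.1.2.3", 5007    # arbitrary admin-scoped group/port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group is what emits the IGMP membership report that a
# snooping switch would presumably eavesdrop on.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:                        # runs until killed
    data, addr = sock.recvfrom(2048)
    print(addr, len(data), "bytes")

A sender is just sock.sendto(b"ping", (GROUP, PORT)) on a plain UDP socket; run the listener on a few hosts and watch which switch ports the traffic actually shows up on.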