Multicasting and Switches

Hi,

This is the first of a series of related posts. I figured it best to make sure the groundwork is in place, first...

On any *wired* network (save the wireless complications for later) using a star *physical* topology beyond 10BaseT, there is the potential for multicast packets to NOT arrive at all nodes simultaneously (i.e., even ignoring "speed of light" propagation down the wire).

Presumably, switches enqueue incoming multicast packets on *all* outbound ports. Since there is no way of knowing what's already queued on a particular port, the storage time in the switch can vary from port to port.

Also, the switch can't know a priori if multicast traffic might *originate* on all ports simultaneously. (Consider this in light of the previous assertion).

So, first question: how do switches handle incoming multicast traffic (in terms of a real algorithm, not just an approximate "hand waving" explanation)?

This then clears the way for the second question: how to determine the maximum latency presented by a switch?

And, the third question: how is the above mentioned pathological case handled (i.e., you can create more traffic than you have bandwidth to process!)?

Or, do switch vendors avoid this with arbitrary restrictions on their application? (e.g., akin to segmented hubs in days gone by)

I could see how a switch could conceivably monitor IGMP traffic to "eavesdrop" on the virtual connections desired. But, IGMP is only required when routers come into play (or is this a misunderstanding on my part?).

[I should just set up a few multicast hosts and watch where the bytes go... :-/ ]
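Something like this would do for the experiment -- a minimal sketch in Python, with the group address (239.1.1.1) and port (5007) picked arbitrarily. Note that the IP_ADD_MEMBERSHIP join is exactly what emits the IGMP membership report an "eavesdropping" switch could observe:

    # Receiver: join the group and print whatever arrives.
    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5007   # arbitrary choices for the test

    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", PORT))

    # ip_mreq: group address + local interface (0.0.0.0 = let the stack pick).
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, src = rx.recvfrom(1500)
        print(len(data), "bytes from", src)

The sender is just plain UDP aimed at the group address; TTL=1 keeps the packets on the local segment:

    # Sender: one datagram to the group.
    import socket

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    tx.sendto(b"hello", ("239.1.1.1", 5007))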
Reply to
D Yuniskis

By a star topology, I assume you mean one and only one switch. Correct?

Ask the switch vendor. How would you expect anyone in this group to have anything but an educated guess for that question?

Falling back to the educated guess disclaimer, I'd say the maximum latency is indeterminate.

It seems, by definition, that if the multicast packet collides with another packet, the latency will be indeterminate.

Presumably you will argue that this is a purpose-built network and the possibility of a collision is small to non-existent. I would reply that in addition to coordinating the host and client stacks and packets to avoid collisions, you also have to consider that the switch might be sending spanning tree packets at unknown intervals.

Reply to
Jim Stewart

Use a dumb hub.

For exactly those reasons, some industrial ethernet protocols require that hubs (not switches) be used.

Expect to pay a lot for a primitive hub due to "industrial" and possibly some certification markings :-).

Reply to
Paul Keinanen

I'm not sure, but I believe that some switches can snoop on IGMP so they know where to send multicast traffic, and so minimize broadcasts.

Sorry I can't point you at a reference though.

Same way they handle all traffic; buffer it if possible, drop it if not.

That would depend on the buffer size, but typically that's not very big.

Drop packets.

If you google for reliable multicast protocols, as I did when I studied this stuff several years ago, you'll get a lot more insight into the hardware behaviours that must be handled by the software protocols. Ack-fan-in is a big problem, replacing the fan-out problem of point-to-point connections.

Clifford Heath.

Reply to
Clifford Heath

But I don't (?) need to support IGMP if I am not trying to route the multicasts. I.e., a switch should (?) be able to handle multicast traffic on a single LAN segment.

But, does the switch *effectively* treat the multicast packet AS IF it were a unicast packet *destined* for the host(s) serviced by each port? In one conceptual model, the packet is *copied* to each destination port; in another, the packet is "cached" and a flag set in each destination port telling it to pass the cached multicast packet, also (when all flags have been cleared, the multicast packet has effectively been copied to each port, etc.).
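FWIW, the second model is essentially a reference count. A toy sketch (Python, purely illustrative -- not any particular vendor's implementation):

    # One buffer per multicast packet; each destination port queues a
    # *reference* to it behind whatever is already waiting on that port.
    from collections import deque

    class CachedFanout:
        def __init__(self, num_ports):
            self.queues = [deque() for _ in range(num_ports)]  # per-port egress

        def enqueue_multicast(self, frame, dest_ports):
            buf = bytes(frame)              # the single cached copy
            for p in dest_ports:
                self.queues[p].append(buf)  # "flag set" == reference queued
            # the buffer is reclaimed once the last reference is consumed

        def transmit(self, port):
            # Called when the wire on `port` goes idle. Anything queued
            # ahead of the multicast reference is what makes the storage
            # time vary from port to port.
            q = self.queues[port]
            return q.popleft() if q else None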

Note that the details of how the packet is handled influence how multiple packets are handled, as well as the maximum throughput (even with "few" destination ports) for the multicast traffic (e.g., a "cached" implementation would affect *all* multicast streams going through the switch).

Of course -- no free lunch. But, does the switch do so "fairly" or with some bias (towards or against the multicast traffic)?

Cisco has some detailed information on their switches. But, I was hoping for some more general conclusions that would apply to *all* product offerings.

E.g., if you drag your mind along the evolution of the technology, you can see multicast traffic was originally limited by the bandwidth of the cable segment: one multicast host could conceivably generate every packet the wire was capable of carrying.

Now you introduce a hub (10BaseT). The same limitation applies -- one host could still use all of the bandwidth. Two hosts could split/share it, etc.

Once you introduce switches, packet amplification becomes a problem. I.e., the very same attribute of the switch that makes it so attractive (vs. a hub) -- getting something for nothing -- comes back to bite you when you multicast.
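To put a number on the amplification, a back-of-the-envelope calc (all figures assumed):

    # Aggregate egress demand of multicast through a switch.
    ports = 8                # assumed switch size
    sources = 2              # hosts each multicasting...
    stream_mbps = 30         # ...a 30 Mb/s stream to everyone else

    ingress = sources * stream_mbps                # 60 Mb/s enters
    egress = sources * stream_mbps * (ports - 1)   # 420 Mb/s must leave
    print(ingress, "Mb/s in ->", egress, "Mb/s out")
    # Two well-behaved 100 Mb/s senders can demand more egress than a
    # shared hub segment could ever have carried.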

I was hoping some standards "task force" or "working group" would have codified the minimum behavior to be expected of a switch... i.e., can a switch NOT pass multicast traffic AT ALL and still be called a switch?

Reply to
D Yuniskis

No, I meant "not a physical BUS" (multicasting on a physical bus is a lot easier to characterize).

Because I assume *I* am not the only person designing products that employ multicasting. I am assuming/hoping that others may have gone down this road before (why do people ask specific questions about MCU's here instead of "asking the MCU vendor"?).

This seems especially true given the move towards "whatever-over-IP" implementations (e.g., where are the folks designing the AVB products -- in alt.knitting?)

That depends on the buffering in the switch. And, how the multicast packet is treated *by* the switch.

No, I'm wanting to know how *you* and your *neighbor* and your *friends* are going to be deploying network fabric in your house in the next 5-10 years to handle the shift to the X-over-IP distribution systems that are in the pipeline TODAY. Will you have to buy new "AV" switches? Will those switches take pride in advertising themselves: "New and improved! Now supports *3* multicast streams!!" Or, will folks just be complaining that their video spontaneously pixelates, etc.?

Or, will you have to design the network fabric into the house at the time of construction (retrofit being impossible), etc.

Reply to
D Yuniskis

Ha! I guess that's a solution! Do they make GB hubs? It seems like this problem will only be getting more significant in the coming years... especially in the SOHO market.

Perhaps this suggests running *two* networks -- one that uses hubs ("bus" topology) that is friendly to multicast traffic and the other using switches for unicast traffic?

Reply to
D Yuniskis

I have only seen dumb hubs up to 10 Mbps; it would be quite a piece of hardware to do the hub work at 100 Mbps+ without buffering and re-transmitting. But then, I have not been looking for such a thing.

Yet I believe you are looking into a non-issue. The switches (I have dealt in more detail with only one chip, though) broadcast all data to all ports until they can identify the destination MAC address -- which they do when that MAC address sends something. And since I don't think any data will be sent with a multicast source address, the multicast data will always just be broadcast (I assume).

OTOH, if you want to route it some particular way, sending a packet from a "multicast" source address may do the job and the switch will likely route everything to that port (everything to that multicast address, that is). The switch chip I have been dealing with does not have the word "multicast" at all in its datasheet/manual, so I guess it treats multicast addresses like any other non-broadcast address. If this guess is wrong, all of the above is also probably wrong :-).
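In (pseudo-)Python terms, my guess at the forwarding decision -- the point being that a group address never appears as a *source*, so it never gets learned and is always flooded:

    # Toy learning bridge: why multicast frames get flooded.
    mac_table = {}  # source MAC -> port, learned from observed traffic

    def handle_frame(src_mac, dst_mac, in_port, num_ports):
        mac_table[src_mac] = in_port       # learn where the sender lives
        flood = [p for p in range(num_ports) if p != in_port]
        if dst_mac[0] & 0x01:              # I/G bit: multicast or broadcast
            return flood                   # never learned -> always flooded
        if dst_mac in mac_table:
            return [mac_table[dst_mac]]    # known unicast: exactly one port
        return flood                       # unknown unicast: flood until learned

    # An IP multicast frame (01:00:5e:xx:xx:xx) always takes the flood path:
    out = handle_frame(b"\x00\x11\x22\x33\x44\x55",
                       b"\x01\x00\x5e\x01\x01\x01", in_port=2, num_ports=8)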

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments

formatting link

------------------------------------------------------

formatting link

Reply to
Didi

Forgot to mention in my previous post: the switch chip that I know has a "broadcast storm protection" facility. It just drops packets above a certain count, IIRC.
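My guess at what that facility amounts to -- a per-interval counter with the threshold in a register (numbers invented):

    # Sketch of "broadcast storm protection" as a simple rate cap.
    import time

    LIMIT = 500        # flooded frames allowed per interval (invented)
    INTERVAL = 0.1     # seconds (invented)

    window_start, count = time.monotonic(), 0

    def allow_flooded_frame():
        global window_start, count
        now = time.monotonic()
        if now - window_start >= INTERVAL:
            window_start, count = now, 0   # new interval, reset the count
        count += 1
        return count <= LIMIT              # over the cap: just drop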

Dimiter

Reply to
Didi

Now we are getting somewhere. I happen to have ATT U-verse in my house, running 3 SD video streams and 3 Mbit internet. It is running quite happily over the existing 10/100 ethernet installation using generic Netgear switches. Of the 3 ATT "cable boxes", 2 are at least 2 switches downstream from the U-verse router.

Note that I do have 3 concurrent video streams running 24-7. Normally the ATT settop box times out and drops the stream if the channel doesn't change in 4 or so hours. Since we use Tivos downstream of the settop boxes, I had to program each Tivo to switch channels periodically so that the stream wouldn't drop and the Tivo wouldn't record the "Press OK" screen the settop box puts up when the stream drops.

BTW, U-verse also has the option of running its digital link over existing 75 ohm antenna coax. I haven't tried it.

Haven't seen any spontaneous pixelations. There's other occasional digital weirdness, but not so much as I saw coming in on Comcast analog cable, presumably from their headend issues.

Reply to
Jim Stewart

Since, to the best of my knowledge, in the event of an ethernet collision both senders back off a random amount of time and then retransmit, I can't see how the switch buffering would make any difference.

For that matter, does the sender even monitor for collisions and retransmit in a multicast environment? I guess I don't know...

Reply to
Jim Stewart

Do you know if it is multicasting those streams or *unicasting* them?

Also, all of your streams are sourced from a single "host" (port on the switch). I.e., you can never (theoretically) put more than 100 Mb/s into the network because the wire connecting to your source has that inherent limitation. OTOH, if you multicast from different sources, you can exceed the bandwidth of the network.

I'm looking at the network fabric as a resource that can freely be used -- not just dedicated to a head-end source.

E.g., imagine pushing audio+video from a PC in your bedroom to a display in your living room... while pushing audio+video from your broadband link to a display in your "guest bedroom"... while pushing audio from a media server to speakers in the garage... while viewing the security camera at the front door on a monitor "someplace", etc.

I think when there is a single source, the problem is easier to solve/self-limiting. OTOH, when you treat it as just *fabric*, you have to be much more pedantic about enumerating the limitations.

Replacing the switch with a router goes a long way to "solving" (i.e., minimizing the impact) the problem. Integrating the media server *in* that router would also be a big win -- for locations that have a sole media spigot.

Reply to
D Yuniskis

Don't know, but others do. Check this out:

formatting link

If I didn't have so many things on my plate right now I'd run it and send you the data.

Reply to
Jim Stewart

The time a packet (*any* packet) spends buffered in the switch looks like an artificial transport delay (there's really nothing "artificial" about it :> ). Hence my comment re: "speed of light" delays.

When you have multicast traffic, the delay through the switch can vary depending on the historical traffic seen by each targeted port. I.e., if port A has a packet already buffered/queued while port B does not, then the multicast packet will get *to* the device on port B quicker than on port A.
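The per-port skew is easy to bound per queued frame. Assuming 100 Mb/s links:

    # Store-and-forward skew from ONE full-size frame queued ahead.
    link_bps = 100e6
    frame_bits = 1500 * 8

    skew = frame_bits / link_bps           # 120 microseconds
    print(round(skew * 1e6), "us per queued frame")
    # Several frames queued on port A but not port B, and the "same"
    # multicast packet arrives noticeably earlier on B.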

If you have two or more streams and are hoping to impose a temporal relationship on them, you need to know how they will get to their respective consumers.

Multicast is like "shouting from the rooftop -- WITH A DEAF EAR". If it gets heard, great. If not, oh well.

There are reliable multicast protocols that can be built on top of this. They allow "consumers" to request retransmission of portions of the "broadcast" that they may have lost (since the packet may have been dropped at their doorstep or anyplace along the way).

With AV use, this gets to be problematic because you want to reduce buffering in the consumers, minimize latency, etc. So, the time required to detect a missing packet, request a new copy of it and accept that replacement copy (there is no guarantee that you will receive this in a fixed time period!) conflicts with those other goals (assuming you want to avoid audio dropouts, video pixelation, etc.).
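A rough budget shows the squeeze (all numbers assumed for illustration):

    # Playout-buffer budget for one NACK-based repair attempt.
    playout_ms = 50     # what the consumer is willing to buffer
    detect_ms = 10      # gap only noticed when the *next* packet lands
    nack_rtt_ms = 5     # NACK out + retransmitted copy back
    jitter_ms = 20      # the repair itself may be queued (or lost!)

    slack = playout_ms - (detect_ms + nack_rtt_ms + jitter_ms)
    print("slack for ONE repair attempt:", slack, "ms")
    # One lost repair and the budget is blown -> dropout/pixelation anyway.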

Remember that any protocol overhead you *add* contributes to the problem, to some extent (as it represents more network traffic and more processing requirements). The "ideal" is just to blast UDP packets down the pipe and *pray* they all get caught.

Reply to
D Yuniskis

I do not know if this is really relevant, but check out what switches advertised with "IEC 61850 support" actually do differently compared to other switches. This protocol relies heavily on MAC-level broadcasts for real-time traffic, as well as ordinary IP traffic for non-realtime traffic.

Reply to
Paul Keinanen

Only when a reliable multicast protocol is layered on top. The outbound packets must be numbered in some way so the recipients know when they've missed one, and they NACK back to the source.

If the packet got dropped at the source, rather than at some hub/switch, you get a problem of NACK flooding. This is the exact inverse of the problem of sending individual streams to each destination, except that it only occurs on packet loss. Reliable multicast protocols deal with NACK flooding in various ways. The most obvious is to coalesce them at the switch/router, but that might require changes in router infrastructure.
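A minimal sketch of the receiver side of such a scheme (Python; the 4-byte sequence header and the NACK format are invented for illustration):

    # Detect sequence gaps and NACK the source directly (unicast).
    import struct

    def recv_with_nacks(sock, source_addr):
        expected = None
        while True:
            pkt, _ = sock.recvfrom(2048)
            (seq,) = struct.unpack_from("!I", pkt)   # invented header
            if expected is not None and seq > expected:
                for missing in range(expected, seq):
                    # Every receiver that lost the same packet does this
                    # at once -- hence the NACK-flooding problem.
                    sock.sendto(struct.pack("!I", missing) + b"NACK",
                                source_addr)
            expected = seq + 1
            yield seq, pkt[4:]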

When I implemented a P2P file distribution platform, I decided that multicast wasn't useful and went instead for broadcasts. The LDSS protocol (Local Download Sharing Service) that I devised (and got an IANA port number for) enables nodes on the same LAN to schedule and share file downloads from a limited WAN pipe, without any master.

Each node is responsible for knowing what it wants, how much WAN bandwidth is allowed at this time of day (to be shared across all peers), how much is currently being used, and what downloads have been scheduled by other peers. In return, it limits its download to a fair share of the allowed WAN bandwidth (distributed rate limiting), sends progress updates, and shares completed downloads using TCP transfers.

Every I/O (whether disk, LAN or WAN) is scheduled to a rate limit, to maintain a low impact on normal system operations. It was an interesting project!
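For flavor, a toy of the coordination idea only -- NOT the actual LDSS wire format; the message layout, port number and cap below are all invented:

    # Peers broadcast progress; each derives a fair share of an assumed
    # WAN cap from how many live peers it has heard from.
    import json
    import socket
    import time

    PORT = 54321              # placeholder, not the real LDSS port
    WAN_CAP_KBPS = 2000       # assumed site-wide allowance

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.bind(("", PORT))
    s.settimeout(0.5)

    peers = {}  # peer id -> last time heard from

    def announce(me, url, bytes_done):
        msg = json.dumps({"id": me, "url": url, "done": bytes_done})
        s.sendto(msg.encode(), ("255.255.255.255", PORT))

    def listen_once(me):
        try:
            data, _ = s.recvfrom(2048)
            info = json.loads(data)
            if info["id"] != me:
                peers[info["id"]] = time.monotonic()
        except socket.timeout:
            pass

    def fair_share_kbps():
        live = 1 + sum(1 for t in peers.values()
                       if time.monotonic() - t < 30)
        return WAN_CAP_KBPS / live   # distributed rate limiting, roughly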

Clifford Heath.

Reply to
Clifford Heath

To clarify: I'm concerned at what goes on inside the home. E.g., I suspect AT&T (or other content provider) multicasts certain content to "all subscribers" and that content is then STORED on a local PVR for "delayed viewing".

Regardless of how the content gets to the "media server" within the home, I am interested in how that content is distributed to clients *within* the home. I suspect each "consumer" (assuming a consumer is NOT another PVR) ends up receiving a unicast stream.

Consider: if "you" decide to watch "a movie" (NOT a real-time broadcast!) and another person in your home independently opts to watch the same movie "5 minutes later", the second viewer doesn't "miss" the first five minutes of the movie (which would have been true of a live TV broadcast!). Unless each consumer endpoint has storage facilities, this requires a separate stream from the media server to each endpoint (while those *could* be multicast streams, it would seem foolish to do so UNLESS you knew two or more consumers wanted to watch the content synchronously).

As such, the load is reflected directly to the server and the switch is spared the burden/responsibility of packet amplification.

For "live TV", the issue gets muddied. If only a single endpoint is viewing the content, then multicasting needlessly floods the network with "undirected" traffic (unless the switch is smart). OTOH, unicasting "doubles" the load on the server (nits...).

I guess one shirt-cuff test would be to watch 5 different channels on 5 different displays and look at the load on the switch... (?) I.e., the weight of 5 multicast streams vs. 5 unicast streams should be pretty obvious without counting packets.

I guess I need to look to see which protocols are wrapped in UPnP to better *guess* at the capabilities that it *could* support...

Reply to
D Yuniskis

I think it would be easier to use a sniffer like Wireshark and check the packet types etc for movie vs broadcast.
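E.g., a display filter like "ip.dst == 224.0.0.0/4" isolates the multicast traffic, and "igmp" shows any group joins/leaves the boxes issue (filter syntax from memory -- verify against the Wireshark docs).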

Reply to
Fredxx

Why? (i.e., why did you decide multicast wasn't useful?)

I'll have to look at the RFC's...

This is geared towards asynchronous sharing, no doubt. How would you (re)consider your design choices in a *synchronous* environment?

E.g., imagine all of those peers requesting the same "file" at the same "instant"? (within some small "epsilon" of "application time")

How would you (re)consider that same scenario in a wireless network (with nodes closely located -- "tight weave" instead of a "loose mesh")?

Reply to
D Yuniskis

