10Base-T vs. 100Base-T (discussion)

I'm curious to know what people's thoughts are on 100Base-T in embedded systems applications. In my experience, many embedded systems that are network enabled aren't necessarily very high throughput devices and don't really take advantage of the extra throughput available in 100Base-T. I.e., how many embedded processors can really transfer packets at 10 Mbytes/sec between memory and the NIC, and still get useful work done ?
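
For a rough sense of the arithmetic behind that question, here's a sketch -- the 50 MHz clock and 4 cycles/byte copy cost are illustrative assumptions, not figures for any particular part:

```c
/* Back-of-envelope: CPU cycles consumed per second just copying
 * frames between NIC and memory at full wire speed. The clock
 * rate and cycles/byte are assumptions for illustration only. */
#include <stdio.h>

int main(void)
{
    const double wire_mbps[] = { 10.0, 100.0 };
    const double cpu_hz = 50e6;          /* assumed 50 MHz embedded CPU */
    const double cycles_per_byte = 4.0;  /* assumed software copy cost  */

    for (int i = 0; i < 2; i++) {
        double bytes_per_sec = wire_mbps[i] * 1e6 / 8.0;
        double copy_cycles   = bytes_per_sec * cycles_per_byte;
        printf("%5.0f Mb/s wire speed: copying alone eats %.0f%% of the CPU\n",
               wire_mbps[i], 100.0 * copy_cycles / cpu_hz);
    }
    return 0;
}
```

Under those assumptions, sustained 10Base-T already costs 10% of the CPU in copies, and full-rate 100Base-T consumes the whole processor before any useful work gets done.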

Also, it seems like running 10Base-T on CAT5 would give you some extra noise margin and perhaps extended range over 100Base-T. Is this important ? 100Base-T gives lower packet latency; is this important for real-time response in applications ?

Are there benefits to using 100Base-T in applications where your throughput is known to be low ? Do systems integrators really worry about noise margins in cabling ? Do most network-enabled devices just plug into a 10/100 hub anyway ? Is 10Base-T effectively dead/legacy ?

See ya, -ingo

--
/* Ingo Cyliax, cyliax@ezcomm.com, Tel: 812-391-0895 */
Reply to
Ingo Cyliax

For most embedded systems, 100BT is purely a marketing gimmick.

Once upon a time in the days of yore, it was not unusual to find a hub that would force all ports to 10M whenever any port was connected to a 10M device. These days, "hubs" are almost always switches and can run ports at different speeds.

Probably.

That depends on the application.

That depends on the application.

Marketing feature list.

Only after things fail.

Probably.

Yup.

What's up with the extra space before the question marks?

--
Grant Edwards                   grante             Yow!  What UNIVERSE is
                                  at               this, please??
Reply to
Grant Edwards

10Mbps can only run half duplex, except on managed switches with operator configuration. If your application's requirements are well below 10Mbps, then you will probably be fine. However, there are unrelated things like multicast and broadcast packets that can eat into your usable bandwidth or, worse, use it up completely. 100Mbps is extra protection against network issues that may be beyond your control.
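
To put a rough number on that background traffic -- the broadcast rate and frame size below are illustrative assumptions:

```c
/* Back-of-envelope: share of a link consumed by broadcast/multicast
 * traffic a device cannot opt out of. All inputs are assumptions. */
#include <stdio.h>

int main(void)
{
    const double bcast_pps  = 500.0;            /* assumed broadcasts/sec   */
    const double frame_bits = (64 + 20) * 8.0;  /* min frame + preamble/IFG */
    const double link_bps[] = { 10e6, 100e6 };

    for (int i = 0; i < 2; i++) {
        double used = bcast_pps * frame_bits / link_bps[i];
        printf("%3.0f Mb/s link: %.2f%% consumed by broadcasts\n",
               link_bps[i] / 1e6, used * 100.0);
    }
    return 0;
}
```

The same background load that is noise on a 100Mbps link is ten times as significant on a 10Mbps one, which is the headroom argument in a nutshell.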
Reply to
FLY135

Hi Ingo,

I assume you're trying to gather data to decide whether or not ZWorld should add 100 Mbps to some new Rabbit products?

I have the same questions with 100 Mbps versus gigabit on the products I work on. Even though our hardware can't sustain passing gigabit traffic, we decided to use gigabit NICs for connectivity rather than performance. In our market, it turns out upstream from our product is a T-1 or two - that's such a small pipe (1.5 Mbps) that the difference between 100 and 1000 is totally moot.

I have had many more problems with auto-negotiating correct speed and duplex using (cheap) embedded systems and (cheap) hub/switches than I've had problems running out of bandwidth.

To put it another way, very few users tend to performance test their installed systems and utilize them near their maximum capacity. But anyone can look at the link speed LED and make a snap decision.

My advice: make sure that when the user plugs it in, the little LED comes on. If that means ditching 10 Mbps only controllers, then do it.
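
If the device happens to run Linux, one way to take auto-negotiation out of the picture entirely is to force speed and duplex with the legacy ETHTOOL_SSET ioctl. This is a sketch only -- the interface name and the forced 100/full setting are assumptions, and note that forcing just one end of a link can itself create a duplex mismatch, so treat it as a last resort:

```c
/* Sketch: pin an interface to a fixed speed/duplex to sidestep
 * flaky auto-negotiation. Assumes Linux and the legacy ethtool
 * ioctl interface; "eth0" and 100/full are illustrative choices. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ifreq ifr;
    struct ethtool_cmd ecmd;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    memset(&ecmd, 0, sizeof(ecmd));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&ecmd;

    ecmd.cmd = ETHTOOL_GSET;            /* read current settings  */
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("GSET"); return 1; }

    ecmd.autoneg = AUTONEG_DISABLE;     /* force, don't negotiate */
    ecmd.speed   = SPEED_100;
    ecmd.duplex  = DUPLEX_FULL;
    ecmd.cmd     = ETHTOOL_SSET;        /* write settings back    */
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("SSET"); return 1; }

    close(fd);
    return 0;
}
```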

Kelly

Reply to
Kelly Hall
100Base-T gives lower packet latency; is this important for real-time response in applications ?

Please note that 100Base-T has extended delays between frames to extend the collision domain to a useful value. Simply scaling 10Base-T down by 10 would give a maximum cable length of 18 meters, which is deemed to be on the short side. This also cuts the other way: 100 Mbit/s does not reduce the delays of 10 Mbit/s by a factor of 10.
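
A back-of-envelope check of the scaling (the exact meter figures depend on repeater and PHY delay budgets, so treat these as illustrative):

```latex
t_{\mathrm{slot}} = \frac{512\ \text{bit times}}{R},
\qquad
\frac{512\ \text{bits}}{10\ \text{Mb/s}} = 51.2\ \mu\text{s},
\qquad
\frac{512\ \text{bits}}{100\ \text{Mb/s}} = 5.12\ \mu\text{s}
```

The round-trip propagation across the collision domain has to fit inside one slot time, so keeping the 512-bit minimum frame while multiplying the bit rate by 10 divides the maximum collision diameter by roughly 10, unless extra delay is budgeted between frames as described above.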

They should, but quite a few have fallen victim to the marketing push for 100 Mbit/s.

Currently it's nearly always a dual-speed switch. It is somewhat difficult to get plain hubs - I tried to get one for network sniffing purposes.

Is 10Base-T effectively dead/legacy ?

Maybe, but I hope that it still has many years ahead.

One point in embedded devices is that 100 Mbit/s chips are much more power-hungry than plain 10 Mbit/s ones.

Tauno Voipio tauno voipio (at) iki fi

Reply to
Tauno Voipio

This is an excellent reason to keep 10BT, at least as long as 100TX does not address this point.

Furthermore, there are now many 10BT DSL modems and 10BT NICs spread around the world which are working fine and provide correct service to end users (Internet access, mostly at 512k). How do you think all those parts (plastics, lead, chemicals, ...) will be recycled?

Reply to
sap

You're assuming there are only two devices on the wire -- the unit in question and "something else" talking to it. I.e. if you have several devices sharing the wire -- in one or more applications -- then the load any one device sees may be considerably less (yet still bang the network pretty hard overall).

Smaller processors using older-technology NICs are usually bandwidth limited in themselves. But smarter NICs can offload almost all of this overhead from the processor, cutting down on bcopy()'s, even checksum generation, etc. NICs with clunky implementations ("programmed I/O", 8-bit data paths, etc.) are more of a hassle than the 10 vs. 100 issue.
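
As a concrete example of the work a dumb NIC leaves on the CPU, here is the Internet checksum (RFC 1071) that a smarter NIC computes in hardware -- a minimal sketch of what gets offloaded:

```c
/* RFC 1071 Internet checksum: the per-byte work a CPU must do for
 * every packet when the NIC has no checksum offload. */
#include <stddef.h>
#include <stdint.h>

uint16_t inet_checksum(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t sum = 0;

    while (len > 1) {                 /* sum 16-bit words        */
        sum += ((uint32_t)p[0] << 8) | p[1];
        p += 2;
        len -= 2;
    }
    if (len)                          /* trailing odd byte       */
        sum += (uint32_t)p[0] << 8;

    while (sum >> 16)                 /* fold carries back in    */
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;            /* one's complement of sum */
}
```

Touching every byte like this, on top of the copies, is exactly the load that scales with wire speed rather than with the application's real needs.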

From my experience, the biggest win is a well engineered protocol stack. Look at the time from *application* generating a message to the time it appears on the wire. In many stacks, you'll find that this isn't (realistically) bounded at all! Does the stack provide any QoS guarantees/features? Is it "real time" (no, not *real fast*)? Or, does it have a more traditional "desktop/mainframe" structure?

Depends on your application domain and operating environment. In the past few years, I have been deploying 10Base*2* designs as they are more appropriate to the applications I was addressing. The design in progress currently (totally different market) deliberately offboards the NIC so the consumer can opt for whatever is appropriate for *their* needs.

If you are designing for a "PC" type market, then you'll probably chase 100Mb "commodity" parts. And probably faster processors (streaming audio, video, network infrastructure, etc.). If you're doing process control, you'll probably stick to other technologies (even EIA485, et al.) more appropriate to that market and/or that *environment*.

--don

Reply to
Don

That's pretty much a requirement for 10BaseT and 100BaseT. They are both point-to-point connections.

That's only going to be a consideration if you're using a dumb hub rather than a switch, and dumb hubs are really hard to find these days.

--
Grant Edwards                   grante             Yow!  Yow! It's a hole
                                  at               all the way to downtown
Reply to
Grant Edwards

The point was, the *network* may have more than two nodes in/on it.

I don't care how smart your switch is. If 30 nodes all want to talk to "node 0", the bandwidth of the radial out to node 0 sets the performance of the entire network. And, the reverse (my comment above) is true -- if node 0 is driving the performance of the network, then the load that any of the other (30, in this example) nodes sees is correspondingly throttled by that node.
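
In back-of-envelope terms, with node 0's radial running at rate R shared among N peers (N = 30 from the example above, and a 10 Mb/s radial assumed for illustration):

```latex
r_{\text{per node}} \approx \frac{R}{N}
= \frac{10\ \text{Mb/s}}{30} \approx 333\ \text{kb/s}
```

so each node is throttled to a small fraction of its own link rate, no matter what that link's nominal speed is.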

In many of the distributed applications that I have encountered, this is often the case. "Node 0" might be a SCADA device (e.g., often, a "PC" watching / controlling a network of field devices). Those devices are often dumb and slow. Yet, with enough of them on the wire, the "PC" ends up as the bottleneck despite the fact that the other nodes are transferring data at ridiculously low rates.

That SCADA function may sit behind a "gateway" (explicit or otherwise) and, AFTER-THE-FACT impose limits on the traffic by its mere presence.

Also, there seems to be a steady trend towards replacing wired networks with wireLESS technologies. Suddenly, the fact that a switch could give *apparent* bandwidth multiplication disappears as the "network" reverts to more of a true star topology -- with, potentially, other UNEXPECTED devices competing for bandwidth. Sure, you can design with the *expectation* of a particular communication technology; but you may eventually find yourself facing the prospect of that "communication subsystem" being treated increasingly like a commodity item -- swapped out by marketeers for whatever is "en vogue" down the road.

So, unless you want to "special case" the *types* of traffic you are going to tolerate and the *virtual* topology of the network (a function of the application domain), expecting a switch to magically alter the apparent available bandwidth is wishful thinking.

And, unless you are designing a *truly* distributed application with roughly level network loads, you are going to find some node(s) drive the overall performance of the network. And, indirectly, *limit* the effective data rates for all other nodes, regardless of the technology and topology used.

--don

Reply to
Don

[... lots snipped...]

Don, you're missing the point. The rest of the network is quite irrelevant to the question being discussed here, which was about the connection between some embedded device and its neighboring node in the network, under the premise that 10BaseT would be sufficient for the bandwidth needs of that device.

If some "central" node's link to the network becomes the bottleneck, upgrading the link of the embedded device we're discussing here will do exactly nothing to solve that problem --- it's strictly the link between that central node and its peer (e.g. the switch) that has to be examined then, and possibly the internal throughput of the switch.

The only case where a 10BaseT device's link could, theoretically, become a bottleneck would be for traffic patterns dominated by broadcast transmissions. But if that's the case, you have much worse problems to worry about than the link type of that individual device.

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

Or multicast.

Reply to
FLY135
