About bandwidth on USB and GbE

Hi, I have some doubts about the REAL data rate on USB and gigabit Ethernet. For example, USB 2.0 bandwidth is 480 Mb/s: is that all data, or data plus protocol information? I have the same doubts with GbE: what is the real data rate? Thanks
Reply to
fasf

It's worse than just data plus protocol -- there's also bit-stuffing and backwards compatibility with USB 1.1.

See here:

formatting link

People who like FireWire point out that, in real-world usage, 400Mbps FireWire pretty much always beats the performance of 480Mbps USB 2.0.

Note that gigabit Ethernet can do markedly better if you're only talking point-to-point links (and carefully choose your network interface card and PC to be able to handle the data in the first place), whereas USB doesn't usually get much better no matter the application.

---Joel

Reply to
Joel Koltner

We've been trying to figure out how fast we can run a PCI Express link between a dedicated PowerPC computer and a dedicated FPGA. We need to know packet latency and throughput after that. It's really hard to find numbers on this stuff.

We're also running experiments on actual USB throughput between a Windows PC and an ARM processor. You have to do these experiments yourself!

John

Reply to
John Larkin

Especially when your machines are your bottleneck!

WHAT PowerPC machine? What are its buses and its "north" and "south" bridge chipsets? IOW, the throughput of that machine is very likely slower than your DUT.

You are behind. The numbers are already out there; they are real and in everyday practical use.

Reply to
TheGlimmerMan

Well, the bit rate is 1.25 Gb/s, to compensate for the 8b/10b encoding that's done to balance the transmission and limit runs of the same bit to five.

The gigabit that's left after that still carries protocol information, as do 100 Mb and 10 Mb Ethernet - Ethernet has protocols, by definition. For certain types of traffic under certain conditions, "jumbo frames" can raise effective data throughput by reducing the fraction of the bit rate consumed by protocol overhead. Certain types of data are inherently inefficient.

Add in a poorly designed network and things slow down in a hurry.
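A quick back-of-the-envelope sketch of that framing overhead (the per-frame byte counts are standard Ethernet figures; the 40 B of IPv4+TCP headers is an assumption for plain TCP traffic):

```python
# Sketch: Ethernet payload efficiency for standard vs. jumbo frames.
# Per-frame wire overhead: preamble+SFD (8 B), MAC header (14 B),
# FCS (4 B), inter-frame gap (12 B) = 38 B; IPv4+TCP headers add 40 B.
def payload_efficiency(mtu, l3l4_overhead=40, wire_overhead=38):
    payload = mtu - l3l4_overhead      # TCP payload bytes per frame
    on_wire = mtu + wire_overhead      # bytes the frame occupies on the wire
    return payload / on_wire

for mtu in (1500, 9000):               # standard vs. jumbo MTU
    eff = payload_efficiency(mtu)
    print(f"MTU {mtu}: {eff:.1%} -> {eff * 1000:.0f} Mb/s of TCP payload on GbE")
```

Note the 8b/10b coding sits below this: the 1.25 GBd line rate already reduces to the 1 Gb/s data rate before framing overhead is counted.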

--
Cats, coffee, chocolate...vices to live by
Reply to
Ecnerwal

Aren't computers usually the bottleneck to computing?

The problem is that PCs and their busses are designed for throughput, not for latency. And it's hard to figure out the latencies.

The DUT will be an FPGA. No computer+OS can keep up with pure hardware.

OK, give us some numbers.

Oops, now I remember: you don't do numbers.

John

Reply to
John Larkin

We can take that answer as "don't know."

John

Reply to
John Larkin

There was a recent 'record' set when someone with a "Cell"-based IP transactor got 8.65 Gb/s out of a 10 Gb/s link.

Overhead is an animal we are already fully aware of. We have known for decades that the stated rate of any given channel is not what any knowledgeable man would ever expect or want to see. There are elements in place to manage the data being sent without error; that overhead is worth the price.

10GbE is common at work now. Especially within a system.

You can manage a whole bunch of 1 Gb external channels a lot better if all your local hooks are 10 Gb.

So, that is like OC-192, or STM-64 in current lingo. Yes, it is optical. The next jump is 40 Gb/s internals.

Reply to
TheQuickBrownFox

The DUT we were talking about was the PCIe interconnect, IIRC.

You spoke of testing its capacity. That makes IT the DUT and THAT is what I refer to.

Your PC is too slow to give you the results you desire, namely knowledge of just how fast that particular link can be pushed.

That PCIe should be tertiary to the PCI main bus too, no?

Did you bother to perform any simple standards research?

Reply to
TheGlimmerMan

But it does leave the OP confused at a much higher level.

-- Bill Sloman, Nijmegen

Reply to
Bill Sloman

USB is half-duplex, i.e. any extra latency in the Tx/Rx/Tx turnaround sequence can kill performance, especially at high bit rates.

While (unreliable) isochronous transfers can utilize most of the bandwidth, that is of course useless for storing data on a USB stick, due to the unreliability.

This latency is not as much of an issue on full-duplex connections (such as 1/10 GbE), unless the propagation delay is very large (e.g. satellite or transoceanic fiber links), in which case the maximum TCP/IP window size is too small and better protocols must be used.
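The window-size limit can be sketched numerically; the RTT figures below are rough illustrative assumptions:

```python
# Sketch: classic TCP throughput cap. A sender can have at most one
# receive window of unacknowledged data in flight, so throughput is
# limited to window / RTT no matter how fast the link is.
def tcp_throughput_bps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds

window = 64 * 1024    # classic 64 KiB limit without window scaling
for name, rtt in (("LAN", 0.0005),
                  ("transoceanic fiber", 0.08),
                  ("geostationary satellite", 0.5)):
    bps = tcp_throughput_bps(window, rtt)
    print(f"{name:>24}: RTT {rtt * 1000:6.1f} ms -> {bps / 1e6:8.2f} Mb/s max")
```

On a sub-millisecond LAN the cap is above gigabit rates, so it never shows; over a satellite hop the same window limits each connection to about 1 Mb/s.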

Reply to
upsidedown

It wasn't a yes/no question, dumbfuck.

Reply to
TheGlimmerMan

Finally, someone that knows WTF is going on.

We are doing 130Gb/s from satellite and the latencies are low, since the customer is directly linked.

Reply to
TheQuickBrownFox

They must be in knee-high orbits if latency is low.

John

Reply to
John Larkin

In order to fully utilize a 130 Gb/s link with geosynchronous satellites using standard TCP/IP, you would have to divide the transfer into about 130,000 parallel TCP/IP streams of about 1 Mb/s each :-), to avoid the problems with the limited window size.
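A rough check of that figure, assuming the classic 64 KiB window and a ~500 ms geostationary round trip:

```python
# Sketch: how many classic TCP streams it takes to fill 130 Gb/s through
# a geosynchronous satellite, given the window / RTT cap per stream.
import math

link_bps = 130e9
window_bits = 64 * 1024 * 8            # classic 64 KiB window, in bits
rtt = 0.5                              # geosynchronous round trip, seconds
per_stream_bps = window_bits / rtt     # ~1.05 Mb/s ceiling per stream
streams = math.ceil(link_bps / per_stream_bps)
print(f"{per_stream_bps / 1e6:.2f} Mb/s per stream -> {streams} streams")
```

That lands at roughly 124,000 streams, the same order as the figure above.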

Reply to
upsidedown

On a sunny day (Tue, 15 Feb 2011 06:43:18 -0800) it happened John Larkin wrote:

LOL, JUST what I was thinking. hehe :-)

Reply to
Jan Panteltje

Using GbE, we routinely do 560 Mb/s using UDP. On the receiving end are Windoze computers running W2k, XP, or W7. To get this throughput, you need to set the receive buffers on the NIC to the largest value it can handle (easy with Intel NICs; Broadcom NICs require a registry edit) and increase the Winsock receive buffer to something much larger than the paltry default. In tests, we have gotten higher, perhaps something around 700 Mb/s (I have forgotten the actual number).
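The socket-buffer enlargement described above can be sketched portably; the 8 MB request is an arbitrary illustrative value, and the OS may clamp it:

```python
# Sketch: enlarging the kernel/Winsock receive buffer for a fast UDP
# stream, so bursts don't overflow the default buffer and drop datagrams.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
# Read back what the OS actually granted (it may clamp or round the request).
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"receive buffer granted: {granted} bytes")
sock.close()
```

On Linux the grant is capped by `net.core.rmem_max`; on Windows the Winsock default is what the post above calls paltry.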

Reply to
qrk

The PowerPC dual-core that my customer intends to use has PCI Express on the CPU chip, so it doesn't go through legacy PCI or through a bridge chip first. He wants to connect to our box+FPGA through 8-lane cabled Gen1 PCI Express. We are trying to figure out the latency when his CPU reads or writes a block of data over the PCIe link. It's a realtime control application and it matters.

There's not much data available on that.

Most of the stuff we find is number-free hand-waving (sound familiar?) like this...

formatting link

Actually, the best one we've found online is a paper that tries to show how slow PCIe is...

formatting link

which has some actual latency numbers, in nanoseconds. In reality, we're going to have to test this on a prototype board.
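Pending those prototype measurements, the theoretical throughput ceiling of a Gen1 x8 link can at least be sketched; the 24-byte TLP overhead is a typical assumed figure, and this says nothing about latency, which is the hard part:

```python
# Sketch: theoretical goodput ceiling of a PCIe Gen1 x8 link.
# Gen1 runs 2.5 GT/s per lane with 8b/10b coding -> 2.0 Gb/s of data
# per lane; TLP framing then takes a per-packet cut.
def pcie_gen1_goodput_MBps(lanes, payload_bytes, tlp_overhead_bytes=24):
    # ~24 B per TLP: framing + sequence number + 3DW header + LCRC.
    raw_MBps = lanes * 2.5e9 * (8 / 10) / 8 / 1e6  # data bytes/s after coding
    efficiency = payload_bytes / (payload_bytes + tlp_overhead_bytes)
    return raw_MBps * efficiency

for payload in (64, 128, 256):                     # common max-payload sizes
    print(f"x8, {payload:3d} B payload: "
          f"{pcie_gen1_goodput_MBps(8, payload):.0f} MB/s")
```

The raw ceiling is 2 GB/s for x8; small max-payload settings eat a large fraction of it, which is one reason measured numbers fall well short of the headline rate.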

John

Reply to
John Larkin

The latency between here and NIST in Boulder over a POTS modem, which uses standard satellite linkage, is only 45 ms with delay compensation, including for the PC itself. That is over about four hops between here and there. Take ten or more ms of that away for the PC, and that leaves a pretty damned fast round trip, all things considered.

I have been on gaming sites where each connection added a couple hundred milliseconds, and the kids played just fine.

What magical number is it that you think you are going to or need to achieve?

Reply to
TheQuickBrownFox

snipped retarded baby bullshit from the bullshit baby retard.

Fuck off, you little retarded bitch. Come back when the spew coming out of you is not total horseshit. That should result in never seeing you again.

Reply to
TheGlimmerMan
