It's worse than just data+protocol -- it's also bit-stuffing and backwards compatibility with USB 1.1.
See here:
formatting link
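To put a rough number on the bit-stuffing part: USB inserts a stuff bit after six consecutive ones, so a worst-case (all-ones) payload grows by a sixth on the wire. A quick sketch in C; the msb-first bit walk and the 60-byte buffer are just illustrative choices:

/* Illustrative sketch of USB's bit-stuffing rule: after six consecutive
   1s, a 0 is inserted, so worst-case payload expands by 1/6. */
#include <stdint.h>
#include <stdio.h>

/* Count how many stuff bits a buffer would incur (msb-first walk here
   is an assumption; real USB shifts lsb-first, but the count is the
   same for the all-ones worst case). */
static unsigned stuff_bits(const uint8_t *buf, size_t len)
{
    unsigned run = 0, stuffed = 0;
    for (size_t i = 0; i < len; i++) {
        for (int b = 7; b >= 0; b--) {
            if ((buf[i] >> b) & 1) {
                if (++run == 6) { stuffed++; run = 0; } /* stuffed 0 resets the run */
            } else {
                run = 0;
            }
        }
    }
    return stuffed;
}

int main(void)
{
    uint8_t worst[60];                      /* all ones: worst case */
    for (size_t i = 0; i < sizeof worst; i++) worst[i] = 0xFF;
    unsigned s = stuff_bits(worst, sizeof worst);
    printf("%zu payload bits -> %u stuff bits (%.1f%% overhead)\n",
           sizeof worst * 8, s, 100.0 * s / (sizeof worst * 8));
    return 0;
}

Real traffic stuffs far fewer bits than the 16.7% worst case, but it is pure overhead on top of the protocol fields.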
People who like FireWire point out that, in real-world usage, 400Mbps FireWire pretty much always beats the performance of 480Mbps USB 2.0.
Note that gigabit Ethernet can do markedly better if you're only talking point-to-point links (and carefully choose your network interface card and PC to be able to handle the data in the first place), whereas USB doesn't usually get much better no matter the application.
We've been trying to figure out how fast we can run a PCI Express link between a dedicated PowerPC computer and a dedicated FPGA. On top of that, we need to know packet latency and throughput. It's really hard to find numbers on this stuff.
We're also running experiments on actual USB throughput between a Windows PC and an ARM processor. You have to do these experiments yourself!
Well, the bit rate is 1.25 Gb/s, to compensate for the 8b/10b encoding that's done to balance the transmission and limit runs of the same bit value to five.
The gigabit that's left after that still carries protocol overhead, as do 100Mb and 10Mb Ethernet - Ethernet has protocols, by definition. For certain types of traffic under certain conditions, "jumbo frames" can raise effective data throughput by shrinking the fraction of the bit rate taken by protocol. Certain types of data - lots of small packets, say - are inherently inefficient.
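Here's a back-of-the-envelope sketch of both overheads; the payload sizes are just examples, and the 38 bytes per frame are the standard preamble+SFD, MAC header, FCS, and inter-frame gap:

#include <stdio.h>

int main(void)
{
    double line_rate = 1.25e9;              /* GbE serial rate, b/s     */
    double coded = line_rate * 8.0 / 10.0;  /* 8b/10b leaves 1 Gb/s     */

    /* Fixed cost per frame: preamble+SFD 8, MAC header 14, FCS 4,
       inter-frame gap 12 = 38 bytes regardless of payload size.  */
    double overhead = 8 + 14 + 4 + 12;
    double payloads[] = { 46, 1500, 9000 }; /* minimum, standard, jumbo */

    for (int i = 0; i < 3; i++) {
        double eff = payloads[i] / (payloads[i] + overhead);
        printf("%4.0f-byte payload: %4.1f%% efficient -> %3.0f Mb/s\n",
               payloads[i], 100.0 * eff, coded * eff / 1e6);
    }
    return 0;
}

Minimum-size frames only get you about 55% of the gigabit, standard frames about 97%, and jumbo frames about 99.6% - which is why jumbo frames only help traffic that can actually fill them.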
Add in a poorly designed network and things slow down in a hurry.
There was a recent 'record' set when someone with a "Cell"-based IP transactor got 8.65 Gb/s out of a 10 Gb/s link.
Overhead is an animal that we are already fully aware of. We have known for decades that the stated rate of any given channel is not what any knowledgeable man would ever expect or want to see. There are elements in place to manage the data being sent without error. That overhead is worth the price.
10GbE is common at work now. Especially within a system.
You can manage a whole bunch of 1Gb external channels a lot better if all your local hooks are 10Gb.
So that is like OC-192, or STM-64 in current lingo. Yes, it is optical. The next jump is 40Gb/s internals.
USB is half-duplex, i.e., any extra latency in the Tx/Rx/Tx turnaround sequence can kill the performance, especially at high bit rates.
While isochronous transfers can utilize most of the bandwidth, they are unreliable by design, which of course makes them useless for storing data onto a USB stick.
This latency is not so much an issue on full-duplex connections (such as 1/10GbE), unless the propagation delay is very large (e.g. satellite or transoceanic fiber links), in which case the maximum TCP/IP window size is too small and better protocols must be used.
In order to fully utilize a 130 Gb/s link over geosynchronous satellites using standard TCP/IP, you would have to divide the transfer into about 130000 parallel TCP/IP streams of 1 Mbit/s each :-), in order to avoid the problems with the limited window size.
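Both limits are easy to put rough numbers on. A minimal sketch, where the 5 us USB turnaround time and the 0.5 s GEO round trip are assumed illustrative figures:

#include <stdio.h>

int main(void)
{
    /* Half-duplex USB: a 512-byte bulk packet at 480 Mb/s plus an
       assumed 5 us of turnaround/handshake dead time per transaction. */
    double pkt_bits  = 512.0 * 8;
    double wire_time = pkt_bits / 480e6;     /* ~8.5 us on the wire */
    double dead_time = 5e-6;                 /* assumed turnaround  */
    printf("USB effective: %.0f Mb/s out of 480\n",
           pkt_bits / (wire_time + dead_time) / 1e6);

    /* Full-duplex TCP over GEO: at most one window in flight per RTT. */
    double window_bits = 65535.0 * 8;        /* max without scaling */
    double rtt = 0.5;                        /* GEO round trip, s   */
    double per_stream = window_bits / rtt;   /* ~1 Mb/s per stream  */
    printf("per-stream TCP limit: %.2f Mb/s\n", per_stream / 1e6);
    printf("streams to fill 130 Gb/s: %.0f\n", 130e9 / per_stream);
    return 0;
}

The window math is just throughput = window / RTT: with 64 KB in flight and a half-second round trip, each stream tops out around 1 Mb/s no matter how fat the pipe is.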
Using GbE, we routinely do 560 Mb/s using UDP. On the receiving end are Windoze computers running W2k, XP, or W7. To get this throughput, you need to set the receive buffers on the NIC to the largest value it can handle (easy to do with Intel NICs, Broadcom NICs require a registry edit) and increase the Winsock receive buffer to something much larger than the paltry default. In tests, we have gotten higher, perhaps something around 700 Mb/s (I have forgotten the actual number).
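For anyone wanting to reproduce this, the buffer part of the tuning is a plain setsockopt() call on the receiving socket. A minimal sketch assuming Winsock 2; the 4 MB figure is an arbitrary example, and the Broadcom registry edit is a separate step:

/* Enlarge the socket receive buffer before the high-rate UDP stream
   starts.  Winsock 2 assumed; link against ws2_32. */
#include <winsock2.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s == INVALID_SOCKET) return 1;

    /* The default SO_RCVBUF is tiny; at ~560 Mb/s even a brief
       scheduling stall will overflow it and drop datagrams. */
    int rcvbuf = 4 * 1024 * 1024;
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&rcvbuf, (int)sizeof rcvbuf) != 0)
        fprintf(stderr, "setsockopt failed: %d\n", WSAGetLastError());

    /* ... bind() and the recvfrom() loop go here as usual ... */

    closesocket(s);
    WSACleanup();
    return 0;
}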
The PowerPC dual-core that my customer intends to use has PCI Express on the CPU chip, so it doesn't go through legacy PCI or through a bridge chip first. He wants to connect to our box+FPGA through 8-lane cabled Gen1 PCI Express. We are trying to figure out the latency when his CPU reads or writes a block of data over the PCIe link. It's a realtime control application and it matters.
There's not much data available on that.
Most of the stuff we find is number-free hand-waving (sound familiar?) like this...
formatting link
Actually, the best one we've found online is a paper that tries to show how slow PCIe is...
formatting link
which has some actual latency numbers, in nanoseconds. In reality, we're going to have to test this on a prototype board.
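The plan for the prototype test itself is nothing fancy: map a BAR on the FPGA endpoint and time non-posted reads, since each read is a full round trip over the link. A rough sketch assuming a Linux host where the BAR is reachable through sysfs; the device path and register offset are hypothetical:

/* Time averaged 32-bit reads across the PCIe link via an mmap()ed BAR. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define ITERATIONS 100000

int main(void)
{
    /* Hypothetical device/BAR path; substitute the real FPGA endpoint. */
    int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    volatile uint32_t *bar = (volatile uint32_t *)
        mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (bar == MAP_FAILED) { perror("mmap"); return 1; }

    struct timespec t0, t1;
    uint32_t sink = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERATIONS; i++)
        sink += bar[0];          /* each read is a round trip on the link */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg read latency: %.1f ns (checksum %u)\n",
           ns / ITERATIONS, sink);

    munmap((void *)bar, 4096);
    close(fd);
    return 0;
}

Writes are posted on PCIe, so they need a different trick (e.g. reading back after a burst); reads give the latency number directly.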
The latency between here and NIST at Boulder over a POTS modem, which goes over standard satellite linkage, works out to only 45 ms with the delay compensation, including for the PC itself. That is for about four hops between here and there. Take about ten ms or more of that away for the PC, and that leaves a pretty damned fast ping and return, all things considered.
I have been on gaming sites where latency was a couple hundred milliseconds for each connection, and the kids played just fine.
What magical number is it that you think you are going to or need to achieve?