I am looking for someone who knows the internals of the TCP implementation on Linux (2.6.10 or thereabouts). Here's a brief overview of the issue I'm trying to resolve:
Background: I'm trying to optimize transfers over a local GigE connection. The Linux machine (MIPS) is supposed to send 500 KB+ of data with a single send() call from the test application. The socket send buffer size is set to more than 1 MB. Nagle is disabled (not that it should matter in this case). I've essentially disabled congestion control by initializing tcp_cwnd to something like 128. I've done everything I can think of to ensure the kernel and TCP stack have no reason to do anything but send this chunk of TCP data as fast as possible.
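For reference, the user-space side of that setup (big send buffer, Nagle off) looks roughly like the sketch below. This is an assumption about the test application, not its actual code, and the tcp_cwnd initialization is a kernel-side change that can't be done from user space, so it isn't shown:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Ask for a >1 MB send buffer before connecting.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)

# Disable Nagle, as in the test setup described above.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Note: Linux doubles the requested SO_SNDBUF internally and caps it at
# net.core.wmem_max, so getsockopt() will not report exactly 1 MB.
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(f"SO_SNDBUF={sndbuf} TCP_NODELAY={nodelay}")
```

If the effective SO_SNDBUF comes back smaller than requested, net.core.wmem_max is the cap to raise.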
Problem: Whenever the Linux TCP stack receives a packet from the peer indicating a larger window size, there is a delay of about 350 microseconds before additional TCP processing occurs on this connection. This happens before the peer's window ever gets small enough to stall the sender, so it's not a case of the window closing and Linux having to stop sending data to the peer.
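A minimal way to time the whole transfer end-to-end (the packet-level gaps themselves need a capture tool) is to wrap the send in a monotonic clock. This is a loopback sketch I'm assuming resembles the test harness, not the actual test application, and loopback won't reproduce the GigE window dynamics:

```python
import socket
import threading
import time

PAYLOAD = b"\x00" * (500 * 1024)  # 500 KB, matching the test transfer

# Throwaway receiver on loopback so the sketch is self-contained.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

received = bytearray()

def drain():
    conn, _ = srv.accept()
    while len(received) < len(PAYLOAD):
        chunk = conn.recv(65536)
        if not chunk:
            break
        received.extend(chunk)
    conn.close()

t = threading.Thread(target=drain)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
cli.connect(srv.getsockname())

start = time.monotonic()
cli.sendall(PAYLOAD)  # one logical send of the whole buffer
elapsed_ms = (time.monotonic() - start) * 1000
cli.close()
t.join()
print(f"sent {len(PAYLOAD)} bytes in {elapsed_ms:.3f} ms")
```

To see the per-ACK 350 µs gaps themselves, a tcpdump capture on the MIPS box correlated with these timestamps is the more direct tool.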
Analysis: Doing the math, this chunk should be able to be transferred in under 5 milliseconds (really, closer to 4 msec). Instead, it's taking around 20 msec. There are 41 of these window-opening delay events in my test transfer, adding roughly 14 msec to the transfer time.
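That arithmetic, using the sizes and counts from the post, works out as follows (payload bits only; Ethernet/IP/TCP header and inter-frame overhead push the ideal figure a bit higher, toward the "under 5 ms" number):

```python
BYTES = 500 * 1024        # size of the single send(), in bytes
LINE_RATE = 1e9           # GigE line rate, bits per second

# Ideal payload-only transfer time.
ideal_ms = BYTES * 8 / LINE_RATE * 1000

# Overhead from the observed window-update stalls.
delay_events = 41
per_event_us = 350
stall_ms = delay_events * per_event_us / 1000

print(f"ideal: {ideal_ms:.3f} ms, stall overhead: {stall_ms:.2f} ms")
```

So the 41 stalls alone account for most of the gap between the ~4 ms ideal and the ~20 ms observed.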
I don't know if I've explained this as clearly as I'd like. I could really use a quick chat with someone who knows the workings of the Linux stack inside and out (especially with regard to congestion control and ACK/window processing).
Patrick

========= For LAN/WAN Protocol Analysis, check out PacketView Pro! =========
Patrick Klos                  Email: snipped-for-privacy@klos.com
Klos Technologies, Inc.       Web: