Lewis Donzis
12/12/2008 1:32 AM
While we're on the subject of NetBSD compatibility...
I haven't done any thorough research into the following, so this is as much a question about where NetBSD stands as anything else.
In some "really modern" TCP/IP stacks, the window size is no longer something you "set" to some maximum value, but is a
range of values and it dynamically adjusts to whatever is necessary to assure good performance with the smallest
possible memory usage. Obviously, on some connections with large bandwidth * delay products, the only way to get good
performance is with a very large TCP windows.
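Just to put numbers on it (these are made-up but not unreasonable figures for an Internet path): 10 Mbit/s of bandwidth with 100 ms of round-trip delay needs about 10,000,000 / 8 * 0.1 = 125,000 bytes in flight to keep the pipe full, so the window has to be at least that big.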
Anyway, at a minimum, modern Linux and (ugh) Vista appear to do this, and I just assumed that NetBSD would also do so.
However... one thing I've noticed over the years, even on 6.3.2, is that setting the TCP send or receive window size much
over 200k causes any connection attempt to fail with an error. For example:
root@qdev:/root# sysctl -w net.inet.tcp.sendspace=233017
net.inet.tcp.sendspace: 233017 -> 233017
root@qdev:/root# telnet 1.2.3.4
Trying 1.2.3.4...
telnet: socket: No buffer space available
(Note: 233016 works.)
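For what it's worth, 233016 looks suspiciously like the classic BSD socket buffer reservation limit. In 4.4BSD-derived stacks, sbreserve() refuses any reservation larger than sb_max * MCLBYTES / (MSIZE + MCLBYTES), and with the usual defaults (sb_max of 256 KB, 2048-byte clusters, 256-byte mbufs) that works out to exactly 233016. I haven't checked the source here, so treat those constants as guesses about this port, but the arithmetic lines up:

/* Sketch: reproduce the 4.4BSD sbreserve() ceiling with assumed defaults. */
#include <stdio.h>

int main(void)
{
    unsigned long sb_max   = 256UL * 1024;  /* assumed default of kern.sbmax */
    unsigned long mclbytes = 2048;          /* assumed mbuf cluster size     */
    unsigned long msize    = 256;           /* assumed mbuf size             */

    /* sbreserve() rejects anything above sb_max * MCLBYTES / (MSIZE + MCLBYTES) */
    printf("%lu\n", sb_max * mclbytes / (msize + mclbytes));  /* prints 233016 */
    return 0;
}

If that's really what's going on, then raising kern.sbmax (assuming this port exposes it) should move the ceiling rather than it being hard-wired at around 200k.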
So anyway, I figured with all of this new NetBSD portability, this would certainly be fixed and we'd also have the fancy
new auto-dynamic stuff, but apparently, no such luck. The exact same thing happens on 6.4.0.
This is not a really terrible problem for us, but it would be nice to hear what the roadmap is for the NetBSD stack.
In our particular application, 99% of our TCP connections are relatively short range, so a small window works fine. But
we also have a few connections that go over the Internet and move a lot of data, and it takes a really long time unless
you have a pretty big TCP window.
For the time being, we just set it to 64k, but that's a compromise; the best solution is for the stack to figure out
automatically what it needs.
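In the meantime, one thing we may try is leaving the global defaults small and asking for a bigger buffer on just the long-haul connections with setsockopt(). This is only a sketch, the 128k figure and the address are arbitrary, and presumably it's still subject to the same ceiling as the sysctl, but it avoids burning memory on the 99% of connections that don't need it:

/* Sketch: request a larger window on one long-haul connection only. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    int bufsz = 128 * 1024;   /* per-connection request, value is illustrative */

    /* Set buffer sizes before connect() so the SYN can advertise window scaling. */
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &bufsz, sizeof(bufsz)) < 0)
        perror("SO_SNDBUF");
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bufsz, sizeof(bufsz)) < 0)
        perror("SO_RCVBUF");

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(23);                  /* telnet, as in the example above */
    sin.sin_addr.s_addr = inet_addr("1.2.3.4");

    if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0)
        perror("connect");
    close(s);
    return 0;
}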
Thanks,
lew