Lewis Donzis
12/12/2008 1:32 AM
post18475
While we're on the subject of NetBSD compatibility...
I haven't done any thorough research into the following, so it's equally a question about where NetBSD is at.
In some "really modern" TCP/IP stacks, the window size is no longer something you "set" to a fixed maximum; it's a range of values, and the stack dynamically adjusts within that range to ensure good performance with the smallest possible memory usage. Obviously, on connections with a large bandwidth * delay product, the only way to get good performance is with a very large TCP window.
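To make the bandwidth * delay point concrete, here's a quick back-of-the-envelope calculation (my own illustration, not from the thread): the minimum window a path needs is roughly its bandwidth times its round-trip time.

```shell
# bdp_bytes BITS_PER_SEC RTT_MS -> minimum TCP window in bytes
# (bandwidth / 8 gives bytes per second; then scale by RTT in seconds)
bdp_bytes() {
    echo $(( $1 / 8 * $2 / 1000 ))
}

# A 10 Mbit/s path with 100 ms RTT needs about a 125 KB window,
# already well past the common 64 KB default
bdp_bytes 10000000 100   # prints 125000
```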
Anyway, at a minimum, modern Linux and (ugh) Vista appear to do this, and I just assumed that NetBSD would also do so.
However... one thing I noticed over the years, even on 6.3.2, is that setting the TCP send or receive window size much over 200 KB causes any connection attempt to fail with an error. For example:
root@qdev:/root# sysctl -w net.inet.tcp.sendspace=233017
net.inet.tcp.sendspace: 233017 -> 233017
root@qdev:/root# telnet 1.2.3.4
Trying 1.2.3.4...
telnet: socket: No buffer space available
(Note: 233016 works.)
So anyway, I figured with all of this new NetBSD portability, this would certainly be fixed and we'd also have the fancy
new auto-dynamic stuff, but apparently, no such luck. The exact same thing happens on 6.4.0.
This is not a really terrible problem for us, but it would be nice to hear what the roadmap is for the NetBSD stack.
In our particular application, 99% of our TCP connections are relatively short range, so a small window works fine. But
we also have a few connections that go over the Internet and move a lot of data, and it takes a really long time unless
you have a pretty big TCP window.
For the time being, we just set it to 64k, but that's a compromise and the best is where the stack automatically figures
out what it needs.
Thanks,
lew
Robert Craig
12/12/2008 9:10 AM
post18490
Hi Lew:
When you bump up the sendspace, the app tries to allocate a send buffer
that exceeds the default maximum. Luckily, this maximum is now a
run-time configurable setting.
# sysctl -w kern.sbmax=500000
This got telnet working for me again.
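Putting the two commands together, the full sequence would presumably be the following (ordering is my assumption based on Robert's description: the send-buffer reservation appears to be checked against kern.sbmax when the socket is created, so the ceiling has to go up first):

```shell
# Raise the socket-buffer ceiling first (500000 is the value used above)
sysctl -w kern.sbmax=500000
# Now a default send window above the old cap is accepted at socket creation
sysctl -w net.inet.tcp.sendspace=233017
```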
Robert
Lewis Donzis
12/13/2008 11:20 AM
post18572
Re: RE: large TCP windows
Thanks, that helps.
What about the new "dynamic window stuff" (I'm not sure of the proper terminology)?
The nice thing about allocating the window dynamically is that for "nearby" connections, you don't waste a bunch of
memory, but for far-away connections, you still get good performance.
Thanks,
lew
Robert Craig
12/15/2008 5:44 PM
post18664
RE: RE: large TCP windows
Hi Lew:
And THAT I'm not so sure of. Do you happen to have any web
sites that I can take a look at that describe what's being done? I take
it that it modifies the window size based on the RTT or something along
those lines?
Robert.
Lewis Donzis
12/17/2008 1:49 AM
post18794
Re: RE: RE: large TCP windows
I've been trying to get a handle on it myself. There are a number of keywords, like "dynamic right-sizing" that seem to
have something to do with it. Just by way of example, Ubuntu 6 did not have this capability and you had to tune the
window size the normal way. With Ubuntu 8, it works pretty much out of the box with auto-tuning. You can, however, set
various parameters that control the minimum and maximum window sizes as well as conditions under which "pressure" will
be applied to reduce the window size.
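For comparison, on a Linux box the knobs Lew describes look roughly like this (the values shown are illustrative defaults, not recommendations):

```shell
# Auto-tuning ranges for receive/send buffers: min, default, max (bytes)
sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"
# Enable receive-buffer auto-tuning (on by default in modern kernels)
sysctl -w net.ipv4.tcp_moderate_rcvbuf=1
# Global memory thresholds (in pages) at which "pressure" kicks in
sysctl -w net.ipv4.tcp_mem="196608 262144 393216"
```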
In addition, it appears to be widely understood in the Microsoft world that, while you had normal control of the window size up through XP, Windows Vista now performs this automatic tuning. With Vista you no longer need to tune the window size, and the myriad of tools for doing so under XP and earlier no longer work.
Here are some links that may be related:
http://www.csm.ornl.gov/~dunigan/netperf/auto.html
http://public.lanl.gov/radiant/pubs/hptcp/hpdc02-drs.pdf
http://woozle.org/~mfisk/papers/tcpwindow-lacsi.pdf
(Note the reference to the NetBSD stack in the first link.)
In short, I've been trying to find the "this is the way everyone is doing it" link/article, but so far, that's proving
elusive. There must be an explanation somewhere of what was put into the Linux stack, and Microsoft does have some
articles about the Vista stack.
lew
Robert Craig
12/17/2008 2:44 PM
post18881
RE: RE: RE: large TCP windows
Hi Lew:
http://mail-index.netbsd.org/source-changes/1997/12/11/msg026916.html
Says that auto-tuning of a sort for the initial congestion window size
has been in there since 97.
http://netbsd.gw.com/cgi-bin/man-cgi?sysctl+7+NetBSD-current
Look for the "tcp.init_win" setting (that value exists and is settable in our stack). The default value is 0, so it should be doing some sort of auto-tuning already.
There's also the socket buffer tuning which may be more in line with
what you're thinking of.
http://mail-index.netbsd.org/tech-net/2007/02/04/0006.html
http://mail-index.netbsd.org/current-users/2008/03/13/msg001361.html
http://mail-index.netbsd.org/tech-net/2007/12/06/0003.html
This didn't make it into the 4.0 release that our implementation is based on.
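If and when that work does get pulled in, the knobs should look something like the following (names taken from the NetBSD-current sysctl(7) page linked above, so treat them as unverified for our stack):

```shell
# Initial congestion window; 0 = let the stack auto-select
sysctl -w net.inet.tcp.init_win=0
# Socket-buffer auto-tuning from the post-4.0 NetBSD work
sysctl -w net.inet.tcp.recvbuf_auto=1
sysctl -w net.inet.tcp.sendbuf_auto=1
sysctl -w net.inet.tcp.recvbuf_max=8388608
sysctl -w net.inet.tcp.sendbuf_max=8388608
```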
Robert.
Lewis Donzis
12/20/2008 11:38 AM
post19099
Re: RE: RE: RE: large TCP windows
Yes, that sure sounds like it. You set minimum and maximum boundaries and it automatically uses "the right amount of
memory" to optimize performance over a given connection.
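As a sketch of that min/max behavior (my own illustration of the dynamic right-sizing idea from the papers linked earlier in the thread, not code from any actual stack): the receiver estimates how much data arrives per RTT and advertises roughly twice that, clamped to the configured bounds.

```shell
# drs_window BYTES_PER_RTT MIN MAX -> advertised window in bytes
# Advertise ~2x the observed per-RTT delivery rate, clamped to [MIN, MAX]
drs_window() {
    w=$(( 2 * $1 ))
    [ "$w" -lt "$2" ] && w=$2
    [ "$w" -gt "$3" ] && w=$3
    echo $w
}

# A LAN peer delivering 8 KB per RTT stays at the floor...
drs_window 8192 16384 4194304      # prints 16384
# ...while a fat long-haul path grows the window as needed
drs_window 1500000 16384 4194304   # prints 3000000
```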
So I suppose when the next TCP/IP stack refresh gets incorporated, it will make its way into QNX?
Thanks,
lew
Robert Craig
12/20/2008 12:40 PM
post19101
No comments as to timing (it depends on when NetBSD pulls it into their release, and then on when we pull in the latest NetBSD code), but it will end up in here eventually...
Robert.