performance of io-pkt vs. io-net with the Broadcom driver
11/16/2007 11:13 AM (post2692)
Thought I would share some of my test results with the community:
I did some ttcp performance measurements on my 2 GHz dual-core Dell laptop with the Broadcom driver for io-pkt, which
AFAIR is a ported NetBSD driver, not a native io-pkt driver. I guess this is part of the explanation for the result?
I was using ttcp in transmit mode under Neutrino; on the other side there was a dual-core Windows XP box with
hyperthreading (Pentium D).
Here's the output of ttcp on the NTO machine, with io-pkt:
# ./ttcp -s -t -n100000 -vv 192.168.51.173
ttcp-t: buflen=8192, nbuf=100000, align=16384/0, port=5001 tcp -> 192.168.51.173
ttcp-t: socket
ttcp-t: connect
ttcp-t: 819200000 bytes in 23.19 real seconds = 34502.91 KB/sec +++
ttcp-t: 819200000 bytes in 0.54 CPU seconds = 1481481.48 KB/cpu sec
ttcp-t: 100000 I/O calls, msec/call = 0.24, calls/sec = 4312.86
ttcp-t: 0.4user 0.1sys 0:23real 2%
ttcp-t: buffer address 8058000
On the Windows side PCATTCP was used:
E:\PCATTCP>pcattcp -r -v
PCAUSA Test TCP Utility V2.01.01.08
TCP Receive Test
Local Host : multicore
**************
Listening...: On port 5001
Accept : TCP <- 192.168.51.116:65532
Buffer Size : 8192; Alignment: 16384/0
Receive Mode: Sinking (discarding) Data
Statistics : TCP <- 192.168.51.116:65532
819200000 bytes in 23.19 real seconds = 34502.09 KB/sec +++
numCalls: 134235; msec/call: 0.18; calls/sec: 5789.24
---
Hm... 34 MB/s on a GigE link isn't a lot, and io-net is faster:
# ./ttcp -s -t -n100000 -vv 192.168.51.173
ttcp-t: buflen=8192, nbuf=100000, align=16384/0, port=5001 tcp -> 192.168.51.173
ttcp-t: socket
ttcp-t: connect
ttcp-t: 819200000 bytes in 16.99 real seconds = 47082.64 KB/sec +++
ttcp-t: 819200000 bytes in 0.56 CPU seconds = 1428571.43 KB/cpu sec
ttcp-t: 100000 I/O calls, msec/call = 0.17, calls/sec = 5885.33
ttcp-t: 0.4user 0.1sys 0:17real 3%
ttcp-t: buffer address 8058000
Windows side:
E:\PCATTCP>pcattcp.exe -r -v
PCAUSA Test TCP Utility V2.01.01.08
TCP Receive Test
Local Host : multicore
**************
Listening...: On port 5001
Accept : TCP <- 192.168.51.116:65534
Buffer Size : 8192; Alignment: 16384/0
Receive Mode: Sinking (discarding) Data
Statistics : TCP <- 192.168.51.116:65534
819200000 bytes in 16.98 real seconds = 47100.38 KB/sec +++
numCalls: 155496; msec/call: 0.11; calls/sec: 9154.90

Re: performance of io-pkt vs. io-net with the Broadcom driver
11/16/2007 11:31 AM (post2696)
On Fri, Nov 16, 2007 at 11:13:36AM -0500, Malte Mundt wrote:
> Thought I would share some of my test results with the community:
Can you try receive?
-seanb

RE: performance of io-pkt vs. io-net with the Broadcom driver
11/16/2007 12:01 PM (post2699)
Hi Malte:
I believe that you've hit a bug that we're actively looking into.
The performance of io-pkt when transmitting with buffer sizes > 4K is slower
than io-net (reference PR52704).
Robert.
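If anyone wants to check whether they're hitting the same thing, a rough way to bracket the 4K boundary (assuming your
ttcp supports the usual -l buffer-length option) is to run the transmit test once below and once above it, keeping the
total byte count the same:
# ./ttcp -s -t -l4096 -n200000 192.168.51.173
# ./ttcp -s -t -l8192 -n100000 192.168.51.173
If PR52704 is what's biting, the 4096-byte run should come out noticeably faster under io-pkt.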

Re: RE: performance of io-pkt vs. io-net with the Broadcom driver
11/19/2007 4:00 AM (post2739)
I tried receive now. In both cases (io-net, io-pkt), the Windows command line looked like this:
E:\PCATTCP>pcattcp -t -n10000 192.168.51.116
On the Neutrino side, I put ttcp into receive mode.
io-net:
# ./ttcp -s -r
ttcp-r: buflen=8192, nbuf=2048, align=16384/0, port=5001 tcp
ttcp-r: socket
ttcp-r: accept from 192.168.51.173
ttcp-r: 81920000 bytes in 1.33 real seconds = 60341.06 KB/sec +++
ttcp-r: 35644 I/O calls, msec/call = 0.04, calls/sec = 26884.96
ttcp-r: 0.0user 0.0sys 0:01real 6%
io-pkt:
# ./ttcp -s -r
ttcp-r: buflen=8192, nbuf=2048, align=16384/0, port=5001 tcp
ttcp-r: socket
ttcp-r: accept from 192.168.51.173
ttcp-r: 81920000 bytes in 1.92 real seconds = 41564.79 KB/sec +++
ttcp-r: 13813 I/O calls, msec/call = 0.14, calls/sec = 7176.68
ttcp-r: 0.0user 0.0sys 0:01real 0%

Re: RE: performance of io-pkt vs. io-net with the Broadcom driver
11/19/2007 12:35 PM (post2757)
Hmmm... Now we've got a huge measurement discrepancy... I think you've got the same laptop as I do (a D820).
I'm communicating with a dual-core Linux box with an Intel GigE card in it:
ttcp -tsn10000 192.168.200.10
I'm having quite a bit of trouble getting the tigon3 driver to produce anything like the results you're seeing...
What version of the driver are you using, and are you passing any special options to it?
With io-pkt:
# io-pkt-v4-hc -dbge -ptcpip
# ifconfig bge0 192.168.200.10
# ./ttcp -rsv
ttcp-r: buflen=8192, nbuf=2048, align=16384/0, port=5001 tcp
ttcp-r: socket
ttcp-r: accept from 192.168.200.12
ttcp-r: 819200000 bytes in 9.63 real seconds = 83077.81 KB/sec +++
ttcp-r: 819200000 bytes in 0.13 CPU seconds = 6009750.82 KB/cpu sec
ttcp-r: 138066 I/O calls, msec/call = 0.07, calls/sec = 14337.78
ttcp-r: 0.0user 0.1sys 0:09real 1% 0i+0d 0maxrss 0+0pf 0+0csw
ttcp-r: buffer address 0x8058000
which appears to be 2x faster than your io-pkt results. Our setups are quite definitely different somewhere...
Robert.
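For what it's worth, one way to answer the driver question on the Neutrino side may be nicinfo, which dumps the
driver's description, link settings, and packet counters, e.g.:
# nicinfo bge0
(assuming the interface is named bge0 as above).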

Re: RE: performance of io-pkt vs. io-net with the Broadcom driver
11/23/2007 8:17 AM (post2887)
Hi Robert,
I re-tested under Neutrino using io-pkt and
ttcp -s -r
I guess the difference is the default buffer size on the sender side. I am using Windows XP with the pcattcp program, like this:
E:\PCATTCP>pcattcp -t -n20000 192.168.51.116
PCAUSA Test TCP Utility V2.01.01.08
TCP Transmit Test
Transmit : TCP -> 192.168.51.116:5001
Buffer Size : 8192; Alignment: 16384/0
TCP_NODELAY : DISABLED (0)
Connect : Connected to 192.168.51.116:5001
Send Mode : Send Pattern; Number of Buffers: 20000
Statistics : TCP -> 192.168.51.116:5001
163840000 bytes in 3.56 real seconds = 44905.98 KB/sec +++
numCalls: 20000; msec/call: 0.18; calls/sec: 5613.25
Note the 8192-byte buffer. When I instead start it with a 65535-byte buffer, like this:
E:\PCATTCP>pcattcp -t -l65535 -n10000 192.168.51.116
PCAUSA Test TCP Utility V2.01.01.08
TCP Transmit Test
Transmit : TCP -> 192.168.51.116:5001
Buffer Size : 65535; Alignment: 16384/0
TCP_NODELAY : DISABLED (0)
Connect : Connected to 192.168.51.116:5001
Send Mode : Send Pattern; Number of Buffers: 10000
Statistics : TCP -> 192.168.51.116:5001
655350000 bytes in 7.58 real seconds = 84453.71 KB/sec +++
numCalls: 10000; msec/call: 0.78; calls/sec: 1319.61
Could you try to specify an 8k buffer on your Linux sender? Our results should be very similar then.
- Malte

Re: RE: performance of io-pkt vs. io-net with the Broadcom driver
11/23/2007 11:57 AM (post2895)
Hi Malte:
Doesn't look like it's buffer size... Here's the output from my Linux ttcp session with io-pkt running on the laptop:
rcraig@network-performance1:~$ ./ttcp -tsn10000 192.168.200.10
ttcp-t: buflen=8192, nbuf=10000, align=16384/0, port=5001 tcp -> 192.168.200.10
ttcp-t: socket
ttcp-t: connect
ttcp-t: 81920000 bytes in 0.98 real seconds = 81400.58 KB/sec +++
ttcp-t: 10000 I/O calls, msec/call = 0.10, calls/sec = 10175.07
ttcp-t: 0.0user 0.0sys 0:00real 2% 0i+0d 0maxrss 0+3pf 1107+1csw
And, for comparison's sake, here is a run with large buffers and the TCP_NODELAY option.
On the Neutrino side the command is: ttcp -D -b65535 -l65535 -rsv
On the Linux side:
rcraig@network-performance1:~$ ./ttcp -b65535 -l65535 -D -tsn10000 192.168.200.10
ttcp-t: buflen=65535, nbuf=10000, align=16384/0, port=5001, sockbufsize=65535 tcp -> 192.168.200.10
ttcp-t: socket
ttcp-t: sndbuf
ttcp-t: nodelay
ttcp-t: connect
ttcp-t: 655350000 bytes in 5.59 real seconds = 114419.64 KB/sec +++
ttcp-t: 10000 I/O calls, msec/call = 0.57, calls/sec = 1787.83
ttcp-t: 0.0user 0.4sys 0:05real 8% 0i+0d 0maxrss 0+16pf 11009+23csw
rcraig@network-performance1:~$
which is pretty darn close to line rate...
On the Neutrino side, the Ethernet HW is reported as VID=0x14e4 and DID=0x1600 (BCM5752).
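As a rough sanity check on "close to line rate" (assuming ttcp's KB means 1024 bytes): 114419.64 KB/sec x 1024 x 8 is
about 937 Mbit/s, and the practical ceiling for TCP payload on GigE with a 1500-byte MTU is roughly 940-950 Mbit/s
depending on TCP options, so that run is within a few percent of the maximum.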

Re: RE: performance of io-pkt vs. io-net with the Broadcom driver
11/23/2007 12:24 PM (post2901)
You know, the OTHER possibility is that there's something about the Windows TTCP application that's skewing things a
bit. It's possible that the reduced throughput you're seeing with io-pkt is achieved using considerably less CPU than
io-net.
So the real measure would be Throughput / CPU.
I'm still not seeing anywhere near the io-net numbers with the tigon3 driver that you are for some reason...
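As a rough illustration using the numbers from the first post (the 2% and 3% CPU figures are coarse, so take this with
a grain of salt):
io-pkt transmit: 34502.91 KB/sec at 2% CPU, i.e. about 17250 KB/sec per % CPU
io-net transmit: 47082.64 KB/sec at 3% CPU, i.e. about 15700 KB/sec per % CPU
ttcp's own KB/cpu-sec figures point the same way (1481481 for io-pkt vs. 1428571 for io-net), so per unit of CPU io-pkt
may actually be ahead even where its raw throughput is lower.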

Re: RE: performance of io-pkt vs. io-net with the Broadcom driver
12/04/2007 11:35 PM (post3343)
When I did benchmarks of io-pkt vs. io-net, I tried to monitor CPU usage. Sometimes io-pkt would use less CPU than
io-net. This varied on a run-by-run basis: sometimes it would use 70% CPU, and running it again could give 100%.
Also, the OS on the other machine makes a difference. For example, I got different numbers when I tested io-pkt against
io-net than when I tested io-pkt against Linux. The differences were mostly in CPU utilization, such that when I
divided the result by the percentage of CPU used, I usually got consistent results.
Gilles

Re: performance of io-pkt vs. io-net with the Broadcom driver
12/07/2007 2:54 AM (post3439)
Hi,
I would like to support you guys in testing performance, but whenever I try to start ttcp I get:
ttcp-r: nbuf=1024, buflen=1024, port=2000
ttcp-r: socket
ttcp-r: bind: Address family not supported by protocol family
errno=247
Would you point me to what I am doing wrong?

Re: performance of io-pkt vs. io-net with the Broadcom driver
12/07/2007 8:00 AM (post3444)
On Fri, Dec 07, 2007 at 02:54:11AM -0500, Marek W wrote:
> Hi,
> I would like to support you guys in testing performance, but whenever I
> try to start ttcp
> I get:
>
> ttcp-r: nbuf=1024, buflen=1024, port=2000
> ttcp-r: socket
> ttcp-r: bind: Address family not supported by protocol family
> errno=247
>
> Would you point me to what I am doing wrong?
>
Your ttcp is old. You're hitting this note from the
io-net compatibility page:
http://community.qnx.com/sf/wiki/do/viewPage/projects.networking/wiki/IoNet_migration
Protocol compatibility
* bind() on an AF_INET socket now requires that the second argument has its (struct sockaddr *)->sa_family member be initialized to AF_INET. Previously a value of 0 was accepted and assumed to be this value.
-seanb
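For anyone patching their own ttcp (or other code that relied on the old behaviour), here is a minimal sketch of the
initialization the note calls for; the helper name is made up, but the key line is setting sin_family before calling
bind():

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Bind an AF_INET socket to a local port the way io-pkt expects. */
static int bind_inet(int fd, unsigned short port)
{
    struct sockaddr_in sin;

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;   /* io-pkt requires this; io-net accepted 0 */
    sin.sin_port = htons(port);
    sin.sin_addr.s_addr = htonl(INADDR_ANY);

    return bind(fd, (struct sockaddr *)&sin, sizeof(sin));
}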

Re: performance of io-pkt vs. io-net with the Broadcom driver
12/07/2007 8:25 AM (post3445)
> Your ttcp is old. You're hitting this note from the
> io-net compatibility page:
> http://community.qnx.com/sf/wiki/do/viewPage/projects.networking/wiki/IoNet_migration
> -seanb
I guess it's not old, just broken. I built the ttcp
from pkgsrc, which patches this issue.

Re: performance of io-pkt vs. io-net with the Broadcom driver
12/07/2007 10:27 AM (post3452)
> I guess it's not old, just broken. I built the ttcp
> from pkgsrc, which patches this issue.
Cool,
I made this change (from the NetBSD source):
- if (bind(fd, &sinme, sizeof(sinme)) < 0)
+ sinme.sin_family = AF_INET;
+ if (bind(fd, (struct sockaddr *)&sinme, sizeof(sinme)) < 0)
      err("bind");
It works fine now.