Sean Boudreau(deleted)
02/09/2009 2:42 PM
post21805
Re: io-net VS io-pkt: Unix Domain Sockets
On Mon, Feb 09, 2009 at 02:05:29PM -0500, Mark Wakim wrote:
> Hi all,
>
> I have two questions about io-net VS io-pkt regarding Unix Domain Sockets:
>
> 1)
> I've been working on a benchmark to determine if Unix Domain Sockets will perform better when using io-pkt vs io-net.
Unfortunately I'm getting some strange results: io-net actually performs better than io-pkt. I've included the results
of my benchmark (io-net_VS_io-pkt.xls) from an x86 system. Has anyone noticed io-net performing better than io-pkt for
UDS before? I've seen similar results on an ARM platform as well...
Not for UDS in particular. We've started doing some profiling
of localhost recently.
>
> I've been using 6.4.0 to run my tests. For io-net tests I had to do the following:
> 1) renamed the devn-pcnet.so library to devn-mark.so
> 2) made symlinks for npm-tcpip.so, and my devn-mark.so in /lib and /lib/dll.
> 3) slay io-pkt-v4-hc
> 4) slay inetd
> 5) ./ionet -ptcpip -dmark
> 6) ./ifconfig en0 (my ip address)
> 7) inetd
>
> 2)
> I've noticed that every time I open a connection to a socket, io-pkt/io-net also holds a connection, and that
connection never closes. When I do a pidin -p io-pkt-v4-hc fd I notice that my sockets (s and c) never close and
continue to accumulate; sometimes there are 20, 30, or even 40 socket connections after a test. Will this affect my
benchmarking? (see pidin result below).
>
bind() on an AF_LOCAL socket is persistent until unlink() is
called.
-seanb
Robert Craig
02/13/2009 3:30 PM
post22270
RE: io-net VS io-pkt: Unix Domain Sockets
Hi Mark:
Can you e-mail me the test scripts that you run? I'd like to
try and reproduce this.
Thanks!
Robert.
-----Original Message-----
From: Mark Wakim [mailto:community-noreply@qnx.com]
Sent: Monday, February 09, 2009 2:05 PM
To: ionetmig-networking
Subject: io-net VS io-pkt: Unix Domain Sockets
Hi all,
I have two questions about io-net VS io-pkt regarding Unix Domain
Sockets:
1)
I've been working on a benchmark to determine if Unix Domain Sockets
will perform better when using io-pkt vs io-net. Unfortunately I'm
getting some strange results: io-net actually performs better than
io-pkt. I've included the results of my benchmark
(io-net_VS_io-pkt.xls) from an x86 system. Has anyone noticed io-net
performing better than io-pkt for UDS before? I've seen similar results
on an ARM platform as well...
I've been using 6.4.0 to run my tests. For io-net tests I had to do the
following:
1) renamed the devn-pcnet.so library to devn-mark.so
2) made symlinks for npm-tcpip.so, and my devn-mark.so in /lib and
/lib/dll.
3) slay io-pkt-v4-hc
4) slay inetd
5) ./ionet -ptcpip -dmark
6) ./ifconfig en0 (my ip address)
7) inetd
2)
I've noticed that every time I open a connection to a socket,
io-pkt/io-net also holds a connection, and that connection never closes.
When I do a pidin -p io-pkt-v4-hc fd I notice that my sockets (s and c)
never close and continue to accumulate; sometimes there are 20, 30, or
even 40 socket connections after a test. Will this affect my
benchmarking? (see pidin result below).
Thanks,
Mark
# pidin -p io-pkt-v4-hc fd
pid name
131091 sbin/io-pkt-v4-hc
0 4103 rw 0 /dev/con1
1 4103 rw 0 /dev/con1
2 4103 rw 0 /dev/con1
3 4100 -w 0 /dev/slog
4 4099 rw 0 /dev/pci
0s 1
2s 131091
4s 1 MP 0 /dev/socket/2
5s 1 MP 0 /dev/socket/17
6s 1 MP 0 /dev/socket/1
7s 1 MP 0 /dev/socket/autoconnect
8s 1 MP 0 /dev/socket/config
9s 1 MP 0 /dev/socket/netmanager
10s 1 MP 0 /
11s 1 MP 0 /dev/crypto
12s 1 MP 0 /dev/bpf
13s 1 MP 0 /dev/bpf0
14s 1 MP 0 /dev/socket/pppmgr
17s 131091
19s 1 MP 0 /dev/io-net/en0
22s 1 MP 0 /var/run/rpcbind.sock
28s 1 MP 0 /s
30s 1 MP 0 /c
32s 1 MP 0 /s
34s 1 MP 0 /c
36s 1 MP 0 /s
_______________________________________________
io-net migration
http://community.qnx.com/sf/go/post21796