Forum Topic - io-net VS io-pkt: Unix Domain Sockets: (7 Items)
   
io-net VS io-pkt: Unix Domain Sockets  
Hi all,

I have two questions about io-net VS io-pkt regarding Unix Domain Sockets:

1)
I've been working on a benchmark to determine if Unix Domain Sockets will perform better when using io-pkt vs io-net.  
Unfortunately I'm getting some strange results: io-net actually performs better than io-pkt.  I've included the results 
of my benchmark (io-net_VS_io-pkt.xls) from an x86 system.  Has anyone noticed io-net performing better than io-pkt for 
UDS before?  I've seen similar results on an ARM platform as well...
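For context, the benchmark boils down to timing a simple AF_LOCAL ping-pong between two processes, roughly along the lines of the sketch below (illustrative only: the socket path, message size, and iteration count are assumptions, not the parameters behind the attached spreadsheet).

/* Illustrative AF_LOCAL ping-pong sketch; the path, sizes and counts are
 * made up for this example and are not the attached benchmark itself. */
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <sys/wait.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define SOCK_PATH  "/tmp/uds_bench"   /* hypothetical socket name */
#define ITERATIONS 100000
#define MSG_SIZE   64

int main(void)
{
    struct sockaddr_un addr;
    struct timespec t0, t1;
    char buf[MSG_SIZE] = { 0 };
    int srv, s, c, i;
    pid_t pid;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_LOCAL;
    strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);

    unlink(SOCK_PATH);                /* remove any stale name from an earlier run */

    if ((srv = socket(AF_LOCAL, SOCK_STREAM, 0)) == -1 ||
        bind(srv, (struct sockaddr *)&addr, sizeof(addr)) == -1 ||
        listen(srv, 1) == -1) {
        perror("server setup");
        return EXIT_FAILURE;
    }

    if ((pid = fork()) == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {                   /* child: connect, then echo everything back */
        c = socket(AF_LOCAL, SOCK_STREAM, 0);
        if (c == -1 || connect(c, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
            perror("connect");
            _exit(EXIT_FAILURE);
        }
        for (i = 0; i < ITERATIONS; i++) {
            if (read(c, buf, MSG_SIZE) <= 0 || write(c, buf, MSG_SIZE) <= 0)
                break;
        }
        close(c);
        _exit(EXIT_SUCCESS);
    }

    s = accept(srv, NULL, NULL);      /* parent: drive the ping-pong and time it */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < ITERATIONS; i++) {
        if (write(s, buf, MSG_SIZE) <= 0 || read(s, buf, MSG_SIZE) <= 0)
            break;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("%d round trips in %.3f s\n", i,
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);

    close(s);
    close(srv);
    unlink(SOCK_PATH);                /* the bound name persists until unlink() */
    wait(NULL);
    return EXIT_SUCCESS;
}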

I've been using 6.4.0 to run my tests.  For io-net tests I had to do the following:
1) renamed the devn-pcnet.so library to devn-mark.so
2) made symlinks for npm-tcpip.so, and my devn-mark.so in /lib and /lib/dll.
3) slay io-pkt-v4-hc
4) slay inetd
5) ./ionet -ptcpip -dmark
6) ./ifconfig en0 (my ip address)
7) inetd

2)
I've noticed that every time I open a connection to a socket, io-pkt/io-net also ends up holding a connection of its
own, and that connection never closes.  When I do a pidin -p io-pkt-v4-hc fd I notice that my sockets (s and c) never
close and continue to accumulate; sometimes there are 20, 30, or even 40 socket connections after a test.  Will this
affect my benchmarking?  (See the pidin output below.)

Thanks,
Mark

# pidin -p io-pkt-v4-hc fd
     pid name
  131091 sbin/io-pkt-v4-hc
           0    4103 rw        0 /dev/con1
           1    4103 rw        0 /dev/con1
           2    4103 rw        0 /dev/con1
           3    4100 -w        0 /dev/slog
           4    4099 rw        0 /dev/pci
           0s      1
           2s 131091
           4s      1 MP        0 /dev/socket/2
           5s      1 MP        0 /dev/socket/17
           6s      1 MP        0 /dev/socket/1
           7s      1 MP        0 /dev/socket/autoconnect
           8s      1 MP        0 /dev/socket/config
           9s      1 MP        0 /dev/socket/netmanager
          10s      1 MP        0 /
          11s      1 MP        0 /dev/crypto
          12s      1 MP        0 /dev/bpf
          13s      1 MP        0 /dev/bpf0
          14s      1 MP        0 /dev/socket/pppmgr
          17s 131091
          19s      1 MP        0 /dev/io-net/en0
          22s      1 MP        0 /var/run/rpcbind.sock
          28s      1 MP        0 /s
          30s      1 MP        0 /c
          32s      1 MP        0 /s
          34s      1 MP        0 /c
          36s      1 MP        0 /s

Attachment: Excel io-net_VS_io-pkt.xls 23.5 KB
Re: io-net VS io-pkt: Unix Domain Sockets  
On Mon, Feb 09, 2009 at 02:05:29PM -0500, Mark Wakim wrote:
> Hi all,
> 
> I have two questions about io-net VS io-pkt regarding Unix Domain Sockets:
> 
> 1)
> I've been working on a benchmark to determine if Unix Domain Sockets will perform better when using io-pkt vs io-net.
> Unfortunately I'm getting some strange results: io-net actually performs better than io-pkt.  I've included the results
> of my benchmark (io-net_VS_io-pkt.xls) from an x86 system.  Has anyone noticed io-net performing better than io-pkt for
> UDS before?  I've seen similar results on an ARM platform as well...

Not for UDS in particular.  We've started doing some profiling
of localhost recently.

> 
> I've been using 6.4.0 to run my tests.  For io-net tests I had to do the following:
> 1) renamed the devn-pcnet.so library to devn-mark.so
> 2) made symlinks for npm-tcpip.so, and my devn-mark.so in /lib and /lib/dll.
> 3) slay io-pkt-v4-hc
> 4) slay inetd
> 5) ./ionet -ptcpip -dmark
> 6) ./ifconfig en0 (my ip address)
> 7) inetd
> 
> 2)
> I've noticed that every time I open a connection to a socket, io-pkt/io-net also ends up holding a connection of its
> own, and that connection never closes.  When I do a pidin -p io-pkt-v4-hc fd I notice that my sockets (s and c) never
> close and continue to accumulate; sometimes there are 20, 30, or even 40 socket connections after a test.  Will this
> affect my benchmarking?  (See the pidin output below.)
> 

bind() on an AF_LOCAL socket is persistent until unlink() is
called.
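
A minimal illustration of that behaviour (the path here is hypothetical, not from the attached test):

#include <sys/socket.h>
#include <sys/un.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

/* The name created by bind() outlives the descriptor that created it. */
void demo(void)
{
    struct sockaddr_un addr;
    int fd = socket(AF_LOCAL, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_LOCAL;
    strncpy(addr.sun_path, "/tmp/demo_sock", sizeof(addr.sun_path) - 1);

    bind(fd, (struct sockaddr *)&addr,
         offsetof(struct sockaddr_un, sun_path) + strlen(addr.sun_path) + 1);

    close(fd);                  /* /tmp/demo_sock is still there after this */
    unlink("/tmp/demo_sock");   /* only this removes the name */
}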

-seanb
Re: io-net VS io-pkt: Unix Domain Sockets  
> Not for UDS in particular.  We've started doing some profiling
> of localhost recently.

Thanks, I'll definitely continue my benchmarking in that case.


> bind() on an AF_LOCAL socket is persistent until unlink() is
> called.

Even when calling unlink(), the socket file still remains (I can see it when I do ls -l), and io-pkt is still
associated with the file descriptors, meaning that I cannot delete the sockets.  As I mentioned earlier, I am worried
that this is causing performance issues.  I've attached a really short C file which shows the socket being connected
as well as being closed.  After running it 10 times I get the pidin fd output below.  The code is pretty simple, but
why does io-pkt seem to be permanently associated with the file descriptors?

Thanks,
Mark

# pidin -p io-pkt-v4-hc fd
     pid name
  131091 sbin/io-pkt-v4-hc
           0    4103 rw        0 /dev/con1
           1    4103 rw        0 /dev/con1
           2    4103 rw        0 /dev/con1
           3    4100 -w        0 /dev/slog
           4    4099 rw        0 /dev/pci
           0s      1
           2s 131091
           4s      1 MP        0 /dev/socket/2
           5s      1 MP        0 /dev/socket/17
           6s      1 MP        0 /dev/socket/1
           7s      1 MP        0 /dev/socket/autoconnect
           8s      1 MP        0 /dev/socket/config
           9s      1 MP        0 /dev/socket/netmanager
          10s      1 MP        0 /
          11s      1 MP        0 /dev/crypto
          12s      1 MP        0 /dev/bpf
          13s      1 MP        0 /dev/bpf0
          14s      1 MP        0 /dev/socket/pppmgr
          17s 131091
          19s      1 MP        0 /dev/io-net/en0
          22s      1 MP        0 /var/run/rpcbind.sock
          30s      1 MP        0 /tmp/sock
          31s      1 MP        0 /tmp/sock
          32s      1 MP        0 /tmp/sock
          33s      1 MP        0 /tmp/sock
          34s      1 MP        0 /tmp/sock
          35s      1 MP        0 /tmp/sock
          36s      1 MP        0 /tmp/sock
          37s      1 MP        0 /tmp/sock
          38s      1 MP        0 /tmp/sock
          39s      1 MP        0 /tmp/sock




Attachment: Text test.c 956 bytes
Re: io-net VS io-pkt: Unix Domain Sockets  
On Tue, Feb 10, 2009 at 10:18:47AM -0500, Mark Wakim wrote:
> > Not for UDS in particular.  We've started doing some profiling
> > of localhost recently.
> 
> Thanks, I'll definitely continue my benchmarking in that case.
> 
> 
> > bind() on an AF_LOCAL socket is persistent until unlink() is
> > called.
> 
> Even when calling unlink(), the socket file still remains (I can see it when I do ls -l), and io-pkt is still
> associated with the file descriptors, meaning that I cannot delete the sockets.  As I mentioned earlier, I am worried
> that this is causing performance issues.  I've attached a really short C file which shows the socket being connected
> as well as being closed.  After running it 10 times I get the pidin fd output below.  The code is pretty simple, but
> why does io-pkt seem to be permanently associated with the file descriptors?
> 

You're not passing the correct length to bind().
You're binding "/tmp/sock" and unlinking "/tmp/sock "
(extra space) which is failing silently since you're
not checking the return from unlink() correctly.
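
Roughly, the fix comes down to something like the sketch below (illustrative helper functions, not the attached diff itself):

#include <sys/socket.h>
#include <sys/un.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Bind with a length that covers exactly the NUL-terminated path stored
 * in sun_path, so bind() and a later unlink() agree on the name. */
int bind_local(int fd, const char *path)
{
    struct sockaddr_un addr;
    socklen_t len;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_LOCAL;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    len = offsetof(struct sockaddr_un, sun_path) + strlen(addr.sun_path) + 1;
    return bind(fd, (struct sockaddr *)&addr, len);
}

/* Check unlink()'s return instead of silently ignoring a failure. */
int remove_local(const char *path)
{
    if (unlink(path) == -1) {
        perror("unlink");
        return -1;
    }
    return 0;
}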

Here's a diff.

-seanb
Attachment: Text diff 981 bytes
Re: io-net VS io-pkt: Unix Domain Sockets  
> You're not passing the correct length to bind().
> You're binding "/tmp/sock" and unlinking "/tmp/sock "
> (extra space) which is failing silently since you're
> not checking the return from unlink() correctly.
> 
> Here's a diff.
> 
> -seanb

Thanks Sean!
When I passed the correct length to bind, the sockets unlinked perfectly.

Mark
RE: io-net VS io-pkt: Unix Domain Sockets  
Hi Mark:
	Can you e-mail me the test scripts that you run?   I'd like to
try and reproduce this. 

	Thanks!
		Robert.

RE: io-net VS io-pkt: Unix Domain Sockets  
> I'm also curious about the fact that io-pkt and io-net were 
> run on the same system.
> 
> Was this running io-net binaries compiled on 6.3.2 under 
> 6.4.0 or was io-net built under 6.4.0?

We took the io-net binaries (ifconfig, npm-tcpip.so, devn-whichever.so
as well) from the 6.3.2 installation CD.

We didn't touch the libsocket libraries on the 6.4.x target before
running the tests.

-asherk