QNX 6.4 - QNX 6.3 Comparison  
Hi,

I have been testing and trying to compare some aspects of QNX 6.4 and QNX 6.3 (+ corepatch 6.3.2A). I prepared a small presentation to show the results of my tests. In this presentation there are some "whys?" that I would like to find answers to.

It would help me a lot to understand these results, and to know whether the method I am using to measure what I want is a suitable one.

There are other aspects that I want to test, like OS performance; maybe you can tell me how?

If there are any translation problems (from Spanish) that cause confusion, please let me know.

Thank you very much in advance.

Regards,
Juan Manuel
Re: QNX 6.4 - QNX 6.3 Comparison  
I'm having problems with the attachment...

RE: QNX 6.4 - QNX 6.3 Comparison  
Hi:
   Are you looking at the OS in general or networking in particular?  If you have a look at the networking wiki, there are a number of links comparing the networking stack implementation in io-net with the one in 6.4

(for example
http://community.qnx.com/sf/wiki/do/viewPage/projects.networking/wiki/Stack_wiki_page
and the io-net migration page).

For OS kernel information, you should take a look at the Core OS project instead.

   Robert.

Re: RE: QNX 6.4 - QNX 6.3 Comparison  
Hi Robert, and thanks for the reply!! 

I'm hoping for some specific answers about my little study, if possible.

Here's the attachment (I hope)

Regards,
Juan Manuel
Attachment: Powerpoint Comparing QNX 6.4 vs QNX 6.3 Forum v1.pps 1.19 MB
RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Hi:

  The bulk of your questions are really oriented toward file system performance, so it may be best if you post this in the file system project.  If I had to guess, I'd say it's probably because you installed using fs-qnx6.  This file system was designed for robustness and is meant to handle things like power outages without resulting in corruption.  This would have to be confirmed by the file system people, though.

In terms of the slight degradation in qnet performance, I'll have to take a closer look.  Do you have sample test code available for me to try out?  We have our own internal code, but it would be useful to see what you've written.  The slight reduction could be due to the trade-off we made during the design stage to provide optimized IP performance.  This means that the way we hook qnet into the stack is not quite as optimized as it was with io-net.

   Robert.

Re: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Hi Robert, thank you for your always-quick answers!

So, if I understood correctly, I should expect a small degradation in efficiency for native networking (resource managers, message passing, and so on...), but better performance working with the IP protocol, for example if I do some UDP packet broadcasting?

With respect to file systems, yes, I was testing exactly the performance of 'fs-qnx6' vs 'fs-qnx4', and I'll post those questions to the file system project as you suggest.

I'll post the sources later (actually, they're very simple) with which I made the tests. I'm having some problems uploading files to this forum, since I'm behind a strange proxy.

Thank you very much again!

Regards,
Juan Manuel
RE: RE: QNX 6.4 - QNX 6.3 Comparison  
I looked at your PowerPoint presentation (nice, btw!)
and I had a couple of questions:

1) in the qnet tests, you refer to messages.  You vary
the *number* of messages sent - and I would expect the
accuracy of the results to increase as you run the test
longer, with a greater number of message transfers ...

but do you vary the *size* of the message?  Do you get
different results for different sizes (e.g. 1K, 4K, 16K, 64K)
of send/reply messages?


2) what network card and driver are you running?  Are you
running the "shim" and an io-net devn-*.so driver with
io-pkt, or are you running the io-pkt devnp-*.so driver?

A "pidin mem" can definitively answer these questions, 
btw.  The reason I ask is that when you run the "shim"
with an io-net driver, an extra thread switch is incurred
during receive.  Ordinarily you don't notice it, but
when you're doing performance testing, and start talking
about one and two percent, details like this become
relevant.

--
aboyd
Re: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Hi Andrew, thank you very much for the reply!

1) Actually the size of the message is fixed at 1024 bytes. The idea was to maintain the same scheme in 6.3 and 6.4. But I'll try different ones!

2) That's a very good question. Actually I don't know how that 'shim' works. Maybe you can help me find documentation.

>> Are you running the "shim" and an io-net devn-*.so driver with
io-pkt, or are you running the io-pkt devnp-*.so driver? ...

>> The reason I ask is that when you run the "shim"
with an io-net driver ....

I can see "devnp-shim.so" loaded in my DEV1 io-pkt-v4-hc, not in 'io-net'. I'm a little confused here...

The configuration is the default one (one assumes that's the one where enum-devices detects the devices in rc.devices at sysinit, isn't it? When does the 'devnp-shim.so' module get loaded?)

Well, this is the scenario you ask for:

DEV1
---------------------------------------------------------------------------
It's a DELL Optiplex GX520
NIC: Broadcom® 5751 Gigabit Ethernet LAN solution, 10/100/1000 Ethernet with Remote Wake Up and PXE support

QNX 6.3

enum-devices -n
mount -Tio-net -opci=0,vid=0x14e4,did=0x1677 /lib/dll/devn-tigon3.so

pidin -P io-net mem
 ...
          libc.so.2             ...
          npm-tcpip.so
-->     devn-tigon3.so     
          npm-qnet.so        
          sbin/io-net           
          sbin/io-net           

QNX 6.4 

enum-devices -n
mount -Tio-pkt -opci=0,vid=0x14e4,did=0x1677 /lib/dll/devn-tigon3.so

pidin -P io-pkt-v4-hc mem
...
          io-pkt-v4-hc       ...
          libc.so.3          
          devnp-shim.so      
-->     devn-tigon3.so     
          lsm-qnet.so        
          /dev/mem           


DEV2
---------------------------------------------------------------------------

It's a DELL Optiplex 170/L
NIC: Intel 10/100Mbps Ethernet with Remote Wake-up and PXE support

QNX 6.3

enum-devices -n
mount -Tio-net -opci=0,vid=0x8086,did=0x1050 /lib/dll/devn-speedo.so

pidin -P io-net mem
...
         libc.so.2          ...
         npm-tcpip.so
-->     devn-speedo.so     
         npm-qnet.so        
         sbin/io-net        
         sbin/io-net        

QNX 6.4

enum-devices -n
mount -Tio-pkt -opci=0,vid=0x8086,did=0x1050 /lib/dll/devnp-speedo.so

pidin -P io-pkt-v4-hc mem
...
          io-pkt-v4-hc     ...
          libc.so.3
-->     devnp-speedo.so
          lsm-qnet.so

Hey, btw, why can't I see the 'npm-tcpip.so' module or anything similar in io-pkt?

So, could that 'shim' be the cause of the small performance degradation?

Thank you very much. I found this very interesting!

Regards,
Juan Manuel
RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Hi:
	The shim acts as a binary interface converter, allowing "old
style" io-net based drivers to interoperate with the io-pkt
infrastructure.  There could be a small penalty to pay for this
additional layer.

We default to the io-net version of the BCM57xx driver because it
supports a few more chipsets than the NetBSD driver did at the time.
You can try running io-pkt with the devnp-bge.so driver instead.
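Following the mount syntax you used above, that would presumably be something like
"mount -Tio-pkt -opci=0,vid=0x14e4,did=0x1677 /lib/dll/devnp-bge.so"
(assuming devnp-bge.so binds to the same PCI IDs; check the driver's docs to be sure).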

For more information about the shim and why npm-tcpip is no longer needed,
please read through the io-net migration wiki pages.

	Robert.

 

RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
> QNX 6.4
> -->     devnp-speedo.so

Another thing ... I just fixed this io-pkt driver
(devnp-speedo.so) to add the "probe_phy" optimization,
which can substantially improve throughput in our
benchmark tests.

The commits I made should be publicly visible - could
you update your driver source, recompile, and re-run 
your tests?

I hate to put attachments in the forum - if you want,
send an email to me at

  aboyd@qnx.com

and I will immediately reply back with a new x86 binary
devnp-speedo.so with the probe_phy fix, so you don't have
to bother downloading and compiling the source, if you
don't want to.

--
aboyd
RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
P.S.  If you switch from the shim & tigon driver to
the bge driver, and also upgrade the devnp-speedo.so
driver to the latest with the probe_phy optimization,
your io-pkt numbers may very well exceed your io-net
numbers!

ARGH.  I just remembered something.  There is a mod
to the BSD-source drivers (which the bge driver is)
to work around a threading problem, forcing a thread
switch when they transmit qnet packets (long story).

I'm not sure the bge driver is the best for benchmarking
qnet, sorry.  It will receive without a thread switch,
but it will thread switch on transmit.

Sigh.  Time to go and start drinking heavily again.

--
aboyd
Re: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Ok, Andrew and Robert, thank you very much for your interest in my questions. The explanations cleared up many doubts for me.

Here are the source codes with which I tested. They're very simple, no big deal.

I'm testing again with the new 'devnp-speedo.so' but not yet with 'devnp-bge.so'. I can also try both. Then I'll send you the new numbers.

Regards,
Juan Manuel
Attachment: Text networking.tgz 2.74 KB
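For readers without access to the attachment, here is a minimal sketch of what a fixed-size qnet round-trip test of this kind might look like. This is a hypothetical illustration, not the actual attached sources; the command-line pid/chid handshake (printed by an assumed echo server on the remote node) is an assumption.

/* qnet_client.c -- hypothetical sketch of a 1024-byte round-trip test.
 * An echo server is assumed to be running on the remote node,
 * printing its pid and chid so the client can attach across qnet.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <sys/neutrino.h>
#include <sys/netmgr.h>
#include <sys/syspage.h>

#define MSG_SIZE 1024
#define NMSGS    1000000

int main(int argc, char *argv[])
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <node> <pid> <chid>\n", argv[0]);
        return EXIT_FAILURE;
    }

    /* Resolve the remote node name (e.g. "dev2") to a node descriptor. */
    int nd = netmgr_strtond(argv[1], NULL);
    if (nd == -1) {
        perror("netmgr_strtond");
        return EXIT_FAILURE;
    }

    /* Attach to the server's channel on the remote node. */
    int coid = ConnectAttach(nd, atoi(argv[2]), atoi(argv[3]),
                             _NTO_SIDE_CHANNEL, 0);
    if (coid == -1) {
        perror("ConnectAttach");
        return EXIT_FAILURE;
    }

    static char msg[MSG_SIZE], reply[MSG_SIZE];
    memset(msg, 'x', sizeof(msg));

    /* Time NMSGS send/reply round trips with the cycle counter. */
    uint64_t start = ClockCycles();
    for (int i = 0; i < NMSGS; i++) {
        if (MsgSend(coid, msg, sizeof(msg), reply, sizeof(reply)) == -1) {
            perror("MsgSend");
            return EXIT_FAILURE;
        }
    }
    double secs = (double)(ClockCycles() - start)
                  / SYSPAGE_ENTRY(qtime)->cycles_per_sec;

    printf("%d msgs of %d bytes in %.2f s (%.0f msgs/s)\n",
           NMSGS, MSG_SIZE, secs, NMSGS / secs);
    ConnectDetach(coid);
    return EXIT_SUCCESS;
}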
RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
> Here are the sources codes with which I tested. They're 
> very simple, not big deal.

Just had a peek at them - my only comment is that since
your only message size is 1024 bytes, you are really only
testing (and comparing) single-packet throughput.

I might suggest that you vary the size of the send for
a more comprehensive test, such as data sizes of:

10    bytes - small user single packet
1200  bytes - big user single packet
4000  bytes - reasonable size chunk
8000  bytes - larger reasonable size chunk
16000 bytes - pretty big size chunk
64000 bytes - really big size chunk

If you sweep the message size as above, you will
get a much more detailed comparison of io-net and
io-pkt (and their drivers!) and how they transfer
data for qnet applications.

--
aboyd
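For illustration, the timed loop in the hypothetical client sketch earlier in the thread could be swept over exactly these sizes. Again a sketch, assuming the same coid and NMSGS setup; the sizes array is Andrew's list:

/* Sweep the message size across the suggested values,
 * reusing maximum-size buffers for every run. */
static const int sizes[] = { 10, 1200, 4000, 8000, 16000, 64000 };
static char msg[64000], reply[64000];

for (unsigned s = 0; s < sizeof(sizes) / sizeof(sizes[0]); s++) {
    uint64_t start = ClockCycles();
    for (int i = 0; i < NMSGS; i++) {
        if (MsgSend(coid, msg, sizes[s], reply, sizes[s]) == -1) {
            perror("MsgSend");
            break;
        }
    }
    double secs = (double)(ClockCycles() - start)
                  / SYSPAGE_ENTRY(qtime)->cycles_per_sec;
    printf("%6d bytes: %.2f s (%.0f msgs/s)\n",
           sizes[s], secs, NMSGS / secs);
}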
Re: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Thank you Andrew, I'll do that comparison soon.

Side note: with the same scheme of fixed 1024 bytes, replacing 'devnp-speedo.so' and "shim & tigon" with 'devnp-bge.so', the results were not good at all.

Replacing only 'devnp-speedo.so', the difference was imperceptible; the problem came when I replaced "shim & tigon" (the default configuration) with 'devnp-bge.so'. The test was twice as slow!

For example: sending 1,000,000 messages (1024 bytes) with "shim & tigon" takes approx. 317 seconds for me, but with 'devnp-bge.so' it takes approx. 582!! (Could I be doing something wrong?) I mount the driver with no parameters; just mount and test.
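
(For scale, treating these as round trips: 1,000,000 messages of 1024 bytes in 317 s is roughly 3,150 messages/s, about 3.2 MB/s of payload each way, while 582 s works out to roughly 1,720 messages/s.)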

What could be happening?

Regards,
Juan Manuel
RE: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
> Replacing only 'devnp-speedo.so' the difference 
> was imperceptible

Hm.  That may change if you start doing larger
size transfers, with back-to-back packets, using
more of the wire.  Right now you're doing just
single-packet ping-ponging.

> the problem came when I replaced "shim & tigon"
> by 'devnp-bge.so'. The test was twice slower!.

Wow, I thought that the thread switch on transmit
might slow qnet down, but I didn't think by that much,
especially with your powerful x86 computers.

After your test is run, can you upload the output
of "sloginfo" from both machines?  I want to make
sure that you aren't losing any packets.

--
aboyd
Re: RE: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Ok, I ran a shorter test, but I think the result is the same, because it is very slow with the 'devnp-bge.so' module.

Before the test I cleared the log buffer (sloginfo -c). So:

sloginfo srv (devnp-speedo.so):

Nov 21 14:02:08    2    14     0 devn-speedo: speedo_MDI_MonitorPhy(): calling MDI_MonitorPhy()
Nov 21 14:02:11    2    14     0 devn-speedo: speedo_MDI_MonitorPhy(): calling MDI_MonitorPhy()
Nov 21 14:05:32    2    14     0 devn-speedo: speedo_MDI_MonitorPhy(): calling MDI_MonitorPhy()

sloginfo cli (devnp-speedo.so):

Nov 21 13:57:50    6     8     0 Faulted in v86 call! (int 0x10, eax = 4f05)
Nov 21 13:59:33    6     8     0 Faulted in v86 call! (int 0x10, eax = 4f05)
Nov 21 13:59:49    6     8     0 Faulted in v86 call! (int 0x10, eax = 4f05)
Nov 21 14:00:17    6     8     0 Faulted in v86 call! (int 0x10, eax = 4f05)
Nov 21 14:00:34    6     8     0 Faulted in v86 call! (int 0x10, eax = 4f05)

What's that fault!?

Regards,
Juan Manuel

RE: RE: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
> Before the test I cleared the log buffer (sloginfo -c). So:

> Faulted in v86 call! (int 0x10, eax = 4f05)

That's really weird.  To the best of my knowledge,
that's not coming from the stack or qnet or the
driver.  My guess is the kernel.

Anyways, what is noticeable (by its absence) is
the complete lack of qnet protocol events/errors
such as timeouts, nacks, etc., which indicates, as
best I can tell, that you are NOT losing packets
(packet loss would have to be fixed before any
benchmarking could take place).

I guess for the time being, I would recommend
staying with the shim and the tigon driver, 
and the newer version of devnp-speedo.so, for
your qnet tests.  I'm not sure what to recommend
to make your tests run any faster.

--
aboyd
Re: RE: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
On Fri, Nov 21, 2008 at 03:16:14PM -0500, Andrew Boyd wrote:
> 
> Before the test I cleared the log buffer (sloginfo -c): So:
> 
> > Faulted in v86 call! (int 0x10, eax = 4f05)
> 
> That's really weird.  To the best of my knowledge,
> that's not coming from the stack or qnet or the
> driver.  My guess is the kernel.

It's coming from a graphics driver (/lib/dll/devg-?)

-seanb
Re: RE: RE: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Ok Andrew, thank you very much!... The degradation in efficiency between QNX 6.3 io-net and 'shim and the tigon driver' on io-pkt is, as we saw, near 2.5%.

> and the newer version of devnp-speedo.so ...

Last bothering question, sorry: the new speedo was barely slower than the one in the 6.4 official release (-0.04%). Still recommended?

Thanks for all!

Regards,
Juan Manuel

PS: My file system post has gone unanswered for 2 or 3 days now :(
Re: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Here's the comparison.

What can we say about these results?

Thank you very much!

Regards,
Juan Manuel

Attachment: Powerpoint variable_message_size_graph.pps 55 KB
RE: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Hi there:
	The first thing that popped into my mind with this is: what happens
if you do the test strictly on a local machine, without going over the
network?  We have to distinguish between what happens in the kernel with
message passing versus what happens with the networking...

	Robert.
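
A minimal sketch of such a local-only test, for illustration (hypothetical; the echo_server thread and the side-channel connection are assumptions): a receiver thread echoes messages on a local channel, so the timed loop exercises only kernel message passing, with no qnet or network driver involved.

/* local_msgpass.c -- hypothetical sketch of a local-only version of the
 * same test: a receiver thread echoes 1024-byte messages on a local
 * channel, so the timed loop measures kernel message passing alone.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <pthread.h>
#include <sys/neutrino.h>
#include <sys/syspage.h>

#define MSG_SIZE 1024
#define NMSGS    1000000

static int chid;

static void *echo_server(void *arg)
{
    char buf[MSG_SIZE];
    for (;;) {
        int rcvid = MsgReceive(chid, buf, sizeof(buf), NULL);
        if (rcvid == -1)
            break;
        MsgReply(rcvid, EOK, buf, sizeof(buf));   /* echo it straight back */
    }
    return NULL;
}

int main(void)
{
    chid = ChannelCreate(0);

    pthread_t tid;
    pthread_create(&tid, NULL, echo_server, NULL);

    /* Node descriptor 0 and pid 0 mean "this node, this process":
     * the same MsgSend() path as the qnet test, minus the network. */
    int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);

    static char msg[MSG_SIZE], reply[MSG_SIZE];
    memset(msg, 'x', sizeof(msg));

    uint64_t start = ClockCycles();
    for (int i = 0; i < NMSGS; i++)
        MsgSend(coid, msg, sizeof(msg), reply, sizeof(reply));
    double secs = (double)(ClockCycles() - start)
                  / SYSPAGE_ENTRY(qtime)->cycles_per_sec;

    printf("local: %d msgs of %d bytes in %.2f s (%.0f msgs/s)\n",
           NMSGS, MSG_SIZE, secs, NMSGS / secs);
    return 0;
}

Running this on both 6.3 and 6.4 would show how much of any difference comes from the kernel itself rather than from io-net/io-pkt.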

RE: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
20 percent for size 10 is a lot, must be the 3 assembler instructions they added, lol!

Re: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Here's the complete table of test results.

Regards,
Juan Manuel
Attachment: Excel variables_msgs_throughput.xls 42 KB
RE: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Is there any way that you could run devnp-speedo.so
on both machines, instead of one having a speedo,
and the other running the tigon?

You didn't mention your exact configuration in this
latest test, but it's possible that the 20% slowdown
on 10-byte packets might be caused by the thread switch
on receive with the shim and io-pkt, if you are using
the tigon driver.

IIRC there should not be a thread switch when a packet 
is received by io-net, for qnet.

--
aboyd
Re: RE: RE: RE: RE: RE: QNX 6.4 - QNX 6.3 Comparison  
Hi, 

>> Is there any way that you could run devnp-speedo.so
>> on both machines, instead of one having a speedo,
>> and the other running the tigon?

mmm... I have to get another Intel NIC, maybe next week... 

>> We have to distinguish between what happens in the kernel with
>> message passing versus what happens with the networking... 
>> (Robert)

Meanwhile, I'll follow Robert's suggestion of running the same test without networking on both OS versions. I'm running this test right now. Soon I will publish the results, so you can help me understand...

>> You didn't mention your exact configuration in this
>> latest test, but it's possible that the 20% slowdown
>> on 10 byte packets might be caused by the thread switch 
>> on receive with the shim and io-pkt, if you are using 
>> the tigon driver.

>> IIRC there should not be a thread switch when a packet 
>> is received by io-net, for qnet.

I'm afraid that's not the case. The machine which 'receives' the packet was running speedo (dev2 in the pps); the other one was MsgSending (tigon). I could invert the roles. What do you think about it?

Was that the question you were referring to?

Thanks again!!
Juan Manuel