Forum Topic - io-pkt advice: (4 Items)
   
io-pkt advice  
Hey,

We are trying to determine the best strategy for implementing a high
speed, hard real-time Ethernet protocol (EtherCAT master). We are
considering using io-pkt packet filtering as the method for sending and
receiving raw EtherCAT packets (Ethernet frames with the EtherCAT
ethertype), which need to be transmitted at a higher frame rate than the
commercial EtherCAT stacks available as source code for QNX can deliver.

Is this possible (and recommendable) using packet filtering (or
alternatively BPF) under io-pkt, or should we go for more "direct",
autonomous control of the standard NIC interface in order to reach a
sub-200us packet frame rate?

The core networking documentation for io-pkt states that the IP part is
tightly compiled into io-pkt. Does this mean that transmitting raw
EtherCAT frames, which carry no IP source and destination addresses,
will be impossible for us using io-pkt?
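
For reference, the frames we need to put on the wire look roughly like
this (a sketch only; EtherCAT's registered ethertype is 0x88A4, and the
layout after the Ethernet header is simplified):

#include <stdint.h>

/* A raw EtherCAT frame is a plain Ethernet II frame -- no IP layer. */
struct ecat_frame {
    uint8_t  dst_mac[6];    /* often ff:ff:ff:ff:ff:ff for EtherCAT */
    uint8_t  src_mac[6];
    uint16_t ethertype;     /* htons(0x88A4) */
    uint16_t ecat_header;   /* 11-bit length, 1 reserved bit, 4-bit type */
    uint8_t  payload[1498]; /* EtherCAT datagrams */
} __attribute__((packed));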

io-pkt was probably not built for real-time communications, so maybe
that is an argument for writing our own stack instead of using io-pkt.

  
RE: io-pkt advice  
> a sub-200us packet frame rate?

A year or so ago, another customer needed that kind
of packet turnaround time, and he did it with Qnet.

Initial testing showed that most of the time, the
time limits were easily met - the customer had fast
x86 hardware, with gige - but occasionally there was
a large delay.  After some investigation, it turned
out that there was other, unused hardware that needed 
to be disabled in the BIOS, and after that, the time 
limits were met.

Similarly, you will have to ensure that other
hardware interrupts, and their servicing, run
at a priority below that of your network
processing, regardless of who codes your network
driver and protocol.
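
On the application side, that usually just means running your EtherCAT
cycle thread at a high fixed priority. A minimal sketch, with an
illustrative priority value and a made-up thread function name:

#include <stddef.h>
#include <pthread.h>
#include <sched.h>

static void *ecat_cycle(void *arg)
{
    /* ... cyclic frame exchange ... */
    return NULL;
}

/* Start the cycle thread under SCHED_FIFO so it preempts
 * lower-priority interrupt servicing and housekeeping threads. */
int start_ecat_thread(void)
{
    pthread_attr_t attr;
    struct sched_param param = { .sched_priority = 60 }; /* illustrative */
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &param);
    return pthread_create(&tid, &attr, ecat_cycle, NULL);
}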

My suggestion is to try it with the stock TCP/IP
stack, and see what the numbers are.  The good thing
is that ALL of the source to the io-pkt drivers
and protocols is available on the Foundry,
which is essential for this kind of thing.
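
For the measurements themselves, ClockCycles() is a cheap
high-resolution timestamp on QNX. Something along these lines works
(a sketch, with the actual send/receive elided):

#include <stdio.h>
#include <inttypes.h>
#include <sys/neutrino.h>
#include <sys/syspage.h>

/* Time one frame round trip with the free-running cycle counter. */
void time_roundtrip(void)
{
    uint64_t cps = SYSPAGE_ENTRY(qtime)->cycles_per_sec;
    uint64_t t0 = ClockCycles();

    /* ... send the frame, block until the reply arrives ... */

    uint64_t t1 = ClockCycles();
    printf("round trip: %" PRIu64 " us\n", ((t1 - t0) * 1000000) / cps);
}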

For example, you might find that the network
driver is periodically probing the PHY (I've
pretty well gotten rid of that, but I'm sure
there are a few drivers left) which will spin
on the MII reads and writes.

--
aboyd
RE: io-pkt advice  
One more comment: for best possible performance with
io-pkt, use a "native" io-pkt driver (e.g. sys/dev_qnx)
NOT an io-net driver with the shim.

Two reasons for this:

1) performance is better with a native io-pkt driver
(no thread switch or mbuf/npkt translation), and

2) source is available for the native io-pkt
drivers.  Despite our best efforts, this is not the
case with the io-net drivers.

--
aboyd
RE: io-pkt advice  
This has come up with another customer as well.  The stack hasn't been
specifically designed to be "real-time / deterministic" when it comes
to packet flows, but it's quite likely to be "fast enough" if you have a
good processor.

The tricky part comes in the data transfer between the application and
the stack.  The stack is optimized for receive packet processing, and
this has the unfortunate consequence that transfers to/from a user app
can be starved in favour of incoming packets being processed.

I'd say that it would be worthwhile trying BPF since this is the
cleanest interface to work with.  It gets you outside of the stack and
allows you to do whatever you want inside of your application.
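
The usual shape of it looks something like this (a sketch only: error
handling is elided, and the interface name is just a placeholder for
whichever one you bind to):

#include <sys/types.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <net/bpf.h>
#include <net/if.h>

#define ETHERTYPE_ECAT 0x88A4

/* Accept only frames whose ethertype (offset 12) is EtherCAT. */
static struct bpf_insn ecat_filter[] = {
    BPF_STMT(BPF_LD + BPF_H + BPF_ABS, 12),
    BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, ETHERTYPE_ECAT, 0, 1),
    BPF_STMT(BPF_RET + BPF_K, (u_int)-1),  /* match: keep whole frame */
    BPF_STMT(BPF_RET + BPF_K, 0),          /* no match: drop          */
};

int open_ecat_bpf(const char *ifname)      /* e.g. "wm0" */
{
    struct ifreq ifr;
    struct bpf_program prog = { 4, ecat_filter };
    u_int immediate = 1;
    int fd = open("/dev/bpf", O_RDWR);

    memset(&ifr, 0, sizeof(ifr));
    strlcpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name));
    ioctl(fd, BIOCSETIF, &ifr);            /* bind to the interface    */
    ioctl(fd, BIOCIMMEDIATE, &immediate);  /* deliver as frames arrive */
    ioctl(fd, BIOCSETF, &prog);            /* install ethertype filter */
    return fd;
}

You then write() complete raw frames to the descriptor, and read()
incoming ones (each prefixed with a struct bpf_hdr).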

For very tight control, I'd recommend the packet filtering mechanism
using the filter discussed here:
http://community.qnx.com/sf/discussion/do/listPosts/projects.networking/discussion.io_net_migration.topc1482

It sits at the very bottom of the stack, in the layer 2 processing,
which is multi-threaded, so it is far more likely to have better
responsiveness than other filters in the stack.
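
The shape of such a hook, using the NetBSD-derived pfil interface that
io-pkt inherits, is roughly the following (a sketch rather than a
tested module; everything other than the pfil calls is made up):

#include <sys/param.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <net/pfil.h>

/* Layer-2 hook: called for every frame going in or out of the
 * interface.  Return 0 to let the stack keep processing the frame;
 * non-zero to tell it the frame has been consumed. */
static int
ecat_hook(void *arg, struct mbuf **mp, struct ifnet *ifp, int dir)
{
    /* claim frames with ethertype 0x88A4 here, pass everything else */
    return 0;
}

int
ecat_filter_attach(struct ifnet *ifp)
{
    /* hook both directions on the interface's own pfil head */
    return pfil_add_hook(ecat_hook, NULL,
                         PFIL_IN | PFIL_OUT | PFIL_WAITOK,
                         &ifp->if_pfil);
}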

	Robert.