Andrew Boyd(deleted)
01/06/2009 10:14 AM
post19438
> sub 200us package frame rate?
A year or so ago, another customer needed that kind
of packet turnaround time, and he did it with Qnet.
Initial testing showed that most of the time, the
time limits were easily met - the customer had fast
x86 hardware, with gige - but occasionally there was
a large delay. After some investigation, it turned
out that there was other, unused hardware that needed
to be disabled in the BIOS, and after that, the time
limits were met.
Similarly, you will have to ensure that other
hardware (and its interrupts) is serviced at a
priority below that of your network processing,
regardless of who writes your network driver
and protocol.
My suggestion is to try it with the stock TCP/IP
stack, and see what the numbers are. The good thing
is that ALL of the source to the io-pkt drivers
and protocols is available on the foundry, which
is essential for this kind of thing.
For example, you might find that the network
driver is periodically probing the PHY (I've
pretty well gotten rid of that, but I'm sure
there are a few drivers left), which spins
on the MII reads and writes.
--
aboyd
Andrew Boyd(deleted)
01/06/2009 10:19 AM
post19440
One more comment: for best possible performance with
io-pkt, use a "native" io-pkt driver (e.g. sys/dev_qnx)
NOT an io-net driver with the shim.
Two reasons for this:
1) performance is better with a native io-pkt driver
(no thread switch or mbuf/npkt translation), and
2) source is available for the native io-pkt
drivers. Despite our best efforts, this is not the
case with the io-net drivers.
--
aboyd