J Sinton (deleted)
Timing resolution for tcpdump / libpcap
05/23/2014 4:11 PM
post110480
Hi,
I have an application that must send and receive various UDP packets (unicast, broadcast, and multicast - it's for a
test simulator) within fairly small time windows (500 µs or less). I am currently running on desktop test hardware: QNX
localhost 6.5.0 2010/07/09-14:44:03EDT x86pc x86. The specific Ethernet adapter info is below.
By setting the tick resolution to a small value (around 10 µs), putting the NIC driver into promiscuous mode (via the
driver options in enum), and hand-tuning the delay in timer calls, I can send UDP packets as I need using libpcap,
which is what I've done on other non-QNX platforms.
However, when I use libpcap to receive packets, they appear to be clumped together and delivered only at specific time
intervals (about every 8.39 ms on this system). I've tried using both pcap_loop and pcap_dispatch, changing the
priority of the receiving thread all the way up to 63 (priority seems to make no difference, high or low), and
verifying with Wireshark on another platform that the incoming packets arrive at the expected times and are properly
separated in time. The load on the machine during testing is typically very low - hogs reports idle in the very high
90s - and the problem persists even when sending very few packets (e.g. one every 5 ms). All time measurements in the
application code use CLOCK_MONOTONIC. nanosleep is used on the sending side between outgoing messages, and the
receiver is message-driven from pcap_loop or pcap_dispatch in a high-priority thread with nothing else (no GUI)
running on the box.
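The receive side is set up roughly like this (a sketch; the device name is a placeholder, and I've annotated the to_ms argument of pcap_open_live, since on BPF-style capture mechanisms that read timeout batches packets before delivery - I don't know whether QNX's io-pkt capture path honours it the same way):

```c
#include <pcap.h>
#include <stdio.h>

/* Sketch of the receive path.  On BPF-style capture back ends, packets
 * are buffered in the kernel and handed up in batches no later than the
 * to_ms read timeout, so a large to_ms can make arrivals look "clumped"
 * even when the wire timing is fine. */

static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                      const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("%ld.%06ld len=%u\n",
           (long)h->ts.tv_sec, (long)h->ts.tv_usec, h->caplen);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    /* to_ms = 1: request the smallest batching window the mechanism
     * supports (0 can mean "wait indefinitely" on some platforms,
     * so 1 is the safer minimum). */
    pcap_t *p = pcap_open_live("en0", 65535, 1 /* promisc */,
                               1 /* to_ms */, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_loop(p, -1, on_packet, NULL);
    pcap_close(p);
    return 0;
}
```

I've tried small to_ms values here without any change in the clumping, which is part of what makes me suspect an OS-level granularity rather than the libpcap timeout.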
When I investigated further on the machine with tcpdump, I discovered that the messages shown there seem to be subject
to the same clumping: groups of messages all have the same timestamp, and the groups are separated by time gaps of
about 8.4 ms.
Knowing little about QNX, it looks like there is a timing granularity (roughly 10 ms) that affects both libpcap and
tcpdump, even though the granularity at the user application level is successfully much finer (roughly 10 µs).
libpcap normally works wonderfully for what I need once the OS tick is OK (Linux and Windows), but I just need to
deliver the application, and I'm not tied to libpcap if this is a difficult-to-surmount issue.
Two questions:
1. Where could this timing granularity be coming from, and is there any way to configure or code around it?
2. Using some other approach, such as BPF (which seems to be old) or lsm-nraw, can anyone confirm that they are able to
receive Ethernet messages at the application level with timing granularity of 200 µs or better? What general
hardware/OS/software approach did you use to achieve it?
Many thanks for any suggestions for solutions or further investigation. I'm stuck!
Best regards,
John
Ethernet driver / NIC info
io-pkt-v4-hc -d speedo promiscuous -p tcpip
INTEL 82558 Ethernet Controller
Physical Node ID ........................... 0019DB BB621E
Current Physical Node ID ................... 0019DB BB621E
Current Operation Rate ..................... 100.00 Mb/s full-duplex
Active Interface Type ...................... MII
Active PHY address ....................... 1
Maximum Transmittable data Unit ............ 1514
Maximum Receivable data Unit ............... 1514
Hardware Interrupt ......................... 0x5
I/O Aperture ............................... 0xef00 - 0xef3f
Memory Aperture ............................ 0xfdcff000 - 0xfdcfffff
ROM Aperture ............................... 0x80dbf6b07fa5a04
Promiscuous Mode ........................... On
Multicast Support .......................... Enabled