Forum Topic - devnp-e1000 Driver Tuning: (8 Items)
   
devnp-e1000 Driver Tuning  
We're seeing intermittent packet delays and packet loss in high-traffic situations using the native io-pkt e1000 driver with the Intel 82576 quad NIC (device ID 10c9h). Are there driver tuning parameters, beyond those listed in the online docs, that could be relevant or helpful? If it matters, we're running our time-critical traffic on a second instance of io-pkt with the following driver options: -opci=1,vid=0x8086,did=0x10c9,priority=32,receive=512,transmit=4096.

In our standard configuration, a master computer exchanges 4000 packets per second with two peripherals (8000 packets total: 4000 in, 4000 out). The packet loss occurs with both peripherals running; with only one peripheral connected, we still see packet delays but little or no packet loss. nicinfo shows non-zero counts only on the "OK" lines; all error counts are zero. The packets are fairly small: 103 bytes from the master to the peripherals and 191 bytes from the peripherals to the master.

I'm not familiar with the driver internals, but our code runs fine on QNX 6.4.1 with a different NIC, the Intel 80003ES2LAN (device ID 1096h). We're migrating to the 82576 because of hardware obsolescence. Could something funky be going on in the 82576 portion of the driver?

Thanks.

Mark
Re: devnp-e1000 Driver Tuning  
The only tuning parameter is the interrupt throttling rate, but the driver
has to be modified to change this value. Also, the 6.4.1 driver is rather
old; all updates have gone into the 6.5.0 driver.


On 11-08-25 4:43 PM, "Mark Dowdy" <community-noreply@qnx.com> wrote:


-- 
Hugh Brown                      (613) 591-0931 ext. 2209 (voice)
QNX Software Systems Limited.   (613) 591-3579           (fax)
175 Terence Matthews Cres.       email:  hsbrown@qnx.com
Kanata, Ontario, Canada.
K2M 1W8
 


Re: devnp-e1000 Driver Tuning  
Interesting. Any idea why the 6.5.0 e1000 driver seems to perform worse than the 6.4.1 equivalent? FWIW, we had a problem with this driver a while back and sent our machine to Canada for debugging; the result was patch 2283, which we are running. Searching the forums, it looks like patches 2338 and 2530 might also be relevant. Any chance that either, or both, of those would help? Any other ideas on how to isolate our performance issue? Thanks.

Mark
Re: devnp-e1000 Driver Tuning  
I have no idea why one driver should perform worse than the other. Please
try the attached driver and see if it makes any difference.

Hugh.


On 11-08-26 12:49 PM, "Mark Dowdy" <community-noreply@qnx.com> wrote:

 


Attachment: Text devnp-e1000.so 245.67 KB
Re: devnp-e1000 Driver Tuning  
Yikes. Using that driver, I lost all network connectivity and my system hung. Back to the 'old' driver.
Re: devnp-e1000 Driver Tuning  
I'll have to take a look at it. Do you have a test program that can
reproduce the issue?


On 11-08-26 8:30 PM, "Mark Dowdy" <community-noreply@qnx.com> wrote:

 


Re: devnp-e1000 Driver Tuning  
Not at the moment. I'll have to spend some time to see if I can create a standalone test program to recreate the problem.

Mark
Re: devnp-e1000 Driver Tuning  
I was finally able to track the problem down to the Berkeley Packet Filter (BPF). I increased the size of the BPF receive buffer and the packet drops disappeared.

Mark