Forum Topic - flushing bpf writes: (9 Items)
   
flushing bpf writes  
Hi,

Hoping I can find someone willing to provide some insight/suggestions regarding the following problem:

- I am using io-pkt with the mpc85xx driver under QNX 6.5.
- My application is the sole user of that nic and exclusively uses bpf for r/w.

Here is my scenario:

- link is up

- app starts and can successfully r/w bpf packets

- link goes down

- reads, of course, begin to return zero bytes, but writes continue to succeed for a number of additional packets (these writes are getting buffered somewhere, as explained below) until the writes finally indicate failure

- my app terminates

- the link goes back up (no app is running and observed that no packet i/o occurs)

- Some time later (possibly hours) I restart my app, and as soon as I open the bpf device and execute my first write, the old stale packets from my prior run get transmitted, followed by my new writes. (Note that with the link down I could run my app a number of times, and when the link finally comes back up and I run my app, I see stale data from several of those prior invocations get sent.)

The only way I have found to get rid of the stale data is to slay io-pkt and restart it. According to the bpf docs, bpf writes are unbuffered, and BIOCFLUSH flushes/discards the buffer of incoming packets within the bpf framework, but it appears that io-pkt and/or the /dev/bpf device is keeping buffers of stale data around.
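
For context, here is a stripped-down sketch of how my app attaches to the interface over bpf (this is not my actual code; the "tsec0" interface name and the bare-bones error handling are just placeholders for this post):

#include <sys/types.h>
#include <sys/ioctl.h>
#include <net/bpf.h>
#include <net/if.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Open the clonable bpf device and attach it to a single interface,
 * e.g. open_bpf("tsec0"). */
int open_bpf(const char *ifname)
{
    struct ifreq ifr;
    int fd = open("/dev/bpf", O_RDWR);

    if (fd == -1)
        return -1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name));
    if (ioctl(fd, BIOCSETIF, &ifr) == -1) {
        close(fd);
        return -1;
    }

    /* BIOCFLUSH only discards buffered *incoming* packets; it does not
     * touch whatever is still queued for transmit, which is my problem. */
    ioctl(fd, BIOCFLUSH, NULL);

    return fd;
}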

Anyone have an idea of how to flush those stale buffers programmatically?

Any insight is GREATLY appreciated,
Dan

RE: flushing bpf writes  
It looks like you should be able to issue the programming equivalent of "ifconfig <int> down", which appears to purge the queue. A typical driver will not purge the queue when it detects that the link has gone down, since that is probably a temporary condition. "ifconfig <int> up" will enable the interface again.

Depending on how temporary this is, another option is to modify the driver so that its if_start() routine is called when it detects that the link is back up, at which point the queue will be drained and transmitted. In your scenario the data is sent later because nothing triggers driver transmission again until your application starts sending data, which causes the driver routines to execute.
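
Very roughly, the driver-side idea looks like the fragment below. It uses the NetBSD-style ifnet fields that io-pkt drivers work with; the hook name is made up and the real mpc85xx link-state handling will look different:

/* Hypothetical hook called from the driver's link-state/MII handling
 * once it notices the link has come back up. */
static void
drvr_link_up_hook(struct ifnet *ifp)
{
    /* If packets accumulated on the send queue while the link was down,
     * kick the start routine so they are drained and transmitted now
     * rather than waiting for the next write from the application. */
    if ((ifp->if_flags & IFF_RUNNING) && !IFQ_IS_EMPTY(&ifp->if_snd))
        (*ifp->if_start)(ifp);
}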

Dave 

Re: RE: flushing bpf writes  
Thanks for the reply, Dave.

With regard to "ifconfig <int> up/down", is there a better/preferred method of accomplishing this in a C/C++ app other 
than issuing a "system()" call?

Dan
Re: RE: flushing bpf writes  
Toggle IFF_UP via the SIOCSIFFLAGS ioctl.
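
Roughly like this (untested sketch, error handling trimmed; "tsec0" stands in for whatever your interface is called):

#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <string.h>
#include <unistd.h>

/* Toggle the IFF_UP flag on an interface: set_iff_up("tsec0", 0) to bring
 * it down (which purges the tx queue), then set_iff_up("tsec0", 1). */
static int set_iff_up(const char *ifname, int up)
{
    struct ifreq ifr;
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    if (s == -1)
        return -1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name));

    /* Read the current flags, flip IFF_UP, write them back. */
    if (ioctl(s, SIOCGIFFLAGS, &ifr) == -1) {
        close(s);
        return -1;
    }
    if (up)
        ifr.ifr_flags |= IFF_UP;
    else
        ifr.ifr_flags &= ~IFF_UP;
    if (ioctl(s, SIOCSIFFLAGS, &ifr) == -1) {
        close(s);
        return -1;
    }

    close(s);
    return 0;
}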

-seanb

Re: RE: flushing bpf writes  
Fixed!  Thanks Dave and Sean.

Toggling IFF_UP does indeed flush/purge those tx queues.  Exactly what I was looking for.

Greatly appreciated.

Dan
Re: flushing bpf writes  
Hi,

I wasn't sure whether I should start a different discussion thread for this, but it is somewhat related to my original tx flushing question.

This is another quasi-newbie question for which I humbly hope for some help: how do I go about setting/limiting the size of the tx queues (i.e., the ones that I am flushing by bringing the interface down/up via IFF_UP/SIOCSIFFLAGS)? With the link down but the network interface up (i.e., everything is up but the cable is disconnected), I observed that I can write about 500 packets of 1500 bytes each before my bpf write finally fails. I would like to limit that to only about 10-20 packets of 1500 bytes each.

I have tried changing the "transmit=num" setting for the mpc85xx driver to various smaller values, but it didn't seem to affect my results.

Thanks again in advance,
Dan
RE: flushing bpf writes  
I am not sure what the error you received represents (ENETDOWN?), but the interface queue should only be able to store 256 packets (anything beyond that is dropped because the queue is full). I don't see a way to change this setting for tx.
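
If you have not already, it may be worth logging errno on the failing write to see which case you are actually hitting; a hypothetical wrapper (not from any shipping code) would be along these lines:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Wrap the bpf write and report which errno the failure corresponds to. */
ssize_t bpf_send(int bpf_fd, const void *pkt, size_t len)
{
    ssize_t n = write(bpf_fd, pkt, len);

    if (n == -1) {
        if (errno == ENOBUFS)
            fprintf(stderr, "tx queue full: %s\n", strerror(errno));
        else if (errno == ENETDOWN)
            fprintf(stderr, "interface is down: %s\n", strerror(errno));
        else
            fprintf(stderr, "bpf write failed: %s\n", strerror(errno));
    }
    return n;
}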

Dave

Re: RE: flushing bpf writes  
Thanks for the reply, Dave.

What is the effect, if any, on the tx queues if I launch io-pkt with the 'cache=0' option?

RE: RE: flushing bpf writes  
That option sets how packet buffer pools are allocated via mmap() (PROT_NOCACHE). It will not affect the length of the 
interface TX queue. 
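
For illustration only, the style of allocation that option relates to looks roughly like this on QNX (io-pkt does this internally for its buffer pools; it is not something your application needs to replicate):

#include <sys/mman.h>
#include <stddef.h>

/* Anonymous mapping with the QNX-specific PROT_NOCACHE flag,
 * giving an uncached buffer. */
void *alloc_nocache(size_t size)
{
    void *p = mmap(NULL, size,
                   PROT_READ | PROT_WRITE | PROT_NOCACHE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);

    return (p == MAP_FAILED) ? NULL : p;
}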

Dave
