06/25/2012 4:08 PM
post93854
|
It looks like you should be able to issue the programmatic equivalent of "ifconfig <int> down", which appears to purge the
queue. A typical driver will not purge the queue on its own when it detects that the link is down, since a down link is
presumed to be temporary. "ifconfig <int> up" will enable the interface again.
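Something along these lines should do it from your application (untested sketch; pass your interface's name, i.e.
whatever "ifconfig" reports for the mpc85xx NIC):

    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>

    /* Bounce an interface: the programmatic equivalent of
     * "ifconfig <int> down" followed by "ifconfig <int> up". */
    static int
    if_bounce(const char *ifname)
    {
        struct ifreq ifr;
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        if (s == -1)
            return -1;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name) - 1);

        if (ioctl(s, SIOCGIFFLAGS, &ifr) == -1)
            goto bad;

        ifr.ifr_flags &= ~IFF_UP;       /* "ifconfig <int> down" */
        if (ioctl(s, SIOCSIFFLAGS, &ifr) == -1)
            goto bad;

        ifr.ifr_flags |= IFF_UP;        /* "ifconfig <int> up" */
        if (ioctl(s, SIOCSIFFLAGS, &ifr) == -1)
            goto bad;

        close(s);
        return 0;
    bad:
        close(s);
        return -1;
    }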
Depending on how temporary the outage is, another option is to modify the driver so that its if_start() is called when
it detects that the link is back up, at which point the queue will be drained and transmitted. In your scenario the
stale data is sent much later because nothing triggers driver transmission again until your application starts sending
data, which causes the driver routines to execute.
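Roughly, in the driver's link-status (MII) callback, something like this untested sketch; the softc fields and helper
names here are placeholders, not the actual mpc85xx source:

    /* When the PHY reports the link has come back up, kick if_start()
     * so packets queued in if_snd drain immediately rather than
     * waiting for the application's next write. */
    static void
    dev_mii_statchg(struct ifnet *ifp)
    {
        struct dev_softc *sc = ifp->if_softc;   /* placeholder softc */
        int up = dev_link_is_up(sc);            /* placeholder PHY check */

        if (up && !sc->sc_link_up && !IFQ_IS_EMPTY(&ifp->if_snd))
            (*ifp->if_start)(ifp);              /* drain the send queue */
        sc->sc_link_up = up;
    }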
Dave
> -----Original Message-----
> From: Dan Moser [mailto:community-noreply@qnx.com]
> Sent: June-25-12 2:08 PM
> To: drivers-networking
> Subject: flushing bpf writes
>
> Hi,
>
> Hoping I can find someone willing to provide some insight/suggestions
> regarding the following problem:
>
> - I am using io-pkt with the mpc85xx driver under QNX 6.5.
> - My application is the sole user of that nic and exclusively uses bpf
> for r/w.
>
> Here is my scenario:
>
> - link is up
>
> - app starts and can successfully r/w bpf packets
>
> - link goes down
>
> - reads, of course, begin to return zero bytes, but writes continue to
> succeed for a number of additional packets (these writes are getting
> buffered somewhere, as explained below) until the writes finally
> indicate failure
>
> - my app terminates
>
> - the link goes back up (no app is running, and I observed that no
> packet I/O occurs)
>
> - Some time later (could be hours) I restart my app, and once I open
> the bpf device and execute my first write, the old stale packets from
> my prior run get transmitted, followed by my new writes (note that with
> the link down I could run my app a number of times, and then, when the
> link is finally up and I run my app, I see stale data from several of
> those prior invocations get sent)
>
> The only way I have found to get rid of the stale data is to slay
> io-pkt and restart it. According to the bpf docs, bpf writes are
> unbuffered, and BIOCFLUSH flushes/discards the buffer of incoming
> packets within the bpf framework; however, it appears that io-pkt
> and/or the /dev/bpf device is keeping buffers of stale data around.
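>
> For completeness, the receive-side flush the docs describe is just an
> ioctl on the bpf descriptor; it does not touch the transmit path:
>
>     /* bpf_fd is the open /dev/bpf file descriptor;
>      * BIOCFLUSH is from <net/bpf.h>, ioctl() from <sys/ioctl.h>. */
>     if (ioctl(bpf_fd, BIOCFLUSH) == -1)
>         perror("BIOCFLUSH");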
>
> Anyone have an idea of how to flush those stale buffers
> programmatically?
>
> Any insight is GREATLY appreciated,
> Dan
>
> _______________________________________________
>
> Networking Drivers
> http://community.qnx.com/sf/go/post93853
> To cancel your subscription to this discussion, please e-mail drivers-
> networking-unsubscribe@community.qnx.com
|
|
|
06/27/2012 12:51 AM
post93892
|
Hi,
Wasn't sure if I should start a different discussion thread for this, but it is somewhat related to my original tx
flushing question.
This is another quasi-newbie question for which I humbly hope for some help: how do I go about setting/limiting the
size of the tx queues (i.e., the ones I am flushing by bringing the interface down/up via IFF_UP/SIOCSIFFLAGS)? With
the link down but the network interface up (i.e., everything is up but the cable is disconnected), I observed that I
can write about 500 packets of 1500 bytes each before my bpf write finally fails. I would like to limit that to only
about 10-20 packets of 1500 bytes each.
I have tried changing the "transmit=num" setting for the mpc85xx driver to various smaller values, but that didn't
seem to affect my results.
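From poking around NetBSD-derived sources, my guess is that the limit lives in the interface send queue (ifp->if_snd)
rather than in the driver's DMA descriptors, which would explain why "transmit=num" has no effect. Something like the
following at driver attach time, if io-pkt follows the NetBSD convention (this is a guess, not the actual mpc85xx
code):

    /* Cap the interface send queue at ~20 packets instead of the stack
     * default.  IFQ_SET_MAXLEN/IFQ_SET_READY are the NetBSD-style
     * macros from <net/if.h>, used where the driver attaches the
     * interface (if_attach()/ether_ifattach()). */
    IFQ_SET_MAXLEN(&ifp->if_snd, 20);
    IFQ_SET_READY(&ifp->if_snd);

Is ifq_maxlen the right knob, and is there any way to set it without modifying the driver?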
Thanks again in advance,
Dan
|
|
|