Forum Topic - Network Driver and mbuf's: (5 Items)
Network Driver and mbuf's  
I've been having an "exciting" debugging experience with an io-pkt network driver for about a month now.   I'm seeing 
very strange behavior and I'd like to know if anyone has seen anything like this, and might have a hint as to where I'm 
going wrong.

I started with the sam.c code.   The underlying hardware code comes from Linux but seems quite stable.  I'm able to run 
it in a program, and it seems to do what is intended.

A lot of things seem to work OK.  I can telnet out from the node and the results look fine, even when large 1514-byte packets are being sent.

When I telnet to the CPU, things work differently.

In particular, I'm seeing some very strange stuff in the xxx_start (transmit) routine.
On login I sometimes see junk after the "login:" prompt, usually something like "@ @ @ @ @".
This is probably random junk in a buffer caused by a packet length that is too big.
   
Upon close inspection, this junk really is in the mbuf being dequeued in the xxx_start routine.
Since I assume that io-pkt and the TCP/IP protocol module generally work correctly, the problem is presumably mine.

I do not manipulate the mbuf in this routine in any way; I merely use the copy routine (m_copydata()) to copy it over to my hardware buffer.  My guess is that I'm throwing io-pkt off in the receive routine instead.
In that routine I take the mbuf returned from m_getcl_wtp() and copy the received data into it using m_copyback().
This of course modifies the mbuf, and I'm concerned that io-pkt doesn't like that.
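
For context, the transmit side boils down to roughly the following.  This is only a sketch, not the actual driver code: hw_tx_buffer and hw_kick_tx() are made-up stand-ins for the hardware-specific parts.

#include <sys/param.h>
#include <sys/mbuf.h>
#include <net/if.h>

/* Placeholders for the hardware side -- not the real driver code. */
static char hw_tx_buffer[2048];

static void
hw_kick_tx(size_t len)
{
    (void)len;    /* tell the chip to send 'len' bytes; hardware-specific */
}

static void
xxx_start(struct ifnet *ifp)
{
    struct mbuf *m;

    for (;;) {
        IFQ_DEQUEUE(&ifp->if_snd, m);
        if (m == NULL)
            break;

        /* m_copydata() walks the whole chain, so hw_tx_buffer ends up with
         * m_pkthdr.len contiguous bytes no matter how the packet is split
         * across mbufs. */
        m_copydata(m, 0, m->m_pkthdr.len, hw_tx_buffer);
        hw_kick_tx((size_t)m->m_pkthdr.len);

        m_freem(m);
    }
}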

In any event, I started looking closely at the mbufs in the xxx_start routine.  I check how many mbufs are chained in each packet and how long each one is.  That's where I see some weird inconsistencies.

The typical maximum mbuf length I see is 172, but sometimes I'll see an almost random chain of lengths, e.g.
66 165 172 172 172 172 172 172 172 80 = 1514.

But then at other times I see something like:
66 1448.  I assume in the latter case the second mbuf has a cluster, but there's no apparent reason why.
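
For what it's worth, those numbers come from a debug loop along these lines (dump_mbuf_chain is just an illustrative helper, not part of the real driver):

#include <sys/param.h>
#include <sys/mbuf.h>
#include <stdio.h>

/* Debug helper: print the length of each mbuf in the chain and compare
 * the sum with the packet-header length. */
static void
dump_mbuf_chain(const struct mbuf *m0)
{
    const struct mbuf *m;
    int total = 0;

    for (m = m0; m != NULL; m = m->m_next) {
        printf("%d ", m->m_len);
        total += m->m_len;
    }
    printf("= %d (m_pkthdr.len = %d)\n", total,
           (m0->m_flags & M_PKTHDR) ? m0->m_pkthdr.len : -1);
}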


So, some hard questions.  Is this mbuf behavior weird or normal?
Is there anything wrong with using m_copydata() and m_copyback() on these mbufs that I get from io-pkt?  Is there anything else obvious that I could be doing wrong?

Thanks for any words of wisdom!
Re: Network Driver and mbuf's  
One last thing.  After stopping and restarting io-pkt with this test driver a few times, I almost always end up with a crashed system.  The shell still works, but commands are not found.  Programs found in /proc/boot will run.  This strikes me as rather weird.  Why would a crashed io-pkt prevent the file system from working?  It's not a case of a runaway CPU; I have four and they all seem to be quiet.
Re: Network Driver and mbuf's  
So, after posting, I read through all the posts here that I could find to see what they had to say about mbufs.  There's plenty of talk about how they are supposed to be 2K in size.

I don't understand what that means with respect to what I've seen being passed to my driver.
Does it suggest that mbufs are not being used in the defined way?
Am I supposed to just memcpy() in and out of m->m_data?

Any clarification?
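
In case it helps anyone later, the chain-walking version of that memcpy() idea would look roughly like this.  flatten_chain is a hypothetical helper, not anything from io-pkt; the point is that whether a given mbuf uses its internal storage or a 2K cluster, m_data and m_len describe the valid bytes of that link.

#include <sys/param.h>
#include <sys/mbuf.h>
#include <string.h>

/* Copy a transmit chain into a flat buffer by hand, one segment at a
 * time.  Functionally the same as m_copydata(m0, 0, m0->m_pkthdr.len,
 * buf), just spelled out.  Returns bytes copied, or -1 if it won't fit. */
static int
flatten_chain(const struct mbuf *m0, char *buf, size_t bufsize)
{
    const struct mbuf *m;
    size_t off = 0;

    for (m = m0; m != NULL; m = m->m_next) {
        if (off + (size_t)m->m_len > bufsize)
            return -1;
        memcpy(buf + off, mtod(m, const char *), (size_t)m->m_len);
        off += (size_t)m->m_len;
    }
    return (int)off;
}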
Re: Network Driver and mbuf's  
So, the previous question(s) are moot.  I discovered the hard way that m_copyback() does NOT set the packet length.
Re: Network Driver and mbuf's  
Apparently I'm providing my own tech support here.  The answer to my question seems to be yes.  I found my problem,
which is that m_copyback() does not set m_len :-(.
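
In other words, after filling the mbuf, the driver has to set the lengths itself before handing the packet to the stack.  A minimal sketch of the receive path with that fix follows; xxx_receive, hw_rx_frame, and hw_rx_len are made-up names, the exact m_getcl_wtp() arguments may differ from your driver, and the io-pkt driver headers that declare m_getcl_wtp() and struct nw_work_thread are assumed to be included.

#include <sys/param.h>
#include <sys/mbuf.h>
#include <net/if.h>
/* ...plus the io-pkt driver headers that declare m_getcl_wtp() and
 * struct nw_work_thread. */

/* Receive-side sketch with the fix: set m_len and m_pkthdr.len explicitly
 * after copying the frame in.  hw_rx_frame/hw_rx_len stand in for the
 * hardware-specific receive buffer and its length. */
static void
xxx_receive(struct ifnet *ifp, struct nw_work_thread *wtp,
            char *hw_rx_frame, int hw_rx_len)
{
    struct mbuf *m;

    m = m_getcl_wtp(M_DONTWAIT, MT_DATA, M_PKTHDR, wtp);
    if (m == NULL) {
        ifp->if_ierrors++;
        return;
    }

    m_copyback(m, 0, hw_rx_len, hw_rx_frame);

    /* The missing piece: tell the stack how much data is actually there. */
    m->m_len = hw_rx_len;
    m->m_pkthdr.len = hw_rx_len;
    m->m_pkthdr.rcvif = ifp;

    (*ifp->if_input)(ifp, m);
}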

So consider this thread closed.