devnp driver for imx53  
Hi,

I am seeing corrupt packets being transmitted (about 1 in 10) that I am trying to debug.

Using the driver logging, which dumps the mbuf, I can see that the packet data is correct.

I also added the same logging code just before the FEC is told that the packets are ready to be transmitted.

The packets are correct there as well.
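
(Roughly, the kind of dump I mean just walks the mbuf chain using the standard BSD m_data / m_len / m_next fields; the helper below is only a sketch for illustration, not the actual driver code.)

  #include <stdio.h>
  #include <sys/mbuf.h>

  /* Debug sketch: hex-dump every segment of an mbuf chain. */
  static void dump_mbuf_chain(const struct mbuf *m)
  {
      int i;

      for (; m != NULL; m = m->m_next) {
          const unsigned char *p = mtod(m, const unsigned char *);

          printf("mbuf %p, m_len %d:\n", (const void *)m, m->m_len);
          for (i = 0; i < m->m_len; i++) {
              printf("%02x%c", p[i], ((i % 16) == 15) ? '\n' : ' ');
          }
          printf("\n");
      }
  }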


In transmit.c I see the lines....

  ...
  uint32_t  mbuf_phys_addr = mbuf_phys(m2);
  uint16_t  mbuf_phys_len  = m2->m_len;

  tx_bd->buffer = mbuf_phys_addr;
  tx_bd->length = mbuf_phys_len;
  CACHE_FLUSH(&mx51->cachectl, m2->m_data, mbuf_phys_addr, m2->m_len);
  ...


Is there any way to dump "tx_bd->buffer"? It's of type "uint32_t". How do I access it?
I have tried, but I get either a hang or a segmentation violation (possibly because the access is denied).

I see the corruption with both TCP and UDP packets.
The throughput is extremely low: only 2-3 packets at a time.
The network is a private point-to-point link (QNX to PC).

Any ideas? Help appreciated.

Thanks
Simon
Re: devnp driver for imx53  
A quick stab in the dark: since it looks like it is a physical address,
what if you map it in, dump the contents, and unmap it?
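
Something roughly like this (untested sketch only; the length would come from tx_bd->length):

  #include <stdio.h>
  #include <stdint.h>
  #include <sys/mman.h>

  /* Debug sketch: map a physical address, hex-dump it, unmap it. */
  static void dump_phys(uint32_t phys, size_t len)
  {
      size_t         i;
      unsigned char *p = mmap_device_memory(NULL, len,
                             PROT_READ | PROT_NOCACHE, 0, phys);

      if (p == MAP_FAILED) {
          perror("mmap_device_memory");
          return;
      }
      for (i = 0; i < len; i++) {
          printf("%02x%c", p[i], ((i % 16) == 15) ? '\n' : ' ');
      }
      printf("\n");
      munmap_device_memory(p, len);
  }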

On 13-11-11 11:08 AM, Simon Conway wrote:
> Is there any way to dump "tx_bd->buffer"? It's of type "uint32_t". How do I access it?
> I have tried, but I get either a hang or a segmentation violation (possibly because the access is denied).

Re: devnp driver for imx53  
Hi,

Thanks for the reply

> A quick stab in the dark: since it looks like it is a physical address,
> what if you map it in, dump the contents, and unmap it?
> 

I realise now that this was originally converted from the logical address, so the original driver
trace, which uses the logical address, is sufficient.

Simon
Beta test: high speed raw ethernet interface based on DPDK (1GB Intel adapters)
Hi,

attached is a test application from the DPDK framework (dpdk.org) that
works with QNX 6.4 and QNX 6.5.
All 1GB adapters from Intel are supported. "testpmd" is completely
independent of the network infrastructure of QNX Neutrino.

At least a dual-core CPU and a 1GB Intel Ethernet adapter are required.

Example application start for a dual-core system and a single-channel
I210 adapter: "testpmd -c 3 -n 1"

For a different Intel adapter, its PCI sub-device ID must be specified with
the -D switch: "testpmd -c 3 -n1 -D nnn" ("nnn" is a hex number without the "0x" prefix).
The application terminates with an abort() call if the requested PCI
adapter cannot be found.

Start and stop of the "sniff function":
testpmd> set fwd rxonly
testpmd> start
testpmd> stop

Start and stop of the "UDP packet generator":
testpmd> set fwd txonly
testpmd> start
testpmd> stop

The maximum rate is limited to 40,000 packets/sec by a nanospin()
delay of 20 us.
(Slower networks will drop packets; "testpmd" must be used carefully,
at your own risk.)
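
(Conceptually the pacing is just a spin delay after each transmit; the snippet below is only an illustration with a hypothetical send_one_packet() call, not the actual testpmd code.)

  #include <time.h>

  extern void send_one_packet(void);   /* hypothetical transmit call */

  /* Illustration only: a ~20 us busy-wait between packets keeps the
     transmit rate in the tens of thousands of packets per second. */
  static void paced_tx_loop(void)
  {
      for (;;) {
          send_one_packet();
          nanospin_ns(20 * 1000);      /* spin for 20 us */
      }
  }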

A lot of additional test cases are included; some are not supported
because of architectural differences between QNX6 and Linux.
Only the "Link status change" interrupt is currently supported.

Please let me know if there are issues with your 1GB Intel adapter.

Regards

--Armin

Attachment: testpmd-0.9.gz (256.51 KB)