Forum Topic - io-pkt leak data section - ppc8260 ethernet driver:
   
io-pkt leak data section - ppc8260 ethernet driver  
Hello,

We see on our system that io-pkt, while running, shows a memory leak in different scenarios. The memory in the data
section grows until the system has no resources (RAM) left and crashes!

----------------------------------------------------------------------------------
QNX Version used:  
------------------------------------
QNX6.5 SP2 - PPC Platform

NIC driver used:
------------------------------------
from BSP 8260 pq2fads (QNX      PQ2FADS BSP V1.0.2 (File revision: 1.43))

Start sequence of drivers (extract from a startup script):
------------------------------------
mount -T io-pkt -o channel=1,mac=$MAC1 devn-ppc8260-nxa.so
mount -T io-pkt -o channel=2,mac=$MAC2 devn-ppc8260-nxa.so
mount -T io-pkt -o channel=3,mac=$MAC3 devn-ppc8260-nxa.so

netmanager -r all -f $NETCONFIG

mount -T io-pkt -o bind=ip0,resolve=dns,resolve=autoresolve /usr/lib/lsm-qnet.so
----------------------------------------------------------------------------------

Other embedded systems are also connected to the network. If we perform ls /net/$NODENAME, the new nodes appear.

Simply running "df -h" reproduces the leak.
The issue is that our application does not execute this command - yet we can see that roughly every 4.5 hours the data
section (pidin mem) of io-pkt increases by 4 KB.

Any help is welcome; if more information is needed, please let me know.

Kind Regards
Thomas L.








Attachment: Text pidin_mem_log.txt 1.04 MB
Re: io-pkt leak data section - ppc8260 ethernet driver  
This is most probably a driver issue, but this driver wasn't written by
QNX and we don't have the source code.



Re: io-pkt leak data section - ppc8260 ethernet driver  
Hi,

if needed, wanted, or allowed from QNX's point of view, I can attach the driver code.

Kind Regards
Re: io-pkt leak data section - ppc8260 ethernet driver  
If you don't have a support contract with QNX, you will have to speak to
your sales rep to sort something out.




Re: io-pkt leak data section - ppc8260 ethernet driver  
Some more test results:

I just started the OS and killed all Ethernet-related processes.
Then I started io-pkt-v4 again and kept a script running in the background:

===================
while [ 0 -lt 1 ]
do
        df >> /mnt/flash.log
        sleep 1
done
===================
RESULT after 30 minutes:

COMMAND: pidin -l -d 100 io-pkt-v4 mem

     pid tid name               prio STATE            code  data        stack
208330770   1 usr/bin/io-pkt-v4   21r SIGWAITINFO      572K  308K  4096(516K)*
208330770   2 usr/bin/io-pkt-v4   21r RECEIVE          572K  308K  8192(132K)
            libc.so.3          @fe300000             512K   12K

... CUT ...

     pid tid name               prio STATE            code  data        stack
208330770   1 usr/bin/io-pkt-v4   21r SIGWAITINFO      572K  312K  4096(516K)*
208330770   2 usr/bin/io-pkt-v4   21r RECEIVE          572K  312K  8192(132K)
            libc.so.3          @fe300000             512K   12K

... CUT ...

	pid tid name               prio STATE            code  data        stack
208330770   1 usr/bin/io-pkt-v4   21r SIGWAITINFO      572K  316K  4096(516K)*
208330770   2 usr/bin/io-pkt-v4   21r RECEIVE          572K  316K  8192(132K)
            libc.so.3          @fe300000             512K   12K

... CUT ...

	pid tid name               prio STATE            code  data        stack
208330770   1 usr/bin/io-pkt-v4   21r SIGWAITINFO      572K  320K  4096(516K)*
208330770   2 usr/bin/io-pkt-v4   21r RECEIVE          572K  320K  8192(132K)
            libc.so.3          @fe300000             512K   12K

... CUT ...

     pid tid name               prio STATE            code  data        stack
208330770   1 usr/bin/io-pkt-v4   21r SIGWAITINFO      572K  324K  4096(516K)*
208330770   2 usr/bin/io-pkt-v4   21r RECEIVE          572K  324K  8192(132K)
            libc.so.3          @fe300000             512K   12K

... CUT ...

     pid tid name               prio STATE            code  data        stack
208330770   1 usr/bin/io-pkt-v4   21r SIGWAITINFO      572K  328K  4096(516K)*
208330770   2 usr/bin/io-pkt-v4   21r RECEIVE          572K  328K  8192(132K)
            libc.so.3          @fe300000             512K   12K

... CUT ...

     pid tid name               prio STATE            code  data        stack
208330770   1 usr/bin/io-pkt-v4   21r SIGWAITINFO      572K  332K  4096(516K)*
208330770   2 usr/bin/io-pkt-v4   21r RECEIVE          572K  332K  8192(132K)
            libc.so.3          @fe300000             512K   12K

... CUT ...

     pid tid name               prio STATE            code  data        stack
208330770   1 usr/bin/io-pkt-v4   21r SIGWAITINFO      572K  336K  4096(516K)*
208330770   2 usr/bin/io-pkt-v4   21r RECEIVE          572K  336K  8192(132K)
            libc.so.3          @fe300000             512K   12K


What makes io-pkt-v4 allocate memory?

Thanks in advance for any help

Thomas L.
Re: io-pkt leak data section - ppc8260 ethernet driver  
io-pkt will allocate memory as it is needed, so the data size will carry
on increasing, but should stabilize after a while.




Re: io-pkt leak data section - ppc8260 ethernet driver  
Understood,

I will keep the system/test running overnight and will let you know if the memory has stabilized.

If yes - good, then I have to look in another direction. BUT IF NOT?

Again, no network driver is mounted, as you can see in the pidin log, so for me the original answer that there is
a bug in the driver itself can be ruled out - as it is not mounted/loaded/started.

Thanks for the replies

Kind Regards
Thomas
Re: io-pkt leak data section - ppc8260 ethernet driver  
Hello,

I kept the test running.
The io-pkt-v4 data section of the memory is NOT settling down!

====================
# pidin -l -d 100 -p io-pkt-v4 mem
     pid tid name               prio STATE            code  data        stack
208330770   1 usr/bin/io-pkt-v4   21r SIGWAITINFO      572K 4184K  4096(516K)*
208330770   2 usr/bin/io-pkt-v4   21r RECEIVE          572K 4184K  8192(132K)
====================

script code:
===========
while [ 0 -lt 1 ]
do
	df -h > /dev/null
done
===========

What can cause the system/io-pkt to eat up memory while executing this script? If we find out the reason, then I can
look into what in our application might cause the same issue.

Thanks in advance
Thomas L.

Re: io-pkt leak data section - ppc8260 ethernet driver  
What version of the O/S are you running, and what is the command line you
are using to start io-pkt? The output from pidin would also be useful.




Re: io-pkt leak data section - ppc8260 ethernet driver  
Hi,

we have two systems running:
1. QNX 6.5.0 2010/07/09-14:35:59EDT
# use -i io-pkt-v4-hc
NAME=io-pkt-v4-hc
DESCRIPTION=TCP/IP protocol module.
DATE=2010/07/09-13:43:56-EDT
STATE=stable
HOST=mainbuild
USER=builder
VERSION=6.5.0
TAGID=89

2. QNX 6.5.0 2012/06/20-13:49:31EDT
# use -i io-pkt-v4
NAME=io-pkt-v4
DESCRIPTION=TCP/IP protocol module.
DATE=2014/10/15-11:25:52-EDT
STATE=stable
HOST=pspbuildvm
USER=pspbuild
VERSION=650SP1-4004
TAGID=5664

On both systems we start io-pkt-v4 the following way:

io-pkt-v4
mount -T io-pkt -o channel=1,mac=$MAC1 devn-ppc8260-nxa.so
mount -T io-pkt -o channel=2,mac=$MAC2 devn-ppc8260-nxa.so
mount -T io-pkt -o channel=3,mac=$MAC3 devn-ppc8260-nxa.so
mount -T io-pkt -o bind=ip0,resolve=dns,resolve=autoresolve /usr/lib/lsm-qnet.so

Both systems show the same leak behavior.

=======================================

# pidin -pio-pkt-v4
     pid tid name               prio STATE       Blocked
  393246   1 usr/bin/io-pkt-v4   21r SIGWAITINFO
  393246   2 usr/bin/io-pkt-v4   10r RECEIVE     1
  393246   3 usr/bin/io-pkt-v4   21r RECEIVE     24
  393246   4 usr/bin/io-pkt-v4   10r RECEIVE     28
  393246   5 usr/bin/io-pkt-v4   10r RECEIVE     32
  393246   6 usr/bin/io-pkt-v4   21r RECEIVE     39
  393246   7 usr/bin/io-pkt-v4   10r CONDVAR     (0xfe3cd6a4)
  393246   8 usr/bin/io-pkt-v4   20r RECEIVE     44
  393246   9 usr/bin/io-pkt-v4   10r RECEIVE     36
  393246  10 usr/bin/io-pkt-v4   10r CONDVAR     (0xfe3cc96c)
========================================
# pidin -p io-pkt-v4 mem
     pid tid name               prio STATE            code  data        stack
  393246   1 usr/bin/io-pkt-v4   21r SIGWAITINFO      660K 1172K  8192(516K)*
  393246   2 usr/bin/io-pkt-v4   10r RECEIVE          660K 1172K  8192(132K)
  393246   3 usr/bin/io-pkt-v4   21r RECEIVE          660K 1172K  4096(132K)
  393246   4 usr/bin/io-pkt-v4   10r RECEIVE          660K 1172K  4096(132K)
  393246   5 usr/bin/io-pkt-v4   10r RECEIVE          660K 1172K  4096(132K)
  393246   6 usr/bin/io-pkt-v4   21r RECEIVE          660K 1172K  4096(132K)
  393246   7 usr/bin/io-pkt-v4   10r CONDVAR          660K 1172K  4096(132K)
  393246   8 usr/bin/io-pkt-v4   20r RECEIVE          660K 1172K  4096(132K)
  393246   9 usr/bin/io-pkt-v4   10r RECEIVE          660K 1172K  4096(132K)
  393246  10 usr/bin/io-pkt-v4   10r CONDVAR          660K 1172K  4096(132K)
            libc.so.3          @fe300000             512K   12K
            devnp-shim.so      @fe386000              36K  4096
            evn-ppc8260-nxa.so @fe390000              48K  4096
            lsm-qnet.so        @fe39d000             188K   44K
            libsocket.so.3     @fe3d7000             184K   24K
            /dev/mem           @80100000 (30000000)        128K
            /dev/mem           @80120000 (30000000)        128K
            /dev/mem           @80140000 (30000000)        128K

========================================

Thanks 

Thomas
Re: io-pkt leak data section - ppc8260 ethernet driver  
Unfortunately there is nothing we can do about this without getting access
to your BSP as well as the source to the io-pkt driver. We cannot give you
the latest io-pkt, as it will require the network driver to be re-compiled
with the latest libc and libsocket. Your best bet is to contact your sales
rep and make a formal request to have this work done.



Re: io-pkt leak data section - ppc8260 ethernet driver  
Hi all,

I just proved that this issue is also present on an x86 machine!

I downloaded the VMware image from (http://www.qnx.com/download/feature.html?programid=23665) and executed the same
script:

======================
while [ 0 -lt 1 ]
do
   df >> /dev/null
done
======================

While monitoring the io-pkt module via pidin (pidin -l -d 100 -p io-pkt-v4-hc mem), I can see the memory increasing,
increasing, increasing, increasing, ...


See attached screenshot.

I think it is now proven that this has nothing to do with my BSP or modified network driver. It is a general issue!
I need help identifying what is causing this memory leak - what is the relation between the df command and io-pkt?

Then I would know what we are doing WRONG in our application, which shows the same behavior.

Thanks in advance.

If needed - how can I file this defect directly with the QNX development team?

Kind Regards
Attachment: Image QNX6.5_vmware_io-pkt.png 78.45 KB
Re: io-pkt leak data section - ppc8260 ethernet driver  
Well then there must be something strange going on on your systems, as I
have a 6.5.0 SP1 x86 machine and have no problem with the data segment
increasing.




Re: io-pkt leak data section - ppc8260 ethernet driver  
... ???? It is the VMware x86 image provided by QNX - downloaded and started. Nothing else was done.
I don't really know what I could be doing wrong here.
Re: io-pkt leak data section - ppc8260 ethernet driver  
Here are some screen recordings of what I'm doing.
No magic!

Thanks, Hugh, for all your help so far.
Attachment: Text QNX2.avi 522 KB Text QNX.avi 646.5 KB
Re: io-pkt leak data section - ppc8260 ethernet driver  
Does the data segment on the x86 machine stabilize after a while?




Re: io-pkt leak data section - ppc8260 ethernet driver  
No. I just kept it running and it has already eaten up 5 MB.

I'm just about to test the same on QNX 6.6.0 and will let you know.
Re: io-pkt leak data section - ppc8260 ethernet driver  
I just used the QNX SDP 6.6.0 runtime ISO to install a virtual machine in VMware.
If I do the same test there,

io-pkt-v4-hc is stable! 504K and stable!

But as we have a PPC architecture on our target I can't switch; I have to stick with 6.5.

Re: io-pkt leak data section - ppc8260 ethernet driver  
Your issue does not reproduce for me on the 6.5 VMware image. Are you sure the number of processes stays constant? Are
copies of sh being left in memory?
Re: io-pkt leak data section - ppc8260 ethernet driver  
I use the VMware image QNX 6.5.0 2012/06/20-13:50:50EDT x86pc x86.

The number of processes is constant at ~40 (39-40).
The number of threads (pidin info) is constant.

OK, every call of "df" gets a new process ID, but the calling shell stays the same.
Of all processes, only io-pkt-v4-hc increases.

Kind Regards


Re: io-pkt leak data section - ppc8260 ethernet driver  
What is your "df" output?  I  can't see any relationship of df to network UNLESS there is a sharepoint mounted...
Re: io-pkt leak data section - ppc8260 ethernet driver  
see attached screen shot
Attachment: Image df_QNX_6.5.0.png 136.94 KB
Re: io-pkt leak data section - ppc8260 ethernet driver  
My io-pkt-v4-hc is exactly the same. It has to be something with the network - but I don't see how it is related to the
"df" call you are making.

What if you replace "df >>/dev/null" with "ls >>/dev/null"... does it have to be "df" to see the leak?
RE: io-pkt leak data section - ppc8260 ethernet driver  
A capture with tracelogger should answer all these questions, assuming the VM is using the instrumented kernel?

Re: io-pkt leak data section - ppc8260 ethernet driver  
Here is the output from my x86 PC running just io-pkt-v4-hc. I also ran df -h several times, with no difference in the
output.

(root) /usr/qnx650/target/qnx6/x86/sbin-> pidin -pio-pkt-v4-hc mem
     pid tid name               prio STATE            code  data        stack
127537170   1 ./io-pkt-v4-hc      21r SIGWAITINFO      948K  344K  4096(516K)*
127537170   2 ./io-pkt-v4-hc      21r RECEIVE          948K  344K  8192(132K)
            libc.so.3          @b0300000             488K   16K
(root) /usr/qnx650/target/qnx6/x86/sbin-> pidin -pio-pkt-v4-hc mem
     pid tid name               prio STATE            code  data        stack
127537170   1 ./io-pkt-v4-hc      21r SIGWAITINFO      948K  344K  4096(516K)*
127537170   2 ./io-pkt-v4-hc      10r RECEIVE          948K  344K  8192(132K)
            libc.so.3          @b0300000             488K   16K
(root) /usr/qnx650/target/qnx6/x86/sbin-> pidin -pio-pkt-v4-hc mem
     pid tid name               prio STATE            code  data        stack
127537170   1 ./io-pkt-v4-hc      21r SIGWAITINFO      948K  344K  4096(516K)*
127537170   2 ./io-pkt-v4-hc      21r RECEIVE          948K  344K  8192(132K)
            libc.so.3          @b0300000             488K   16K
(root) /usr/qnx650/target/qnx6/x86/sbin-> pidin -pio-pkt-v4-hc mem
     pid tid name               prio STATE            code  data        stack
127537170   1 ./io-pkt-v4-hc      21r SIGWAITINFO      948K  344K  4096(516K)*
127537170   2 ./io-pkt-v4-hc      21r RECEIVE          948K  344K  8192(132K)
            libc.so.3          @b0300000             488K   16K
(root) /usr/qnx650/target/qnx6/x86/sbin-> pidin -pio-pkt-v4-hc mem
     pid tid name               prio STATE            code  data        stack
127537170   1 ./io-pkt-v4-hc      21r SIGWAITINFO      948K  344K  4096(516K)*
127537170   2 ./io-pkt-v4-hc      21r RECEIVE          948K  344K  8192(132K)
            libc.so.3          @b0300000             488K   16K
(root) /usr/qnx650/target/qnx6/x86/sbin-> pidin -pio-pkt-v4-hc mem
     pid tid name               prio STATE            code  data        stack
127537170   1 ./io-pkt-v4-hc      21r SIGWAITINFO      948K  344K  4096(516K)*
127537170   2 ./io-pkt-v4-hc      21r RECEIVE          948K  344K  8192(132K)
            libc.so.3          @b0300000             488K   16K


NAME=io-pkt-v4-hc
DESCRIPTION=TCP/IP protocol module.
DATE=2012/06/20-13:36:58-EDT
STATE=stable
HOST=gusbuild4
USER=builder
VERSION=6.5.0
TAGID=650SP1-166





Re: io-pkt leak data section - ppc8260 ethernet driver  


Good Morning,

Have you tried running the script which I used?

Running the command a couple of times (3-5 times) does not show the effect.
Running it 10-20 times, or using the script, makes it visible within a minute.

I will definitely try to use tracelogger and send the output to you all.

Thanks for the help so far.

Thomas Lützel

Re: io-pkt leak data section - ppc8260 ethernet driver  
The fact that running df causes io-pkt to increase its data size is beside the point. If the data size is increasing
when running your network driver, this points to a problem with the network driver, as I have tested the 6.5.0 SP1
io-pkt and the data size doesn't increase with usage of other network drivers.
We don't have your hardware or your BSP, so there is no way we can look at this without having these resources. I
suggest that you contact your sales rep to set the wheels in motion to get this problem resolved.


RE: io-pkt leak data section - ppc8260 ethernet driver  
Just a thought. Is the VM connected to a network? Could it be related to network traffic?

Re: RE: io-pkt leak data section - ppc8260 ethernet driver  
Hi, yes, the VM is connected to a network, but when the VM is running the shell script

run.sh
===============
while [ 0 -lt 1 ]
do
   df >> /dev/null
done
===============

the memory increases very fast!

In answer to Hugh: I am not running any BSP on target hardware! I can reproduce this issue on a VM provided on the QNX
homepage. My DUT (device under test) is as delivered by QNX; nothing was implemented, installed, or done on my side!


Attached you will also find the timeline tracelog. I've marked a REPLY BLOCKED on io-pkt by the df process. This
happens multiple times.

Is there anything else I can do to generate a log which will help to isolate this behaviour?

Thanks in advance
Attachment: Image Timeline_Tracelog.png 240.95 KB
Re: RE: io-pkt leak data section - ppc8260 ethernet driver  
Here are more trace logs where I can see io-pkt receiving messages from the df command.

Thanks for help and suggestions
Kind Regards
Thomas
Attachment: Image Timeline_Tracelog_2.png 217.45 KB Image Timeline_Tracelog_3.png 220.49 KB
Re: RE: io-pkt leak data section - ppc8260 ethernet driver  
df goes and hunts around in /proc/mount to find the disks that are mounted. io-pkt has an entry there to enable
"mount -T io-pkt" to mount lsms and drivers, hence the messages that go to it when running df.

We have checked, and we only see this memory growth in the initially released 6.5.0 SP1; the latest PSP of io-pkt
doesn't leak like this.
Associations:
post117178: Re: io-pkt leak data section - ppc8260 ethernet driver - proof of fix in the latest io-pkt which require a new libc - Thomas Luetzel
Re: RE: io-pkt leak data section - ppc8260 ethernet driver  
Good morning,

What do you mean by the latest PSP? Is it the version in QNX 6.6.0?

I know the df command running in a loop is not a normal situation, but it was a very fast way to reproduce the issue.
The same issue is happening in our application, but much more slowly, with the effect that after a couple of months
systems installed at customer sites perform a reset - critical!

Either I get the latest io-pkt for QNX 6.5 ppcbe, or I need to know which function calls are creating this issue in
df, so I can hunt in my application for similar calls, avoid them, and implement a workaround.

Help is always welcome

Kind Regards
Thomas

Re: io-pkt leak data section - ppc8260 ethernet driver  
A PSP is a Priority Support Patch; these are released after the major OS
releases to correct any issues that may be found. You are running the
original io-pkt that was released with 6.5.0 SP1; if you move to the
latest available PSP for io-pkt on 6.5.0 SP1, you shouldn't have this
issue with df. You will need to contact your QNX sales representative to
obtain this.

As I mentioned, df is heavily accessing /proc/mount, doing opendir(),
readdir(), stat(), open(), etc. in that directory. io-pkt has an entry
in /proc/mount. Does your application access /proc/mount?

Regards,
Nick
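
For reference, here is a minimal sketch of the kind of /proc/mount walk described above (an illustration of the call
pattern to look for in the application, not the actual df or QNX source; the printing and error handling are
placeholders only):

===============
/* Minimal sketch (illustrative only): a /proc/mount walk of the kind df
 * performs. Every pass like this sends messages to each resource manager
 * registered under /proc/mount, including io-pkt. */
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(void)
{
    DIR *dp = opendir("/proc/mount");
    struct dirent *de;

    if (dp == NULL) {
        perror("opendir /proc/mount");
        return 1;
    }

    while ((de = readdir(dp)) != NULL) {
        char path[PATH_MAX];
        struct stat st;

        if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
            continue;

        snprintf(path, sizeof(path), "/proc/mount/%s", de->d_name);

        /* df would stat() each entry and then query the ones that turn
         * out to be filesystems (e.g. with statvfs()). */
        if (stat(path, &st) == 0)
            printf("%s\n", path);
    }

    closedir(dp);
    return 0;
}
===============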

Re: io-pkt leak data section - ppc8260 ethernet driver  
Hi,

yes, our application does this (once in a while), but not to the extent you described for df. That is the reason it
takes several months until no RAM is left and the system performs a reboot.

df was just a way to reproduce it very fast. I'm in contact with the sales rep, and in parallel I can go through the
application to hunt for stat(), opendir(), readdir(), open().

Thanks
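
One possible direction for that workaround - sketched here under the assumption that the application only needs
free-space figures for a few known mount points (the path below is just an example): querying those paths directly
with statvfs() avoids enumerating /proc/mount, and therefore avoids messaging io-pkt, on every poll.

===============
/* Workaround sketch (assumption: only known mount points need checking;
 * /mnt/flash is an example path). statvfs() on a specific path queries
 * only that filesystem's resource manager and does not walk /proc/mount
 * the way df does. */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs vfs;

    if (statvfs("/mnt/flash", &vfs) != 0) {
        perror("statvfs /mnt/flash");
        return 1;
    }

    /* Free space available to unprivileged processes, in bytes. */
    unsigned long long free_bytes =
        (unsigned long long)vfs.f_frsize * vfs.f_bavail;
    printf("/mnt/flash: %llu bytes free\n", free_bytes);

    return 0;
}
===============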
