io-pkt-v4-hc idle CPU load  
Hi all, 

I'm noticing considerable CPU usage of about 11% from io-pkt-v4-hc even when the system is idle. This is an MPC8548 @ 999 MHz. 

Starting io-pkt like this: 

# io-pkt-v4-hc -dmpc85xx mac=00E00C0000FD,pci=0,phy=0 -ptcpip

The hogs utility reports 11% usage for the io-pkt process. 

# hogs -n -%2
PID             NAME  MSEC  PIDS SYSTEM
1                     2645   86%    88%
1163283 io-pkt-v4-hc   358   11%    11%

Also observable when the Ethernet cable is disconnected. 
Also reproducible on my system with io-pkt-v4. 

The IDE reports the same when qconn is connected. 

I could not observe this with io-net; its idle CPU usage was below the measurable threshold. 

Christopher. 
RE: io-pkt-v4-hc idle CPU load  
From: Christopher Pohl 

> considerable CPU usage of about 11% from 
> io-pkt-v4-hc even when the system is idle. 

Is the qnet protocol loaded?  I presume not
from your cmd line (it is often mounted later).

Does it make any difference if you ifconfig
the tsec interface, or not?

i.e. just start up io-pkt:

# io-pkt-v4-hc -dmpc85xx mac=00E00C0000FD,pci=0,phy=0 -ptcpip

And check cpu usage.

Then ifconfig tsec0, then check cpu usage.

Then ifconfig tsec1, then check cpu usage.

Before the ifconfig is run, only the driver 
attach function is called.  However, after 
ifconfig is called, the driver init function
is called, which causes all sorts of stuff
to happen.
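
In rough outline, the split looks like this -- a hypothetical
sketch, not the actual mpc85xx source; every name in it is
invented:

/* Hypothetical sketch of the attach/init split -- not the
 * real mpc85xx driver source; all names here are invented. */
#include <stddef.h>
#include <sys/neutrino.h>

static struct sigevent isr_event;

/* Minimal ISR stub: return the event so the stack
 * thread wakes up. */
static const struct sigevent *
sketch_isr(void *area, int id)
{
    return &isr_event;
}

/* Called when "io-pkt -dmpc85xx" loads the driver:
 * register the interface with the stack, but leave
 * the hardware IRQ alone. */
int
sketch_attach(void)
{
    /* ... allocate the softc, register ifp callbacks ... */
    return 0;
}

/* Called on "ifconfig tsec0 ... up": only now does
 * InterruptAttach() run, so no interrupt-driven work
 * happens before this point. */
int
sketch_init(int irq)
{
    int iid;

    SIGEV_INTR_INIT(&isr_event);
    iid = InterruptAttach(irq, sketch_isr, NULL, 0,
                          _NTO_INTR_FLAGS_TRK_MSK);
    return (iid == -1) ? -1 : 0;
}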

--
aboyd
Re: RE: io-pkt-v4-hc idle CPU load  
> 
> From: Christopher Pohl 
> 
> > considerable CPU usage of about 11% from 
> > io-pkt-v4-hc even when the system is idle. 
> 
> Is the qnet protocol loaded?  I presume not
> from your cmd line (it is often mounted later).

qnet is not running. 

I got rid of a lot of stuff that's normally fired up and did the following: 

- Cleaned up the init scripts; only essential processes
  running (RAM disk, flash, init processes, board support),
  in particular none that use the network, to my knowledge.
- Did not start io-pkt from the init script.
- Power-cycled the board.
- From the serial terminal ran

#  io-pkt-v4-hc -dmpc85xx mac=00E00C0000FD,pci=0,phy=0 -ptcpip

- Left the interfaces down.

# hogs -n -%1
PID             NAME  MSEC  PIDS SYSTEM
1                     2644   87%    88%
593938  io-pkt-v4-hc   358   11%    11%

CPU usage is 11%.  Bringing up tsec0: 

# ifconfig tsec0 inet 192.168.101.11 up

# hogs -n -%1
PID             NAME  MSEC  PIDS SYSTEM
1                     2645   87%    88%
593938  io-pkt-v4-hc   357   11%    11%

CPU usage still 11%. 
 
> Does it make any difference if you ifconfig
> the tsec interface, or not?

It seems to make no difference for me (see above). 
 
> Then ifconfig tsec1, then check cpu usage.

tsec1 was not instantiated (pci=0 option). 
I removed the pci=0 option in a second test; it makes no difference.

If you guys can't see it on your systems, it must be correlated with something else running (or not running) on our 
boards (we do not use U-Boot, for example). I'll try to strip the system down further and find out what is causing it. 

Thanks, 
Christopher. 
Re: io-pkt-v4-hc idle CPU load  
On Mon, Jul 28, 2008 at 05:37:19AM -0400, Christopher Pohl wrote:
> I'm noticing considerable CPU usage of about 11% from io-pkt-v4-hc
> even when the system is idle. This is an MPC8548 @ 999 MHz.
> [...]
> I could not observe this with io-net; its idle CPU usage was below
> the measurable threshold.

Is anything else reporting similar load?

As Andrew mentioned knowing if it happens immediately or
after a certain series of steps (eg. 'ifconfig up') would
also be useful.

-seanb
Re: io-pkt-v4-hc idle CPU load  
> On Mon, Jul 28, 2008 at 05:37:19AM -0400, Christopher Pohl wrote:
> > I'm noticing considerable CPU usage of about 11% from io-pkt-v4-hc
> > even when the system is idle. [...]
> 
> Is anything else reporting similar load?

No. In fact, procnto (PID 1) reports 88% CPU usage, which together yields ~100%; I suppose PID 1 is accounting for the 
idle CPU. 

> As Andrew mentioned knowing if it happens immediately or
> after a certain series of steps (eg. 'ifconfig up') would
> also be useful.

It happens immediately after firing up io-pkt with the above command line, no further steps required. 

Christopher. 
RE: io-pkt-v4-hc idle CPU load  
From: Christopher Pohl 

> It happens immediately after firing up io-pkt 

All I can think of is interrupts firing for some
other device which is sharing the interrupt ...

Looking at the mpc85xx source, the _attach function 
does hook into the hardware interrupt via the stack 
magic ...

But the actual InterruptAttach() function doesn't
get called until the _init function, which shouldn't
be called until the ifconfig tsec.

I think I wrote the io-pkt 85xx driver, and I tend
to leave diagnostic stuff in.  In particular, if you
run "verbose=4" you will see the _attach and _init
functions being called in the sloginfo.

If you run "verbose=7" you will see the 
mpc85xx_process_interrupt() being called.  So, try
these verbose levels, and try to see which functions
are being called by the driver.
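
For example, tacking verbose onto the driver options from
the original command line (assuming verbose is accepted as
just another comma-separated driver option; sloginfo is the
standard log viewer):

# io-pkt-v4-hc -dmpc85xx mac=00E00C0000FD,pci=0,phy=0,verbose=4 -ptcpip
# sloginfo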

--
aboyd
Re: RE: io-pkt-v4-hc idle CPU load  
> 
> From: Christopher Pohl 
> 
> > It happens immediately after firing up io-pkt 
> 
> All I can think of is interrupts firing for some
> other device which is sharing the interrupt ...

After more experimenting, it turns out the mpc85xx driver has nothing to do with it. 

To reproduce, it is sufficient to fire up io-pkt-v4(-hc) without a NIC driver, like this: 

# io-pkt-v4-hc
# hogs -n -%1
PID             NAME  MSEC  PIDS SYSTEM
1                     2645   87%    88%
864265  io-pkt-v4-hc   357   11%    11%

I don't know if this is relevant, but if I launch a second instance, only the most recently started instance of 
io-pkt-v4 hogs the CPU (note the changed PID in hogs), while the one that was started first goes truly idle. 

# io-pkt-v4-hc
# hogs -n -%1
PID             NAME  MSEC  PIDS SYSTEM
1                     2646   87%    88%
876554  io-pkt-v4-hc   357   11%    11%

1                     2644   87%    88%
876554  io-pkt-v4-hc   357   11%    11%

Also, I tried this on an MPC8313 board with very similar software and a binary-identical io-pkt. I cannot reproduce 
the behaviour on the MPC8313 board. 

Here is the PVR & SVR of the affected MPC8548: 

CHIP REVISIONS
 CPU Version (PVR): Major 0x8021 Minor 0x20
 System version   (SVR): Major 0x8039 Minor 0x20

Maybe this also makes a difference: the MPC8548 board runs an engineering kernel (CE WO790369). Would it make sense to 
try a regular kernel? If so, which one would you prefer? 

Can you think of anything else I can do (e.g. verbose logs of io-pkt-v4-hc itself)? 

Thanks,
Christopher
Re: RE: io-pkt-v4-hc idle CPU load  
It could be worthwhile getting a kernel trace to see if anything is interacting with the stack.  You could also build 
a debug version of the stack and debug it, using io-net as a second stack instance, so gdb can connect to it.
Are you using a milestone build or your own build?

In terms of the OS, it shouldn't matter, but it would be worth trying the released 6.3.2 kernel and libs to make sure.  
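
For the trace itself, something like the following should do it
(tracelogger option spelling is from memory; check its usage
message):

# tracelogger -s 1 -f /dev/shmem/io-pkt.kev

and then load the .kev file into the IDE's System Profiler.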


  Robert

Re: RE: io-pkt-v4-hc idle CPU load  
> It could be worthwhile getting a kernel trace to see if anything is 
> interacting with the stack.  

I recorded a one-second trace and pasted a snapshot below. 

It turns out that the display of both the "hogs" utility and the "System Information" -> "System resources" view of 
the IDE is wrong: they both display 11%, while the 'real' CPU usage is 0.7%. 

Here is the reason, from the running chain of the trace: 

Event, Owner, Type
2608, io-pkt-v4-hc Thread 2, Running
2607, Interrupt 0x80000000 - io-pkt-v4-hc, Sigevent Pulse

Interrupt 0x80000000 is the system tick, which causes thread 2 of io-pkt-v4-hc to run every 8 ms for about 7-8 µs 
(housekeeping?). This yields about 0.7% CPU usage when io-pkt-v4-hc is idle, and this is also what the 
"Partition Summary" of the trace view reports. 

I guess we can live with a display problem ;-) 
Thanks for the help and sorry for the trouble. 

Christopher. 

----

Event, Time, Owner, Type, Data
2485, 294ms 169us, Interrupt 0x80000000, Entry, interrupt 0x80000000 ip 0x269bec
2486, 294ms 169us, Interrupt 0x80000000 - io-pkt-v4-hc, Handler Entry, pid 585746 interrupt 0x80000000 ip 0x480c0754 area 0x48146f5c process io-pkt-v4-hc
2487, 294ms 170us, Interrupt 0x80000000 - io-pkt-v4-hc, Sigevent Pulse, scoid 0x40000003 pid 585746 process io-pkt-v4-hc
2488, 294ms 170us, io-pkt-v4-hc Thread 2, Running, pid 585746 tid 2 priority 21 policy 2 partition 4 sched_flags 1
2489, 294ms 171us, procnto-booke-instr CPU 1 idle, Ready, pid 1 tid 1 priority 0 policy 1 partition 0 sched_flags 0
2490, 294ms 171us, Interrupt 0x80000000 - io-pkt-v4-hc, Handler Exit, interrupt 0x80000000 sigev_notify 4
2491, 294ms 171us, Interrupt 0x80000000 - procnto-booke-instr, Handler Entry, pid 1 interrupt 0x80000000 ip 0x241494 area 0x0 process procnto-booke-instr
2492, 294ms 173us, Interrupt 0x80000000 - procnto-booke-instr, Handler Exit, interrupt 0x80000000 sigev_notify 0
2493, 294ms 173us, Interrupt 0x80000000, Exit, interrupt 0x80000000 flags 0x1
2494, 294ms 174us, io-pkt-v4-hc Thread 2, Receive Pulse, scoid 0x40000003 pid 585746 process io-pkt-v4-hc
2495, 294ms 174us, io-pkt-v4-hc Thread 2, MsgReceivev Exit, rcvid 0x0 rmsg0 0x0 rmsg1 0x0 rmsg2 0x0 info->nd 0 info->srcnd 0 info->pid 0 info->tid 0 info->chid 0 info->scoid 0 info->msglen 0 info->srcmsglen 0 info->dstmsglen 0 info->priority 0 info->flags 0 info->reserved 0
2496, 294ms 176us, io-pkt-v4-hc Thread 2, Running, pid 585746 tid 2 priority 21 policy 2 partition 4 sched_flags 1
2497, 294ms 177us, io-pkt-v4-hc Thread 2, MsgReceivev Enter, chid 0x1 rparts 33
2498, 294ms 178us, io-pkt-v4-hc Thread 2, Receive, pid 585746 tid 2 priority 21 policy 2 partition 4 sched_flags 1
2499, 294ms 179us, procnto-booke-instr CPU 1 idle, Running, pid 1 tid 1 priority 0 policy 1 partition 0 sched_flags 0
2500, 295ms 169us, Interrupt 0x80000000, Entry, interrupt 0x80000000 ip 0x269c00
2501, 295ms 169us, Interrupt 0x80000000 - io-pkt-v4-hc, Handler Entry, pid 585746 interrupt 0x80000000 ip 0x480c0754 area 0x48146f5c process io-pkt-v4-hc
2502, 295ms 169us, Interrupt 0x80000000 - io-pkt-v4-hc, Handler Exit, interrupt 0x80000000 sigev_notify 0
2503, 295ms 170us, Interrupt 0x80000000 - procnto-booke-instr, Handler Entry, pid 1 interrupt 0x80000000 ip 0x241494 area 0x0 process procnto-booke-instr
2504, 295ms 171us, Interrupt 0x80000000 - procnto-booke-instr, Handler Exit, interrupt 0x80000000 sigev_notify 0
2505, 295ms 171us, Interrupt 0x80000000, Exit, interrupt 0x80000000 flags 0x1

Re: RE: io-pkt-v4-hc idle CPU load  
> Interrupt 0x80000000 is the system tick, which causes thread 2 of
> io-pkt-v4-hc to run every 8 ms for about 7-8 µs (housekeeping?).
> This yields about 0.7% CPU usage when io-pkt-v4-hc is idle, and this
> is also what the "Partition Summary" of the trace view reports.

I have to correct myself there; that's off by a factor of 10. Sometimes thread 2 actually does something for about 
130-150 µs, which yields 0.7% CPU on average. 

Christopher. 
RE: RE: io-pkt-v4-hc idle CPU load  
Hi Christopher:
   Probably not a display issue so much as a sample-aliasing issue.  Hogs samples the CPU periodically to see what's 
running.  If it so happens that the io-pkt periodic execution and the hogs sampling occur at the same rate and at the 
same time, you end up with a falsely inflated view of CPU usage for io-pkt.  That's why using the instrumented kernel 
can be so useful: it gives an accurate representation of what's happening...

We've seen this with other subsystems that have periodic execution cycles (io-usb is another one that gets incorrectly 
tagged with "high CPU usage" because of this).
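
A toy simulation illustrates the effect -- this is NOT how hogs
is implemented; it only shows what phase-locked sampling does to
a task that runs briefly at every tick:

/* Toy demo of sample aliasing.  A task runs for 150 us at the
   start of every 8 ms tick (true load ~1.9%); a sampler polls
   at the same 8 ms period.  The reported load is then 0% or
   100%, depending only on the sampling phase. */
#include <stdio.h>

#define PERIOD_US 8000   /* task (and sampler) period */
#define BUSY_US    150   /* task busy time after each tick */

int main(void)
{
    long phases[] = { 0, 100, 200, 4000 };   /* sampler offsets, us */
    int p;

    for (p = 0; p < 4; p++) {
        int k, hits = 0, samples = 1000;
        for (k = 0; k < samples; k++) {
            /* time of this sample, relative to the current tick */
            long t = (k * (long)PERIOD_US + phases[p]) % PERIOD_US;
            if (t < BUSY_US)     /* sample landed in the busy window */
                hits++;
        }
        printf("phase %4ld us: reported %3d%% busy (true load %.1f%%)\n",
               phases[p], 100 * hits / samples,
               100.0 * BUSY_US / PERIOD_US);
    }
    return 0;
}

Because the sampler is locked to the task's period, a given phase
either always or never catches the task running; sampling at an
unrelated (or randomized) period would break the lock and converge
on the true load.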

Good to know that there isn't a "real" problem though :>

	Robert

Re: RE: io-pkt-v4-hc idle CPU load  
On Tue, Jul 29, 2008 at 04:56:41AM -0400, Christopher Pohl wrote:
> After more experimenting, it turns out the mpc85xx driver has
> nothing to do with it.
> [...]
> I don't know if this is relevant, but if I launch a second instance,
> only the most recently started instance of io-pkt-v4 hogs the CPU
> (note the changed PID in hogs), while the one that was started
> first goes truly idle.
> [...]
> Can you think of anything else I can do (e.g. verbose logs of
> io-pkt-v4-hc itself)?

The only thing the stack should be doing when started with
no drivers is handling periodic timer pulses.  Are you sure
the clock is running at the proper frequency on that board?
(Although starting a second instance should have no effect
if that were an issue...)
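
One quick way to check the tick from a test program -- a sketch
using the documented ClockPeriod() call (on most boards of this
class the default tick is 1 ms):

/* Print the current kernel tick period -- a quick sanity check. */
#include <stdio.h>
#include <time.h>
#include <sys/neutrino.h>

int main(void)
{
    struct _clockperiod cp;

    if (ClockPeriod(CLOCK_REALTIME, NULL, &cp, 0) == -1) {
        perror("ClockPeriod");
        return 1;
    }
    printf("tick period: %lu ns\n", (unsigned long)cp.nsec);
    return 0;
}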

Note that starting a second instance will have the last one
overlay /dev/socket, so that one will now receive messages
from socket() and friends instead of the first; there may
be some client using socket services.
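
If you do want two stacks side by side for debugging, the second
one can be kept from taking over /dev/socket by registering it
under a prefix and pointing clients at it with the SOCK
environment variable (check the io-pkt docs for the exact option
spelling; this is from memory):

# io-pkt-v4-hc -ptcpip prefix=/alt
# SOCK=/alt ifconfig -a

Clients started without SOCK keep talking to the first stack.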

-seanb
RE: RE: io-pkt-v4-hc idle CPU load  
From: Christopher Pohl 

> the mpc85xx driver has nothing to do with it. 
> sufficient to fire up io-pkt-v4(-hc) without a NIC driver 

Some ideas:

1) run the instrumented kernel to find out what the heck
is going on.  Is it a periodic timer that is waking io-pkt
up, or is it a message from a client?
 
2) create a core file of io-pkt using "dumper -p <pid>".
This does NOT slay io-pkt; it just makes a snapshot core
file which you can examine with gdb to backtrace the
threads.  Note that you may have to do this several times
(possibly at a high priority, with "on -p 255 sh", to
avoid being pre-empted).
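
For example, against the io-pkt pid from the hogs output
above (the core file name and the cross-gdb name below are
assumptions that depend on your installation):

# on -p 255 dumper -p 864265

Then, on the development host:

$ ntoppc-gdb io-pkt-v4-hc io-pkt-v4-hc.core
(gdb) thread apply all bt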

3) if #1 and #2 above don't work, slogf is your friend.
Compile debug versions of io-pkt, sprinkling slogf calls
around, until you can figure out what code path it is
executing so frequently.  This is usually an iterative
process, but rarely does it affect the timing enough to
alter the behaviour.
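
A typical call to sprinkle in looks like this (the code and
severity constants are from <sys/slog.h> and <sys/slogcodes.h>;
read the result with sloginfo):

#include <sys/slog.h>
#include <sys/slogcodes.h>

/* Example trace call for a suspect code path. */
static void trace_here(const char *where)
{
    slogf(_SLOG_SETCODE(_SLOGC_NETWORK, 0), _SLOG_INFO,
          "io-pkt debug: reached %s", where);
}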

--
aboyd