Forum Topic - Runmasks with IPC and procnto: (6 Items)
   
Runmasks with IPC and procnto  
Hi,

I've a few multicore runmask questions.

It was observed that a thread (bound to CPU1) calling fork() (thus passing on its runmask) could cause significant activity on CPU0, i.e. outside its runmask.

The cause, in a nutshell: fork() is effectively a message pass to procnto, and message passing transfers priority and partition info, but not runmasks, so procnto is more or less free to run wherever it likes. The runmask for the new process is not set until the process manager is almost done with the call.

So the following questions arise:
Why is runmask info, unlike e.g. priority, not propagated by message passing?

If always passing runmasks turns out to be impractical, couldn't the process manager at least try to respect its
client's runmask?

Are runmasks respected in true kernel calls?

Is the general expectation legitimate? In other words, is there a way to dedicate a specific CPU to one specific
task with very tight realtime requirements?

Thanks,
- Thomas
Re: Runmasks with IPC and procnto  
> It was observed that a thread (bound to CPU1) calling fork() (thus
> passing on its runmask) could cause significant activity on CPU0,
> i.e. outside its runmask.
>
> [rest of the original message trimmed]

If the runmask were propagated, wouldn't that fold the fabric of space? If devb-eide is set to run on CPU2 and a
process set to run on CPU3 tries to read a file: kaboom, much worse than a division by 0.

Re: Runmasks with IPC and procnto  

> -----Original Message-----
> From: Mario Charest [mailto:community-noreply@qnx.com]
> Sent: 08 October 2008 15:15
> To: ostech-core_os
> Subject: Re: Runmasks with IPC and procnto
> 
> [earlier quotes trimmed]
> 
> If the runmask were propagated, wouldn't that fold the fabric
> of space? If devb-eide is set to run on CPU2 and a process
> set to run on CPU3 tries to read a file: kaboom, much worse
> than a division by 0.

Not necessarily - as with _NTO_CHF_FIXED_PRIORITY (was that the name?)
you might get a choice of either sticking to your own runmask or using 
whichever one the client provides.

Then again - if a "floating mask" server is servicing a request with 
runmask 0x1 and another client tries to send with runmask 0x2, will 
the server finish the first client's request with a runmask of 0x1, 
0x2, or 0x3?

- thomas

> _______________________________________________
> OSTech
> http://community.qnx.com/sf/go/post14680
> 
> 
RE: Runmasks with IPC and procnto  
During message passing, we always *try to* schedule the woken-up
server thread on the same CPU as the sending client. This is to take
advantage of the hot cache on that CPU (the data the client is sending
is likely already in the cache), and of course we know that CPU is
about to be free (since the client will be blocked).

Of course, if you put a runmask on the server, then it follows that
runmask.

-xtang

-----Original Message-----
From: Thomas Haupt [mailto:community-noreply@qnx.com] 
Sent: October 8, 2008 9:26 AM
To: ostech-core_os
Subject: Re: Runmasks with IPC and procnto

[quoted message trimmed]

_______________________________________________
OSTech
http://community.qnx.com/sf/go/post14682
Re: Runmasks with IPC and procnto  
We also have the issue that the loader thread does not set its runmask
to that of the process being created.  This means that if you have a
process bound to a CPU and then spawn another process (with the
inherited runmask), the loader thread can still cause activity on
another CPU.  This is noted as a PR and will be worked on post-6.4.

Xiaodan Tang wrote:
> During message passing, we always *try to* schedule the woken-up
> server thread on the same CPU as the sending client. This is to take
> advantage of the hot cache on that CPU (the data the client is sending
> is likely already in the cache), and of course we know that CPU is
> about to be free (since the client will be blocked).
> 
> Of course, if you put a runmask on the server, then it follows that
> runmask.
> 
> -xtang
> 
> [earlier quotes trimmed]

Re: Runmasks with IPC and procnto  
Right, we had that already - I almost forgot about the loader thread.
Do you think this will take much effort?

> -----Original Message-----
> From: Colin Burgess [mailto:community-noreply@qnx.com]
> Sent: 08 October 2008 15:37
> To: ostech-core_os
> Subject: Re: Runmasks with IPC and procnto
> 
> [quoted message trimmed]