David Beberman
04/25/2015 3:59 PM
post113746
Re: RE: Low cost, fast way to get thread status.sutime value in /proc/<proc id>, alternative approach?
Thanks.
We implemented this with devctl(). If the approach below saves a few more cycles, we will shift to it.
Just checked the doc on it. Thanks for the direction.
> A devctl() is actually a message pass to the process manager, which involves
> multiple kernel calls. There is a cheaper mechanism, but it still involves one
> kernel call (actually two, but the first only has to be done one time per thread).
>
> id = ClockId(<pid>, <tid>);
>
> for( ;; ) {
> ...
> ClockTime(id, NULL, &running_time);
> ...
> }
>
> that's as cheap as you can get it.
> ________________________________________
> From: David Beberman [community-noreply@qnx.com]
> Sent: April-01-15 11:29 PM
> To: ostech-core_os
> Subject: Low cost, fast way to get thread status.sutime value in /proc/<proc id>,
> alternative approach?
>
> Hi,
> I'm looking for a way to get the accurate actual execution time of individual
> threads in a process.
> I'm aware of opening /proc/<proc id> and then using a devctl() call with a
> pthread_t value to get the status of the thread, which includes the sutime value.
>
> This is a kernel call, which by definition includes a CPU syscall gate and a
> call to the QNX scheduler. For my uses, this is too heavyweight.
> Is there any way to read this value with reduced overhead?
> Ideally, something that avoids the call gate and the scheduler call altogether.
>
> I have not seen anything like this. I did see a discussion from about 7 years
> ago about trying to provide some sort of large bulk-transfer API, and an
> abortive discussion about memory-mapping read-only kernel structures.
> Is there anything else?
>
> Thanks,
> David Beberman
>
>
>
> _______________________________________________
>
> OSTech
> http://community.qnx.com/sf/go/post113670
> To cancel your subscription to this discussion, please e-mail ostech-core_os-
> unsubscribe@community.qnx.co