[Orca-users] collecting process information

Thykattil, Joe JThykattil at cme.com
Thu Mar 15 13:51:46 PDT 2007


Thanks to everyone who responded.

 

I am asking for suggestions on any tools that will perform
post-processing on orca/procallator raw files by server groupings.
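One way to sketch that post-processing, assuming the usual orcallator-style raw-file layout described later in this thread (first line is a whitespace-separated header naming every column, each following line is one sample). The function names and the idea of passing file contents as a dict of server-name to text are illustrative assumptions, not an existing tool:

```python
# Hypothetical sketch: average one column across the raw files of a
# server group.  Assumes orcallator-style raw files: first line is a
# whitespace-separated header, each later line is one sample.

def column_average(raw_text, field):
    """Return the mean of `field` over all samples in one raw file's text."""
    lines = raw_text.strip().splitlines()
    header = lines[0].split()
    idx = header.index(field)            # raises ValueError if field is absent
    values = [float(line.split()[idx]) for line in lines[1:]]
    return sum(values) / len(values)

def group_average(files_text, field):
    """Average `field` across several servers' raw files (dict name -> text)."""
    per_server = {name: column_average(text, field)
                  for name, text in files_text.items()}
    group_mean = sum(per_server.values()) / len(per_server)
    return per_server, group_mean
```

From there, grouping is just a matter of which files you hand to group_average for each server class (web, db, batch, and so on).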

 

Joe

 

 

 

________________________________

From: orca-users-bounces+jthykattil=cme.com at orcaware.com
[mailto:orca-users-bounces+jthykattil=cme.com at orcaware.com] On Behalf Of
Beck, Joseph
Sent: Thursday, March 15, 2007 3:11 PM
To: David Michaels; Glen Gunselman
Cc: orca-users at orcaware.com
Subject: Re: [Orca-users] collecting process information

 

Yes, Dragon. You are spot on.

And I do appreciate your comment about the amount of data. So, limiting
it to the top ~10 consumers is probably the way to go.

 

Apparently, there are some tools available as part of the SE Toolkit
that I'm researching.

 

Joe Beck Ciber Inc. - a consultant to SEI  One Freedom Valley Drive/ 100
Cider Mill Road| Oaks, PA 19456 | p: 610.676.2258 | jbeck at seic.com

________________________________

From: David Michaels [mailto:dragon at raytheon.com] 
Sent: Wednesday, March 14, 2007 6:01 PM
To: Glen Gunselman
Cc: orca-users at orcaware.com; Beck, Joseph
Subject: Re: [Orca-users] collecting process information

 

It sounds like what Joe is looking for is information on each individual
process at a given point in time.  For instance, if the load-average
graph shows a sharp spike in CPU usage, Joe wants Orca to tell him which
process is causing the spike.  At least, that's what I'm getting from
the question.

Joe -- I don't think this is something you can do with Orca's
out-of-the-box collectors for Solaris.  You could create a script that
generates the data you want, then include that data source in your
orcallator.cfg file.  I haven't tried anything like that on the Sun
side, though, so it may be more involved than that.

I imagine what you'd want to do is track the CPU usage of every process,
by PID.  You will end up with an enormous amount of data, so I
personally would not do it for long-term tracking.  Perhaps limiting it
to the top 5-10 PIDs by cpu time & memory usage or something would help
to reduce the data substantially while keeping it valuable.
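The top-N idea above can be sketched as a small collector that parses `ps`-style output and keeps only the heaviest processes. The column set (`pid,pcpu,vsz,comm`) and the function below are illustrative assumptions; on a live Solaris box you would feed it the output of ps (or prstat) rather than a string:

```python
# Hypothetical collector sketch: reduce per-PID data to the top-N CPU
# consumers, as suggested above.  Parses `ps -eo pid,pcpu,vsz,comm`-style
# text; a real collector would capture that output via subprocess.

def top_consumers(ps_text, n=10):
    """Return the n (pid, cpu%, vsz, command) tuples with the highest cpu%."""
    rows = []
    for line in ps_text.strip().splitlines()[1:]:   # skip the header line
        pid, pcpu, vsz, comm = line.split(None, 3)
        rows.append((int(pid), float(pcpu), int(vsz), comm))
    rows.sort(key=lambda r: r[1], reverse=True)
    return rows[:n]
```

Emitting just those n rows per sample interval keeps the data volume bounded no matter how many processes the box is running.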

--Dragon

Glen Gunselman wrote: 

Joseph,

 

What details are you looking for?

 

The Orca collection file is somewhat self-describing.  If you look at
the first record in the file, you will see the fields it contains.
Here's what one of mine looks like:

 

timestamp locltime uptime state_D state_N state_n state_s state_r
state_k state_c state_m state_d state_i state_t DNnsrkcmdit
usr% sys% wio% idle% 1runq 5runq 15runq #proc #runque #waiting
#swpque scanrate #proc/s #proc/p5s smtx smtx/cpu ncpus
mntC_/ mntU_/ mntA_/ mntP_/ mntc_/ mntu_/ mnta_/ mntp_/
mntC_/var mntU_/var mntA_/var mntP_/var mntc_/var mntu_/var
mnta_/var mntp_/var mntC_/export mntU_/export mntA_/export
mntP_/export mntc_/export mntu_/export mnta_/export mntp_/export
mntC_/opt/openv mntU_/opt/openv mntA_/opt/openv mntP_/opt/openv
mntc_/opt/openv mntu_/opt/openv mnta_/opt/openv mntp_/opt/openv
mntC_/nbu/disk mntU_/nbu/disk mntA_/nbu/disk mntP_/nbu/disk
mntc_/nbu/disk mntu_/nbu/disk mnta_/nbu/disk mntp_/nbu/disk
mntC_/nbu/staging mntU_/nbu/staging mntA_/nbu/staging
mntP_/nbu/staging mntc_/nbu/staging mntu_/nbu/staging
mnta_/nbu/staging mntp_/nbu/staging disk_runp_c0t0d0
disk_runp_c1t0d0 disk_runp_c1t1d0 disk_runp_c1t2d0 disk_runp_c1t3d0
disk_runp_md10 disk_runp_md20 disk_runp_md0 disk_runp_md13
disk_runp_md23 disk_runp_md3
disk_runp_c5t600A0B800017741C000025B44289FDCEd0
disk_runp_c5t600A0B8000176F750000149D4289FC45d0
disk_runp_md11 disk_runp_md21 disk_runp_md1 disk_runp_md50
disk_runp_md51 disk_runp_md5 disk_runp_md100 disk_runp_md101
disk_runp_md102 disk_peak disk_mean disk_rd/s disk_wr/s disk_rK/s
disk_wK/s swap_avail page_rstim freememK free_pages
ce0Ipkt/s ce0Opkt/s ce0InKB/s ce0OuKB/s ce0IErr/s ce0OErr/s
ce0Coll% ce0NoCP/s ce0Defr/s ce1Ipkt/s ce1Opkt/s ce1InKB/s
ce1OuKB/s ce1IErr/s ce1OErr/s ce1Coll% ce1NoCP/s ce1Defr/s
tcp_Iseg/s tcp_Oseg/s tcp_InKB/s tcp_OuKB/s tcp_Ret% tcp_Dup%
tcp_Icn/s tcp_Ocn/s tcp_estb tcp_Rst/s tcp_Atf/s tcp_Ldrp/s
tcp_LdQ0/s tcp_HOdp/s nfs_call/s nfs_timo/s nfs_badx/s nfss_calls
nfss_bad v2reads v2writes v3reads v3writes dnlc_ref/s dnlc_hit%
inod_ref/s inod_hit% inod_stl/s pp_kernel pagesfree pageslock
pagestotl
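Because that first record names every column, a small helper (hypothetical, not part of Orca) can turn it into a field-to-index map for ad-hoc queries against the raw file:

```python
# Hypothetical helper: exploit the self-describing first record of an
# orcallator raw file to look up fields by name instead of position.

def field_index(header_line):
    """Map each field name in the header record to its column index."""
    return {name: i for i, name in enumerate(header_line.split())}

def extract(sample_line, header_line, *fields):
    """Pull the named fields out of one whitespace-separated sample line."""
    idx = field_index(header_line)
    parts = sample_line.split()
    return [parts[idx[f]] for f in fields]
```

This keeps any post-processing robust against column reordering between hosts, since fields are found by name rather than by fixed offset.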

 

Glen Gunselman
Systems Software Specialist
TCS
Emporia State University

>>> "Beck, Joseph" <jbeck at seic.com> <mailto:jbeck at seic.com>  03/14/07
2:10 PM >>>

We currently use Orca throughout our environment (120 Sun boxes) & it's
useful for many reasons.

One glaring piece missing, though, is process information.

I haven't set up Orca myself; in this environment I've only added
agents/clients.

So I'm not sure whether this process-info gap is specific to our
implementation or whether the capability isn't there in Orca by default.

 

Either way, my goal is to collect process info in order to be able to
connect the dots between a load (or spike) & a process.

Is this capability there out of the box with Orca? If not, have there
been any efforts to leverage prstat, psio.se, etc. to collect & graph
it?

Ultimately, I'd like to get to organizing processes into workloads &
begin to understand our application resource utilization.

 

Thanks for any feedback or references,

 

Joe Beck Ciber Inc. - a consultant to SEI  One Freedom Valley Drive/ 100
Cider Mill Road| Oaks, PA 19456 | p: 610.676.2258 | jbeck at seic.com

_______________________________________________
Orca-users mailing list
Orca-users at orcaware.com
http://www.orcaware.com/mailman/listinfo/orca-users