[Orca-users] hitachi 9980

Saladin, Miki lsaladi at uillinois.edu
Thu Jul 22 12:03:07 PDT 2004


Just to let the list know the results: it takes longer than 5 minutes for
the graphs to be updated, and since that is longer than the rsync interval,
the graphs now update every 10 minutes. So it does seem that new data
appearing between the start and end of the perl -o process keeps it running
longer, as the timestamps on the graphs are now 10 minutes apart.
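For the archives, one way to avoid that overlap entirely is to chain the
transfer and the graphing pass in a single cron job, so each -o pass works on
a fixed snapshot of the data. This is only a sketch; the remote host, data
directory, and orca/config paths below are made up and would need adjusting:

#!/bin/sh
# pull_and_graph.sh - hypothetical cron wrapper; the rsync source, data
# directory, and orca/config paths are assumptions for illustration only.
RSYNC=/usr/local/bin/rsync
ORCA=/usr/local/bin/orca
CFG=/usr/local/etc/orcallator.cfg

# Copy the new orcallator data first, then run one graphing pass, so data
# arriving later cannot extend the running -o pass.
$RSYNC -a collector:/orca/orcallator/ /orca/orcallator/ && $ORCA -o $CFG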

-----Original Message-----
From: Saladin, Miki 
Sent: Thursday, July 22, 2004 12:51 PM
To: 'Jon Tankersley'; Saladin, Miki; orca-users at orcaware.com
Subject: RE: [Orca-users] hitachi 9980


My question would be: what happens if the -o execution of orca crosses the
rsync run that moves new data into the /orca/orcallator repository?
We rsync data over every 5 minutes. If it takes orca longer than 5 minutes to
update the graphs, will it finish and exit, or will the new data that has been
added since the process started just keep it running?
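(One crude but portable way to keep the two jobs from crossing at all is a
shared lock used by both cron jobs. The sketch below uses mkdir because it is
atomic; the lock path is just an example and nothing here is orca-specific.)

#!/bin/sh
# Sketch of a shared lock for both the rsync job and the orca -o job, so a
# long graphing pass and a data transfer never run at the same time.
# The lock directory name is an assumption.
LOCK=/var/tmp/orca_graph.lock

if mkdir "$LOCK" 2>/dev/null; then
    trap 'rmdir "$LOCK"' 0
    # ... run either the rsync or the orca -o pass here ...
else
    # the other job is still running; skip this interval
    exit 0
fi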


-----Original Message-----
From: Jon Tankersley [mailto:jon.tankersley at eds.com]
Sent: Tuesday, July 20, 2004 1:09 PM
To: 'Saladin, Miki'; orca-users at orcaware.com
Subject: RE: [Orca-users] hitachi 9980


For larger environments, we've stopped running orca in continuous mode.
We just run it multiple times with the -o flag (we have some systems that
have collections that are pulled every 15 minutes).
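A sketch of what that looks like in cron (the binary and config paths are
assumptions, and an explicit minute list is used because older crons do not
understand */15):

# crontab fragment - run one graphing pass every 15 minutes instead of
# leaving orca running in continuous (daemon) mode
0,15,30,45 * * * * /usr/local/bin/orca -o /usr/local/etc/orcallator.cfg >/dev/null 2>&1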

-----Original Message-----
From: orca-users-bounces+jon.tankersley=eds.com at orcaware.com
[mailto:orca-users-bounces+jon.tankersley=eds.com at orcaware.com] On Behalf Of
Saladin, Miki
Sent: Tuesday, July 20, 2004 12:46 PM
To: 'orca-users at orcaware.com'
Subject: [Orca-users] hitachi 9980


We recently installed a Hitachi 9980 disk array in our center, and as a
result orca (0.264, orcallator.cfg file version 1.36) seems to have
developed a memory leak. Only 10 domains are being graphed by this orca
process, so we are not talking about large numbers. On the domain where
orca runs, swap space usage was absolutely constant until this device was
introduced. The introduction of this device, of course, significantly
increased the number of disks to be graphed, for multiple domains.
Here is some top output. The first snapshot is from about 5 hours after
starting orca:
load averages:  0.95,  0.66,  0.57                              16:34:41
78 processes:  76 sleeping, 2 on cpu
CPU: 40.1% idle, 48.6% user,  4.5% kernel,  6.8% iowait,  0.0% swap
Memory: 2048M real, 1306M free, 356M swap in use, 2135M swap free

   PID USERNAME THR PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
  8360 root       1   0    0  294M  241M cpu/0  151:24 48.25% perl
<<<<<<<<<<<<<<<<<< ORCA 
  9124 lsaladi    1  58    0 2344K 1744K cpu/1    0:33  0.07% top
   352 root      13  58    0 4040K 2600K sleep    6:13  0.04% syslogd
   373 root       1  58    0 1064K  720K sleep    0:00  0.01% utmpd
   726 root       1  58    0    0K    0K sleep    1:01  0.00% se.sparcv9
   353 root       1  54    0 2016K 1360K sleep    0:27  0.00% cron
   624 root       1  59    0   27M   66M sleep    0:06  0.00% Xsun
  9115 root       1  58    0 5192K 3216K sleep    0:03  0.00% sshd2
   828 root       6  49    0 9904K 6224K sleep    0:02  0.00% dtsession
    20 root       1  58    0 8688K 6560K sleep    0:02  0.00% vxconfigd
  5567 root       1  58    0 5200K 3224K sleep    0:02  0.00% sshd2
   834 root       8  59    0 9312K 6496K sleep    0:02  0.00% dtwm
   753 root       1   0    0 5248K 2032K sleep    0:01  0.00% perl
 18522 root       1  58    0 5296K 3360K sleep    0:01  0.00% sshd2
   389 root       1   0    0 1936K 1328K sleep    0:00  0.00% vxconfigba

This is the next day, at about noon, with the same orca process still running:

load averages:  0.54,  0.56,  0.50                              12:07:43
72 processes:  70 sleeping, 2 on cpu
CPU states: 49.6% idle, 49.0% user,  1.4% kernel,  0.0% iowait,  0.0% swap
Memory: 2048M real, 743M free, 1203M swap in use, 1287M swap free

   PID USERNAME THR PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
  8360 root       1   0    0 1144M  817M cpu/1  641:59 38.44% perl
<<<<<<<<<<<<<<<<<<<<<<ORCA
 11358 root       1  58    0 2296K 1696K cpu/0    0:00  0.16% top
   352 root      13  58    0 4040K 2600K sleep    6:49  0.16% syslogd
  9124 lsaladi    1  58    0 2344K 1744K sleep    2:39  0.08% top
   373 root       1  58    0 1064K  720K sleep    0:02  0.02% utmpd
  9115 root       1  59    0 5192K 3216K sleep    0:15  0.00% sshd2
 11114 apache     3  58    0 3344K 2312K sleep    0:00  0.00% httpd
 11121 apache     3  58    0 3344K 2296K sleep    0:00  0.00% httpd
   726 root       1  58    0    0K    0K sleep    1:05  0.00% se.sparcv9.5.8
   353 root       1  54    0 2016K 1360K sleep    0:30  0.00% cron
  5567 root       1  58    0 5208K 3232K sleep    0:06  0.00% sshd2
   624 root       1  59    0   27M   66M sleep    0:06  0.00% Xsun
   828 root       6  49    0 9904K 6224K sleep    0:02  0.00% dtsession
    20 root       1  58    0 8688K 6560K sleep    0:02  0.00% vxconfigd
   834 root       8  59    0 9312K 6496K sleep    0:02  0.00% dtwm
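To put a number on how fast the process grows, a small cron job along the
lines of the sketch below can log a size sample every few minutes. The PID is
just the one from the top output above, and a POSIX-style ps is assumed:

#!/bin/sh
# Sketch: append a timestamped VSZ/RSS sample for the orca perl process.
# 8360 is the PID from the top output above; adjust it or look it up.
PID=8360
echo "`date '+%Y-%m-%d %H:%M:%S'` `ps -o vsz= -o rss= -p $PID`" >> /var/tmp/orca_mem.log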

I know there have been memory leaks reported on this list for orca 0.27, so
I'm not sure upgrading is the solution. Any suggestions would be appreciated.
As you can see above, swap space is going, going, and will soon once again be
gone. Last time it took around 42 hours before orca crashed with:
Out of memory during "large" request for 266240 bytes, total sbrk() is
2545103736 bytes at /usr/local/lib/Orca/ImageFile.pm line 318. 
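Until the leak itself is tracked down, a blunt workaround is a watchdog run
from cron that restarts orca before it exhausts swap. The sketch below assumes
a single orca process, a resident-size limit of roughly 800 MB, and a
continuous-mode restart command; all of those are guesses to adapt:

#!/bin/sh
# Sketch of a cron watchdog: kill and restart orca once its resident set
# passes a limit. The process match, limit, and restart command are assumptions.
LIMIT_KB=819200                      # roughly 800 MB resident
PID=`pgrep -f 'bin/orca'`
[ -z "$PID" ] && exit 0
RSS=`ps -o rss= -p $PID`
if [ $RSS -gt $LIMIT_KB ]; then
    kill $PID
    sleep 10
    /usr/local/bin/orca /usr/local/etc/orcallator.cfg >/dev/null 2>&1 &
fi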

 
_______________________________________________
Orca-users mailing list
Orca-users at orcaware.com http://www.orcaware.com/mailman/listinfo/orca-users
