[Orca-users] orca in large environments
mtb
mtb at nac.net
Mon Aug 18 11:25:10 PDT 2003
We are currently running orca server processes on a single HP DL580G2 with 4 CPUs and 16 GB RAM running SLES8. We monitor roughly 2,000 hosts, broken down into approximately 20 regional/business groups, with one server process running per group. We monitor Solaris hosts (running the SE Toolkit), Linux hosts (procallator), and AIX hosts (a modified version of procallator).
We plan to add an additional 3,000-4,000 hosts in the near future, so scale has become an issue. Is anyone running Orca at this sort of scale, and if so, what is your approach?
It is difficult to gauge when memory will become a constraint. All free memory is quickly used up whether we start two or twenty server processes. At some point the box will start swapping, but it would be helpful to have some idea of when that might happen.
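One rough way to watch for that point is to periodically sum the resident set size (RSS) of the orca processes from `ps` output. A minimal sketch (the sample `ps` output is hardcoded for illustration; the process name and field layout are assumptions about a typical Linux `ps -eo rss,comm` run):

```python
# Sum resident set size (RSS) per orca server process from `ps` output.
# The sample below is hardcoded for illustration; in practice it would
# come from: subprocess.check_output(["ps", "-eo", "rss,comm"])

SAMPLE_PS = """\
  RSS COMMAND
412340 orca
398112 orca
  5124 sshd
287650 orca
"""

def orca_rss_kb(ps_output):
    """Return total RSS in kB for processes named 'orca'."""
    total = 0
    for line in ps_output.splitlines()[1:]:  # skip the header row
        rss, comm = line.split(None, 1)
        if comm.strip() == "orca":
            total += int(rss)
    return total

print(orca_rss_kb(SAMPLE_PS))  # total kB across the orca processes
```

Sampling this total as processes are added would at least show whether memory use grows linearly per process or per monitored host.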
Orca processes all new data files into RRDs for all hosts first, then generates the HTML. Is this approach for the sake of efficiency? Why not process each host individually? For groups with a large number of hosts, there is a considerable delay while the server processes all the RRDs, since every one must be finished before any of the HTML pages are updated. This would likely require a drastic rewrite of the code, but it seems worth discussing.
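To make the two orderings concrete, here is a minimal sketch contrasting the current batch ordering with a per-host ordering. The functions `update_rrd` and `write_html` are hypothetical placeholders, not Orca's actual internals:

```python
# Two orderings for turning collected data into RRDs and HTML pages.
# update_rrd / write_html are hypothetical placeholders for illustration.

def update_rrd(host):
    return f"{host}.rrd"

def write_html(host):
    return f"{host}.html"

def batch_order(hosts):
    """Current behavior: update every RRD first, then write every page."""
    ops = [update_rrd(h) for h in hosts]
    ops += [write_html(h) for h in hosts]
    return ops

def per_host_order(hosts):
    """Proposed behavior: finish each host before starting the next."""
    ops = []
    for h in hosts:
        ops.append(update_rrd(h))
        ops.append(write_html(h))
    return ops

hosts = ["web01", "web02", "db01"]
print(batch_order(hosts))
print(per_host_order(hosts))
```

With the per-host ordering, the first host's pages are fresh almost immediately instead of after the whole group's RRD pass, at the possible cost of losing whatever locality the batch pass exploits.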