[Orca-users] Heterogeneous OS Env. & Orca

B.Schopp at gmx.de
Tue Jul 8 06:19:07 PDT 2003


> Hi Orca Users,
>  
> I am new to Orcaware and have perused the lists to obtain
> some basic configuration and operational information.
>  
> I really require a sanity check in order to justify the level
> of effort required to build and deploy Orca in our enterprise production
> environment. (and then pay Blair for his s/w :-)

It doesn't require too much work for SunOS and Linux boxes. The data
gatherers are simple to distribute if you build packages (and know how
to build packages ;-) ). Compiling and building the different Orca
packages took me about one working day for SunOS (one data gatherer
package and one Orca daemon package with its own Perl) and about one
working day for the same on Linux (without building RPMs).
Distributing the data gatherers to the monitored hosts took about
three (part-time) working days for 100 hosts. Budgeting one man-week
should be enough, but it wouldn't be a mistake to allow some extra
time for problems.
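In case it helps: on Solaris the package build boils down to an SVR4
pkginfo/prototype pair. A rough sketch (the package name ORCAgather
and all paths here are only examples of mine, not Orca defaults):

  # pkginfo holds the package metadata, e.g.:
  #   PKG=ORCAgather
  #   NAME=orcallator data gatherer
  #   VERSION=1.0
  #   ARCH=sparc
  #   CATEGORY=application
  # prototype lists the files to ship, e.g.:
  #   i pkginfo
  #   d none /opt/orca 0755 root root
  #   f none /opt/orca/bin/orcallator.se 0755 root root
  #   f none /opt/orca/bin/start_orcallator 0755 root root
  pkgmk -o -r / -d /tmp -f prototype               # build into /tmp
  pkgtrans -s /tmp /tmp/ORCAgather.pkg ORCAgather  # one datastream file
  # ...then on each monitored host:
  pkgadd -d /tmp/ORCAgather.pkg all

A datastream file is handy here because you only have to copy one
file per host and run pkgadd against it.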
  
> We are a mixed-OS shop with
> - AIX 4.3.3, 5.1 & 5.2
> - Solaris 7,8 & 9
> with some HPUX & Linux.
>
> We are running Oracle, DB2 and leveraging Apache/Tomcat for Web.

That sounds like the most common combination. BTW: has anybody
tried/started to develop a data gatherer for databases like Oracle,
DB2, Informix, MySQL, etc.?

> I understand that the Orca process engine (orcallator.se) 
> runs "only" in a Solaris environment.
>  
> 1) Which is the best Solaris version - 7, 8 or 9? (is the i386 Solaris an
> option)?

We are running Orca in SunOS 5.6, 5.7 and 5.8 environments; 5.8 also
runs on i386 without any problems. At this time we are testing SunOS
5.9 on SPARC, and Orca seems to work fine. We also run percollator as
well as Orca in daemon mode (the latter to process the gathered data
of all systems) in a Linux environment. Until the beginning of this
year we had Orca running in daemon mode on a SunOS 5.8 E450. At the
beginning of this year we moved it to a little farm of 'old' x86
workstation boxes (ranging from PIII-450 to PIII-800) because the
E450 had to be used for another project. It runs quite stably, with
quite good performance.
So, from my point of view, feel free to run Orca in daemon mode on
SunOS 5.x for SPARC as well as on a Linux box. We have not tested
Orca in daemon mode on SunOS for x86. orcallator/percollator runs
quite stably on all the OSes mentioned above without affecting
performance on the monitored hosts.

> 2) I am not sure I fully understand the mechanism (or whether it is
> even possible) for feeding the telemetry/data from my mixed-OS
> environment to the "orcallator.se/Solaris" engine
>
> - are there Orca agents installed on each host
> - is there a config key for IP host address
> - is it NFS mounted
> - other

There isn't really a feed as such. The gathered data is written into
per-host data files every $INTERVAL (by default 300 seconds), so you
end up with one file of gathered data per host.
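To make the mechanism concrete, a listing from one of our hosts looks
roughly like this (the base directory and the host name webserver01
are made-up examples; the percol-YYYY-MM-DD naming is what orcallator
writes):

  $ ls /opt/orca/var/orcallator/webserver01
  percol-2003-07-07.Z   # older files may be compressed; Orca reads
                        # .Z/.gz files fine
  percol-2003-07-08     # today's file, one data line per $INTERVAL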

We tried different methods to get these files to the host running
Orca in daemon mode:
- An NFS server on the main Orca host, mounted on all monitored hosts
  (a bad idea if you are concerned about security). With NFS you will
  also run into problems logging in to a monitored host if the NFS
  server is down for some reason.

- A batch SSH user with its own passwordless private/public key pair,
  which connects from the main Orca host to all monitored hosts and
  syncs the data files to the main Orca host using rsync (see the
  first sketch after this list).
  In this setup the private key resides only on the main Orca host,
  which has to be secured, as it is allowed to connect to all
  monitored hosts without a password. The public key is distributed
  to all monitored hosts, which must also have rsync installed. As
  long as the data files produced by orcallator are group-readable by
  the right group, it is quite enough if the batch user has strongly
  limited rights and is a member of this group.
  This is the method we are still using.

- Alternatively, we tried a variant of the batch SSH user: we
  generated one private/public key pair per monitored host and
  distributed the public keys to the main Orca host. The monitored
  hosts then connected to the main Orca host themselves and
  transferred the data to it using rsync (see the second sketch after
  this list). The main idea of this scenario was higher security.
  There are some cons:
  - All hosts should have a properly configured NTP running; without
    it, Orca will complain about missing and/or outdated data files.
    A well-configured NTP should reside on every host anyway, IMHO.
  - You will have to spend a lot of time generating and distributing
    all of the SSH keys. For our department it was much easier to
    secure the main Orca host than to generate and distribute one key
    per host. This method is quite good and secure if you have only
    about 20-30 hosts to monitor; in our case, with about 100
    monitored hosts, it would be too work-intensive.
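For what it's worth, our pull boils down to something like the
following; the user name orcasync, the hosts.list file and all the
paths are only examples of mine, nothing Orca ships:

  # one time, on the main orca host: passwordless key pair for the
  # batch user
  ssh-keygen -t rsa -N "" -f /export/home/orcasync/.ssh/id_rsa
  # append id_rsa.pub to ~orcasync/.ssh/authorized_keys on every
  # monitored host, then run this from cron every $INTERVAL:
  for host in `cat /opt/orca/etc/hosts.list`; do
      rsync -az -e ssh \
          "orcasync@${host}:/opt/orca/var/orcallator/" \
          "/opt/orca/var/orcallator/${host}/"
  done

rsync only transfers what changed since the last run, so even with
100 hosts the load on the main host stays low.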
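The push variant is nearly the same, just seen from the monitored
host, with one key pair per host (again, the host name orca-main and
the paths are only examples):

  # on each monitored host: its own passwordless key pair
  ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
  # its id_rsa.pub has to go into authorized_keys on the main orca
  # host; then each monitored host pushes its own data from cron:
  rsync -az -e ssh \
      /opt/orca/var/orcallator/`hostname`/ \
      "orcasync@orca-main:/opt/orca/var/orcallator/`hostname`/"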

> Great levels of detail will really aid in my proposal
> for project approval.

One of my best arguments for Orca in the proposal was: if the
customer complains about bad performance at a given time, we have
something visual to show the customer that everything worked all
right, and in case it didn't, we can determine the bottleneck quite
quickly. It is also quite a good help for forecasting future
bottlenecks, like lack of disk space, as you can see how the
monitored systems behave over a longer period of time.

This was useful to me, as I actually work in an outsourced IT
department where we run several projects for different departments of
the company. So our daily work is making proposals for new systems as
well as improving running systems to meet the current and future
needs of the customers.

> Many thanks in advance,
> Tony

Best efforts,
Burkhardt
