[Orca-users] Nagios

David Michaels dragon at raytheon.com
Fri May 28 08:46:59 PDT 2004


>> > I've been reading through orca docs and stuff, but I have no idea how
>> > I would go about this.  Very very complicated.
>>
>> What do you think would be the complicated part?  Would the complicated 
>> part be on Orca's end?
>
>Yes, Orca.  I'm pretty confused on how I would handle a log like this.
> The nagios log is somewhat oddly formatted and contains information
>on multiple hosts and services.
>
>I would like my end result to be that I could click into server1 and see
>its load avg graph.
>

Ordinarily, your data generator would be a process running on, in this 
case, each of server1-server3, writing data to a file formatted for use 
with Orca (first line all headers, each subsequent line a timestamp plus 
the data values).  In your case, since you already have the data in a log 
file (presumably on a central machine), you /could/ write a data generator 
that simply parses this log file and writes the data, in Orca format, to 
different files based on server name.  For instance:

    awk -F\; '/server1.*load average/ {print $6}' \
        /usr/local/var/nagios/archives/nagios-05-27-2004-00.log | \
        sed -e 's/.*load average: \([^,]*\), \([^,]*\), \([^,]*\)/\1 \2 \3/' \
        > /some/path/to/orca/data/server1/05-27-2004-00.data

That would pull out all of the 27th's data involving server1's load 
average and put it into a simple file with 3 columns.  You'd still need 
a way to add an appropriate timestamp to each data point (possibly the 
epoch number at the start of the corresponding log line), and a header 
line as well, with the data labels for each column.  If you match those 
labels and the output file to what Orca expects, you'd have effectively 
replaced orcallator.se with your own home-brew script that generates the 
data from a central file rather than collecting it on the servers 
themselves.  Not advisable, imo, but if nagios is already running and 
you don't want to incur the extra overhead of running a custom-built 
data collector on those servers, it's another option.
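
Just to sketch it out (untested, and the field layout is a guess based on 
a typical nagios service-check line along the lines of 
"[1085673600] SERVICE ALERT: server1;Load;OK;HARD;1;OK - load average: 
0.10, 0.08, 0.05"), a single awk pass that also emits the timestamp 
column and the header might look like this -- the column labels are 
placeholders and have to line up with whatever your Orca config defines:

    awk -F\; '
        BEGIN { print "timestamp load_1 load_5 load_15" }   # header of data labels
        /server1/ && /load average/ {
            split($1, t, "[][]")                 # epoch sits inside [...] at line start
            if (match($6, /load average: */)) {
                v = substr($6, RSTART + RLENGTH) # "0.10, 0.08, 0.05"
                gsub(/,/, "", v)                 # -> "0.10 0.08 0.05"
                print t[2], v
            }
        }' /usr/local/var/nagios/archives/nagios-05-27-2004-00.log \
        > /some/path/to/orca/data/server1/05-27-2004-00.data

The first output line is the header, and every line after that is an 
epoch timestamp plus the 1-, 5- and 15-minute load averages, which is the 
general shape Orca's data files want.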

Consider, though, that if you don't need anything but load average, you 
could perhaps use "rup server1" from any client to gather the current 
load average data (assuming the appropriate service is enabled in 
/etc/inetd.conf or equivalent, and there aren't any firewall issues to 
overcome).  Then your script would probably be easier to write, and you 
could get 'real time' data more easily.
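
For instance, a small cron-able wrapper (again untested; rup's output 
format varies a bit between systems, so the sed expression is a guess, 
and the paths are just placeholders) could append one Orca-style line 
per run:

    #!/bin/sh
    # Append one data line per run; meant to be kicked off from cron every
    # few minutes.  Assumes rup's output ends in "load average: 0.10, 0.08, 0.05".
    OUT=/some/path/to/orca/data/server1/rup-load.data

    # Write the header of column labels the first time through.
    [ -f "$OUT" ] || echo "timestamp load_1 load_5 load_15" > "$OUT"

    ts=`perl -e 'print time'`          # epoch seconds for the timestamp column
    load=`rup server1 | sed -e 's/.*load average: *//' -e 's/,//g'`

    [ -n "$load" ] && echo "$ts $load" >> "$OUT"

Point Orca at that file (with plot definitions matching those column 
names) and it should graph it like any other data source.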

--Dragon



-- 

*David P. Michaels*
Senior Multi-Disciplined Engineer II W.H.
NPOESS IS
Platform OS Unix
303.344.6840
720.858.5952 fax
720.521.0561 pager
dragon at raytheon.com   *aka "Dragon"*

"I wonder what news is doing..."

 news at newshost <29> ps -fu news      
 news 18624 12367 2 0:00 makehistory 

"News is making history."
      
