[Orca-dev] Re: Is unzipping the percol files during every run necessary

kim.taylor at pncbank.com
Fri Feb 14 13:16:51 PST 2003


On Friday, February 14, 2003, Chris Jones wrote:

> I have a problem with the time it's currently taking to process
> collected data.
>
> Surely we could speed things up by moving the old percol files into an
> archive directory after each invocation (-o -v). Is this possible? Do
> we really need to unzip everything each time, or should the RRD files
> contain the relevant historical data and thus only need updating with
> newly collected information?

I've been thinking hard about the same problem, having about 3 years of
data from upwards of 40 systems.

It seems that, at the end of the day, all of a FILE is already loaded into
RRD, with the exception of the lag time in the loop (~20 min in my case).
The only value in processing any FILE.gz, then, is to pick up the last few
entries missed when the original FILE was closed and compressed.

What if the compression were simply delayed long enough for the
original FILE to be processed?
Something like setting COMPRESSOR="slowgzip.sh" in the
/etc/init.d/orcallator startup script might do it where:

-- slowgzip.sh --
#!/bin/sh
# Delay compression so the collector can finish reading the file first.
(sleep 3600; gzip -9 "$1") &

Then remove all compression extensions (.gz and so on) from
orcallator.cfg and trust your data collector to have FILE available and
processed before it "disappears".
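If anyone wants to experiment, here is a slightly more defensive sketch of
the wrapper. The SLOWGZIP_DELAY variable and the skip-if-already-compressed
check are my own additions, not anything Orca or orcallator provides:

```shell
#!/bin/sh
# slowgzip: compress a file after a delay so whatever is still reading it
# can finish first. Sketch only; SLOWGZIP_DELAY is a made-up knob.
slowgzip() {
    delay="${SLOWGZIP_DELAY:-3600}"   # seconds to wait before compressing
    file="$1"
    (
        sleep "$delay"
        # Skip files that vanished or already carry a .gz extension.
        [ -f "$file" ] && [ "${file%.gz}" = "$file" ] && gzip -9 "$file"
    ) &
}
```

The startup script would then call slowgzip instead of gzip; the background
subshell returns control immediately, so the collector is never blocked.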

Let me know if this sounds plausible or else somebody stop me before I
break something!

KET
<Kim.Taylor at pnc.com>

