[Orca-users] Old data / parsed data
David Michaels
dragon at raytheon.com
Thu Sep 6 09:01:53 PDT 2007
NFS takes too much CPU? That's not right -- are you sure it's NFS
that's consuming CPU, and not the orca computations? Also, why are your
clients scp'ing the data to the Linux box if the data is being stored
via NFS? Why not just scp them to the NFS server, or better yet, just
write them to that location in the first place?
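For instance, something along these lines on each client might do it
(untested, and assuming your start_orcallator/orcallator.se still honor
the OUTDIR environment variable -- check your copy; the server name and
paths here are just placeholders):

    # mount the orca data area exported by the NFS server
    mount nfs-server:/export/orca/orcallator /mnt/orca

    # point orcallator's output at a per-host subdirectory there,
    # e.g. in start_orcallator before orcallator.se is launched
    OUTDIR=/mnt/orca/`uname -n`
    export OUTDIR

That way the raw data lands directly where orca reads it, and there is
nothing left to scp at all.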
It could be that the I/O overhead of a constant stream of scp'ed data
coming in is taxing your system more than the orca computation itself,
so separating the two is advisable. In my
case, the orcallator process on each box writes the data to the data
directory that orca itself uses. Orca in turn runs on a v440 (it used
to run on a v210, but as our network grew, that turned out to be
insufficiently powerful).
The *.bz2 files are the raw data. My Suns and AIX boxes generate about
17MB of raw bzip2'ed data per year. If your raw data dirs are much
bigger than this, and you need them to be smaller, perhaps you should
adjust what data you're collecting.
Alternatively, you can remove old data files with a simple find
command. However, once those old data files are gone, you cannot
regenerate them.
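For example, something like this, assuming the raw files live under
/orca/orcallator and you want to keep a year's worth (run the -print
version first to see what would go away):

    find /orca/orcallator -name '*.bz2' -mtime +365 -print
    find /orca/orcallator -name '*.bz2' -mtime +365 -exec rm {} \;

The same sort of thing in a cron job on the clients will keep the
client-side copies from piling up.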
The RRD files can be regenerated from the raw data files at any time.
You may have some old RRD files floating around from data that is no
longer meaningful. For example, I have data for my QFE interfaces on my
Sun servers from 2005, but I disabled the QFE interfaces (and the
corresponding orcallator.conf file entries) and thus no longer collect
data on them. I don't really need those corresponding RRD files
anymore, so I can safely remove them.
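For instance, to clear out my stale QFE RRDs I can do something like
the following (the exact filenames depend on your configuration, so
/orca/rrd and the pattern are just placeholders -- again, check with
-print first):

    find /orca/rrd -name '*qfe*.rrd' -print
    find /orca/rrd -name '*qfe*.rrd' -exec rm {} \;

And if you ever delete too much, running orca once against your config
(orca -o orcallator.cfg, if I remember the flag right) rebuilds the RRD
files from whatever raw data is still there.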
By the way, you should probably consider upgrading to the latest orca
snapshot (r529 or later), and using the orcallator.se file from that
distribution. Also, check your RICHPse distribution -- 3.4 is the
latest, and is recommended. This might even help your NFS problem, but
that's unlikely.
Hope this helps,
--Dragon
Francisco Mauro Puente wrote:
> Hello List,
>
> I'm using orca-0.27 + orcallator.se 1.37.
>
> I used to run both orca and the web server on a machine running
> Linux, but since it generated huge I/O problems (mainly disk), I
> decided to process all the data, via NFS, on a Sun v490 server. Now the
> problem is that NFS takes too much CPU on the Linux box.
>
> My problem is: what can I delete from the rrd directory to free up some
> space?
> All my servers are transferring the orcallator-generated files via
> 'scp' to the Linux box, but the files are kept on both sides, clients
> and server, eating space very, very fast... how are you guys dealing
> with this? I mean, the .bz2 files keep accumulating on the client and
> then get transferred to the server; is there no purge implemented in
> any way?
>
> Any help will be very welcome
>
> Thanks
> Francisco