[Orca-users] Out of memory during request for 1016 bytes error
Michael O'Dea
modea at upoc-inc.com
Thu Dec 19 11:39:01 PST 2002
I bumped the limits down:
bash-2.05$ ulimit -a
core file size        (blocks)  0
data seg size         (kbytes)  unlimited
file size             (blocks)  unlimited
open files                      512
pipe size          (512 bytes)  10
stack size            (kbytes)  8192
cpu time             (seconds)  unlimited
max user processes              29995
virtual memory        (kbytes)  51200
512 open files -- it was 4096 before. I am still getting errors like:
/opt/orca-0.27b2/bin/orca: warning: cannot open state file `/opt/orca-0.27b2/rrd/orcallator/orca.state.tmp' for writing: Too many open files
and now it doesn't seem to be writing anything useful. When the limit was at the default of 4096 file descriptors, the machine crawled to a halt, used 100% of swap, and other processes started crapping out.
Also, the system shows about 75 bzip processes and 430 defunct processes under orca!
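A quick way to see the pile-up (standard ps/grep/ulimit usage; the grep patterns are only illustrative and may need tweaking for your system):

  # count defunct (zombie) children left behind under orca
  ps -ef | grep orca | grep -c '<defunct>'

  # count leftover bzip processes ([b] keeps grep from matching itself)
  ps -ef | grep -c '[b]zip'

  # show the soft descriptor limit a new orca process will inherit (sh/bash)
  ulimit -n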
-----Original Message-----
From: Steve Waltner [mailto:swaltner at lsil.com]
Sent: Thursday, December 19, 2002 10:25 AM
To: orca-users at orcaware.com
Subject: Re: [Orca-users] Out of memory during request for 1016 bytes error
On Wednesday, December 18, 2002, at 05:24 PM, Michael O'Dea wrote:
> Hello all
>
> While orca is firing up and starting to read files I get this error:
>
> Out of memory during request for 1016 bytes, total sbrk() is 27999536
> bytes!
>
> after about 150 lines of
>
> /opt/orca-0.27b2/bin/orca: warning: cannot open
> `/opt/orca-0.27b2/orcallator/st-ivr/percol-2002-12-15' for reading:
> Too many open files
> /opt/orca-0.27b2/bin/orca: warning: cannot open
> `/opt/orca-0.27b2/orcallator/st-ivr/percol-2002-12-16' for reading:
> Too many open files
> /opt/orca-0.27b2/bin/orca: warning: cannot open
> `/opt/orca-0.27b2/orcallator/st-ivr/percol-2002-12-16' for reading:
> Too many open files
> /opt/orca-0.27b2/bin/orca: warning: cannot open
> `/opt/orca-0.27b2/orcallator/st-ivr/percol-2002-12-17' for reading:
> Too many open files
>
> Now, the "too many open files" errors I have always gotten, but this
> "Out of memory" error is new.
>
> Did I maybe reach some limit of rrd files that Orca can process?
>
> -m
Have you looked at the output of limit/ulimit? In addition to the file
descriptor limit mentioned in the FAQ
(http://svn.orcaware.com:8000/repos/trunk/orca/FAQ, item 2.3), you can
also adjust the datasize, the maximum amount of RAM the process can
malloc(). Use "ulimit -d" for sh variants and "limit datasize" for csh
variants of user shells. On my system, limit reports the following,
allowing me to allocate memory until swap space is exhausted.
ra:~> limit
cputime      unlimited
filesize     unlimited
datasize     unlimited
stacksize    8192 kbytes
coredumpsize 4096 kbytes
vmemoryuse   unlimited
descriptors  256
ra:~> limit -h
cputime      unlimited
filesize     unlimited
datasize     unlimited
stacksize    unlimited
coredumpsize unlimited
vmemoryuse   unlimited
descriptors  1024
ra:~>
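In concrete terms, the commands would look something like this (the
131072 KB figure is only an illustrative value, not a recommendation):

  # sh/bash: raise the data segment soft limit to ~128 MB (kbytes)
  ulimit -d 131072

  # csh/tcsh equivalent
  limit datasize 131072

  # the descriptor soft limit can be raised the same way (sh/bash)
  ulimit -n 1024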
Steve
_______________________________________________
Orca-users mailing list
Orca-users at orcaware.com
http://www.orcaware.com/mailman/listinfo/orca-users