[Orca-users] Orca runs out of memory
Michael Banker
mbanker at db.com
Tue Dec 16 08:10:14 PST 2003
Yes, I think you're right. It's running SuSE Linux (SLES8, kernel 2.4.19 rev 340).
We've been running it in daemon mode... it starts up using ~300MB, but keeps growing,
and I remember it approaching 2GB before it crashes.
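Since the growth is steady in daemon mode, one option is to log the process's virtual size over time and see how fast it approaches the limit. A minimal sketch, reading `/proc/<pid>/status` (the pid argument and the 2GB default threshold are assumptions for a stock 32-bit kernel; adjust for your setup):

```python
import re
import time

def vmsize_kb(status_text):
    """Parse the VmSize line (in kB) out of /proc/<pid>/status text."""
    m = re.search(r"^VmSize:\s*(\d+)\s*kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else None

def watch(pid, limit_kb=2 * 1024 * 1024, interval=300):
    """Print VmSize every `interval` seconds, warning as it nears `limit_kb`."""
    while True:
        with open(f"/proc/{pid}/status") as f:
            size = vmsize_kb(f.read())
        print(f"{time.strftime('%H:%M:%S')} VmSize={size} kB")
        if size and size > 0.9 * limit_kb:
            print("warning: within 10% of the 32-bit address-space limit")
        time.sleep(interval)
```

Run against the orca daemon's pid, this would show whether the growth is linear per polling cycle (a leak-like pattern) or front-loaded as the host list is read in.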
On Tue, 16 Dec 2003 rhfreeman at micron.com wrote:
> Do you know how much memory orca is using when it dies?
>
> As it is running on an x86 box, it is only 32-bit, so it can only handle
> 2GB of data memory by default.
>
> You may wish to try another kernel. I've got Red Hat 7.2 up to 2.7GB of
> data memory on a standard Red Hat kernel.
>
> I think there is a build option in the kernel to make this up to 3.5GB of
> memory, if you fancy building your own kernel.
>
> Otherwise, split them out! :-) Or get a 64-bit machine to run it on!
>
> Rich
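For a rough sense of the 2GB budget Rich describes, a back-of-the-envelope check (the ~300MB baseline is the startup figure reported above; linear scaling with host count is an assumption, not a measurement):

```python
# Back-of-the-envelope: per-host memory budget before a 32-bit Orca
# process exhausts its address space. Illustrative numbers only.
ADDRESS_SPACE = 2 * 1024**3    # ~2GB usable on a stock 32-bit kernel
BASELINE = 300 * 1024**2       # ~300MB at daemon startup (reported above)
HOSTS = 1600                   # size of the failing group

per_host = (ADDRESS_SPACE - BASELINE) / HOSTS
print(f"~{per_host / 1024**2:.2f} MB of headroom per host")
# → ~1.09 MB of headroom per host
```

On that estimate, a 1,600-host group leaves each host barely a megabyte of working memory, which is at least consistent with the largest group being the one that dies while the ~1,300-host groups survive.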
>
> > -----Original Message-----
> > From: orca-users-bounces+rhfreeman=micron.com at orcaware.com
> > [mailto:orca-users-bounces+rhfreeman=micron.com at orcaware.com]
> > On Behalf Of Michael Banker
> > Sent: Monday, December 15, 2003 9:27 PM
> > To: orca-users at orcaware.com
> > Subject: [Orca-users] Orca runs out of memory
> >
> >
> > Anyone else running Orca with large numbers of hosts? We
> > have thousands of servers running orcallator (Solaris) and
> > procollator (Linux and AIX), broken down into around 15
> > groups. One particular group has over 1,600 hosts. At some
> > point, as that group grew to this size, the orca
> > server process started exiting without updating the
> > index.html. This is now happening consistently... it
> > processes for 5-6 hours and then dies with an Out of Memory
> > error (see the trace below).
> >
> > This orca server runs on a Compaq DL580G2 with 16GB of RAM
> > and 1GB of swap. When the process dies the server is not
> > swapping, so that doesn't seem to be an issue.
> >
> > The remaining groups are fine... the largest of the other
> > groups has around 1,300 hosts, and there are two other groups
> > with around 700 hosts each. All of these groups are able to
> > process the new data and update the index.html for that group.
> >
> > Any ideas short of breaking down the larger group into smaller groups?
> >
> > Thanks.
> >
> > -- strace output before the proc exits --
> > mmap2(NULL, 2097152, PROT_NONE,
> > MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM
> > (Cannot allocate memory)
> > mmap2(NULL, 1048576, PROT_NONE,
> > MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM
> > (Cannot allocate memory)
> > mmap2(NULL, 4096, PROT_READ|PROT_WRITE,
> > MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> > brk(0x40000000) = 0x3ffff000
> > mmap2(NULL, 2097152, PROT_NONE,
> > MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM
> > (Cannot allocate memory)
> > mmap2(NULL, 1048576, PROT_NONE,
> > MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM
> > (Cannot allocate memory)
> > mmap2(NULL, 4096, PROT_READ|PROT_WRITE,
> > MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> > munmap(0xa9900000, 1048576) = 0
> > munmap(0x40a17000, 524288) = 0
> > munmap(0x583df000, 16384) = 0
> > write(2, "Out of memory!\n", 15) = 15
> > munmap(0xbff8f000, 4096) = 0
> > munmap(0xbff8e000, 4096) = 0
> > munmap(0xbff8c000, 4096) = 0
> > munmap(0xbff8b000, 4096) = 0
> > munmap(0xbff8a000, 4096) = 0
> > munmap(0xbff89000, 4096) = 0
> > munmap(0xbff88000, 4096) = 0
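One detail worth noting in the trace: `brk(0x40000000)` asks to extend the heap to exactly 1GB and comes back at `0x3ffff000`, one page short, while the `mmap2` calls fail outright. On a stock 32-bit Linux 2.4 kernel the heap grows up toward 1GB, where the mmap region traditionally begins (`TASK_UNMAPPED_BASE`), with the stack near 3GB, so both regions appear exhausted. Decoding the addresses (the layout interpretation is an assumption; details vary by kernel build):

```python
# Decode addresses from the trace above, assuming the traditional
# 32-bit Linux layout: heap below 1GB, mmap region 1GB-3GB, stack at 3GB.
requested = 0x40000000   # brk() argument: heap end at exactly 1GB
granted = 0x3ffff000     # what the kernel actually returned
print(f"heap stopped {(requested - granted) // 1024} kB short of 1GB")
# → heap stopped 4 kB short of 1GB

stack_page = 0xbff8f000  # one of the munmap'd pages near the stack
print(f"stack-area page at {stack_page / 1024**3:.2f} GB")
# → stack-area page at 3.00 GB
```

That both the heap and the mmap area are full fits the 2GB-of-data-memory ceiling discussed above, rather than a shortage of physical RAM on the 16GB box.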
> >
> >
> > --
> > Michael Banker | Deutsche Bank | Parsippany, NJ | 973-606-3732
> >
> > _______________________________________________
> > Orca-users mailing list
> > Orca-users at orcaware.com
> > http://www.orcaware.com/mailman/listinfo/orca-users
> >
>
--
Michael Banker | Deutsche Bank | Parsippany, NJ | 973-606-3732