g****t (posts: 39) | 1
Thank you for your reply. On my Linux box, physical memory is > 600 MB and
swap is > 1 GB.
I also ran a simple test on an IBM SP2 node, which has > 60 GB of memory. However, my
program stops after allocating about 250 MB.
Is anything wrong? My program is a super simple one, calling "new" in a loop. The
skeleton is as follows:

    int *pArray[10000];
    for (int i = 0; i < 10000; ++i) {      // or stop early when new fails
        pArray[i] = new int[1 << 20];      // 1M ints, about 4 MB per iteration
    }
My motivation is to find out the upper bound of supportable memory usage.

w**n (posts: 88) | 2
I think I should back off from my previous post: if you are testing a
single program, swap should not count.
I don't know whether the SP2 is a cluster or a single supercomputer; 60 GB of
distributed memory doesn't help a single process.
The limit on heap memory usage is specific to the hardware and OS; for 32-bit
Intel Linux, it is likely to be 2 to 3 GB.
[Quoting g****t's post above]