[Scons-users] Memory problem with Integration tests
William Blevins
wblevins001 at gmail.com
Tue Aug 12 12:27:44 EDT 2014
Rob,
> My machine (Windows 7) has 14GB of RAM available.
> Python (being 32-bit) seems to run out while coming close to the 2GB
> boundary.
>
Is there a reason to use 32-bit Python on a 64-bit operating system?
> (watched it grow and grow during the run).
> It especially seemed to grow after finishing one of the
> integration-test targets and before starting the next.
It's strange that it would run out of memory around the 2GB mark (~1/2 the
maximum for 32-bit addressing) unless SCons was trying to allocate a very
large contiguous chunk of memory, maybe an array or a string. Generally,
containers of that nature double in size when increasing capacity, so a
single growth step can fail well before memory is truly exhausted. In most
languages, the practical array size limit is well below the theoretical
maximum; in Java, this is about 1GB for a 32-bit JVM.
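For illustration, a minimal sketch (plain CPython, nothing SCons-specific)
that makes the over-allocation visible; CPython lists grow in roughly
1.125x steps rather than strict doubling, but the effect is the same:

    import sys

    lst, last = [], 0
    for i in range(2000):
        lst.append(i)
        size = sys.getsizeof(lst)  # bytes held by the list object itself
        if size != last:           # capacity jumped: CPython over-allocated
            print("len=%d size=%d" % (len(lst), size))
            last = size

Near the 2GB ceiling, even a modest growth step has nowhere left to go.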
Would it be possible to get a memory usage profile for your setup? Maybe
via something like https://pypi.python.org/pypi/memory_profiler or your
preferred tool?
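A minimal sketch of what I mean (assuming "pip install memory_profiler";
the package and its memory_usage() API are from the PyPI page above):

    import subprocess
    from memory_profiler import memory_usage

    # memory_usage() can watch a subprocess until it exits
    proc = subprocess.Popen(["scons", "scripttests"])  # "scons.bat" on Windows
    samples = memory_usage(proc, interval=1.0)         # MiB, one sample/second
    print("peak: %.1f MiB" % max(samples))

A plot of those samples over the whole run would show whether the growth is
steady per test or spikes between test targets.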
If not, can you confirm my assumption that the out of memory exception is
not a stack overflow?
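CPython reports the two failure modes differently: a failed heap allocation
raises MemoryError, while runaway recursion raises "maximum recursion depth
exceeded" long before the C stack is in real danger. A quick way to see the
latter:

    import sys

    def recurse():
        return recurse()

    sys.setrecursionlimit(50)      # keep the demo short
    try:
        recurse()
    except RuntimeError as e:      # RecursionError in Python >= 3.5
        print("recursion limit hit: %s" % e)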
> What I had noticed already was that this in-between time was VERY long
> (one to several minutes).
> Because the tests cannot be run in parallel, and I was trying to optimize,
> I tried:
> scons scripttests --max-drift=1 --implicit-cache -j 1
> and
> env.Decider('MD5-timestamp')
> The "-j 1" really made a HUGE improvement. With this option the memory is
> around 400-500MB (what you had guessed).
> Also the very long waits between the integration-test-targets are almost
> completely gone.
> It seems to work, but I don't know why and I would not like to be
> obligated to use the "-j 1".
The SCons Job pipeline handles -j1 and -jX for X>1 differently [if I'm not
mistaken]: -j1 gets a simple serial runner, while -jX spins up a thread pool.
I imagine each parallel task has a memory overhead. I don't know if that
overhead grows proportionally to the number of jobs.
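Not SCons's actual code, but a minimal sketch of that serial-vs-parallel
dispatch pattern; note the queue and per-thread bookkeeping only exist on
the parallel path:

    import threading
    try:
        import queue            # Python 3
    except ImportError:
        import Queue as queue   # Python 2, current as of this thread

    def run(tasks, num_jobs):
        """Run a list of zero-argument callables with num_jobs workers."""
        if num_jobs == 1:
            for t in tasks:     # serial path: no threads, no queue
                t()
            return
        q = queue.Queue()
        for t in tasks:
            q.put(t)
        def worker():
            while True:
                try:
                    q.get_nowait()()    # pop and run the next task
                except queue.Empty:
                    return
        threads = [threading.Thread(target=worker) for _ in range(num_jobs)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()

With -j1 semantics nothing beyond the task list itself is ever allocated,
which would line up with the smaller footprint you saw.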
How many processors were you running with before? What happens if you do
-j2 instead of -jNUM_PROCS?
Does the scons run in question build software or just run test scripts?
The configuration I use at work does both, and I haven't run into this
issue.
-William
On Tue, Aug 12, 2014 at 3:49 AM, Rob Deckers <Rob.Deckers at vanderlande.com>
wrote:
> Hi,
>
> Thanks for the first comments.
>
> It's nice to hear that the overall size and the basic setup don't sound
> alarming to you.
>
> My machine (Windows 7) has 14GB of RAM available.
> Python (being 32-bit) seems to run out while coming close to the 2GB
> boundary.
> (watched it grow and grow during the run).
> It especially seemed to grow after finishing one of the
> integration-test targets and before starting the next.
>
> What I had noticed already was that this in-between time was VERY long
> (one to several minutes).
> Because the tests cannot be run in parallel, and I was trying to optimize,
> I tried:
>
> scons scripttests --max-drift=1 --implicit-cache -j 1
> and
> env.Decider('MD5-timestamp')
>
> The "-j 1" really made a HUGE improvement. With this option the memory is
> around 400-500MB (what you had guessed).
> Also the very long waits between the integration-test-targets are almost
> completely gone.
>
> It seems to work, but I don't know why and I would not like to be
> obligated to use the "-j 1".
>
> Kind regards,
> Rob Deckers
>
>
> -----Original Message-----
> From: Scons-users [mailto:scons-users-bounces at scons.org] On Behalf Of
> Dirk Bächle
> Sent: Monday, 11 August 2014 17:35
> To: scons-users at scons.org
> Subject: Re: [Scons-users] Memory problem with Integration tests
>
> Hi Rob,
>
> It's difficult to guess what's wrong without seeing the
> SConstructs/SConscripts. Please find a few first comments below.
>
> On 11.08.2014 09:21, Rob Deckers wrote:
> > Hi All,
> >
> > I have SCons working, including integration tests, on one archive, and
> > now I'm integrating the integration tests on a second (slightly
> > different) archive.
> >
> > The previous archive had about 800 scripttests and this one has 1200.
> > The 1200 tests are divided over about 100 directories, and each
> > directory is its own target.
> >
> > Each scripttest uses the same single executable.
> > The executable depends on about 8,000 header and cpp files.
> Given this number of input files, a memory consumption of 400-500MB (as
> your test results indicate) looks quite normal to me. This depends on the
> project's structure of course, but it's not an alarming value.
>
> > To make an all_scripttests target, each individual scripttest target is
> > added like:
> > env.Alias(FscScriptTestTool['scripttest_alias'], target)
> >
> > When I first do a scons run to build the executable and then an
> > all_scripttests run, I see the memory growing with each scripttest
> > (at a rate of about 1MB every 4 seconds).
> >
> > Eventually, I get a memory error triggered from some copy command.
> > When I disabled that single line, I got the error on some other line.
> >
> > Is it possible my alias construction is to blame?
> I don't think so.
> > What does scons (beneath the surface) do with 100 targets all dependent
> > on one executable with 8K files?
> > Does it create 100 times the entire tree for the executable?
> No, it shouldn't do this. The dependencies are internally stored in a
> graph, which allows several targets to have the same source node(s) as
> input.
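> For illustration, a minimal SConstruct sketch (hypothetical paths and
> names, not necessarily your setup) where many test targets share one
> executable node; the 8K-file subtree exists only once in the graph:
>
>     env = Environment()
>     exe = env.Program("app", Glob("src/*.cpp"))    # one node, built once
>     for d in ["tests/dir1", "tests/dir2"]:         # one target per directory
>         log = env.Command("%s/result.log" % d, exe,
>                           "run_test $SOURCE > $TARGET")
>         env.Alias("all_scripttests", log)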
>
> > In the attachment is the output from an up-to-date run for the
> > executable. I used:
> > scons executable --debug=memory --debug=count
> > I have a hard time really interpreting those results.
> The first thing that catches my eye is the high number of "Override"
> Environments. This means you're using things like
>
> env.Program(target, source, SET_A_VARIABLE="something_special")
>
> a lot. Try to avoid that, and set up the Environments in your top-level
> SConstruct for the different cases you need. Then simply pass them around,
> but don't recreate them in all the places (see the sketch below).
> If you think you *have* to do it like this, let's discuss the underlying
> problem and how to solve it. There should be a way around it...
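> A minimal sketch (hypothetical variable names) of the difference:
>
>     # creates a throwaway Override Environment on every call:
>     env.Program("app", ["main.cpp"], CPPDEFINES=["SPECIAL"])
>
>     # set the variant up once in the SConstruct and reuse it:
>     special_env = env.Clone(CPPDEFINES=["SPECIAL"])
>     special_env.Program("app", ["main.cpp"])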
>
> How much memory does your system have?
> Have you tried to watch memory consumption with a "top/vmstat" in parallel
> to running SCons? The "scons --debug=memory" output isn't necessarily
> comprehensive.
> By how much does the overall memory consumption of the "scons" process
> grow when you run it with one test target, versus running with two targets?
>
> Best regards,
>
> Dirk
>
>