[Scons-users] Slowness of SCons with a large number of targets

Hans Ottevanger hans.ottevanger at gmail.com
Tue Apr 18 07:36:45 EDT 2017


Thanks for the information concerning the roadmap and the assurance
that performance will get the necessary attention in the near future.

We will pull the stubprocess.py wrapper into our local SCons and rerun
our experiments (and those of Melski, while we are at it).
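
Concretely, the wiring we have in mind looks like the sketch below:
drop stubprocess.py from the Parts repository into site_scons/ next to
our SConstruct (SCons adds that directory to sys.path) and import it
before any commands are spawned. Whether the module needs extra setup
is something we still have to find out.

    # SConstruct (sketch of our intended wiring, not tested yet;
    # 'stubprocess' is the file from the Parts repository, copied
    # into site_scons/ so that SCons can import it)
    import stubprocess  # assumed to install the posix_spawn wrapper on import

    env = Environment()
    SConscript('src/SConscript', exports='env')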

With our current experimental tree we need about 9 GB of resident
memory. We have 16 GB on dedicated VDIs and do not run out of memory
(just out of time!). We will use your suggested debug options on
future test runs and report the results here.
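
For reference, the invocation we plan to use on the next run (the -j
value matches our 4-core VDIs; the log file name is just our choice):

    scons -j4 --debug=count,objects,time 2>&1 | tee scons-debug.log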

Best regards,

Hans Ottevanger



On Fri, Apr 14, 2017 at 6:35 PM, Bill Deegan <bill at baddogconsulting.com> wrote:
> The roadmap was out of date. I've just updated it.
>
> The current development focus is py2 + py3 compatibility.
> We expect that work to be completed in the next few weeks.
>
> After that, performance work is at the top of the list.
>
> That said, there's no reason you can't pull the stubwrapper into your local
> scons.
> See:
> https://bitbucket.org/sconsparts/parts/src/3a389f774f234694994071d784af88c3babaad03/parts/overrides/stubprocess.py?at=master&fileviewer=file-view-default
>
> Curious about your performance: is your SCons process exceeding the
> available RAM?
>
> Can you run with:
>
> --debug=count,objects,time
>
> -Bill
>
>
> On Fri, Apr 14, 2017 at 3:53 AM, Hans Ottevanger <hans.ottevanger at gmail.com>
> wrote:
>>
>> Hi,
>>
>> At the company I am working for we are revising our software build
>> system and we are currently evaluating build tools. We think that
>> SCons is an interesting tool, offering a high degree of consistency
>> and features that other tools clearly lack.
>>
>> However, we are facing severe performance issues as the number of
>> targets grows. Experimenting with close to 300000 targets in a tree
>> that mimics a large part of our actual tree, we measured about 35
>> hours for a full build. The build actions just touch the target
>> files and do not invoke real compilers, so those 35 hours are almost
>> entirely SCons overhead.
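>>
>> To give an idea of the shape of the experiment: each leaf SConscript
>> in our synthetic tree boils down to a long list of dummy targets
>> along the following lines (a simplified sketch, not our actual
>> generator):
>>
>>     # Leaf SConscript (sketch): every target is produced by a plain
>>     # 'touch', so the measured time is almost pure SCons overhead.
>>     Import('env')
>>     for i in range(100):
>>         env.Command('target_%04d.out' % i, [], 'touch $TARGET')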
>>
>> We are aware that Eric Melski reported these scalability issues
>> quite some time ago (see
>> https://blog.melski.net/2013/12/11/update-scons-is-still-really-slow/).
>> We could almost exactly reproduce Eric's results using the tools he
>> provides on GitHub (https://github.com/emelski/scons_bench). We are
>> using SCons 2.5.1 and Python 2.7 on VDIs with 4 cores and 16GB RAM and
>> (perceived) local disk storage, running RHEL6. We needed 5000 seconds
>> for 50000 targets and see the same quadratic behaviour that Eric
>> Melski reports.
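>>
>> (As a sanity check on the quadratic claim: extrapolating from 5000
>> seconds at 50000 targets gives (300000/50000)^2 * 5000 s = 180000 s,
>> about 50 hours, for 300000 targets, which is the same order of
>> magnitude as the 35 hours we measured on our own tree.)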
>>
>> We understand that the issue was diagnosed as being caused by the way
>> Python implements fork() and waitpid() and that relief was expected
>> from a wrapper using posix_spawn(). That stubprocess.py wrapper was
>> slated for inclusion in SCons 2.5, but apparently did not make it (see
>> https://bitbucket.org/scons/scons/wiki/Roadmap).
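>>
>> As we understand it, fork() in a multi-gigabyte Python process has to
>> copy the parent's page tables for every spawned command, while
>> posix_spawn() avoids that. Purely as an illustration (this is not the
>> stubprocess.py code itself), the technique looks roughly like this:
>>
>>     # Sketch: spawn a child via posix_spawnp() through ctypes,
>>     # avoiding fork()'s duplication of the parent's page tables.
>>     import ctypes, ctypes.util, os
>>
>>     libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
>>
>>     def spawn(argv):
>>         pid = ctypes.c_int()
>>         args = [a.encode() for a in argv] + [None]
>>         argv_c = (ctypes.c_char_p * len(args))(*args)
>>         envp_c = (ctypes.c_char_p * 1)(None)  # empty env for brevity
>>         err = libc.posix_spawnp(ctypes.byref(pid), args[0], None, None,
>>                                 argv_c, envp_c)
>>         if err != 0:
>>             raise OSError(err, os.strerror(err))
>>         return pid.value
>>
>>     os.waitpid(spawn(['touch', '/tmp/example']), 0)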
>>
>> What are the current plans for integrating this stubprocess.py
>> wrapper into an SCons release? And is there an estimate of when we
>> can expect it?
>>
>> Best regards,
>>
>> Hans Ottevanger

