[Scons-users] Slowness of SCons with a large number of targets
Hans Ottevanger
hans.ottevanger at gmail.com
Fri Apr 14 06:53:52 EDT 2017
Hi,
At the company I am working for we are revising our software build
system and are currently evaluating build tools. We think that
SCons is an interesting tool, offering a high degree of consistency
and features that other tools clearly lack.
However, we are facing severe performance issues as the number of
targets grows. Experimenting with close to 300000 targets in a
tree that mimics a large part of our actual tree, we measured
full-build times of about 35 hours. We just touched the target files
and did not invoke real compilers, so those 35 hours are almost
entirely overhead from using SCons.
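To give an idea of the setup, here is a heavily simplified sketch of
the kind of SConstruct we use for these measurements; the directory
names and counts below are placeholders, not our actual tree:

# SConstruct -- simplified sketch of the benchmark setup; names and
# counts are placeholders, the real tree has close to 300000 targets
# spread over many directories.
import os

env = Environment()

NUM_DIRS = 100          # placeholder
TARGETS_PER_DIR = 100   # 100 x 100 = 10000 targets in this sketch

for d in range(NUM_DIRS):
    subdir = 'dir%03d' % d
    for t in range(TARGETS_PER_DIR):
        target = os.path.join(subdir, 'file%04d.out' % t)
        # Stand-in for a real compiler: an external "touch" command,
        # so nearly all of the measured wall-clock time is SCons
        # overhead.
        env.Command(target, [], 'touch $TARGET')

A full build is then timed with something like "time scons -j4"
after cleaning with "scons -c".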
We are aware of the fact that Eric Melski already reported scalability
issues quite some time ago (see
https://blog.melski.net/2013/12/11/update-scons-is-still-really-slow/).
We could almost exactly reproduce Eric's results using the tools he
provides on GitHub (https://github.com/emelski/scons_bench). We are
using SCons 2.5.1 and Python 2.7 on VDIs with 4 cores and 16GB RAM and
(perceived) local disk storage, running RHEL6. We need 5000 seconds
for 50000 targets, but we see the same quadratic behaviour as Eric
Melski reports.
We understand that the issue was diagnosed as being caused by the way
Python implements fork() and waitpid() and that relief was expected
from a wrapper using posix_spawn(). That stubprocess.py wrapper was
slated for inclusion in SCons 2.5, but apparently did not make it (see
https://bitbucket.org/scons/scons/wiki/Roadmap).
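As far as we understand it, such a wrapper avoids the cost of
fork()ing the large SCons process by letting the C library spawn the
child directly. The snippet below is only our own illustration of
that idea using os.posix_spawnp(), which exists from Python 3.8
onwards; it is not the actual stubprocess.py code and does not apply
to our Python 2.7 setup as-is:

# Illustration only: spawn an external command via posix_spawn()
# instead of Python's default fork()+exec() path. This is not
# stubprocess.py itself; os.posix_spawnp() needs Python 3.8+.
import os

def spawn(argv):
    """Run argv via posix_spawnp, wait for it, return the exit code."""
    pid = os.posix_spawnp(argv[0], argv, os.environ)
    _, status = os.waitpid(pid, 0)
    if os.WIFEXITED(status):
        return os.WEXITSTATUS(status)
    return -1  # terminated by a signal

if __name__ == '__main__':
    print(spawn(['touch', '/tmp/example.out']))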
What are the current plans for integrating this stubprocess.py
wrapper into an SCons release? And is there already an estimate of
when we can expect that?
Best regards,
Hans Ottevanger