[Scons-users] visualize build parallelism, duration of build steps?

Gary Oberbrunner garyo at oberbrunner.com
Wed Oct 20 15:03:48 EDT 2021


Some of this is likely your builds becoming I/O bound, I'd expect.

On Tue, Oct 19, 2021 at 9:04 PM Gabe Black <gabe.black at gmail.com> wrote:

> That sounds interesting. I see that when I set -j to 24, the actual CPU
> percentage it gets is only 1953%, which I think would correlate with
> underused CPUs. I had other things running on this computer in the
> meantime, like my web browser, but not enough to use 14 of 24 threads in the
> -j12 case, for instance.
>
> jobs  CPU percentage
> 4     397.00%
> 8     755.00%
> 12    1086.00%
> 16    1403.00%
> 20    1713.00%
> 24    1953.00%
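[The numbers above can be turned into a parallel-efficiency figure; a quick sketch, where the dict simply restates the quoted table:]

```python
# Parallel efficiency implied by the table above: perfect scaling at -jN
# would show N * 100% CPU, so the ratio measures how close the build gets.
measurements = {4: 397.0, 8: 755.0, 12: 1086.0, 16: 1403.0, 20: 1713.0, 24: 1953.0}

for jobs, cpu_pct in sorted(measurements.items()):
    efficiency = cpu_pct / (jobs * 100.0)
    print(f"-j{jobs:<2d}  {cpu_pct:7.2f}%  efficiency {efficiency:6.1%}")
```

[Efficiency drops from roughly 99% at -j4 to roughly 81% at -j24, which is consistent with either scheduler saturation or I/O contention.]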
>
> On Tue, Oct 19, 2021 at 5:21 PM Bill Deegan <bill at baddogconsulting.com>
> wrote:
>
>> From some experiments MongoDB ran, you can saturate the scheduler without
>> maxing out available CPUs with the current taskmaster.
>> They produced a PR with some patches to improve the job finding and
>> dispatch logic.
>>
>> If you're hitting a wall where your CPU cores are idle, that PR's patch
>> might be interesting.
>> Though it could also be that the shape of the build (many object compiles
>> -> fewer ar/shared-library links -> fewer program links) leads to
>> underutilized cores later in your build and/or during an incremental build.
>>
>> Hope that helps,
>> Bill
>> SCons Project Co-Manager
>>
>> On Tue, Oct 19, 2021 at 3:55 PM Gabe Black <gabe.black at gmail.com> wrote:
>>
>>> There definitely seems to be a build size component to this. I have a
>>> series of changes which don't change how anything is built, but tell SCons
>>> about everything and then select what to build with different dependencies
>>> instead of using Python logic to selectively set up build rules. With -j
>>> set to its previous performance saturation point, this actually
>>> increased build time by about 7-8%. I deleted the build and reran to make
>>> sure there weren't caching effects impacting the results, and they stayed
>>> consistent.
>>>
>>> Gabe
>>>
>>> On Tue, Oct 19, 2021 at 6:59 AM Mats Wichmann <mats at wichmann.us> wrote:
>>>
>>>> On 10/18/21 16:45, Gabe Black wrote:
>>>> > Hi folks, sorry if this is a really obvious question, but is there a
>>>> > command line flag or tool or something to visualize how parallel a
>>>> SCons
>>>> > build is, if there are any bottlenecks, if there are abnormally long
>>>> > running build steps, etc?
>>>>
>>>> The taskmastertrace is generally the way to extract this information.
>>>> It's a bit like trying to drink from a firehose, though, so a tool to
>>>> process it would likely help. Don't know if there are any of those
>>>> floating around.
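[A processing tool could be fairly small; a sketch of one, assuming you have first reduced the (very verbose, version-dependent) taskmastertrace output to a hypothetical simplified log of "<epoch-seconds> START|STOP <target>" lines:]

```python
# Build a concurrency profile (tasks in flight over time) from a
# simplified per-task event log. The log format here is hypothetical;
# raw taskmastertrace output would need preprocessing into this shape.
def concurrency_profile(lines):
    events = []
    for line in lines:
        ts, kind, _target = line.split(None, 2)
        events.append((float(ts), 1 if kind == "START" else -1))
    events.sort()  # STOP sorts before START at equal timestamps
    running, profile = 0, []
    for ts, delta in events:
        running += delta
        profile.append((ts, running))  # tasks in flight after this event
    return profile

log = [
    "0.0 START a.o", "0.1 START b.o", "2.0 STOP a.o",
    "2.5 STOP b.o", "2.5 START prog", "5.0 STOP prog",
]
print(concurrency_profile(log))
```

[Long stretches where the in-flight count sits at 1 would point at the kind of serializing "super task" described below.]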
>>>>
>>>>
>>>> > I've had a suspicion for a few years that there might be a task
>>>> > scheduling bug where at certain points a super task seems to come
>>>> > along and stop any other tasks from being scheduled until it is done.
>>>>
>>>> There's been some discussion of this over time.  It appears the way
>>>> scheduling is done may be prone to getting blocked by a long-running
>>>> job.  One tuning of the algorithm was proposed in a PR which hasn't
>>>> received much action; opinions about whether it had really pinpointed
>>>> an underlying issue were not unanimous.
>>>>
>>>> https://github.com/SCons/scons/pull/3386
>>>>
>>>> There's also been talk about maybe not having all the members of the
>>>> pool of job threads be handled identically, so there would be a place
>>>> you could put a long-running job to get it started as soon as possible
>>>> rather than waiting behind a bunch of shorter ones, which could then
>>>> compete for the other threads and maybe get closer to finishing at the
>>>> same time.  Making decisions based on the length of jobs would require
>>>> some sort of profiling capability so the scheduler has the information
>>>> available.
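[The "start long jobs first" idea could be sketched as a longest-estimated-job-first dispatch order; the durations here are hypothetical values from an assumed earlier profiled build, and this is not how the current taskmaster works:]

```python
import heapq

def longest_first_order(jobs):
    """Order ready jobs so the longest estimated job dispatches first,
    reducing the chance it becomes the tail of the build."""
    # Negate estimates so the max-duration job sits at the top of the min-heap.
    heap = [(-est, name) for name, est in jobs.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _neg_est, name = heapq.heappop(heap)
        order.append(name)
    return order

# Estimated durations in seconds (hypothetical, from a prior profiled build).
estimates = {"big_link": 120.0, "a.o": 3.0, "b.o": 5.0, "c.o": 2.5}
print(longest_first_order(estimates))  # 'big_link' comes out first
```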
>>>>
>>>> On the whole I think it's been posited that there are performance
>>>> issues when the build gets very big, and it's not well understood why.
>>>> The taskmaster is a pretty complex piece of code, and it was heavily
>>>> tuned "in the old days" (~15 years ago).  It would be nice to have
>>>> better ways to characterize what happens on modern hardware
>>>> configurations, with potentially larger build projects than when that
>>>> work was done.
>>>>
>>>> _______________________________________________
>>> Scons-users mailing list
>>> Scons-users at scons.org
>>> https://pairlist4.pair.net/mailman/listinfo/scons-users
>>>


-- 
Gary