[Scons-users] parallel invocation of SCons

Herzog, Tobias (CQSP) tobias.herzog at carmeq.com
Mon Feb 18 06:58:39 EST 2019


Hi,

maybe I can clarify my use case in a bit more detail:
I have some tests that are not driven by SCons. After those tests have run, I generate some reports based on the test results with SCons. In parallel (in the CI pipeline) to those tests, we run some unit tests with SCons (all running on the same local checkout on one machine). Normally the unit tests are much faster than the other tests, so the "unit test" SCons run should have finished before the report generation starts. I just want to make things safe, just in case... The report generation target is completely independent of any direct or indirect dependency of the unit test target.

However, I also faced the use case you described (which was in fact not the reason I contacted the mailing list, but it fits very well here):
I have to build the "same" library for different platforms (using different cross compilers and different variant dirs for each platform). As I need to log the build, I have to compile with -j1 (for a reproducible build log). For the same reason, I cannot build all targets within one SCons call (I don't want to mix the build logs of the several platforms). Additionally, I want to map the build targets to separate steps within our CI pipeline for easier analysis (the "3 of 4" thing you mentioned...).
For performance reasons, one could parallelize those builds at the CI level by invoking SCons in parallel. For now I did not do this (i.e. I serialized the builds), because I feared race conditions with the sconsign database.

Maybe I'll try using a different database for each build. But for now, the first use case I described is more important to me.
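Something like this at the top of the SConstruct is what I have in mind (the platform= argument and the naming scheme are just illustrative assumptions):

# Each CI job passes a distinguishing argument, e.g.
# "scons platform=arm ..." in one job and "scons platform=x86 ..." in another.
platform = ARGUMENTS.get('platform', 'host')

# Give each platform build its own signature database, so two SCons
# processes running in the same checkout do not overwrite each other's
# .sconsign.dblite.
SConsignFile('.sconsign_%s' % platform)

# Keep the build outputs apart as well, one variant dir per platform.
SConscript('src/SConscript', variant_dir='build/%s' % platform, duplicate=0)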

I didn't know about Parts. Maybe I will have a look at that, too.

Thanks,
Tobias



From: Scons-users <scons-users-bounces at scons.org> On Behalf Of Jason Kenny
Sent: Friday, February 15, 2019 1:03 AM
To: Bill Deegan <bill at baddogconsulting.com>; SCons users mailing list <scons-users at scons.org>
Subject: Re: [Scons-users] parallel invocation of SCons

Bill,
I agree with you

In this case, if I am reading this correctly, the CI setup he has is a single pipeline that runs different jobs in parallel. Each job runs a different instance of SCons on the same tree, building different targets. Done this way, if any given target fails, the other targets still finish, so in the worst case his final job still has some data in it. Since I don't know what the build outputs, I can only assume that having 3 out of 4 is useful versus having nothing. I also assume that doing it this way makes a build failure easier to read. However, this setup shares disk, so he has the known limitation of the DB state in SCons when running more than one scons process at the same time. If this were moved to different pipelines, with something that lets him pass output along to a final output pipeline, he could get the same effect and have no DB issues. Or he could use different DB files for the builds in the shared location, but as you point out, that might not help depending on a number of factors. My suggestion is based on not changing the CI itself but restructuring the build to work better in his CI, as I would assume that sharing data between pipelines in his CI can be hard or impossible to do (this is a common issue).

If this were me, I would use Parts and define these codebases as different components. That avoids any issue with defining dependencies between the components if needed, and it would also check out the code, given svn or git is used. I would turn on the "per-part" logging option for better logging output to display/share on the CI, so a -j build with -k would work (see the invocation sketch below). Given -k is used, the build will not fail just because one codebase/component failed to build, and the final pass would gather whatever was created. The per-component logging should show a more linear output of what happened, versus the mixed text of a combined output (the logging of output is, I think, the main reason for the current approach). This is more efficient, as you point out. This could be done to a degree in raw SCons as well. However, in both cases (SCons with Parts, or raw SCons) you will probably not get nice high-level green/red circles when something fails without some work to scan the output for errors in a different job/step.
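For reference, -j and -k are ordinary SCons command-line options; a combined invocation would look something like this (target names made up):

scons -j 8 -k platform_a platform_b platform_c

-k keeps the build going past a failed target, so the other targets still complete.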

I agree with doing one build pass vs. many... but this may be more about having the CI UI show useful information at a quick glance.

Personally, I would like the DB setup to allow me, in Parts, to build "debug" and "release" variants at the same time, i.e. I start one "debug" build and, while it builds, start a different "release" variant, without this messing up the DB such that whichever build finishes last defines the state on disk. That forces the dev to rebuild stuff they did not need to.

Jason

________________________________
From: Scons-users <scons-users-bounces at scons.org> on behalf of Bill Deegan <bill at baddogconsulting.com>
Sent: Thursday, February 14, 2019 4:28 PM
To: SCons users mailing list
Subject: Re: [Scons-users] parallel invocation of SCons

Jason,

Why would you not just build more than one variant with one SCons invocation?
If I understand correctly, Tobias's CI is going to build in the same tree, on the same machine.
So there's very little benefit in building variants separately... at least performance-wise.

If the CI misbehaving means building the same code at the same time for the same variant (if there's more than one), that's just something to avoid, as the builds could very well break each other regardless of the sconsign issue.

-Bill


On Thu, Feb 14, 2019 at 1:49 PM Jason Kenny <dragon512 at live.com> wrote:
I agree that if you can make your CI simpler, that is the best way.

I disagree on the Windows issue. The locking on a common PDB when building C++ is easy to avoid with /Z7. I have not had issues with multiple independent builds on Windows given they are in different directories, but maybe I have not had to build a case that does have an issue.
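For example, something like this in the SCons environment (assuming MSVC; where exactly you attach the flag is a project choice):

# /Z7 embeds the debug info in each object file instead of routing it
# through a shared .pdb, so parallel compiles do not contend for a
# common PDB file.
env = Environment(tools=['msvc'])
env.Append(CCFLAGS=['/Z7'])
env.Program('app', ['main.cpp'])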

If I understand correctly, there is one SConstruct (possibly copied for each repo) that builds different targets for different codebases?

Given this, I think it would be better to break up the SConstruct into different scripts for each codebase, and then have a final script look for the final outputs and gather them up. These can all run independently (minus the final script, and given they are in different directories or you redefine the DB file). However, this only makes sense if the codebases are independent of each other. If the codebases depend on each other, you will have a problem, and you might have to do as Bill suggests and have the CI block parts of the pipeline from running at the same time.
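As a rough sketch of that final gather script (all names and paths below are hypothetical):

# final/SConstruct - run after the independent per-codebase builds
# have finished; it only collects what they produced.
import glob
env = Environment()
# Pick up whatever artifacts the earlier, independent runs left behind.
artifacts = glob.glob('../codebase_*/dist/*')
env.Install('release', artifacts)
env.Default('release')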

I agree that it would be nice if we could get a better DB setup that would allow one to run different variant builds in the same directory at the same time without overwriting the DB. However, that does not exist today. If you want a common DB for all the bits, you have to serialize the build targets on the CI, or you have to break up the build so its components are independent builds.

Jason
________________________________
From: Scons-users <scons-users-bounces at scons.org> on behalf of Damien <damien at khubla.com>
Sent: Thursday, February 14, 2019 11:09 AM
To: scons-users at scons.org
Subject: Re: [Scons-users] parallel invocation of SCons


We do this by having separate codebases for each build target, even though the codebase supports all the targets. SCons is run in a separate process for each codebase. We then have a final SCons pass that just does a set of env.Install('VariousVersionsOfBlahTarget') calls to collect everything together. It sounds cumbersome, but it's very reliable; once you have the scripts set up, it's straightforward. Also, when one build breaks, only one is broken.

Damien
On 2/14/2019 9:57 AM, Bill Deegan wrote:
Jason,

Assuming the CI builds are building the same code, just changing the sconsign file won't be sufficient.
Especially if the builders are Windows, with the joy of Windows file locking...

The best way to solve this is to:
1) fix the CI, or
2) make SCons wait until the other build is done before starting.

On Thu, Feb 14, 2019 at 7:06 AM Jason Kenny <dragon512 at live.com> wrote:

Hi,

As I understand it, SCons writes most of the data at the end of the run, and there is a race because it writes out the whole file with updated state from the current run. This leads to the issue you are seeing: the state of one run replaces the data from a previous run, leading to bad states. I normally work around this in one of two ways:

1) Run scons with both targets and a -j[n] option. This normally solves the issue for me. It can, however, make the output hard to read, or, from your CI point of view, not break up the tasks as you might like.
2) Use SConsignFile() in your different runs to point at different DB files, so there are no conflicts. This may let you work around your issue, as it prevents the race condition on the common file (a small sketch follows).
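As a small sketch of option 2, assuming the concurrent runs differ in the targets they request (hashing the target list is just one way to pick distinct file names):

# Top of SConstruct: give each distinct invocation its own signature
# DB so concurrent runs do not race on one .sconsign file.
import hashlib
tag = hashlib.md5(' '.join(sorted(COMMAND_LINE_TARGETS)).encode()).hexdigest()[:8]
SConsignFile('.sconsign_%s' % tag)

Note the runs still share the same build tree, so this only removes the DB race, not contention on the targets themselves.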

Information about SConsignFile is below.

Jason


SConsignFile([file, dbm_module]), env.SConsignFile([file, dbm_module])

This tells scons to store all file signatures in the specified database file. If the file name is omitted, .sconsign is used by default. (The actual file name(s) stored on disk may have an appropriate suffix appended by the dbm_module.) If file is not an absolute path name, the file is placed in the same directory as the top-level SConstruct file.

If file is None, then scons will store file signatures in a separate .sconsign file in each directory, not in one global database file. (This was the default behavior prior to SCons 0.96.91 and 0.97.)

The optional dbm_module argument can be used to specify which Python database module to use. The default is a custom SCons.dblite module that uses pickled Python data structures and works on all Python versions.

Examples:

# Explicitly stores signatures in ".sconsign.dblite"
# in the top-level SConstruct directory (the
# default behavior).
SConsignFile()

# Stores signatures in the file "etc/scons-signatures"
# relative to the top-level SConstruct directory.
SConsignFile("etc/scons-signatures")

# Stores signatures in the specified absolute file name.
SConsignFile("/home/me/SCons/signatures")

# Stores signatures in a separate .sconsign file
# in each directory.
SConsignFile(None)

________________________________
From: Scons-users <scons-users-bounces at scons.org> on behalf of Herzog, Tobias (CQSP) <tobias.herzog at carmeq.com>
Sent: Thursday, February 14, 2019 3:14 AM
To: Scons-users at scons.org
Subject: [Scons-users] parallel invocation of SCons

Hi SCons users,

due to parallelization in a CI build pipeline, I have the use case that SCons is (possibly) invoked concurrently in the same tree (i.e. using the same database) but with different build targets. I have already noticed that this can lead to SCons crashing and/or a corrupted database.
For me it would be sufficient if SCons just waited until the other SCons process has finished. My idea was to acquire some kind of inter-process lock right in the SConstruct that stays active until the process terminates. My assumption here is that database access takes place after the reading of the SConstruct/SConscript files is done.
Is this correct? Would this solution be safe? Has anyone solved this problem in another/better way?
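Something like this at the very top of the SConstruct is what I have in mind (assuming a POSIX host, since fcntl has no Windows counterpart; the lock file name is arbitrary):

# A second SCons process started in the same tree blocks here until
# the first one exits; the OS releases the lock automatically when
# the process terminates, so nothing further is needed.
import fcntl
_lock = open('.scons_build.lock', 'w')
fcntl.flock(_lock, fcntl.LOCK_EX)

Holding the lock for the whole process lifetime would sidestep the question of when exactly the database is accessed.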

Thanks and best regards,
Tobias
_______________________________________________
Scons-users mailing list
Scons-users at scons.org
https://pairlist4.pair.net/mailman/listinfo/scons-users