
Making the CMake build system a first class citizen #986

Closed
6 of 8 tasks
delcypher opened this issue Apr 27, 2017 · 42 comments
Labels
build/release build and release scripts and process


@delcypher
Contributor

delcypher commented Apr 27, 2017

Since the CMake build system landed in #461 I've been slowly implementing additional features to bring it on par with the original Python/Makefile-based build system.

The following features are missing (but are being actively worked on)

Since we are very close to having feature parity and the build system has been in place for over a year (and thus we've had plenty of time for people to try it) I thought it might be a good time to re-evaluate the position of the build system within the project. I also hear that Visual Studio 2017 is supposed to have substantially better CMake integration which is a nice bonus for Visual Studio users.

Right now the build system isn't really a "first class citizen" because it lives in contrib/ and is not tested by CI. I'd like to change this; resources permitting, I'd like to see:

  • Continuous integration testing using the CMake build system added. If the resources for this aren't available I'd be willing to set up testing via third party services like TravisCI and AppVeyor.
  • Moving the CMake files out of contrib/ to where CMake expects them to live. This isn't essential but it would be a nice change because I frequently shoot myself in the foot when I forget to run the contrib/cmake/bootstrap.py script. EDIT: Done in #1070 ([CMake] Move CMake files into their intended location), which is now merged.

Note I am not suggesting removing or deprecating the original Python/Makefile build system. Although that is what I'd like to see happen eventually, this is not my decision to make. If this is the route we want to go down, a first step would be to set up extensive testing (as I'm suggesting above) so that any problems with the CMake build system can be fixed before deprecating the old build system.

Thoughts?

@NikolajBjorner
Contributor

This is super. I would prefer to deprecate the Python-based build as soon as it is redundant. Maintaining two systems is not a path forward, and the Python system has its limitations. My only issue with CMake/Ninja is that I don't get good debug symbols for the VS profiler. Haven't dug into why; probably the warning that it can't find vc140.pdb during linking is a big clue.

@delcypher
Contributor Author

delcypher commented Apr 27, 2017

@NikolajBjorner Thanks for your support.

Regarding the issue you've observed. Could you create an issue detailing reproduction steps (e.g. MSVC/Visual Studio version, CMake version, build type) and I'll take a look? Broken debug info is definitely a blocker and that needs to be fixed.

@NikolajBjorner
Contributor

For starters, I get these kinds of warnings (RelWithDebInfo) on my various machines.

api_log_macros.cpp.obj : warning LNK4099: PDB 'vc140.pdb' was not found with 'api_log_macros.cpp.obj' or at 'C:\z3\build\vc140.pdb'; linking object as if no debug info
api_commands.cpp.obj : warning LNK4099: PDB 'vc140.pdb' was not found with 'api_commands.cpp.obj' or at 'C:\z3\build\vc140.pdb'; linking object as if no debug info
z3_replayer.cpp.obj : warning LNK4099: PDB 'vc140.pdb' was not found with 'z3_replayer.cpp.obj' or at 'C:\z3\build\vc140.pdb'; linking object as if no debug info
api_tactic.cpp.obj : warning LNK4099: PDB 'vc140.pdb' was not found with 'api_tactic.cpp.obj' or at 'C:\z3\build\vc140.pdb'; linking object as if no debug info
api_stats.cpp.obj : warning LNK4099: PDB 'vc140.pdb' was not found with 'api_stats.cpp.obj' or at 'C:\z3\build\vc140.pdb'; linking object as if no debug info
api_solver.cpp.obj : warning LNK4099: PDB 'vc140.pdb' was not found with 'api_solver.cpp.obj' or at 'C:\z3\build\vc140.pdb'; linking object as if no debug info
api_seq.cpp.obj : warning LNK4099: PDB 'vc140.pdb' was not found with 'api_seq.cpp.obj' or at 'C:\z3\build\vc140.pdb'; linking object as if no debug info

I am not sure where "vc140.pdb" gets pinned by the build system and where I would fetch it from.
The post http://stackoverflow.com/questions/33292241/generating-pdb-files-correctly-for-a-native-dll-using-visual-studio-2013 indicates it is entirely handled by the build system.

About repros
C:\z3>cmake --version
cmake version 3.5.2
C:\z3>ninja --version
1.6.0
C:\z3>cl --version
Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24215.1 for x86
Repro scenario: take some smt2 file that takes more than 10 seconds to run. Select sampling based profiling from the VS profiling wizard, configure the profiler to run z3.exe on the appropriate long running input. The stack trace information in the profiler is going to be inaccurate (compared to what gets generated by python/nmake).

@delcypher
Contributor Author

@NikolajBjorner Thanks for the detailed info. Am I right in thinking that MSVC 19.00 corresponds to Visual Studio 2015? Also does this problem happen when you use Visual Studio as the CMake generator (instead of Ninja)? I'm not very familiar with the MSVC toolchain so I'm not sure how well things are supposed to work when you build a binary outside of Visual Studio but then try to profile it using Visual Studio.

@delcypher
Contributor Author

delcypher commented Apr 27, 2017

Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24215.1 for x86
Repro scenario: take some smt2 file that takes more than 10 seconds to run. Select sampling based profiling from the VS profiling wizard, configure the profiler to run z3.exe on the appropriate long running input. The stack trace information in the profiler is going to be inaccurate (compared to what gets generated by python/nmake).

@NikolajBjorner I just noticed you're doing a 32-bit build (x86) rather than a 64-bit (x64) build. Is that intentional?

So far I've not been able to reproduce this but I was using x64 and MSVC 18 (from Visual Studio 2013).

@delcypher
Contributor Author

@NikolajBjorner Moving this discussion to #990

@delcypher
Contributor Author

@NikolajBjorner @wintersteiger
Do you have any preference on the order in which I tackle the issues mentioned in this issue?

Personally I'd like to start with moving the CMake files into their correct location (i.e. move files out of contrib/cmake but keep contrib/cmake/bootstrap.py for backwards compatibility for a short time) because this is by far the most annoying issue (to me) right now. Every time I change my current git branch in any way (e.g. git rebase, git pull, git reset) I have to remember to run the bootstrap.py script, otherwise bad things might happen. Unfortunately I frequently forget to run the script, and I'm the author of the script. If I can't remember to do this I'm not convinced other people building Z3 will remember either.

I wanted to check with you before I create a PR to do this though because it created some controversy last time I proposed this.

@NikolajBjorner
Contributor

It would deprecate a conceptual contribution from @agurfinkel to the master branch (still anticipating in suspense the Spacer pull request).

@delcypher
Contributor Author

@NikolajBjorner

It would deprecate a conceptual contribution from @agurfinkel to the master branch (still anticipating in suspense the Spacer pull request).

Spacer pull request? Could you elaborate on that?

@agurfinkel
Collaborator

@NikolajBjorner this would not impede me in any way.

@wintersteiger
Contributor

Re continuous integration: sure, go ahead if you have time to do that! I think Travis now also supports OSX so it makes sense to set something up there. We currently have two CI toolchains, one for the nightly binaries (running on local VMs on a Mac mini in my office) and one for the per-commit-batch CI runs (running on Azure VMs). The latter is basically a failed experiment, because VSTS/visualstudio.com just doesn't provide the features we need and it's a pain to work with. On top of that, maintaining all of this takes way too much time, so I'd be happy for external contributors to take over some or all of it. We don't really have an agreed-upon budget for any of this, but I'm sure we could contribute some Azure VMs if needed.

@delcypher
Contributor Author

I may have some time next week to start looking at this. I have quite a bit of experience using TravisCI so that's probably the solution I'll go with. The way I've done this in the past is to have the build logic in separate shell scripts so that their functionality can be repurposed for other situations (e.g. building locally, building in a Docker container). Then I have Travis simply call the build script with the appropriate environment variables set.

One nice thing we can get out of doing this is having ASan and UBSan builds, which will detect problems when running the simple tests and API examples. We'll likely need a suppression file for UBSan though, due to #964.
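As a rough illustration, a sanitizer build with the CMake build system could be configured along these lines (a sketch using standard Clang/CMake flags; the thread doesn't specify how the CI scripts will actually enable the sanitizers):

```shell
# Out-of-tree configure of an ASan+UBSan build. The flags are standard
# Clang/GCC sanitizer options, not Z3-specific settings.
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Debug \
      -DCMAKE_C_FLAGS="-fsanitize=address,undefined" \
      -DCMAKE_CXX_FLAGS="-fsanitize=address,undefined" \
      ../
# A UBSan suppression file can then be supplied at run time, e.g.:
# UBSAN_OPTIONS=suppressions=ubsan.supp ./z3 some_benchmark.smt2
```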

@delcypher
Contributor Author

delcypher commented Jun 22, 2017

@wintersteiger @NikolajBjorner

So I don't think I'm going to have time to implement CI this week but something we need to think about is how much of Z3's configuration space we should cover and test.

Here are the configuration parameters I can think of

  • Host architecture {'x86_64', 'i686', 'armv7', 'aarch64', ...}
  • Host operating system { 'Windows VERSION', 'macOS', 'Ubuntu VERSION?', 'CentOS VERSION?', ...}
  • Build system { 'CMake', 'Python/Makefile' }
  • CMake version { '2.8.12', '3.0', ... }
  • CMake Generator {'Unix Makefiles', 'Ninja', 'Visual Studio VERSION', ... }
  • Build type { 'Debug', 'RelWithDebInfo', 'Release', 'MinSizeRel' }
  • Compiler { 'MSVC', 'gcc', 'Clang', 'mingw32', 'mingw64', ...}
  • Use OpenMP {'true', 'false'}
  • Use LibGMP {'true', 'false'}
  • Use Link time optimization {'true', 'false'}
  • Static libz3 build {'true', 'false'}
  • Build, install, and test C API bindings {'true', 'false'}
  • Build, install, and test C++ API bindings {'true', 'false'}
  • Build, install, and test Python API bindings {'true', 'false'}
  • Build, install, and test .NET API bindings {'true', 'false'}
  • Build, install, and test Java bindings {'true', 'false'}
  • Build, install, and test OCaml bindings {'true', 'false'}
  • ASan builds {'true', 'false'}
  • UBSan builds {'true', 'false'}
  • Test suites to run { 'unit tests', ??? }
  • Build Doxygen documentation {'true', 'false'}

As you can see the configuration space is rather large. There is a trade-off between configuration coverage and the time it will take to run the configuration. I would prefer to keep the number of configurations we test small so that we get the build results for PRs as fast as possible. This means I will have to pick a subset of the above configuration space.

  • What are the most important configuration parameters that you'd like to be covered?
  • What sort of tests would you like to be run after Z3 is built?

@NikolajBjorner
Contributor

  • I would advocate deprecating the python build system completely, to avoid maintaining two systems, and relying on the more versatile cmake system. So building with both should not be necessary for this.
  • We run unit tests from z3test/regressions/smt2 (there is a script in z3test/scripts that gets invoked).

@delcypher
Contributor Author

@NikolajBjorner

We run unit tests from z3test/regressions/smt2 (there is a script in z3test/scripts that gets invoked).

Of course this is dependent on the host machine but are these tests very resource intensive? Do they take hours to run or can they complete in a few minutes?

@NikolajBjorner
Contributor

NikolajBjorner commented Jun 22, 2017

These are fairly quick. A couple of minutes at most.

@delcypher
Contributor Author

@NikolajBjorner

I would advocate deprecating the python build system completely, to avoid maintaining two systems, and relying on the more versatile cmake system. So building with both should not be necessary for this.

While I am in favour of removing the old build system I worry that doing this too abruptly will adversely affect Z3's users. My recommendation would be to announce the old build system's deprecation in Z3's release notes when the next stable release is made. That way the deprecation is properly announced (most Z3 users probably don't read Z3's issue tracker and therefore don't know about the planned deprecation) and users have ample time to migrate. This of course assumes that a new release of Z3 is fairly imminent.

So until the old build system is actually deprecated I think it would be wise to test at least one configuration of the old build system given that people are still relying on it.

@wintersteiger
Contributor

We currently run z3test/scripts/smalltest.py on Linux (Ubuntu 14.04 x86, Ubuntu 14.04 x64, Debian 8.5 x64), OSX 10.11, and z3test/scripts/win(32|64)test.py on Windows, and those VMs produce the nightly binaries as well (via z3/scripts/mk_*_dist.py). The build scripts that trigger all this are in z3test/build.

I don't think we need more configurations than the ones we have now, at least not right away. I have recently removed the FreeBSD build (and nobody complained) and I think we could also remove Debian (if their packages don't use our nightly binaries, but I don't think they do). I'd like to keep Linux x86 + x64 and Windows x86 + x64. The number of users of the x86 configurations is decreasing, but for now I think we should keep them.

The CI builds use different VMs of the same OSs, except Ubuntu 14.04 x64 runs 16.04. These use the same scripts but are triggered by visualstudio.com build definitions.

@delcypher
Contributor Author

@wintersteiger @NikolajBjorner

I have a first prototype ready for you both to take a look at.

You can find my fork at https://github.com/delcypher/z3/tree/travis_ci_proto_type

You can see some example builds at

https://travis-ci.org/delcypher/z3/builds/247161333

You can click on any of the configurations and see the build log.

Here is a rough tour of how it works

  1. The .travis.yml file in the root of the repository is a special file that TravisCI looks for that tells it how to test Z3.
  2. Under the env.matrix key in the file you'll find a list. Each item in the list sets various environment variables which control the configuration of the build.
  3. The .travis.yml contains the line contrib/ci/scripts/travis_ci_entry_point.sh. This points to the script that will run the build.
  4. The contrib/ci/scripts/travis_ci_entry_point.sh script determines if it is running on Travis's Linux infrastructure or OSX infrastructure and calls the appropriate script. I've not implemented the OSX support yet.
  5. The contrib/ci/scripts/travis_ci_linux_entry_point.sh script is essentially a wrapper around the docker build command. We use Docker for the Linux infrastructure because it gives us full control over the dependencies and Linux distribution, and makes it easy to reproduce builds locally. The script consumes the environment variables that were set in .travis.yml and passes them to the docker build command where appropriate.
  6. The build steps for docker are described in contrib/ci/Dockerfiles/z3_base_ubuntu_14.04.Dockerfile. This Dockerfile installs all the dependencies we need and then calls various scripts in contrib/ci/scripts to do parts of the build and test various parts of it.
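The OS dispatch in step 4 can be sketched roughly as follows (the helper function is hypothetical; only the Linux entry-point script path comes from the list above):

```shell
# Hypothetical helper mirroring the dispatch in step 4: pick the entry-point
# script based on the OS name reported by `uname -s`.
entry_point_for() {
  case "$1" in
    Linux)  echo "contrib/ci/scripts/travis_ci_linux_entry_point.sh" ;;
    Darwin) echo "osx-not-implemented" ;;  # OSX support not written yet
    *)      echo "unsupported" ;;
  esac
}

entry_point_for "$(uname -s)"
```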

Some additional notes

  • There's no OSX support right now. The scripts are written in a reasonably portable manner so they should be easy to re-purpose to build and test Z3 on OSX.
  • The Dockerfile isn't really ideal. I've coupled together building and installing the dependencies. These could be decoupled and the "install dependencies" stage cached to potentially give faster builds.
  • The Dockerfile isn't ideal for local testing. Docker's internal cache gets invalidated unnecessarily because I was lazy about what files get copied from Z3's repo and when. This can be improved but at the expense of readability of the Dockerfile. As this is an early prototype I decided to go for readability rather than performance.
  • We could easily add Dockerfiles for other Linux distributions (e.g. Ubuntu 16.04 LTS, CentOS). I just haven't done this for now for simplicity.
  • Building and testing a single configuration currently takes ~30 minutes. I don't know if that is considered acceptable.
  • TravisCI has no Windows support. AppVeyor does, but this is a problem I'd rather let someone else handle.
  • There's no deployment of build artefacts (e.g. uploading generated documentation and binaries somewhere). This is something TravisCI supports but for simplicity this is not implemented right now.
  • When TravisCI is running you can actually follow the build as it's running (i.e. the console output dynamically updates in your web browser), which is a super cool feature :)

The parts that need special attention from you both are

Do these scripts do everything you want them to do?

Do you have any other feedback regarding this prototype?

@NikolajBjorner
Contributor

NikolajBjorner commented Jun 26, 2017

Pretty awesome.

  • Did the test_z3_system_tests.sh run on the sample Travis tests? I didn't see the output. Otherwise, it should be just a matter of pointing the script you have at regressions/smt2. I don't think there should be more to it in the first round. There are some other tests for the APIs. They are icing on the cake.

According to win32test we do:
- build z3.
- test python bindings (you already do this)
- test java (you already do this)
- test cpp_example, c_example (done)
- test_benchmarks.py regressions/smt2
- test_benchmarks.py regressions/smt2-extra
- test_benchmarks.py regressions/smt2-debug (only in debug mode builds)
- run python regressions from regressions/python
- run C# regressions from regressions/cs

The real important directory is regressions/smt2.

  • Doxygen seems enabled, but not doing anything or maybe I am wrong. It is an opportunity to fix the doxygen documentation from builds. It would have to be uploaded somewhere.
  • 30 minutes is fine.

@wintersteiger
Contributor

The python doctests are currently broken. There is a script that is supposed to run them (src/api/python/z3test.py), but it doesn't work in many environments, it just returns "ok" without actually running anything. I haven't figured out whether they actually run or don't run during the CI and if so on which platforms. The root of the problem is the relative import of modules, which apparently fails silently in some versions of Python.

@delcypher
Contributor Author

delcypher commented Jun 26, 2017

@NikolajBjorner

Did the test_z3_system_tests.sh run on the sample Travis tests? I didn't see the output. Otherwise, it should be just a matter of pointing the script you have at regressions/smt2. I don't think there should be more to it in the first round. There are some other tests for the APIs. They are icing on the cake.

That script does run in the Travis tests I link to. The problem is that the build produces such a large log that it gets truncated.

If you scroll to the bottom of https://travis-ci.org/delcypher/z3/jobs/247161334 you'll see a note about it being truncated, telling you that you need to look at the raw log.

This is a problem I've encountered before using TravisCI. The trick I've used in the past is to hide the output of some tasks by writing it to a log file and then dumping the log file to the console iff the command fails
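The log-hiding trick can be sketched as a small wrapper like this (a hypothetical helper, not one of the scripts in the prototype):

```shell
# Hypothetical wrapper: run a command with its output hidden in a log file,
# and dump the log to the console only if the command fails.
run_quiet() {
  local log
  log="$(mktemp)"
  if "$@" > "$log" 2>&1; then
    rm -f "$log"
  else
    local status=$?        # captures the failed command's exit status
    cat "$log"             # show the hidden output only on failure
    rm -f "$log"
    return "$status"
  fi
}
```

Used as `run_quiet ninja` in a CI script, a successful build contributes nothing to the log, while a failed one dumps its full output for debugging.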

  • test_benchmarks.py regressions/smt2

The script does this

  • test_benchmarks.py regressions/smt2-extra

The script doesn't do this but could

  • test_benchmarks.py regressions/smt2-debug (only in debug mode builds)

The script doesn't do this. Last time I tried it, it failed but that might be because
the copy of Z3 I was running was a Release build.

  • run python regressions from regressions/python

The script does

${PYTHON_EXECUTABLE} scripts/test_pyscripts.py "${Z3_LIB_DIR}" regressions/python/

I think these are the same thing.

  • run C# regressions from regressions/cs

The script doesn't run this. My reasoning was that the scripts probably expect the old build system and Windows. I haven't tried it though.

@delcypher
Contributor Author

@wintersteiger

The python doctests are currently broken. There is a script that is supposed to run them (src/api/python/z3test.py), but it doesn't work in many environments, it just returns "ok" without actually running anything. I haven't figured out whether they actually run or don't run during the CI and if so on which platforms. The root of the problem is the relative import of modules, which apparently fails silently in some versions of Python.

Thanks, I didn't even know about that script, so my prototype makes no attempt to run it.

@delcypher
Contributor Author

@NikolajBjorner

Doxygen seems enabled, but not doing anything or maybe I am wrong. It is an opportunity to fix the doxygen documentation from builds. It would have to be uploaded somewhere.

Yes, Doxygen is enabled in the builds just to make sure the documentation build succeeds. The result gets thrown away after the build. It would certainly be nice to upload the result of the build somewhere. However, because the CMake build system does not know how to build the OCaml bindings, no documentation for those bindings gets generated right now. TravisCI does allow build artifacts to be uploaded so this is something we could add in the future.

@delcypher
Contributor Author

Another thing to note is that the 32-bit build configurations I have in the prototype don't work :(

The main reason is that even though I've used a multilib gcc (a version of GCC that supports building 32-bit code), I don't have 32-bit versions of python/mono/java (etc.) installed, so the 64-bit python/mono/java can't load the 32-bit libz3.so library.

One solution is to simply not test the Python/.NET/Java APIs in this configuration. Another (which is more complicated) is to tell Docker to use a base image that is 32-bit, which will give us 32-bit builds of everything. The problem with this approach is that there aren't any official 32-bit Ubuntu Docker images, so we'd have to maintain our own image for this purpose. There are plenty of unofficial images though (e.g. https://hub.docker.com/r/delcypher/ubuntu32/ ).

@delcypher
Contributor Author

delcypher commented Jun 30, 2017

I've made some more progress on this

  • I now run more of the regression tests (smt2-extra and smt2-debug when doing a debug build). See test_z3_system_tests.sh.
  • I've switched to using Ubuntu 16.04 LTS for most of the configurations. My reasoning is that this is a more important target because it's more recent. This also includes Clang with OpenMP support. Although Ubuntu 14.04 LTS has the same Clang version (3.9) available, the LLVM OpenMP runtime isn't available for Ubuntu 14.04 LTS, so I can't build Z3 with Clang and have OpenMP support. I still have two configurations that use Ubuntu 14.04 LTS to test the build with gcc 4.8.
  • I've made the log output a little quieter so that it doesn't get truncated in the TravisCI user interface.
  • I've separated out the Dockerfile into separate Dockerfiles. One that installs dependencies, the other that builds Z3.
  • I've improved the use of the ADD directive in the Dockerfiles so that when doing local testing we don't rebuild Z3 unless it is actually necessary. This has no effect on TravisCI builds.
  • I've implemented a TravisCI specific cache for the base Docker image (i.e. Ubuntu 14.04/16.04 with all the build dependencies installed). You can see a build without the cache and a build with the cache. This seems to have shaved a few minutes off the build time for each configuration and leads to smaller logs too.

What's missing?

  • 32-bit builds. EDIT: Configuration is now tested
  • OSX builds
  • Exporting built documentation
  • LTO build. EDIT: Configuration is now tested
  • Build that tests using make rather than ninja for builds. EDIT: Configuration is now tested
  • C# tests. I checked the script that runs these; it was not designed with mono in mind, so it will take some work to get it working on Linux.

I think what I have now is good enough that we can start using it alongside the existing CI infrastructure. Most of the missing features are going to be quite time consuming to add so I'd like to add them at a later date.
@NikolajBjorner @wintersteiger What do you think?

If you agree then

  1. I can prepare a cleaned-up PR, submit it, and have it accepted.
  2. TravisCI testing can be switched on for this repository (I can't do this. Only owners of the repository can do this).
  3. We can add a build status badge to the README.md. Unfortunately TravisCI doesn't provide a per-configuration badge, but if you click on it you will see all the configurations and which are passing/failing.

@NikolajBjorner
Contributor

Pretty awesome. Hope it gets operational before your attention gets taken elsewhere.
I have "added a hook for Travis CI", but probably I need to do more (billing?).

The current C# tests may as well be ignored. There are 4 trivial tests, so they're not worth spending time on porting. The examples from the z3 repository also test API capabilities in the guise of being examples. These are much more useful.

I noticed a bunch of warning messages in the gcc build. I will be looking at them at some point. Good to have diverse targets.

@delcypher
Contributor Author

I have "added a hook for Travis CI", but probably I need to do more (billing?).

There are getting started instructions here

TravisCI is free to use for open source projects. However if you want more concurrent jobs there might be an option to pay for it. I'm not sure about this though.

I'll open a PR soon with the build scripts. I'm currently testing 32-bit Ubuntu builds and I want to make sure this works before submitting.

I noticed a bunch of warning messages in the gcc build. I will be looking at them at some point. Good to have diverse targets.

Once those warnings are fixed it might be useful to treat particular warnings as errors so that they don't get reintroduced.

@delcypher
Contributor Author

PR opened: #1126

@wintersteiger
Contributor

I just remembered that the /linkresource flag for the .NET API triggered some problems in the past. Did those issues get resolved?

@delcypher
Contributor Author

@wintersteiger Yes, that flag caused issues so I simply don't pass it. There is a comment in the CMake build system explaining the situation.

# FIXME: This flag only works when the working directory of csc.exe is
# the directory containing the ``libz3`` target. I can't get this to work
# correctly with multi-configuration generators (i.e. Visual Studio) so
# just don't set the flag for now.
#list(APPEND CSC_FLAGS "/linkresource:$<TARGET_FILE_NAME:libz3>")

Basically the problem was the /linkresource flag didn't seem to work with absolute paths.

@wintersteiger
Contributor

@delcypher Yes, it may well require a relative path. For releases and nightly binaries we will need that flag set though; .NET users rely on it to keep the right version of libz3.dll. Doesn't CMake have facilities for converting absolute paths to relative ones?

@delcypher
Contributor Author

delcypher commented Aug 15, 2017

@wintersteiger

Yes, it may well require a relative path.

Perhaps. I don't remember if I tried relative but I suspect it won't work. The documentation for /linkresource says it is a file name, not a path.

This bizarre restriction for the /linkresource flag coupled with a problem in CMake at the time of writing as noted at

# FIXME: Ideally we should emit files into a configuration specific directory
# when using multi-configuration generators so that the files generated by each
# configuration don't clobber each other. Unfortunately the ``get_property()``
# command only works correctly for single configuration generators so we can't
# use it. We also can't use ``$<TARGET_FILE_DIR:libz3>`` because the ``OUTPUT``
# argument to ``add_custom_commands()`` won't accept it.
# See http://public.kitware.com/pipermail/cmake/2016-March/063101.html

means I can't make this work in the multi-configuration generator case (e.g. Visual Studio/Xcode).

In the single configuration generator case (e.g. ninja, make, nmake) I could make this work but so far it has not been necessary. If you would like me to add support for using /linkresource in that case I can.

Doesn't CMake have facilities for converting absolut to relative?

It does but that isn't the problem here.

@wintersteiger
Contributor

Yes, we'll need that if the binaries produced via CMake are going to be published. Usually the two files are always in the same directory anyway, and we won't add any new APIs that require this, so hardcoding some of the strings/paths etc. shouldn't be a major issue.

delcypher pushed a commit to delcypher/z3 that referenced this issue Oct 25, 2017
when doing single configuration builds. Under multi-configuration builds
this won't work so we just emit a warning and don't pass the flag.

This implements a request made by @wintersteiger in
Z3Prover#986 (comment) .

WARNING: This has NOT BEEN TESTED.
delcypher pushed a commit to delcypher/z3 that referenced this issue Dec 20, 2017
when doing single configuration builds. Under multi-configuration builds
this won't work so we just emit a warning and don't pass the flag.

This implements a request made by @wintersteiger in
Z3Prover#986 (comment) .

WARNING: This has NOT BEEN TESTED.
@arrowd
Contributor

arrowd commented Dec 22, 2017

I'm packaging 4.6.0 release for FreeBSD. Should I build Z3 using CMake, or keep using old python machinery?

@wintersteiger
Contributor

@arrowd: Whatever you prefer. Note that our release packages have signed .NET libraries, signed by our private key, so if you want us to publish a release package, we'll have to build it ourselves, or possibly delay-sign it. I removed the FreeBSD CI/nightly/release last year because it had very few downloads, so few that it's fair to say "nobody" wants them.

@wintersteiger
Contributor

@delcypher: The Visual Studio backend of CMake produces a solution and project files that aren't usable. They overwhelm VS with a lot of tiny sub-projects, so much so that it literally takes minutes before it can start a build. Further, all those little sub-projects have their own compilation settings, so if we need to change a compiler option, we have to either click millions of times, or hack the project files outside of VS.

The philosophy in VS is a bit different; usually there would be one sub-project for each product, i.e. libz3.dll, z3.exe, c_example, etc., instead of projects that produce static libraries, and sub-modules can be within folders or project filters (see the currently used .vcxproj files). VS does its own dependency analysis and incremental compilation. How much effort is it to teach CMake to do that?

@delcypher
Contributor Author

@wintersteiger

The Visual Studio backend of CMake produces a solution and project files that aren't usable. They overwhelm VS with a lot of tiny sub-projects, so much so that it literally takes minutes before it can start a build.

I noticed the slowness before. I was running Visual Studio in a VM and assumed it was slow because of the VM. I think it was spending most of its time doing the initial indexing. Once indexing was complete, Z3 built (and re-built) fine for me.

Are you sure that the number of projects here is the bottleneck?

git grep z3_add_component | wc -l
76

So I'd expect there to be ~76 projects, which doesn't seem that large. Does Visual Studio really not scale well to that number of projects?

Further, all those little sub-projects have their own compilation settings, so if we need to change a compiler option, we have to either click millions of times, or hack the project files outside of VS.

Err yeah, you shouldn't do that. If you need to change a compiler option, you do that at the CMake level, not the Visual Studio level. When you change the CMake build system, the Visual Studio solution and projects get regenerated.

This is the price you pay for using a meta build system like CMake. Similarly, when you use CMake's Makefile backend, you don't modify the generated Makefile when you want to change build flags; instead you modify the CMakeLists.txt files or edit the CMake cache (using the ccmake or cmake-gui tools).
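To illustrate, a flag change belongs in the CMakeLists.txt files (or the cache), so it survives regeneration of the .sln/.vcxproj files. This is only a sketch; the exact options and structure of Z3's build files differ:

```cmake
# Sketch: change compiler options at the CMake level, not in the
# generated Visual Studio project files. CMake rewrites those on
# the next configure, so hand edits there are lost.
if (MSVC)
  add_compile_options(/W3)   # applies to all targets defined below
else()
  add_compile_options(-Wall)
endif()
```

Alternatively, a one-off change can go into the cache at configure time, e.g. `cmake -DCMAKE_CXX_FLAGS="/W3" ..`, without touching any CMakeLists.txt.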

The philosophy in VS is a bit different; usually there would be one sub-project for each product, i.e. libz3.dll, z3.exe, c_example, etc, instead of projects that produce static libraries and sub-modules can be within folders or project filters (see currently used .vcxproj files). VS does it's own dependency analysis and incremental compilation features. How much effort is it to teach cmake to do that?

I believe the reason CMake is doing this is that each component is treated as a CMake "object library". Object libraries act like libraries in CMake's language, but they never get linked; only their object files get built. The libz3 target then links all of these object libraries together to build a monolithic library.
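The pattern looks roughly like this (a simplified sketch; the component and source names are illustrative, and the real build defines them via the z3_add_component macro):

```cmake
# Each component is an OBJECT library: its sources are compiled,
# but no standalone .lib/.a is produced, and the Visual Studio
# generator still emits one project per component.
add_library(util OBJECT util/mpz.cpp util/rational.cpp)
add_library(ast  OBJECT ast/ast.cpp)

# The monolithic library then links the object files of every
# component together.
add_library(libz3 SHARED
  $<TARGET_OBJECTS:util>
  $<TARGET_OBJECTS:ast>)
```

Because each `add_library(... OBJECT ...)` call becomes its own project, ~76 components turn into ~76 projects in the generated solution.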

I don't think there's an easy way to fix this, because it might require re-architecting the CMake build. The build is structured the way it is because I was trying to keep it as similar as possible to the Python build system so that the two could co-exist easily.

All that being said I really am not an expert on the interaction between Visual Studio and CMake because I don't use it very often.

If it's merely a presentation issue, there are ways to reorganise how projects are presented in Visual Studio from within CMake:

http://blog.audio-tk.com/2015/09/01/sorting-source-files-and-projects-in-folders-with-cmake-and-visual-studioxcode/

https://cmake.org/cmake/help/v3.8/command/source_group.html
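For instance, the generated projects could be grouped into solution folders along these lines (a sketch: the properties are standard CMake, but the target names and the `z3_sources` variable are made up for illustration):

```cmake
# Enable solution-folder support in the Visual Studio generator.
set_property(GLOBAL PROPERTY USE_FOLDERS ON)

# Group the per-component projects under a "components" folder in
# the Solution Explorer instead of cluttering the top level.
set_target_properties(util ast PROPERTIES FOLDER "components")

# source_group(TREE ...) (CMake >= 3.8) arranges the files inside a
# project to mirror the on-disk layout.
source_group(TREE ${CMAKE_CURRENT_SOURCE_DIR} FILES ${z3_sources})
```

This changes only how the solution is displayed; the underlying targets and their dependencies are unaffected.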

If it's more than just a presentation issue, then we need to think about how best to fix this. Given that Microsoft's Visual Studio team has recently put a lot of effort into improving the CMake experience in Visual Studio, and given your connections to Microsoft, perhaps it would be worth getting in contact with them to see if they can help?

@c-cube
Contributor

c-cube commented Apr 2, 2018

@delcypher do you need help with the OCaml API? I don't know much about CMake, but I know about OCaml…

@NikolajBjorner
Contributor

@c-cube: It would be great if you could help out on the OCaml part if you are also using it.
It is particularly difficult on Windows, and I would suggest not even blocking on Windows support, as OCaml usage seems almost entirely OSX/Ubuntu these days (except for Everest/F*).

@NikolajBjorner NikolajBjorner added the build/release build and release scripts and process label Nov 12, 2018
@XVilka
Contributor

XVilka commented Jun 9, 2020

Shouldn't this be closed now? Everything requested seems to have been implemented already.

@NikolajBjorner
Contributor

The build process for OCaml is based on the Python system; this was not ported.
We could still declare CMake first class for almost all purposes.
