release? #146
Comments
I would still like to add mpc support to the C-API. I should be able to work on it this weekend, and I hope to make an alpha release by the end of June. |
I'm working on the mpc API and I think I spotted a bug in GMPy_MPFR_New(). Can you add a Cython test that uses a precision of 0? A precision of 0 should cause gmpy2 to use the precision of the currently active context. I suspect it may crash in the current gmpy2 code. Note that contexts have subtly, but significantly, changed in version 2.1. In gmpy2 2.0.x, there was only a single global context that was shared by all threads. Contexts are now thread-specific. |
Indeed, it does crash. |
I've added mpc to the C API. GMPy_MPFR_New() and GMPy_MPC_New() shouldn't crash if you specify a precision of 0. Can you also add a test where mpz raises an exception? I think MPZ_Check() will crash if NULL is returned by PyObject_CallMethod(). |
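For illustration, here is a minimal C sketch of the defensive pattern under discussion, assuming a header named "gmpy2.h" that provides the MPZ_Check() macro; the wrapper function and the "__mpz__" method name are hypothetical, not gmpy2's actual code:

```c
#include <Python.h>
#include "gmpy2.h"  /* assumed header exposing the MPZ_Check() macro */

/* Hedged sketch: any pointer returned by PyObject_CallMethod() must be
 * NULL-checked before it is handed to a type-check macro such as
 * MPZ_Check(); skipping the check would dereference NULL and crash. */
static PyObject *
convert_via_method(PyObject *obj)
{
    PyObject *result = PyObject_CallMethod(obj, "__mpz__", NULL);

    if (result == NULL) {
        /* The method raised an exception: propagate it to the caller
         * instead of crashing inside MPZ_Check(). */
        return NULL;
    }
    if (!MPZ_Check(result)) {
        Py_DECREF(result);
        PyErr_SetString(PyExc_TypeError,
                        "conversion method did not return an mpz");
        return NULL;
    }
    return result;
}
```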
I added mpc to gmpy.pxd, Cython tests for mpc, and tests with 0 precision for mpfr and mpc, and all is working fine. |
@casevh, can we have official manylinux1 and osx wheels? I've tried building manylinux1 wheels and they work fine. https://github.com/isuruf/gmpy2-wheels/releases. OSX wheels don't work yet. |
Do you have a schedule? |
I believe all the critical issues for a first alpha have been resolved. I need to update the documentation and then I think I can make a source-only release soon. I will try for this weekend. I will add a TODO file with the milestones for the following releases. At this point, I assume the next alpha release will follow soon. |
I think it would be great to include manylinux1 and osx wheels. May I use the wheels that you build? (Once the version number changes, etc.) I have a couple of questions. Are the manylinux1 wheels statically linked with gmp/mpfr/mpc? Is gmp built with the --enable-fat option? |
Do you want me to write some paragraph about Cython usage? |
Sure, that'll be great.
No, these are linked dynamically since I wanted to keep the wheel LGPL. The dynamic libraries are copied into
Yes, OSX wheels are not working because |
I always get confused with the licensing. I thought that linking multiple LGPL libraries together would result in another LGPL library (as long as the source code for everything was made available). What triggers the GPL requirement? |
Yes, me too. I guess linking one LGPL library statically with another LGPL library is not a problem. (I confused it with statically linking an LGPL library with a BSD-licensed library.) I'll update the code to link statically. |
Note that it would be nice to have a link to the documentation https://gmpy2.readthedocs.io/en/latest/ on the PyPI page https://pypi.python.org/pypi/gmpy2 |
Did you mean June 2017? |
My apologies. I did mean June 2017, but I've been dealing with several real-life emergencies. I am starting to experiment with twine and testpypi to get used to the new upload process.

@isuruf I made one minor code change to fix a warning with Python 2.7. There should be no practical difference, but let me know if you rebuild the wheels.

One last comment on the wheels, and Cython, and inter-operability... I saw a discussion on SageMath regarding the use of statically linked versus dynamically linked extensions. My only concern in static vs. dynamic is ensuring that Cython code (probably equivalent to saying all of Sage) is using the same GMP, MPFR, and MPC libraries as gmpy2. At the moment, the C-API is disabled with a static build of gmpy2. I don't mind enabling the C-API for a static build. Please let me know if you want me to make the change.

(MPFR/MPC could be tricky because the precision is a global variable. The current version of gmpy2 expects the precision to be set to the maximum for the calculation and then rounds the result down to the desired precision. During the rounding process, the existing precision is saved and then restored. If this is different from Sage's usage of the MPFR library, I'll need to change gmpy2 to always set the precision, at least on a dynamically linked build. I probably should do that anyway. Note that these concerns aren't applicable to GMP. I've already disabled changing the memory manager functions in GMP.)

I really do hope to get a release out this weekend. |
Makes sense to me.
In Sage, there is no global variable for precision. Each mpfr carries its own precision, and you can only perform operations between mpfr elements with the same precision. If two elements with different precisions are added, then the one with the largest precision is truncated. I don't quite understand the usage of the global precision variable in gmpy2. |
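As an aside, the per-variable precision model described above can be shown with plain MPFR calls; this short C example is illustrative only and involves neither Sage nor gmpy2 code:

```c
#include <stdio.h>   /* must come before mpfr.h to enable mpfr_printf() */
#include <mpfr.h>

int main(void)
{
    mpfr_t a, b, sum;

    /* Each variable carries its own precision, fixed at init time. */
    mpfr_init2(a, 100);   /* 100-bit variable */
    mpfr_init2(b, 53);    /* 53-bit variable  */
    mpfr_init2(sum, 53);  /* results are rounded to the target's precision */

    mpfr_set_ui(a, 1, MPFR_RNDN);
    mpfr_div_ui(a, a, 3, MPFR_RNDN);  /* a = 1/3 computed at 100 bits */
    mpfr_set_d(b, 0.5, MPFR_RNDN);

    /* The exact sum is rounded to sum's own 53-bit precision. */
    mpfr_add(sum, a, b, MPFR_RNDN);
    mpfr_printf("%.20Rf\n", sum);

    mpfr_clears(a, b, sum, (mpfr_ptr) 0);
    return 0;
}
```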
My comment was slightly erroneous. The MPFR library (not gmpy2) has global settings:

- The default precision is used by mpfr_init() when creating a new mpfr_t.
- On startup, the minimum and maximum exponent values are initialized to default values. Exceeding these limits during a calculation will result in a +Inf or -Inf result.

gmpy2 supports changing the exponent range to emulate other floating-point formats, for example float32, float64, float128, etc. In gmpy2 2.0, I only changed the exponent range whenever the active context was changed or updated. Any other calls (say from Cython) into the MPFR library would inherit the gmpy2 exponent range. In gmpy2 2.1, I set the exponent range to the limits (which could be different from the defaults). I then save, change, and restore the exponent values when checking whether the result fits in the desired exponent range (see mpfr_check_range() and mpfr_subnormalize()). If Cython/Sage changes the exponent range but doesn't restore it, then gmpy2 will use the unexpected values. I should probably update gmpy2 to always reset the exponent range before performing any calculation, at least on dynamically linked builds. And MPFR 4.0 will change the exponent limits.... |
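To make the save/change/restore dance concrete, here is a minimal sketch using only public MPFR calls; the function name and the ctx_emin/ctx_emax parameters are hypothetical stand-ins for whatever the active gmpy2 context requests, not gmpy2's actual internals:

```c
#include <mpfr.h>

/* Clamp an already-computed result into a context's exponent range,
 * then restore the library-wide limits for other users of MPFR. */
static void
round_to_context(mpfr_t result, int ternary, mpfr_rnd_t rnd,
                 mpfr_exp_t ctx_emin, mpfr_exp_t ctx_emax)
{
    /* Save the current library-wide exponent limits... */
    mpfr_exp_t saved_emin = mpfr_get_emin();
    mpfr_exp_t saved_emax = mpfr_get_emax();

    /* ...narrow them to the context's requested range... */
    mpfr_set_emin(ctx_emin);
    mpfr_set_emax(ctx_emax);

    /* ...force the result into that range (overflow becomes +/-Inf,
     * underflow becomes zero or a subnormal)... */
    ternary = mpfr_check_range(result, ternary, rnd);
    mpfr_subnormalize(result, ternary, rnd);

    /* ...and restore the previous limits so other callers (e.g. Sage)
     * see the values they expect. */
    mpfr_set_emin(saved_emin);
    mpfr_set_emax(saved_emax);
}
```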
Actually, Sage does modify the exponents on startup (lines 341-343 of real_mpfr.pyx). |
I just managed to get successful Appveyor builds created. I should be able to get a release out within a few days. |
How is it going? Do you need help with something? |
I have finally been able to upload the Windows wheels to testpypi. I had to make several changes to setup.py. The MacOSX and manylinux1 wheels will need to be rebuilt. (@isuruf Can you trigger a rebuild of those wheels?) I have a couple of other issues to resolve (defining long_description and uploading the source for gmp/mpir/mpfr/mpc). Once those are done, I can make a real release.

I would like to make a follow-up release fairly soon. There are a couple of incomplete features that I want to remove from the 2.1 series and revisit for the 2.2 series.

@videlec I apologize for the delays. I have very little time to devote to gmpy2 at the moment. To avoid these delays in the future, would you like maintainer access for gmpy2 at pypi? I could also contact Alex Martelli for commit privileges for the main repository. You can contact me directly via the email address in setup.py. |
Here are the manylinux1 and osx wheels, https://github.com/isuruf/gmpy2-wheels/releases/tag/2f883a993e2d4. Btw, we can automate the windows builds as well with appveyor. |
@isuruf Thanks for the manylinux1 and osx wheels. I am using Appveyor to make wheels for Windows. I'll make another release attempt tomorrow. |
It looks like you are using dlls. They need to be shipped with the wheel. Also, the GMP build is with MSVC and therefore not a fat binary; it's also generic C. What should be a reasonable default for x86 and x86_64? |
Thanks for catching this. I think the Windows build should default to static. I can make that change over the weekend. For the x64 CPU type, I think Core2 is probably the safest. I've looked at using MSYS2 to compile for Windows. It works but isn't easy to set up, and there are possible issues with distributing the related binaries. It supports a fat build, so it performs well. See https://github.com/emphasis87/libmpfr-msys2-mingw64 for a discussion of the licensing issues. |
@casevh Hi!
If I may contribute a few things - I appreciate this is closed, but the information may be useful regardless.
TL;DR: for binary wheels to work, a few things must be done:
I'd be happy to contribute to some/all of these, as long as you are clear on what the end result should be and why these changes are necessary.

Another option is to forgo wheels entirely and build with conda. This is increasingly used by data scientists, as it gives you an entire new toolchain (compiler/libraries etc.) on all platforms and simplifies the whole building/bundling thing. It does not work for PyPI-distributed binary wheels though; you have to use conda. There are already conda packages for gmpy dependencies that simplify things further. Both conda and PyPI wheels could be done, but that obviously duplicates effort. It really depends on whether you want binary stuff available on PyPI to be installed by |
Why? There is no fundamental difference between an extension module and a plain Python module. |
Hi!
This is an admirable goal.
It is true that compiling arbitrary C code requires the same compiler and C runtime. But it is not a requirement enforced by CPython. If you avoid the (unfortunately) undocumented differences in the C runtime versions, you can use a different compiler version. I've been very careful in gmpy2 to use the appropriate memory management calls (regardless of the platform, any memory allocated in GMP/MPFR/MPC must be de-allocated by GMP/MPFR/MPC). But I do have access to the older Microsoft compilers and do make my Windows builds with the matching compiler. I do agree that compiling on Windows is a PITA.
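As an illustration of that memory rule, here is a small self-contained C example using GMP's public allocator hooks; this is a generic GMP pattern, not gmpy2 code:

```c
#include <stdio.h>
#include <string.h>
#include <gmp.h>

int main(void)
{
    mpz_t n;
    mpz_init_set_ui(n, 12345);

    /* mpz_get_str(NULL, ...) allocates the buffer with GMP's own
     * allocator, so it must not be released with plain free(). */
    char *s = mpz_get_str(NULL, 10, n);
    printf("%s\n", s);

    /* Look up GMP's free function and release the buffer through it. */
    void (*gmp_free)(void *, size_t);
    mp_get_memory_functions(NULL, NULL, &gmp_free);
    gmp_free(s, strlen(s) + 1);

    mpz_clear(n);
    return 0;
}
```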
From what I've been able to find in my research, this requirement (a Python stub loading an extension) was added to support importing an extension from a zip file. But it breaks backwards compatibility - the shared object file is now in a different location, which makes upgrading from a pure distutils installation to a setuptools installation fail. There may be other reasons, but that is the only one I found. gmpy/gmpy2 have always been a single shared library.
I think this is a flaw with the packaging tools.
There is a bigger challenge with gmpy2 v2.1. There is now a C-API that allows other C or Cython extensions to interact with gmpy2 at a very low level. To avoid memory management issues, gmpy2 and the extensions must use the same shared library. I can't see how that is compatible with binary wheels. The C-API needs to be disabled in a binary wheel (done automatically by making a statically linked version of gmpy2). So we have a scenario where binary wheels and from-source compilations offer different capabilities. Combined with pip's default behavior of using a binary wheel, most installations will not support the C API. I do not know how to resolve this dilemma.
I have the compilers required to build binary wheels for Windows.
I am not proud of the hacks.
See below.
Like I mentioned earlier, I can't think of a solution that will work for all use cases. I've thought of several options:
The Python ecosystem has migrated to a pip-based world. It is better for most projects. And I accept that gmpy2 will need to adapt to a pip/setuptools world. And I also need to realize that I have little free time available. Here's my proposal:
The only preferences that I have on the scope of the changes is that:
I am open to other suggestions. I don't have a good answer. I'd like to continue adding new features to gmpy2 but managing releases has become too difficult for me. I've created a new issue #176 to track these discussions. |
FYI, binary wheels can be made with dynamic linking if, as @pkittenis says, |
I understand that a dynamically linked wheel is possible, but how do we ensure that gmpy2 and some arbitrary other extension that uses the C API are both using the same shared library? With the same memory manager functions (since they can be changed in GMP)? What happens if gmpy2 and the other extension provide different dynamic libraries? It may be just one of those "it usually works" situations, but I don't know how to guarantee it. If we consistently link to the CentOS 5 versions, it may work. |
The best way to avoid conflicts would be for those using the C API to avoid linking to the gmp, mpfr and mpc libraries and to load |
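If the suggestion here is runtime loading, one possible mechanism on POSIX systems is dlopen() with RTLD_GLOBAL, so that every extension loaded afterwards resolves the same GMP symbols; this is only a hedged sketch of that idea, and the soname below is an assumption for illustration:

```c
#include <stdio.h>
#include <dlfcn.h>  /* link with -ldl on glibc */

int main(void)
{
    /* Load one copy of libgmp and make its symbols visible to
     * shared objects loaded afterwards (RTLD_GLOBAL). */
    void *handle = dlopen("libgmp.so.10", RTLD_NOW | RTLD_GLOBAL);

    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* ... start the interpreter and import gmpy2 plus the other C-API
     * user here; both would now share one set of GMP symbols ... */

    dlclose(handle);
    return 0;
}
```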
@isuruf Thanks for your comments. I've given it more thought and dynamically linked wheels would probably work in some|many|most situations. But I just can't convince myself that we won't encounter issues in some scenarios. I'll be okay with any solution that works easily for most people. |
For me personally, the conclusion is that wheels are broken and shouldn't be used. Building from source works great, conda works great, distro packaging works great. But I think that there is simply no clean solution to do this with wheels. @pkittenis and @isuruf Do you have any pointers to documentation or examples (preferably by the PyPA or other reputable sources) on how to deal with this? |
On how to deal with what? |
Wheels containing Python modules which dynamically link against libraries which are not contained in the wheel. |
I don't have any pointers to documentation. Do you have an example scenario about using wheels that I can help clarify? |
Maybe have a look at https://github.com/pypa/manylinux/blob/master/pep-513.rst |
That PEP is Linux only. What about other OSes? |
As I said: a wheel which contains a Python module that depends on a dynamic library. For this project, for example, a wheel which depends on |
And ideally, it should work in all cases where a build-from-source would work too. |
Have a look at "auditwheel" linked in the PEP above. ("delocate" for OSX wheels). For windows, tools are WIP. |
OK, and is there a "minimal working example", i.e. an example project showing how to use these tools properly? |
I've got one setup for gmpy2 here, https://github.com/isuruf/gmpy2-wheels which is based on https://github.com/matthew-brett/multibuild |
If I understand it correctly (it is very possible that I don't), this copies the dynamic library into the wheel and then forces the use of that specific library. So it doesn't deal properly with libraries already installed on the system. |
Yes, reasons are outlined at https://www.python.org/dev/peps/pep-0513/#bundled-wheels-on-linux |
So it's broken by design and not a solution... |
> dynamically link against libraries which are not contained in the wheel.

Here is a PyPA manylinux1 demo repository that shows how this works for Linux - it links against

Any system libraries that may or may not be installed are not relevant - they are not used. For Windows, static linking is the norm because of Windows semantics around dynamic loading.

Other libraries using gmpy2 can use the C-API with binary wheels as long as those wheels also use the same versions of the embedded shared libraries. I have used this in other projects with no problems. A simple documentation statement saying as much should suffice. Worrying about C-API-using developers doing the wrong thing for their project is not really a job for gmpy2, IMO - as long as the correct approach is clearly documented.

Can we please focus on what the approach should be moving forward rather than criticizing the wheel design - it's not changing either way. Its purpose is not to replace system package managers; it is to allow distributing binary packages on PyPI. If you want a cross-platform package manager for all your packages, use conda. If you want system packages, make system packages.

In short, once again: binary wheels with external dynamically linked dependencies bundle those dependencies in the wheel. System libraries of the same dependency are not used. Other libraries linking against that dependency at the C level should bundle the same version of the external dependency in their own binary wheels. From-source builds have to build gmpy2 and the other library against the same external dependencies, like any other dynamically linked library. This is required for those libraries to build binary wheels regardless of what gmpy2 does. It is not possible to, for example, link against gmpy2's embedded libraries when building another gmpy2-using library.
To the developer, perhaps not. To the packaging tools, there is. This is more of a limitation of the packaging tools but, nonetheless, that is the end result. You do not have to like it :)

As long as standard setuptools functionality is used, this all works fine for from-source builds (which can be forced by adding

This is nothing specific to setuptools or binary wheels; the same goes for any other shared library. You do not expect a CentOS 7 RPM to work on CentOS 6. Binary wheels save you the effort of building and maintaining a separate set of X many different system packages for all the platforms you want to support, and specifically for Windows and OSX there is no better built-in tool, as those platforms lack native package managers.
Yes, I was making a blanket statement for simplicity's sake. It's not just memory allocation, BTW; there are differences in how long ints are implemented between GCC and Visual Studio, for example, that lead to weird errors. It is possible, but it may or may not lead to hard-to-track errors down the line, which makes it not worth the effort. |
I am a bit lost here... these two statements seem to contradict each other. Is it possible or not to link against system libraries with a wheel? |
As was mentioned earlier - it is a normal build process, building and linking against whatever library is available on the system. That particular library then gets embedded in the wheel. When a user installs the wheel, only the library that was embedded at build time is used, not any system libraries that user may have installed. See the demo repository. |
I am sure you do, but unless you want to be solely responsible for building and publishing wheels manually on each and every release, that whole process needs to be automated as part of an Appveyor config, the only free CI supporting Windows. That's where the mostly boilerplate Appveyor config and wrapper scripts come in. This includes building all dependencies as well, so automated steps to do that would be very useful (if instructions include 'Open Visual Studio', that's not automated). |
Can I find the actual resulting wheels somewhere? Are they the ones which are posted on https://pypi.python.org/pypi/gmpy2/2.1.0a1 |
I used https://github.com/isuruf/gmpy2-wheels for https://pypi.python.org/pypi/gmpy2/2.1.0a1. The Windows wheels are ones I build locally. |
Those wheels were built with libgmp etc. statically linked. I can also make wheels that dynamically link libgmp, if you want. |
Is there any plan for an alpha, beta, and stable release?
Vincent K. already prepared all the integration in pplpy and SageMath (see ticket 22927 and ticket 22928). We are just waiting for an official release to move on.