Move Demo/classes/Rat.py to Lib/fractions.py and fix it up. #46023
Comments
This is written against the 3.0 branch, but before submitting it, I need … The numbers.py part is necessary for this to work but will be a separate patch.
Some random comments; take with a large pinch of salt.

(1) In __init__ I think you want something like:

    self._numerator = int(numerator) // g

instead of:

    self._numerator = int(numerator / g)

and similarly for the denominator. At the moment I get, for example:

    >>> Rational(10**23)
    rational.Rational(99999999999999991611392,1)

(2) What's the reason for the repr of a Rational being "rational.Rational(...)", while repr …

(3) My first thought on looking at the _gcd function was: "Shouldn't there be an abs() …

(4) The __ceil__ operation could be spelt more neatly as:

    return -(-a.numerator // a.denominator)

...but perhaps that just amounts to obfuscation :)

(5) It looks as though two-argument round just truncates when the second argument is …

    >>> round(Rational(26), -1)   # expecting Rational(30, 1)
    rational.Rational(20,1)

(6) The behaviour of the equality test is questionable when floats are involved. For …

    >>> 10**23 == float(10**23)   # expect False
    False
    >>> Rational(10**23) == float(10**23)   # I'd also expect False here
    True

Ideally, if x is a Rational and y is a float, then I'd suggest that x == y should return …

(7) For purely selfish reasons, I for one will be very happy if/when this makes it into …
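For the record, the exact comparison asked for here is what fractions.Fraction eventually implements: convert the float to its exact integer ratio and compare rationals, with nan and infinities equal to nothing. A minimal sketch (eq_exact is a hypothetical helper name):

```python
import math
from fractions import Fraction

def eq_exact(r, y):
    # Hypothetical helper: compare a rational r to a float y exactly.
    # The float is converted to its exact integer ratio, so nothing is
    # rounded; nan and infinities compare unequal to every rational.
    if math.isnan(y) or math.isinf(y):
        return False
    return r == Fraction(*y.as_integer_ratio())
```

With this rule, eq_exact(Fraction(10**23), float(10**23)) comes out False, because the exact value of float(10**23) is 99999999999999991611392, not 10**23.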
Two more questions: (1) Should a Rational instance be hashable? … If Rationals are hashable and 2.5 == Rational(5, 2) == Decimal('2.5'), then the …
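For context, this is exactly the constraint (equal objects must hash equal, so they collapse to one dict key) that modern CPython later satisfied with a single numeric hash shared by int, float, Fraction, and Decimal:

```python
from decimal import Decimal
from fractions import Fraction

# In current Python these three are equal and hash equal, so a set or
# dict collapses them into a single key.
values = {2.5, Fraction(5, 2), Decimal('2.5')}
assert 2.5 == Fraction(5, 2) == Decimal('2.5')
assert hash(2.5) == hash(Fraction(5, 2)) == hash(Decimal('2.5'))
assert len(values) == 1
```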
Thanks again for the excellent comments.

__init__: good catch.
repr(Rational): The rule for repr is "eval(repr(object)) == object". …
_gcd's sign: It's a happy accident for me. Possibly Sjoerd Mullender …
__ceil__: I like that implementation better.
2-argument round: Fixed and tested.
equality: Very good point. I've stolen the sandbox code and added …
hashing: oops, yes, these should be hashable. Decimal cheats by comparing …

The new patch is against 2.6.
FWIW, I'm -1 on moving this module to the standard library. There has …
Raymond, do you *always* have to be such a grinch? The mere existence of competing implementations proves that there's a …
Not sure I'm the grinch on this one. I thought PEPs on this were … I think rational modules are ubiquitous for the same reason as Sudoku … I had thought the standards for inclusion in the standard library had … All that being said, maybe the module will serve some sort of …
PEP 239 was rejected (http://www.python.org/dev/peps/pep-0239/). If a Rational data type were included in the stdlib, my … Also, note the kind of questions Mark is asking here: the analysis and … I'd close this patch as Rejected (as for the PEP), at least until the …
The rejection of PEP 239 was years ago. PEP 3141 was accepted; it … The motivation is to have at least one example of each type in our … Note that the mere existence of PEP 239 again points out that the demand …
If this goes in, I have some recommendations: …
2008/1/9, Raymond Hettinger said:
If it's lossless, why not just allow Decimal(Rational(...)) and …
Those conversions are fine. It's the float<-->rational conversions that are problematic. There are … I think the history of rational modules is that simplistic …
On Jan 9, 2008 4:29 PM, Raymond Hettinger <[email protected]> wrote:
A "rational" type that limits denominators (presumably by rounding) … Regarding float->rational, I propose to refuse Rational(1.1) for the … float(Rational()) should be fine.
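That proposal (no implicit float-to-rational conversion, but an exact explicit one, with float() of a rational always allowed) is essentially what later shipped as Fraction.from_float. A sketch using float.as_integer_ratio(); the function name here is hypothetical:

```python
from fractions import Fraction

def rational_from_float(f):
    # Sketch of the proposed explicit conversion (name hypothetical);
    # the standard library later grew Fraction.from_float for this.
    if not isinstance(f, float):
        raise TypeError("rational_from_float() requires a float")
    # as_integer_ratio() gives the float's exact value as a fraction,
    # so converting back with float() round-trips losslessly.
    return Fraction(*f.as_integer_ratio())
```

Note the conversion is exact: rational_from_float(1.1) is not Fraction(11, 10) but the binary value that 1.1 actually stores.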
I agree that it's not usually what people ought to use, and I don't
I'm reluctant to remove the fallback to float, unless the consensus is
Good idea, but I'd like to punt that to a later revision if you don't
Decimal.from_rational(Rat(1, 3)) wouldn't be lossless, but
Since I don't have any tests for that, I don't know whether it works.
Absolutely!
Good idea. I'll add some of those to the test suite.
Right. My initial plan was to use …
It should be implemented as Decimal(1)/Decimal(3) and then let the
Yes. Just use Decimal.as_tuple() to get the integer components.
Instead, just expand the decimal constructor to accept a rational input.
+1
Here's how to pick the integer components out of a float:

    mant, exp = math.frexp(x)

You will need some plan to scale down long integers that exceed the …
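A sketch of that frexp-based approach, assuming IEEE 754 doubles (53-bit significand); in modern Python, float.as_integer_ratio() does this directly:

```python
import math

def float_to_ratio(x):
    # Split x into a mantissa in [0.5, 1) and an exponent: x == mant * 2**exp.
    mant, exp = math.frexp(x)
    # Scale the mantissa up to an exact integer: doubles carry 53 bits.
    n = int(mant * 2 ** 53)
    d = 1
    exp -= 53
    if exp >= 0:
        n <<= exp
    else:
        d <<= -exp
    # Reduce to lowest terms.
    g = math.gcd(n, d)
    return n // g, d // g
```

For example, float_to_ratio(2.5) gives (5, 2), and the result matches x.as_integer_ratio().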
One other thought: the Decimal and Rational constructors can be made to …
Allowing automatic coercion to and from Decimal sounds dangerous and … I think something like trim() (find the closest rational approximation with … I'm still worried about equality tests: is it possible to give a good reason …
All in all, Decimal is the odd man out -- it may never become a full …

--Guido

On Jan 9, 2008 7:51 PM, Mark Dickinson <[email protected]> wrote:
Would it be reasonable, then, to not have any of the numeric tower stuff …
If that's the preference of the decimal developers, sure.
If the consensus is that Decimal should not implement Real, I'll reply … Raymond, do you want me to add Decimal.__init__(Rational) in this patch … I don't understand the comment about scaling down long integers. It's … Mark, I agree that .trim() and/or .approximate() (which I think is the … Finally, Decimal("2.5") != Rational(5, 2) because Decimal("2.5") != 2.5 …
Thanks. That would be nice.
How about focusing on the rational module and when you've done, I'll
My understanding is that you're going to let numerators and denominators …
More comments, questions, and suggestions on the latest patch:

    >>> from rational import *
    >>> Rational(10**23) == complex(10**23)   # expect False here
    True
    >>> Rational(10**23) == float(10**23)
    False
    >>> float(10**23) == complex(10**23)
    True
About .trim and .approximate: it sounds like these are different, but quite closely related, methods: one takes a positive … I don't have anything to offer about names, either. I can try to find out whether the algorithms are published anywhere on the web---certainly, … Am willing to help out with implementing either of these, if that's at all useful.
_binary_float_to_ratio: Oops, fixed.

Rational() now equals 0, but … I'd like to postpone Rational('3/2') until …

+/-inf and nan are gone, and hash is fixed, at least until the next bug.

Parentheses in str() help with reading things like … Equality and the comparisons now work for complex, but their …

I've added a test that float(Rational(long('2'*400+'7'), …

The open issues I know of are: …
I think this is close enough to correct to submit and fix up the …
The latest version of rational.py looks good to me---nice work! (I haven't looked properly at …)

I do think it's important to be able to create Rational instances from strings: e.g., for … And since you pointed it out, I think Rational(Rational(a, b)) should work too. There's also the not-entirely-insignificant matter of documentation :) Other than that, I don't see why this shouldn't go in.

Other comments: I have a weak preference for no parentheses on the str() of a Rational, but it's no big deal. I agree that equality and comparisons are messy. This seems almost inevitable: one obvious … I like the name Rational for this class. Maybe change the name of numbers.Rational instead? Postponing trim, approximate, from_decimal sounds fine to me.

Finally: the very first line of rational.py is, I think, no longer accurate. Please add your …
Just had a quick look at numbers.py. Only two comments:

    >>> x = Decimal('-0')
    >>> y = Decimal('0')
    >>> x - y
    Decimal("-0")
    >>> x + (-y)
    Decimal("0")
    >>> x
    Decimal("-0")
    >>> +x
    Decimal("0")

Of course the first point wouldn't matter anyway since Decimal already implements __sub__; …
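To make the signed-zero point concrete: a generic fallback such as __sub__(a, b) = a + (-b) in the numbers machinery would get the sign of zero wrong for Decimal, because unary minus and plus on a zero apply the context's sign rules:

```python
from decimal import Decimal

x = Decimal('-0')
y = Decimal('0')

# Decimal's own subtraction keeps the sign: (-0) - (+0) is -0 ...
assert str(x - y) == '-0'
# ... but the add-the-negation fallback loses it, because -Decimal('0')
# is +0 under the default rounding mode, and (-0) + (+0) is +0.
assert str(x + (-y)) == '0'
# Unary plus can likewise flip the sign of a negative zero.
assert str(+x) == '0'
```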
r60785 speeds the benchmark up from 10.536s to 4.910s (on top of …

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
    …
Yay! And my benchmark went from 70 usec to 15 usec. Not bad!

PS. Never use relative speedup numbers based on profiled code -- the …
PS. I can shave nearly 4 usec off the constructor like this:

    - self = super(Fraction, cls).__new__(cls)
    + if cls is Fraction:
    +     self = object.__new__(cls)
    + else:
    +     self = super().__new__(cls)

This would seem to give an upper bound for the gain you can make by …

I also found that F(2,3)+F(5,7) takes about 22 usecs (or 18 using the …

Inlining the common case for addition (Fraction+Fraction) made that go …

    - __add__, __radd__ = _operator_fallbacks(_add, operator.add)
    + __xadd__, __radd__ = _operator_fallbacks(_add, operator.add)
    + def __add__(self, other):
    +     if type(other) is Fraction:
    +         na = self._numerator
    +         da = self._denominator
    +         nb = other._numerator
    +         db = other._denominator
    +         return Fraction(na*db + nb*da, da*db)
    +     return self.__xadd__(other)
Thanks for writing the __add__ optimization down. I hadn't realized how …
Two things:

(1) Speedup. I haven't been helping much here; apologies. Christian …

(2) PEP 3101: Should Fraction get a __format__ method? I'm looking at …
A C reimplementation of gcd probably makes little sense -- for large …
That's true for a direct translation of the usual algorithm. But … I've attached a Python version; note that: …

Mark
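For reference, the plain Euclidean algorithm that Lehmer's method is being compared against is only a few lines; the point above is that for huge operands its cost is dominated by long-integer division, not interpreter overhead, so a C translation of this exact loop gains little:

```python
def euclid_gcd(a, b):
    # Textbook Euclidean algorithm: repeatedly replace (a, b) with
    # (b, a % b) until b is zero; the remaining a is the gcd.
    a, b = abs(a), abs(b)
    while b:
        a, b = b, a % b
    return a
```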
I think this is definitely premature optimization. :-) In any case, before going any further, you should design a benchmark …
Replacing lehmer_gcd.py with a revised version. Even in Python, this …
Okay. I'll stop now :)
So I lied. In a spirit of enquiry, I implemented math.gcd, and wrote a …

An explanation of the table: I'm computing gcd(a, b) for randomly … Even for quite small a and b, math.gcd(a, b) is around 10 times faster … The smallest difference occurs when the two arguments are very different … For random a and b, gcd(a, b) tends to be very small. So for the last …

On to the results:

    a_digits  b_digits  g_digits  ntrials  C_Lehmer  Py_Euclid  Factor
    Lopsided gcds …
    Large gcd …
And here's the diff for math.gcd. It's not production quality---it's just …
I'm wondering what there is left to do here. Are we close to being able …

(1) I think the docs could use a little work (just expanding, and giving …

(2) Looking at it, I'm really not sure that implementing …

Is there much else that needs to be done here?
I agree that we're basically done here. I'm back to -1 on inlining the … I was always +0 to __format__, so not doing it is fine with me.

Thanks for everyone's help!
I'm reopening this to propose a last-minute minor change: can we remove the trailing 'L' from the numerator and denominator in …

    >>> x = Fraction('9.876543210')
    >>> x
    Fraction(987654321L, 100000000L)
    >>> x * (1/x)
    Fraction(1L, 1L)

I'd rather see Fraction(987654321, 100000000) and Fraction(1, 1). Does …
+1 on removing the L. Also, it would be nice if reduced fractions were …

    >>> Fraction(1*10**18, 3*10**18)
    Fraction(1L, 3L)
+1 on removing the trailing L from the repr. +0 on trying to reduce the values to ints; that would be dead-end code …
Trailing 'L's removed in r64557. A couple more minor issues:

(1) fractions.gcd is exported but not documented. I'll add some …

(2) The Fraction constructor is a little more lenient than it ought to …

    >>> Fraction('1.23', 1)
    Fraction(123, 100)

I think this should really be a TypeError: a second argument to the …
Hmm. I don't much like this, though:

    >>> Fraction('1.23', 1.0)
    Fraction(123, 100)
    >>> Fraction('1.23', 1+0j)
    Fraction(123, 100)

I'd say these should definitely be TypeErrors.
I think it's okay to accept Fraction('1.23', 1) -- if only because … I agree that if the type of the 2nd arg isn't int or long it should be …
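The behaviour settled on here (accept an integer second argument, reject float or complex) can be sketched as a wrapper around today's Fraction; strict_fraction is a hypothetical name:

```python
import numbers
from fractions import Fraction

def strict_fraction(numerator, denominator=None):
    # Hypothetical constructor illustrating the agreed rule: a second
    # argument is only allowed when it is an integer.
    if denominator is None:
        return Fraction(numerator)
    if not isinstance(denominator, numbers.Integral):
        raise TypeError("denominator must be an integer")
    return Fraction(numerator) / Fraction(denominator)
```

So strict_fraction('1.23', 1) still gives Fraction(123, 100), while a float or complex second argument raises TypeError.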
Sounds good. Some care might be needed to avoid invalidating …
I'm having second thoughts about this; it doesn't seem worth adding an …
That's fine.
Well, I can't find anything more to fuss about here. :-) Reclosing.