Any plans to make all conversions deterministic? #8
How are they not deterministic? Unless I'm mistaken, a single division or multiplication should yield the same result across any compliant implementation of the floating point standard. Determinism issues arise in intermediate results across multiple operations due to FPU extended precision, and possibly in implementations of transcendental functions.
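For what it's worth, the "correctly rounded" guarantee for a single operation can be illustrated empirically. This is a small Python sketch (the library itself is C#; names here are mine), checking that the computed quotient of one division is the representable double closest to the exact rational result:

```python
import math
from fractions import Fraction

# A single IEEE 754 operation is "correctly rounded": the computed double
# is the representable value nearest the exact mathematical result, so any
# compliant implementation must produce the same bits.
q = 1.0 / 3.0

exact = Fraction(1, 3)          # the true quotient, as an exact rational
err = abs(exact - Fraction(q))  # distance from the computed double

# No neighbouring double is closer to the true quotient than q is.
up = math.nextafter(q, math.inf)
down = math.nextafter(q, -math.inf)
assert err <= abs(exact - Fraction(up))
assert err <= abs(exact - Fraction(down))
```

Of course, this only demonstrates correct rounding on the machine it runs on; the cross-platform claim rests on the IEEE 754 specification itself.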
For example: ... Consequently, the resulting Fix64 value can be different on different platforms for the same input value.
Thanks for that information. I'll have to check that claim about single division or multiplication.
I am not entirely sure single operations yield the same result. Note that I have never really studied this topic, so I may be wrong (in that case, I apologize in advance for wasting your time). This is the information I gathered. From the C# specification:
Eric Lippert describes floating point behavior as
Then Shawn Hargreaves writes that a 32-bit float value can be converted to 64-bit or 80-bit precision when pushed into a register before doing multiplication. Wouldn't the result be different when higher internal precision is used? Shawn confuses me a little with his next sentence:
And lastly, albeit a bit unrelated, this post shows how casting from ...
Intermediate result precision doesn't matter in this case because the intermediate result is never used - it is immediately truncated back to a 64-bit number, which is the same as performing the operation in 64-bit precision. At least that's my understanding. I'd be happy to change the implementation "in any case" if there was an obvious way to do the conversion without losing precision and without doing any floating-point math, but I'm not seeing it.
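For concreteness, here is a minimal sketch (in Python, with made-up names; the real library is C#) of the conversion style being discussed: a single multiplication by 2^32, immediately truncated to a 64-bit raw value, so the intermediate product never survives in extended precision.

```python
# Q31.32 fixed point: 32 fractional bits (format assumed from the thread).
ONE = 1 << 32

def double_to_raw(x: float) -> int:
    # Multiply once, then truncate toward zero. The intermediate product
    # is immediately narrowed, so extended precision never leaks out.
    return int(x * ONE)

def raw_to_double(raw: int) -> float:
    # A single division back; 2^32 is a power of two, so this is exact
    # whenever the raw value fits in a double's 53-bit significand.
    return raw / ONE
```

For example, `double_to_raw(1.5)` yields `3 * 2**31`, and `raw_to_double(ONE)` yields `1.0`.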
I think it might be possible for the resulting truncated 64-bit number to be different in some cases, but I need to check some more to be sure. The only way I can think of to do the conversion without relying on built-in types is with ... I may need a day or two to really research this issue (if it even is an issue).
Thanks for your interest, if you find anything I'd be curious! Among others I read https://blogs.msdn.microsoft.com/davidnotario/2005/08/08/clr-and-floating-point-some-answers-to-common-questions/ , which explains the floating-point model of the CLR. It seems quite clear that precision issues arise from the use of intermediate values, and that conversions to and from the internal representation should preserve the value.
I haven't found any articles that clarify this issue simply, without too many highly technical terms and equations, so I asked a question on Stack Overflow. The answer was posted before I edited the question, so the 'yes' part is a little confusing. Please also check the comments under the answer. From what I gathered, the extended precision has to have 2*significandBits+2 bits of significand for the resulting value to be completely identical to the value calculated with non-extended precision (for the elementary operations +, -, *, /), or +3 bits instead of +2 if we also want to calculate sqrt.
So if I understand this correctly, we need at least a 2*24+2 = 50-bit significand if we want to make the 32-bit ... I think the safest way would be to convert using ... I also posted a more direct question on Stack Overflow, but I did not get a satisfactory answer to that one.
Thanks for the research! Although it does appear that this is right in general, one of the Stack Overflow links you gave also has this reply by Stephen Cannon (and CodesInChaos basically agreed in the comments):
This seems obvious, as floating point numbers use a base-2 mantissa + exponent representation, so multiplying by a power of 2 is just an addition on the exponent. I have a strong dislike for replacing obvious and fast code with tricky and slow code. If I get time I'll check this on my end, but I'd like to see a concrete counter-example to my initial assumption that this operation is deterministic.

One thing this got me thinking about, though, is that this cast operator doesn't check for overflow, and the conv.i8 it emits is unspecified in case of overflow. I only wrote happy-path tests for it, so in effect the behavior of casts with values outside the range is currently unspecified. This is probably bad. I'm wondering if the desired behavior would be to throw an OverflowException or just saturate to MaxValue/MinValue. Probably the former.
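The two overflow policies mentioned could be sketched as follows (a hypothetical Python illustration, not the library's actual code; constants model a Q31.32 Fix64 raw value, and the names are mine):

```python
FIX64_MAX = (1 << 63) - 1   # largest 64-bit signed raw value
FIX64_MIN = -(1 << 63)      # smallest 64-bit signed raw value
ONE = 1 << 32               # Q31.32 scaling factor

def to_raw_checked(x: float) -> int:
    # Option 1: throw (mirrors OverflowException) on out-of-range input.
    raw = int(x * ONE)
    if not (FIX64_MIN <= raw <= FIX64_MAX):
        raise OverflowError(f"{x} is outside the Fix64 range")
    return raw

def to_raw_saturating(x: float) -> int:
    # Option 2: clamp to MaxValue/MinValue instead of failing.
    raw = int(x * ONE)
    return max(FIX64_MIN, min(FIX64_MAX, raw))
```

The checked variant matches .NET's `checked` conversion semantics; the saturating one is what many game-oriented fixed-point libraries choose to keep simulations running.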
On a related note, if you can ship your application as an x64 .NET Core application, bundling the runtime with it, I think you can just use double and float to your heart's content and not worry about determinism. x64 emits SSE floating-point code, which doesn't have the rounding problems x87 has, and bundling the runtime should ensure two runs of the same program are compiled in exactly the same way. You might have to verify that it emits the same fp code on different platforms, but I don't see why it wouldn't.

At that point the only remaining problem is transcendental functions, which could have different implementations on different platforms; if you really need portability across different OSes, maybe the solution is to write them in portable C, cross-compile, and P/Invoke it. Slow, but it should do the trick. If I were writing a lockstep cross-platform multiplayer game that requires determinism, this is where I would probably go rather than use fixed point math, which is problematic: saturating arithmetic does all sorts of weird things to your equations (e.g. x * 2 / 2 = x doesn't hold if x * 2 saturates). I wrote this library mainly out of curiosity, but I don't think I would actually use it.
I think I can't ship it as x64 only. I would really have to check OS statistics, and I think there are still a lot of 32-bit mobile devices. I would also need to check whether Mono emits the same code.

It is basically a turn-based game with some real-time events. I only need determinism for replays between devices and for server replays that validate the score. No multiplayer. Performance is not a problem, as the game logic that needs to be deterministic is not demanding. Values are well within the range of the Q31.32 fixed type, so saturation is not a problem in my case.

From what I have read so far, writing fully deterministic floating-point code is no easy task even in C, and I haven't really used any non-managed language. The few blog posts I've read on the topic usually recommend fixed point math. That special case for multiplication does look interesting, but it still looks kind of suspicious based on some comments in that thread. My knowledge of the internal workings of the FPU and FP math is pretty basic, so I don't know how to verify that special case by myself. I will try to get some more answers on this. What about division, do you think the same special case applies?

I fully agree that simpler and more performant code is much preferred; I just need to be sure it is completely deterministic across different platforms. There is not much information about this topic, and some of it is conflicting, which is really confusing. I will try to research it some more. Thank you for the really good support on this issue!
So, floating point is a base-2 representation. A number like 2^32 is represented as (exponent: 32, mantissa: 1). If you multiply this with another float (exp, man), the result is (exp + 32, man * 1). For division, the result is (exp - 32, man * 1). This is an exact result, so there's no rounding; and if there's no rounding, the rounding mode doesn't matter. Or to put it differently, multiplying the mantissa by 1 doesn't change the mantissa, so it doesn't matter how many bits it has.

Now that I think about it, munging directly with the floating point bits would not be portable. If some CPUs (esp. ARM CPUs on mobile) use big-endian floats, I would need to somehow detect the endianness to do it correctly.
Ah, I think I understand. If a floating point number is multiplied/divided by a power-of-two number, the mantissa of the FP number doesn't change; only the exponent changes (the point moves), because both numbers are base-2 numbers. Great explanation of this math trick! That was confusing me all this time. You are also right about the little/big-endian issue for ... Thank you again! I think this issue can be closed :)
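Regarding the endianness concern, one hypothetical way to detect the platform's double byte order at runtime (sketched in Python; a C# version would compare `BitConverter` output similarly) is to compare the native encoding of a known value against explicit little- and big-endian encodings:

```python
import struct
import sys

# Detect the byte order of the platform's double representation before
# munging float bits directly, as discussed above.
native = struct.pack('=d', 1.0)          # native byte order
if native == struct.pack('<d', 1.0):
    float_order = 'little'
elif native == struct.pack('>d', 1.0):
    float_order = 'big'
else:
    float_order = 'unknown'  # mixed-endian doubles (rare, e.g. old ARM FPA)
```

On virtually all modern platforms the float byte order matches the integer byte order, but the check above is cheap insurance if bit-level tricks are ever used.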
All right, let me know if you run into any other issues. I've created another one just so I remember to fix the overflow behavior on those operators. #9 |
I'm a layman developer... do FP.ToFloat and FP.FromFloat preserve determinism?
Hi,

conversions from/to double & float are using floating point multiplication/division, which is not deterministic across platforms. Is there any chance to make them deterministic? Thanks for your work!