Are C doubles different to .NET doubles?


Comparing the C code with the F# code I'm trying to replace it with, I observed that there are differences in the final result.

Working through the code, I discovered where the differences appear - albeit tiny ones.

The code starts by reading in a data file, and the very first number already comes out differently. For instance, in F# (which is easier to script):

let a = 71.9497985840
printfn "%.20f" a

I expected the (to me) obvious output of 71.94979858400000000000.

But in C:

double a = 71.9497985840;
fprintf(stderr, "%.20f\n", a);

this prints out 71.94979858400000700000.

Where does that 7 come from?

The difference is tiny, but it bothers me because I don't know why. (It also bothers me because it makes it more difficult to track down where the two versions of the code are diverging.)

It's a difference in printing. Converting the value to an IEEE 754 double yields

Prelude Text.FShow.RealFloat> FD 71.9497985840
71.94979858400000694018672220408916473388671875

but the representation 71.949798584 is sufficient to distinguish the number from its neighbours. C, when asked to print with a precision of 20 digits after the decimal point, converts the value correctly rounded to the desired number of digits; apparently F# uses the shortest uniquely determining representation and pads it with the desired number of 0s, as Haskell does.
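To convince yourself that only the formatting differs, here is a minimal C sketch (the exact digits printed depend on your compiler and C library) that prints the stored value at several precisions and dumps its raw bit pattern for comparison with the .NET side:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    double a = 71.9497985840;

    /* The digits beyond roughly the 17th significant digit come from the
       binary representation of the double, not from the source literal. */
    printf("%.20f\n", a);
    printf("%.50f\n", a);

    /* %a prints the exact stored value in hexadecimal floating-point
       notation, which round-trips without any decimal rounding. */
    printf("%a\n", a);

    /* Dump the raw 64-bit pattern; comparing this against the F# side
       (e.g. System.BitConverter.DoubleToInt64Bits) shows whether the two
       programs hold different values or merely print them differently. */
    uint64_t bits;
    memcpy(&bits, &a, sizeof bits);
    printf("0x%016llx\n", (unsigned long long)bits);

    return 0;
}

If the bit patterns from the two programs agree, the doubles are identical and any remaining discrepancy is purely a matter of how each runtime formats them.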

