As long as you represent things as integer nanocoins, and do only addition and subtraction, none of the things mentioned are going to have any effect.
If you're representing "things" as integer nanocoins, then why don't you use an integer type?!
A uint64 would work just fine!!
UINT64_MAX = 18446744073709551615
BITCOIN_MAX = 21000000 * 10^8 = 2100000000000000
BITCOIN_MAX < UINT64_MAX (it even fits comfortably in a signed int64)
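To make the point concrete, here is a minimal sketch in C++ (the names COIN and MAX_MONEY are mine for illustration, not necessarily what the codebase uses) showing that the entire money supply fits with room to spare:

```cpp
#include <cstdint>

// Hypothetical names for illustration only; the actual code may differ.
constexpr int64_t COIN      = 100000000LL;        // 10^8 base units ("nanocoins") per BTC
constexpr int64_t MAX_MONEY = 21000000LL * COIN;  // 2100000000000000

// The whole supply fits in a signed 64-bit integer, so it fits in a
// uint64 with even more headroom; both checks hold at compile time.
static_assert(MAX_MONEY == 2100000000000000LL, "21e6 BTC * 10^8 base units");
static_assert(MAX_MONEY <= INT64_MAX, "the whole supply fits in int64_t");
```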
So you want to use floating point just because you like getting yourself into trouble?
Do you know what optimizations the compiler is allowed to perform on the intermediate values of floating-point calculations?
Do you know that certain compilation flags affect the precision of the results (downwards or upwards)?
Do you know that some compilers stop following the standard when you enable certain optimizations?
Such as when you use -fast with the Sun Studio compiler or -ffast-math in gcc?
Do you know that many Gentoo Linux users build with -ffast-math by default?
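To see why this matters for money even before any of those flags enter the picture, here is a small standalone sketch in plain C++ (not code from the project), showing that ordinary double arithmetic already mangles a perfectly ordinary amount:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // 0.29 BTC has no exact binary representation, so the "obvious"
    // conversion to integer base units is off by one on ordinary
    // IEEE-754 doubles, before any optimization flag is involved:
    double btc = 0.29;
    int64_t units = static_cast<int64_t>(btc * 100000000.0);
    std::cout << units << '\n';   // prints 28999999, not the 29000000 you meant

    // And the questions above apply on top of that: the precision of the
    // intermediate product can legitimately vary with the compiler, its
    // flags (-fast, -ffast-math) and the hardware, so the same source code
    // need not even produce the same wrong answer everywhere.
    return 0;
}
```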
Also, I didn't see any place in the code where "integer nanocoins" are represented as floating point values (granted, I only read a fraction of the code). What I saw was integer nanocoins being used as integers, and floating point bitcoins being used in floating point variables...
Please, just use integer (i.e. fixed-point) arithmetic for financial data and stop writing shoddy code.
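Even parsing a user-entered decimal amount doesn't need a double. A rough sketch of what "fixed point" means in practice (a hypothetical helper of my own, not code from the project):

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>

// Hypothetical helper, not the project's actual parser: turn a decimal BTC
// string like "0.29" into integer base units using integer arithmetic only.
// (Non-negative amounts only; input validation kept minimal for brevity.)
int64_t ParseAmount(const std::string& s) {
    const std::string::size_type dot = s.find('.');
    std::string whole = (dot == std::string::npos) ? s : s.substr(0, dot);
    std::string frac  = (dot == std::string::npos) ? "" : s.substr(dot + 1);
    if (frac.size() > 8)
        throw std::invalid_argument("more than 8 decimal places");
    frac.append(8 - frac.size(), '0');   // pad to exactly 8 fractional digits
    return std::stoll(whole.empty() ? "0" : whole) * 100000000LL + std::stoll(frac);
}
// ParseAmount("0.29") == 29000000 exactly, no rounding surprises anywhere.
```

It's a no-brainer, really...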