I implemented a 128-bit floating-point 3D engine a year ago.
I'm going to call SC's bluff on their 64-bit upgrade. Let me explain:
It comes down to which data types can actually cover a 2^64 range:
A signed 64-bit int covers +/- 2^63,
A 64-bit floating point (double) only has a 52-bit mantissa*, i.e. 53 bits of integer precision, not 64.
* See https://en.wikipedia.org/wiki/Double-precision_floating-point_format

Thus, if you cast a 64-bit int higher than 2^53 into a double, the low bits get silently rounded away: a truncation error. Did they not think about that?
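To make the cut-off concrete, here's a tiny demo (standard C++, nothing SC-specific) showing that 2^53 + 1 survives as an int64 but not as a double:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    int64_t exact = (INT64_C(1) << 53) + 1;        // 2^53 + 1, fits easily in int64
    double  lossy = static_cast<double>(exact);    // 52-bit mantissa can't hold it
    int64_t back  = static_cast<int64_t>(lossy);

    std::printf("int64: %lld\n", (long long)exact); // 9007199254740993
    std::printf("back : %lld\n", (long long)back);  // 9007199254740992, off by one
    return 0;
}
```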
You can see the floating-point errors in gameplay: lock-on weapons that then miss, collision errors, and oddly sized ships, to name a few.
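A quick sketch of why lock-ons would miss, assuming (my assumption, not anything SC has confirmed) that positions are held in 32-bit floats: the gap between adjacent floats grows with magnitude, so far from the origin the engine literally cannot represent a position finer than about a metre.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Hypothetical world positions in metres: 1 km, 1,000 km, 10,000 km
    for (float pos : {1.0e3f, 1.0e6f, 1.0e7f}) {
        float gap = std::nextafter(pos, 2.0f * pos) - pos;  // smallest representable step
        std::printf("at %.0f m the smallest step is %g m\n", pos, gap);
    }
    return 0;  // prints ~6e-05 m, 0.0625 m, and 1 m respectively
}
```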
The solution I proposed was:
A signed __int128 that casts into long double, with a guard or warning message if any of the maths goes above/below +/- 2^64.
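A minimal sketch of the kind of guard I mean; the names (world_int, to_world_float) are mine, and __int128 is a GCC/Clang extension, not standard C++. On x86 a long double has a 64-bit mantissa, so integers up to +/- 2^64 convert exactly:

```cpp
#include <cstdio>

using world_int = __int128;  // GCC/Clang extension

long double to_world_float(world_int v) {
    const world_int bound = (world_int)1 << 64;  // +/- 2^64 guard band
    if (v > bound || v < -bound)
        std::fprintf(stderr, "warning: coordinate outside +/- 2^64, precision not guaranteed\n");
    return (long double)v;  // exact up to the 64-bit mantissa of x87 long double
}

int main() {
    world_int pos = ((world_int)1 << 63) + 12345;  // just past the int64 range
    std::printf("%.0Lf\n", to_world_float(pos));   // 9223372036854788153, no rounding
    return 0;
}
```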
Yeah, the main reason to go 64-bit is to get past the 4 GB address-space limit for the textures.
The 64-bit build itself is trivial. Any seasoned developer would compile their Windows EXEs, Linux ELFs and IPAs as 64-bit C++.
It's not like the 16-bit DOS era...