Black-Scholes is a reasonably good way to estimate the value of an options contract, and that's fine for floating point. Just about anything that's macro (relative to the larger system) is fine with floating point.
But for actually simulating trading, where calculations compound on one another instead of one algorithm calculating something once and being done, floating point becomes an issue.
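To make the first point concrete, here's a minimal sketch of the closed-form Black-Scholes call price in plain floats. All parameter values are made up for illustration; the point is just that a one-shot valuation like this is perfectly comfortable in floating point:

    # Closed-form Black-Scholes call price in plain floats.
    from math import log, sqrt, exp, erf

    def norm_cdf(x: float) -> float:
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(spot: float, strike: float, t: float, r: float, vol: float) -> float:
        d1 = (log(spot / strike) + (r + 0.5 * vol * vol) * t) / (vol * sqrt(t))
        d2 = d1 - vol * sqrt(t)
        return spot * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)

    print(bs_call(spot=100.0, strike=105.0, t=0.5, r=0.02, vol=0.25))

An error of one part in ~10^16 on an estimate like this is pure noise.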
----
Because of the downvotes, let's take a step back to Floating Point 101:
    >>> 1.20 - 1.00
    0.19999999999999996
In this example you're a penny off, and that's a single operation. So you have to check for rounding error, and possibly round, after EVERY calculation you do, which is time consuming.
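Here's a quick sketch of that penny-off failure mode, assuming you naively convert to whole cents by truncation:

    diff = 1.20 - 1.00       # actually 0.19999999999999996
    print(int(diff * 100))   # 19: truncating to cents loses a penny
    print(round(diff * 100)) # 20: rounding recovers the intended value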
Alternatively, there are fixed-precision types, and they are fast: very fast, faster than constantly checking for errors.
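Two common flavors of that in Python, as a sketch (which one is actually faster than error-checking floats depends on your language and stack):

    from decimal import Decimal

    # Decimal arithmetic: no binary representation error for cents.
    print(Decimal("1.20") - Decimal("1.00"))  # 0.20, exactly

    # Integer cents: plain machine integers, the classic fast fixed point.
    print(120 - 100)                          # 20 cents, exactly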
If you have an equation that calculates once and is done, a penny or two off is no big deal. But when you're backtesting, a penny or two off on every trade compounds, and you end up dollars a day off. If the algorithm is identifying moves to the fraction of a penny in the middle of the trading day and adjusting its configuration accordingly, it will be wrong, which creates a butterfly effect that ripples out through the rest of the day. With high frequency trading it's somewhat uncommon, but possible, to end up 10% off for the day from floating point precision, because the algorithm fails to decide the correct path forward while trading, significantly compounding the issue. Now compound that out over months and you see the problem.
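A deliberately simplified sketch of that compounding (no real trading logic, just a million tiny accumulations):

    from decimal import Decimal

    n = 1_000_000
    float_total = 0.0
    for _ in range(n):
        float_total += 0.01        # each addition carries a tiny binary error

    exact_total = Decimal("0.01") * n
    print(float_total)             # not exactly 10000.0; the error has drifted
    print(exact_total)             # 10000.00, exact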
In your example, you're only a penny off if you truncate 0.19999999999999996 rather than rounding (which is described in IEEE 754!). Here's a really simple example. Let's say your model depends on the average of the last three ticks, and the last three ticks are $1.00, $1.00, and $2.00. OK, what's the (exact!) average, without being off by a fraction of a penny? This is the point: as soon as you start manipulating numbers in anything other than the most trivial way, you run into the dreaded floating point error, because that's how the real numbers work.
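For instance, in Python (a sketch; the precision and rounding mode are arbitrary choices here):

    from decimal import Decimal, ROUND_HALF_EVEN

    # Binary floats round the average of $1.00, $1.00, $2.00.
    print((1.00 + 1.00 + 2.00) / 3)  # 1.3333333333333333

    # Decimal has to round too; you just get to pick precision and mode,
    # because 4/3 has no finite representation in base 10 either.
    avg = (Decimal("4.00") / Decimal("3")).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_EVEN)
    print(avg)                       # 1.33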
I am unaware of fixed-precision types that are hardware optimized (other than FPGAs, which are used for feed handling in HFT anyway). If you are modeling discrete things like Minimum Price Variations, then yes, use fixed precision, or even encode it in a way that saves space. But if you're numerically solving a partial differential equation, e.g., Black-Scholes, it's difficult to see how fixed-precision numbers would have an advantage.
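As a rough illustration of the hardware gap (timings are machine-dependent, and CPython's decimal is a software implementation, so this is only suggestive):

    import timeit

    # Transcendental math of the sort an option-pricing solve leans on.
    t_float = timeit.timeit("math.log(1.2345) * math.sqrt(0.5)",
                            setup="import math", number=100_000)
    t_dec = timeit.timeit("d.ln() * Decimal(0.5).sqrt()",
                          setup="from decimal import Decimal; d = Decimal('1.2345')",
                          number=100_000)
    print(f"float: {t_float:.3f}s  Decimal: {t_dec:.3f}s")

(Notably, Python's Decimal doesn't even ship an erf, which the closed-form pricing sketch above needs; you'd have to roll your own series.)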
I think the point they're going after is that algorithmic trading behavior can be meaningfully sensitive to rounding errors (which seems plausible if you profit by amplifying tiny signals). So in the context of a simulation you might still have components like Black-Scholes, but for the trades themselves (even simulated) you need to take more care or risk excessive error.
In other words, they're describing a scenario where a 1-in-10^14 error is potentially not tolerable because of some amplified discrete behavior.
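The classic one-liner version of that failure, where a representation error on the order of 10^-17 flips a branch:

    signal = 0.1 + 0.2        # 0.30000000000000004 in binary floating point
    threshold = 0.3

    if signal > threshold:    # True purely because of representation error
        print("trade fires")  # a tiny numeric error became a discrete action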
Agreed: real-world discrete things should be modeled as such. If the MPV were $0.23, then model $0.23 increments; whether you use fixed point or the cardinality of increments, who cares. But all the other math leading up to a discrete decision on the increment is almost certainly best described with, and faster to implement in, floats.
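A sketch of that last option, counting increments as plain integers (the $0.23 MPV is hypothetical, per the example above):

    MPV_CENTS = 23                   # one increment = 23 cents (hypothetical)

    def to_increments(price_cents: int) -> int:
        # Nearest whole number of MPV increments, in pure integer math.
        return (price_cents + MPV_CENTS // 2) // MPV_CENTS

    ticks = to_increments(160)       # $1.60 snaps to 7 increments
    print(ticks, ticks * MPV_CENTS)  # 7 161: exact integer math throughout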