Decimals
As we saw with floating point, binary floats are fast but not always accurate: many decimal fractions (like 0.1) have no exact binary representation, so small errors creep into the results. This is exactly why many programming languages also offer a Decimal data type.
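A quick illustration of the floating-point inaccuracy described above, in Python:

```python
# Binary floating point cannot represent 0.1 or 0.2 exactly,
# so their sum is not exactly 0.3.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False
```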
How do decimals work?
The Decimal type is implemented in software rather than in hardware. For an arithmetic operation, the runtime first scales both operands so they have the same number of decimal places, then performs exact integer arithmetic on the scaled values, and finally scales the result back down.
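The scale-operate-rescale idea can be sketched as follows. This is a minimal, illustrative sketch for addition of non-negative decimal strings, not any particular runtime's implementation; the function and helper names are made up for this example.

```python
def add_fixed_point(a: str, b: str) -> str:
    """Add two non-negative decimal strings exactly using scaled integers."""
    def parse(s: str):
        # Split "1.25" into unscaled digits "125" and scale 2.
        if "." in s:
            whole, frac = s.split(".")
        else:
            whole, frac = s, ""
        return int(whole + frac), len(frac)

    ia, sa = parse(a)
    ib, sb = parse(b)
    scale = max(sa, sb)            # bring both operands to the same scale
    ia *= 10 ** (scale - sa)
    ib *= 10 ** (scale - sb)
    total = ia + ib                # exact integer arithmetic, no rounding
    digits = str(total).rjust(scale + 1, "0")
    # Scale the result back down by re-inserting the decimal point.
    return digits[:-scale] + "." + digits[-scale:] if scale else digits

print(add_fixed_point("0.1", "0.2"))   # "0.3"
print(add_fixed_point("1.25", "2.5"))  # "3.75"
```

Real implementations also handle signs, rounding modes, and precision limits, but the core scale/operate/rescale structure is the same.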
This is exactly why Decimal is accurate and is used where correctness matters, such as account balances and other monetary values. The trade-off is speed: because the arithmetic happens in software rather than in dedicated floating-point hardware, Decimal is considerably slower.
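For comparison with the float example earlier, here is the same kind of calculation done with Python's decimal module, which provides exactly this kind of Decimal type:

```python
from decimal import Decimal

# Decimal arithmetic is exact for decimal fractions: subtracting
# 0.10 and 0.20 from 10.00 gives precisely 9.70, with no drift.
balance = Decimal("10.00") - Decimal("0.10") - Decimal("0.20")
print(balance)                      # 9.70
print(balance == Decimal("9.70"))   # True
```

Note that the operands are constructed from strings; Decimal(0.1) would inherit the binary float's inaccuracy.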