The other day I was reading a post where someone had to write a function to emulate the round builtin. They (and the person assisting them) both ignored the fact that the function was supposed to take an optional precision argument. I decided to see how well I could do performance-wise vs the builtin.

Well... as expected, pretty dismally. But after I wrote mine I started testing it to confirm it corresponded to the builtin.

For the most part it performs identically (result-wise), but occasionally it differs, so I investigated these exceptions. And it seems that in the cases where the results differed, it was mine that was correct. The built-in round doesn't always handle decimals that end in a 5 consistently. And neither does my version, for that matter.

The reason for this is obviously floating point precision; but what is a good solution? It seems wrong that a function can't consistently round something as simple as a decimal ending in 5.
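To illustrate the kind of inconsistency I mean, here is a small sketch of how round can behave with values ending in 5. The first case is a representation issue (2.675 is not exactly representable in binary), the second is Python 3's round-half-to-even rule:

```python
# 2.675 cannot be represented exactly in binary floating point; it is
# actually stored as roughly 2.67499999999999982, so rounding to two
# places goes "down" instead of the "up" you might expect.
print(round(2.675, 2))   # 2.67, not 2.68

# 0.125 IS exactly representable, but Python 3 rounds halves to the
# nearest even digit ("banker's rounding"), so this also goes down.
print(round(0.125, 2))   # 0.12
```

So even when the 5 is stored exactly, the result can still surprise anyone expecting grade-school round-half-up.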

I have occasionally run into these floating point precision issues, and I never really know the best solution. I suppose one can always use a tolerance inequality to force the machine to admit that something like 0.9999999999999999 is really 1, but that seems like such a pain.
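For what it's worth, the standard library already provides that tolerance inequality as math.isclose, so you at least don't have to write it yourself. A minimal sketch:

```python
import math

x = 0.1 + 0.2
print(x == 0.3)              # False -- x is actually 0.30000000000000004
print(math.isclose(x, 0.3))  # True, within the default rel_tol of 1e-09
```

You can tighten or loosen the comparison with the rel_tol and abs_tol keyword arguments if the defaults don't suit your data.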

Using Decimal is the most common way to get around that in Python, though it involves a significant performance hit, so you wouldn't want to use it for, e.g., 3D graphics. Another way is to avoid floating point entirely. For example, if you're working with highly precise timings (say, performance testing an embedded system) and it is important that you record the exact value of 1.234567901 seconds, you can instead store it as 1234567901 nanoseconds and then use integer arithmetic (including division) throughout your application. If you are stuck with native floating point types, then treating values whose absolute difference is smaller than the machine epsilon as equal is essentially the only way around this, as it is a hardware limitation.
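As a quick sketch of the Decimal approach: constructing from strings keeps the values exact, and quantize lets you pick the rounding rule explicitly, so the 2.675 case behaves the way most people expect:

```python
from decimal import Decimal, ROUND_HALF_UP

# Construct from a string, not a float, so the value is stored exactly.
d = Decimal("2.675")

# quantize rounds to the given exponent with an explicit rounding mode;
# ROUND_HALF_UP gives the familiar grade-school behaviour.
print(d.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 2.68
```

Note that Decimal(2.675) (from a float) would inherit the binary representation error, so the string constructor matters here.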